#CASBAN6: Implementing the API endpoints with Azure Functions

After my last post, we have the base implementation ready to be used for our endpoints. As we already know, our endpoints will follow the CRUD pattern to interact with our blog's entities. Please note: the code on GitHub is already a few steps ahead and may look a bit different from what I am posting here (mostly due to the OpenApi attributes, but you will be able to follow along).

I will use the AuthorFunction to demonstrate the implementation. All other function implementations besides the BlogFunction follow the same pattern.

Let’s dive in

First, we create a new class that derives from our base class. The constructor initializes the EF context as well as the ILogger for the author function. On top, we define the route template that our functions are going to use as a constant, so we can refer to it in the functions' attributes. You should now have something similar to this:

public class AuthorFunction : BlogFunctionBase
{
    private const string Route = "blog/{blogId}/author";

    public AuthorFunction(BlogContext blogContext, ILoggerFactory loggerFactory) : base(blogContext) =>
        Logger = loggerFactory.CreateLogger<AuthorFunction>();
}

Override the Create method

The Create function is obviously responsible for creating a new entry in our database. First, we check whether the blogId route parameter was specified and whether it is parsable as a Guid. This step is the same for all endpoints. If these checks succeed, we move on to the next step, in this case deserializing the submitted Author DTO.

We then use the CreateFrom mapping extension method to transform the DTO into an EntityModel.Author object (read my post on DTOs and mappings here). The latter can then be added to the context's Authors list and saved. If all goes well, we create a 201 Created response indicating the direct API URL to read the newly created author. In all other cases, we have some error handling in place.

[Function($"{nameof(AuthorFunction)}_{nameof(Create)}")]
public override async Task<HttpResponseData> Create([HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = Route)] HttpRequestData req, string blogId)
{
    try
    {
        if (string.IsNullOrWhiteSpace(blogId) || !Guid.TryParse(blogId, out Guid parsedBlogId) || parsedBlogId == default)
            return await req.CreateResponseDataAsync(HttpStatusCode.BadRequest, "Required parameter 'blogId' (GUID) is not specified or cannot be parsed.");

        string requestBody = await new StreamReader(req.Body).ReadToEndAsync();

        Author? author = JsonConvert.DeserializeObject<Author>(requestBody);

        if (author != null)
        {

            EntityModel.Author? newAuthorEntity = author.CreateFrom(Guid.Parse(blogId));

            EntityEntry<EntityModel.Author> createdAuthor =
                BlogContext.Authors.Add(newAuthorEntity);

            await BlogContext.SaveChangesAsync();

            return await req.CreateNewEntityCreatedResponseDataAsync(createdAuthor.Entity.AuthorId);
        }
        else
        {
            return await req.CreateResponseDataAsync(HttpStatusCode.BadRequest, "Submitted data is invalid, author cannot be created.");
        }
    }
    catch (Exception ex)
    {
        Logger.LogError(ex, "Error creating author object on blog with Id {BlogId}", blogId);
        return await req.CreateResponseDataAsync(HttpStatusCode.InternalServerError, "An internal server error occurred. Error details logged.");
    }
}

Override the GetList method

We use the GetList method to retrieve a list of entities. This method also employs the count and skip query parameters, which is a simple way of implementing paging. We use the ToDto mapping method on each EntityModel.Author in the result to return the entities to the caller.

If something goes wrong during the function call, we have some error handling in place.

[Function($"{nameof(AuthorFunction)}_{nameof(GetList)}")]
public override async Task<HttpResponseData> GetList([HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = Route)] HttpRequestData req, string blogId)
{
    try
    {
        Logger.LogInformation("Trying to get authors...");

        if (string.IsNullOrWhiteSpace(blogId) || !Guid.TryParse(blogId, out Guid parsedBlogId) || parsedBlogId == default)
            return await req.CreateResponseDataAsync(HttpStatusCode.BadRequest, "Required parameter 'blogId' (GUID) is not specified or cannot be parsed.");

        (int count, int skip) = req.GetPagingProperties();

        List<EntityModel.Author> entityResultSet = await BlogContext.Authors.
                                                                     Include(author => author.UserImage).
                                                                     ThenInclude(media => media.MediumType).
                                                                     Where(author => author.BlogId == Guid.Parse(blogId)).
                                                                     Skip(skip).
                                                                     Take(count).
                                                                     ToListAsync();

        List<Author> resultSet = entityResultSet.Select(entity => entity.ToDto()).ToList();

        return await req.CreateOkResponseDataWithJsonAsync(resultSet, JsonSerializerSettings);
    }
    catch (Exception ex)
    {
        Logger.LogError(ex, "Error getting author list for blog with Id \'{Id}\'", blogId);
        return await req.CreateResponseDataAsync(HttpStatusCode.InternalServerError, "An internal server error occured. Error details logged.");
    }
}

Override the GetSingle method

The GetSingle endpoint also needs the id of the desired entity to be executed. This call gets just a single author from the database. Here, too, we have our error handling in place.

[Function($"{nameof(AuthorFunction)}_{nameof(GetSingle)}")]
public override async Task<HttpResponseData> GetSingle([HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = Route + "/{id}")] HttpRequestData req, string blogId, string id)
{
    try
    {
        if (string.IsNullOrWhiteSpace(blogId) || !Guid.TryParse(blogId, out Guid parsedBlogId) || parsedBlogId == default)
            return await req.CreateResponseDataAsync(HttpStatusCode.BadRequest, "Required parameter 'blogId' (GUID) is not specified or cannot be parsed.");
        
        if (!string.IsNullOrWhiteSpace(id))
        {
            Logger.LogInformation("Trying to get author with Id: {Id}...", id);

            EntityModel.Author? existingAuthor =
                await BlogContext.Authors.
                                  Include(author => author.UserImage).
                                  ThenInclude(media => media.MediumType).
                                  SingleOrDefaultAsync(author => author.BlogId == Guid.Parse(blogId) &&
                                                                 author.AuthorId == Guid.Parse(id));

            if (existingAuthor == null)
            {
                Logger.LogWarning("Author with Id {Id} not found", id);
                return req.CreateResponse(HttpStatusCode.NotFound);
            }

            return await req.CreateOkResponseDataWithJsonAsync(existingAuthor.ToDto(), JsonSerializerSettings);
        }
        else
        {
            return await req.CreateResponseDataAsync(HttpStatusCode.BadRequest, "Submitted data is invalid, you must specify the author id.");
        }
    }
    catch (Exception ex)
    {
        Logger.LogError(ex, "Error getting author with Id '{AuthorId}' for blog with Id \'{BlogId}\'", id, blogId);
        return await req.CreateResponseDataAsync(HttpStatusCode.InternalServerError, "An internal server error occured. Error details logged.");
    }
}

Override the Update method

The structure of the Update function should come as no surprise. First we check the blog's id, then we read the submitted Author DTO. If the author already exists, we update its information in the database. Otherwise, we tell the caller there is no such author.

[Function($"{nameof(AuthorFunction)}_{nameof(Update)}")]
public override async Task<HttpResponseData> Update([HttpTrigger(AuthorizationLevel.Anonymous, "put", Route = Route + "/{id}")] HttpRequestData req, string blogId, string id)
{
    try
    {
        if (string.IsNullOrWhiteSpace(blogId) || !Guid.TryParse(blogId, out Guid parsedBlogId) || parsedBlogId == default)
            return await req.CreateResponseDataAsync(HttpStatusCode.BadRequest, "Required parameter 'blogId' (GUID) is not specified or cannot be parsed.");

        string requestBody = await new StreamReader(req.Body).ReadToEndAsync();

        Author? authorToUpdate = JsonConvert.DeserializeObject<Author>(requestBody);

        if (authorToUpdate != null)
        {
            EntityModel.Author? existingAuthor =
                await BlogContext.Authors.
                                  Include(author => author.UserImage).
                                  ThenInclude(media => media.MediumType).
                                  SingleOrDefaultAsync(author => author.BlogId == Guid.Parse(blogId) &&
                                                                 author.AuthorId == Guid.Parse(id));

            if (existingAuthor == null)
            {
                Logger.LogWarning("Author with Id {Id} not found", id);
                return req.CreateResponse(HttpStatusCode.NotFound);
            }

            existingAuthor.UpdateWith(authorToUpdate);

            await BlogContext.SaveChangesAsync();

            return req.CreateResponse(HttpStatusCode.Accepted);
        }
        else
        {
            return await req.CreateResponseDataAsync(HttpStatusCode.BadRequest, "Submitted data is invalid, author cannot be modified.");
        }
    }
    catch (Exception ex)
    {
        Logger.LogError(ex, "Error updating author with Id '{AuthorId}' for blog with Id \'{BlogId}\'", id, blogId);
        return await req.CreateResponseDataAsync(HttpStatusCode.InternalServerError, "An internal server error occured. Error details logged.");
    }
}

Override the Delete method

Last but not least, we sometimes need to delete entities, for whatever reason. This is where the Delete function comes into play. It requires both the blog's id and the entity's id to be executed. As with all other functions, there is some error handling in place.

[Function($"{nameof(AuthorFunction)}_{nameof(Delete)}")]
public override async Task<HttpResponseData> Delete([HttpTrigger(AuthorizationLevel.Anonymous, "delete", Route = Route + "/{id}")] HttpRequestData req, string blogId, string id)
{
    try
    {
        if (string.IsNullOrWhiteSpace(blogId) || !Guid.TryParse(blogId, out Guid parsedBlogId) || parsedBlogId == default)
            return await req.CreateResponseDataAsync(HttpStatusCode.BadRequest, "Required parameter 'blogId' (GUID) is not specified or cannot be parsed.");

        EntityModel.Author? existingAuthor = await BlogContext.Authors.
                                                               Include(author => author.UserImage).
                                                               SingleOrDefaultAsync(author => author.BlogId == Guid.Parse(blogId) &&
                                                                                              author.AuthorId == Guid.Parse(id));

        if (existingAuthor == null)
        {
            Logger.LogWarning("Author with Id {Id} not found", id);
            return req.CreateResponse(HttpStatusCode.NotFound);
        }

        BlogContext.Authors.Remove(existingAuthor);

        await BlogContext.SaveChangesAsync();

        return req.CreateResponse(HttpStatusCode.OK);
    }
    catch (Exception ex)
    {
        Logger.LogError(ex, "Error deleting author with Id '{AuthorId}' from blog with Id \'{BlogId}\'", id, blogId);
        return await req.CreateResponseDataAsync(HttpStatusCode.InternalServerError, "An internal server error occured. Error details logged.");
    }
}

Helper methods

You might have noticed that there are some extension methods in the code samples above that you haven't seen so far. I got you, here they are.

Paging properties

To get the paging properties (I use simple paging here), we have the query parameters count and skip. To extract them from the request, we do some parsing on the query string that the HttpRequestData provides us.

private static Dictionary<string, string> GetQueryParameterDictionary(this HttpRequestData req)
{
    Dictionary<string, string> result = new Dictionary<string, string>();

    string queryParams = req.Url.GetComponents(UriComponents.Query, UriFormat.UriEscaped);

    if (!string.IsNullOrWhiteSpace(queryParams))
    {
        string[] paramSplits = queryParams.Split('&');

        if (paramSplits.Any())
        {
            foreach (string split in paramSplits)
            {
                string[] valueSplits = split.Split('=');

                if (valueSplits.Any() && valueSplits.Length == 2)
                    result.Add(valueSplits[0], valueSplits[1]);
            }
        }
    }
    return result;
}

public static (int count, int skip) GetPagingProperties(this HttpRequestData req)
{
    Dictionary<string, string> queryParams = req.GetQueryParameterDictionary();

    int count = 10;
    int skip = 0;

    if (queryParams.Any(p => p.Key == nameof(count)))
        _ = int.TryParse(queryParams[nameof(count)], out count);

    if (queryParams.Any(p => p.Key == nameof(skip)))
        _ = int.TryParse(queryParams[nameof(skip)], out skip);

    return (count, skip);
}
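
To illustrate, a GET request using these paging parameters could look like this (host and port are placeholders for a local Functions run); with count=20 and skip=40, you would get the third page of 20 authors:

GET http://localhost:7071/api/blog/{blogId}/author?count=20&skip=40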

HttpResponseData helpers

Our functions run in an isolated process. Besides having a bunch of advantages like easier dependency injection, this also brings some syntax changes. As you can see from the docs, we now respond with an HttpResponseData object. For easier creation of these objects, I wrote these extensions:

public static async Task<HttpResponseData> CreateResponseDataAsync(this HttpRequestData req, HttpStatusCode statusCode, string? message)
{
    HttpResponseData response = req.CreateResponse(statusCode);

    if (string.IsNullOrWhiteSpace(message))
        message = statusCode.ToString();

    await response.WriteStringAsync(message);

    return response;
}

public static async Task<HttpResponseData> CreateNewEntityCreatedResponseDataAsync(this HttpRequestData req, Guid createdResourceId)
{
    HttpResponseData response = req.CreateResponse(HttpStatusCode.Created);
    response.Headers.Add("Location", $"{req.Url}/{createdResourceId}");

    await response.WriteStringAsync("OK");

    return response;
}

public static async Task<HttpResponseData> CreateOkResponseDataWithJsonAsync(this HttpRequestData req, object responseData, JsonSerializerSettings? settings)
{
    string json = JsonConvert.SerializeObject(responseData, settings);
    
    HttpResponseData response = req.CreateResponse(HttpStatusCode.OK);
    response.Headers.Add("Content-Type", "application/json; charset=utf-8");

    await response.WriteStringAsync(json);

    return response;
}

The first one creates a response with the specified HttpStatusCode and an optional message. The second one is for the 201 Created responses, while the last one is for the 200 OK responses.

The BlogFunction

The BlogFunction implementation is the only one not deriving from the base class. As the blog is the root entity, there are some differences from the pattern above.

The Create method in this function works without the blog’s id, but otherwise is the same as for all other Create methods.

The GetBlogList method features count properties for its child entities (authors, posts, tags, media) and also does not need the blog's id. Details of child entities should be loaded via their own function implementations.

The GetBlog method tries to load a blog completely with all child entities. This may result in a very large data set and should be used with caution and for exports only.

The Update and Delete methods are once again following the pattern of the other functions, except they just need the blog’s id.

You can have a look at the full BlogFunction right here on GitHub.
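
To give you an idea of the difference, here is a minimal sketch of how the BlogFunction's Create method could look without the blogId route parameter. This is not the actual implementation – it assumes the same BlogContext and Logger members as the base class and a parameterless CreateFrom overload for the Blog DTO, so check the repository for the real code:

[Function($"{nameof(BlogFunction)}_{nameof(Create)}")]
public async Task<HttpResponseData> Create([HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "blog")] HttpRequestData req)
{
    try
    {
        string requestBody = await new StreamReader(req.Body).ReadToEndAsync();

        Blog? blog = JsonConvert.DeserializeObject<Blog>(requestBody);

        if (blog == null)
            return await req.CreateResponseDataAsync(HttpStatusCode.BadRequest, "Submitted data is invalid, blog cannot be created.");

        // no blogId check needed here - the blog is the root entity
        EntityEntry<EntityModel.Blog> createdBlog = BlogContext.Blogs.Add(blog.CreateFrom());

        await BlogContext.SaveChangesAsync();

        return await req.CreateNewEntityCreatedResponseDataAsync(createdBlog.Entity.BlogId);
    }
    catch (Exception ex)
    {
        Logger.LogError(ex, "Error creating blog");
        return await req.CreateResponseDataAsync(HttpStatusCode.InternalServerError, "An internal server error occurred. Error details logged.");
    }
}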

Anonymous authorization

If you are wondering why all the functions have the AuthorizationLevel set to Anonymous, I got you. Once the function is deployed to Azure, we will use Azure Active Directory to force a login with a Microsoft account (others may follow) to call our functions. This way, we get strong protection without big effort.

Conclusion

In this post, I showed you how to use the base class we created in the last post of the series, and also how the BlogFunction differs from that pattern. In the next post, we will have a look at how to add Swagger to our Functions, which will make API testing a lot easier as we advance with our project. With each post, we are getting closer to deploying the API functions to Azure, so stay tuned for the next post(s)!

Until the next post, happy coding, everyone!


#CASBAN6: Function base class (and an update to the DTO models)

After setting up our Azure Functions project last time, we are now able to create a base class for our Azure Functions. The main goal is to achieve a common configuration for all functions to make our life easier later on.

CRUD definition

Our API endpoints should provide us with a CRUD (Create-Read-Update-Delete) interface. Our implementation will reflect this pattern as follows:

  • A creation endpoint
  • Two read endpoints – one for list results (like a list of posts) and one for receiving details of an entity
  • An update endpoint to change existing entities
  • A delete endpoint

As all of our entities are tied to a single blog’s Id, we will use this base class for all entities besides the Blog entity itself.

The base class

public abstract class BlogFunctionBase
{
    internal readonly BlogContext BlogContext;
    internal ILogger? Logger;
    internal JsonSerializerSettings? JsonSerializerSettings;

    protected BlogFunctionBase(BlogContext blogContext)
    {
        BlogContext = blogContext ?? throw new ArgumentNullException(nameof(blogContext));

        CreateNewtonSoftSerializerSettings();
    }

    private void CreateNewtonSoftSerializerSettings()
    {
        JsonSerializerSettings = NewtonsoftJsonObjectSerializer.CreateJsonSerializerSettings();

        JsonSerializerSettings.ContractResolver = new CamelCasePropertyNamesContractResolver();

        JsonSerializerSettings.NullValueHandling = NullValueHandling.Ignore;
        JsonSerializerSettings.Formatting = Formatting.Indented;
        JsonSerializerSettings.ReferenceLoopHandling = ReferenceLoopHandling.Ignore;
        JsonSerializerSettings.DateFormatHandling = DateFormatHandling.IsoDateFormat;
        JsonSerializerSettings.DateParseHandling = DateParseHandling.DateTimeOffset;
    }

    public virtual Task<HttpResponseData> Create([HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = null)] HttpRequestData req, string blogId) =>
        throw new NotImplementedException();

    public virtual Task<HttpResponseData> GetList([HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = null)] HttpRequestData req, string blogId) =>
        throw new NotImplementedException();

    public virtual Task<HttpResponseData> GetSingle([HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = null)] HttpRequestData req, string blogId, string id) =>
        throw new NotImplementedException();

    public virtual Task<HttpResponseData> Update([HttpTrigger(AuthorizationLevel.Anonymous, "put", Route = null)] HttpRequestData req, string blogId, string id) =>
        throw new NotImplementedException();

    public virtual Task<HttpResponseData> Delete([HttpTrigger(AuthorizationLevel.Anonymous, "delete", Route = null)] HttpRequestData req, string blogId, string id) =>
        throw new NotImplementedException();
}

Knowing the definition of our API endpoints, there shouldn’t be any surprises with that implementation. As a base class, the definition is of course abstract.

In the constructor, I set up how JSON objects will be handled, following common formatting practices. The only difference from using JSON.NET directly is that we need to explicitly use the NewtonsoftJsonObjectSerializer.CreateJsonSerializerSettings method to create the instance for the JsonSerializerSettings member.

We also have an internal ILogger? member, which will be used in the derived classes to create a typed instance for logging purposes. Last but not least, I am enforcing that a BlogContext instance from our Entity Framework implementation is passed in.

If you look at the method declarations, you'll see the authorization level is set to Anonymous. As I am using Azure Active Directory, I am handling the authorization on a separate layer (there will be a post on that topic as well). Besides that, there is nothing special. All methods need a blogId, and those endpoints that interact with a single resource need a resource id as well.

The blog entity has a similar structure, with some differences in the parameter definitions. To keep things simple, I decided to let it be different from the base class definition above. We will see this in the next post of this series.

Update to the DTO models

While the blog series is ongoing, it still lags a bit behind what I am currently working on (you can follow the dev branch on GitHub for an up-to-date view). As I am currently working on the administration client for our blog, I started to implement an SDK that can be used by all clients (the blog's website will also just be a client). See the original post on DTOs here.

To be able to create a generic implementation of the calls to the API endpoints, I also needed to create a base class for the DTO models. It is a very simple class, as you can see:

public abstract class DtoModelBase
{
    public virtual Guid? BlogId { get; set; }

    public virtual Guid? ResourceId { get; set; }
}

This base class allows me to specify a Type that derives from it in the SDK API calls – which was my main goal. Second, I now have a common ResourceId property instead of an Id named after the class name of the DTO. Both properties are virtual to allow me to specify the Required attributes in the derived classes as needed. You can see these changes on GitHub.
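
To illustrate the main goal, here is a hypothetical sketch of how a generic SDK call could be constrained to the new base class – the method name, the route building and the _httpClient field are assumptions for illustration only:

public async Task<TDto?> GetSingleAsync<TDto>(Guid blogId, Guid resourceId)
    where TDto : DtoModelBase
{
    // hypothetical route building based on the DTO type name
    string route = $"blog/{blogId}/{typeof(TDto).Name.ToLowerInvariant()}/{resourceId}";

    HttpResponseMessage response = await _httpClient.GetAsync(route);
    response.EnsureSuccessStatusCode();

    string json = await response.Content.ReadAsStringAsync();

    return JsonConvert.DeserializeObject<TDto>(json);
}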

The reason I am writing about the change already today is that it will have an impact on how the functions are implemented, as both the API functions and the Client SDK use the same DTO classes.

Conclusion

In this post, we had a look at the base class for our Azure functions we will use as an API and on the updated DTO models. With this prerequisite post in place, we will have a look at the function implementations in the next post.

Until the next post, happy coding, everyone!


#CASBAN6: Setting up an Azure Functions project for the API

Now that we have our Entity Framework project in place and our DTO mappings ready, it is finally time to create the API for our blogging engine. I am using Azure Functions for this.

Out-of-Process vs. In-Process

Azure Functions can run in-process or in an isolated (out-of-process) worker. The isolated model decouples the function app from the underlying host process, which enables additional features like custom middleware and dependency injection. Besides that, it allows you to run non-LTS versions of .NET, which can be helpful sometimes. These were the main reasons for choosing the out-of-process model. If you want to learn more about that topic, I recommend reading the docs.

Create the project

As you may have noticed, I recently became kind of a fan of JetBrains’ Rider IDE. Some steps may be different if done in Visual Studio, but you will be able to follow along.

First, make sure you have the Azure plugin installed. Go to the Settings and select Plugins in the list on the left-hand side. Search for ‘Azure’ and install the Azure Toolkit for Rider. You will need to restart the application.

JetBrains Rider Plugin Settings

Once you have the plugin installed, open your solution and create a new project in it (I made it in a separate folder). Select the Azure Functions template on the left.

JetBrains Rider New Project dialog with Azure Functions selected.

I named the project BlogFunctions. Select the Isolated worker runtime option, and as Framework, we keep it on .NET 6 for the time being.

NuGet packages and project references

To enable all the functionalities we are going to use and add, we need some NuGet packages:

  • Microsoft.Azure.Functions.Worker, Version=1.10.0
  • Microsoft.Azure.Functions.Worker.Sdk, Version=1.7.0
  • Microsoft.Azure.Functions.Worker.Extensions.Http, Version=3.0.13
  • Microsoft.Azure.Functions.Worker.Extensions.ServiceBus, Version=5.7.0
  • Microsoft.Extensions.DependencyInjection, Version=6.0.1
  • Microsoft.Azure.Core.NewtonsoftJson, Version=1.0.0

Please note that I use the latest versions that support .NET 6, not .NET 7. We also need to reference the projects we created before, as you can see in this picture:

Project and Package references in the Function app

Program.cs

Now we can finally have a look at our Program.cs file. I am not using top-level statements here, but feel free to do so if you want; the code doesn't change, just the surroundings.

To get our application running, we need to create a new HostBuilder object. I still prefer Newtonsoft.Json over System.Text.Json, so let's add that one first:

IHost? host = new HostBuilder().ConfigureFunctionsWorkerDefaults(worker => worker.UseNewtonsoftJson()).Build();

host.Run();

In order to be able to use the Entity Framework project we created earlier, we need to add a connection string and configure the application to instantiate our DbContext. Update the code above as follows:

string? sqlConnectionString = Environment.GetEnvironmentVariable("SqlConnectionString");

IHost? host = 
	new HostBuilder().
		ConfigureFunctionsWorkerDefaults(worker => worker.UseNewtonsoftJson()).              
		ConfigureServices(services =>
		{
			if (!string.IsNullOrWhiteSpace(sqlConnectionString))
				services.AddDbContext<BlogContext>(options =>
					options.UseSqlServer(sqlConnectionString));	              
		}).
		Build();

host.Run();

The connection string will be read from the local.settings.json file locally and from the Function app’s configuration on Azure. For the moment, just add your local ConnectionString:

{
    "IsEncrypted": false,
    "Values": {
        "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated",
        "AzureWebJobsStorage": "UseDevelopmentStorage=true",
        "SqlConnectionString": "Data Source=localhost;Initial Catalog=localDB;User ID=sa;Password=thisShouldB3Stronger!"
    }
}

Side note: If you have a look into the GitHub repo, you will see that there are some entries for OpenAPI in both files. The OpenAPI integration will get its own blog post later in this blog series, but for this post, they are not important.

Conclusion

In this post, we had a look at how to set up an Azure Functions app with Newtonsoft.Json and our Entity Framework DbContext. In the next post, we will have a look at some extensions that will be helpful, as well as the base class implementation of most functions within the app. As always, I hope this post was helpful for some of you.

Until the next post, happy coding, everyone!


#CASBAN6: the DTOs and mappings

We already have created our database and our entities, so let’s have a look at how we bring the data to our API consuming applications.

If we recap, our entity models contain all the relations and identifiers. This could lead to some issues like circular references during serialization and unnecessary data repetition. Luckily for us, there is already a solution for this—it's called data transfer object (DTO). The main purpose of a DTO is to serve data while being easily serializable (see also Wikipedia).

The DTO project

If you have been following along, you might already have guessed that I have created a separate project for the DTO model classes. The overall structure is similar to what you have already seen in my last post, where I showed you the entity model.

Implementation

Let’s have an exemplary look at the Medium entity class:

using System;
using System.Collections.Generic;

namespace MSiccDev.ServerlessBlog.EntityModel
{
    public class Medium
    {
        public Guid MediumId { get; set; }

        public Uri MediumUrl { get; set; }

        public string AlternativeText { get; set; }

        public string Description { get; set; }

        public Guid MediumTypeId { get; set; }
        public MediumType MediumType { get; set; }

        public Guid BlogId { get; set; }
        public Blog Blog { get; set; }

        public ICollection<Post> Posts { get; set; }
        public ICollection<Author> Authors { get; set; }

        public List<PostMediumMapping> PostMediumMappings { get; set; }

    }
}

The entity contains all relationships on the database level. Our API will already constrain a lot of them (we will see how in a later post), for example by requiring the BlogId as primary identifier for every call. There are a lot of other connection points, but we also want to be able to use the Medium endpoint just for managing media.

Here is the Medium DTO:

using System;
namespace MSiccDev.ServerlessBlog.DtoModel
{
    public class Medium
    {
        public Guid MediumId { get; set; }

        public Uri MediumUrl { get; set; }

        public string AlternativeText { get; set; }

        public string Description { get; set; }

        public MediumType MediumType { get; set; }

        public bool? IsPostImage { get; set; } = null;
    }
}


The class contains all the information we need. With this DTO, we will be able to manage media files on their own, but also in their usage context (which is mostly within posts of a blog).

Mapping helpers

To convert entity objects to data transfer objects and vice versa, we use mappings. Mappings are converters that bring the data into the desired shape. In contrast to our model classes, mappings are allowed to modify data during the conversion.

No library this time

If you are wondering why I am not using one of the established libraries for mappings: there are several reasons. When I came to the point of implementing the DTOs in the development process, I evaluated the options for the mappings.

All of them have quite a learning curve; in the end, I was faster writing my own mappings. On bigger systems like shops or similar projects, I would probably have chosen the other path. There is also a small chance I change my mind one day, which would result in a refactoring session.

Both mapping helper classes are, once again, in their own project.

Converting entities to DTOs

As you can see in the EntityToDtoMapExtensions class, I created extension methods for all entity objects. To stay with the Medium class, here are the particular implementations (there should be no surprises):

public static DtoModel.Medium ToDto(this EntityModel.Medium entity)
{
    return new DtoModel.Medium()
    {
        MediumId = entity.MediumId,
        MediumType = entity.MediumType.ToDto(),
        MediumUrl = entity.MediumUrl,
        AlternativeText = entity.AlternativeText,
        Description = entity.Description
    };
}

public static DtoModel.MediumType ToDto(this EntityModel.MediumType entity)
{
    return new DtoModel.MediumType()
    {
        MediumTypeId = entity.MediumTypeId,
        MimeType = entity.MimeType,
        Name = entity.Name,
        Encoding = entity.Encoding
    };
}

You may have noticed that I am not setting the IsPostImage property from within the extension. The information is only important in the context of a post, which is why the ToDto method for the post is setting it to true or false. Otherwise, it will be null and can be omitted in the API response.

Converting DTOs to entities

There are two scenarios where we need to convert DTOs to entities: one is the creation of new entities, the other is updating existing entities. Being very creative with the names, I implemented a CreateFrom and an UpdateWith method for each DTO type.

You can have a look at all implementations on GitHub; like above, we are focusing on the Medium DTO extensions here:

public static EntityModel.Medium CreateFrom(this DtoModel.Medium dto, Guid blogId)
{
    return new EntityModel.Medium()
    {
        BlogId = blogId,
        MediumId = dto.MediumId,
        MediumTypeId = dto.MediumType?.MediumTypeId ?? default,
        MediumUrl = dto.MediumUrl,
        AlternativeText = dto.AlternativeText,
        Description = dto.Description,
    };
}

public static EntityModel.Medium UpdateWith(this EntityModel.Medium existingMedium, DtoModel.Medium updatedMedium)
{
    if (existingMedium.MediumId != updatedMedium.MediumId)
        throw new ArgumentException("MediumId must be equal in UPDATE operation.");

    if (existingMedium.AlternativeText != updatedMedium.AlternativeText)
        existingMedium.AlternativeText = updatedMedium.AlternativeText;

    if (existingMedium.Description != updatedMedium.Description)
        existingMedium.Description = updatedMedium.Description;

    if (existingMedium.MediumTypeId != updatedMedium.MediumType.MediumTypeId)
        existingMedium.MediumTypeId = updatedMedium.MediumType.MediumTypeId;

    if (existingMedium.MediumUrl != updatedMedium.MediumUrl)
        existingMedium.MediumUrl = updatedMedium.MediumUrl;

    return existingMedium;
}

Once again, there should be no surprise in the implementation. If you have a look at the other methods, you will find them implemented similarly.

Conclusion

In this post, I explained why we need DTOs and showed you how I implemented them. We also had a look at the mapping extensions to convert the entities to data transfer objects and vice versa. Now that we have them in place, we are able to start implementing our Azure Functions, which is where we are heading to next in the #CASBAN6 blog series.

Until the next post, happy coding, everyone!


#CASBAN6: Implementing the data model using EntityFramework Core (separate libraries)

EntityModel library

When I started the project, I began by creating the classes for all the tables that I need for my blog engine. I have put them into their own library to keep things clean.

The classes also use ICollection references for relationships whenever required. Let’s have a look at the Blog class:

public class Blog 
{
    public Guid BlogId { get; set; }

    public string Name { get; set; }

    public string Slogan { get; set; }

    public Uri LogoUrl { get; set; }

    public ICollection<Post> Posts { get; set; }

    public ICollection<Author> Authors { get; set; }

    public ICollection<Tag> Tags { get; set; }

    public ICollection<Medium> Media { get; set; }

}

The other classes are implemented similarly to reflect the data model I showed you in my last post. You can have a look at the other class implementations in the GitHub repo (folder: EntityModel).

EFCore library

The EFCore library has three main components:

  1. BlogContext
  2. Configurations
  3. Seed extension

BlogContext

The BlogContext is straightforward and follows the pattern described here in the documentation for using a factory (spoiler: we will do that later):

public sealed class BlogContext : DbContext
{
    public BlogContext(DbContextOptions<BlogContext> options) : base(options)
    {

    }

    public DbSet<Blog> Blogs { get; set; }
    public DbSet<Post> Posts { get; set; }
    public DbSet<Author> Authors { get; set; }
    public DbSet<MSiccDev.ServerlessBlog.EntityModel.Medium> Media { get; set; }
    public DbSet<MediumType> MediaTypes { get; set; }
    public DbSet<Tag> Tags { get; set; }
}

The class declares a constructor that takes a DbContextOptions<BlogContext> parameter, which allows our factory to configure the context later on for the migrations. Of course, we also need the DbSet properties to be able to access our tables via the BlogContext instance.

Configurations

In Entity Framework, we can configure our tables with configurations. By implementing the IEntityTypeConfiguration interface for each of our models in a separate file, we continue to keep our code clean and easily maintainable. Here is the implementation for the Blog table:

public class BlogConfiguration : IEntityTypeConfiguration<Blog>
{
    public void Configure(EntityTypeBuilder<Blog> builder)
    {
        builder.Property(nameof(Blog.BlogId)).
            IsRequired();

        builder.Property(nameof(Blog.BlogId)).
            ValueGeneratedOnAdd();

        builder.HasKey(blog => blog.BlogId).
            HasName($"PK_{nameof(Blog.BlogId)}");

        builder.Property(nameof(Blog.Name)).
            HasMaxLength(255).
            IsRequired();

        builder.Property(nameof(Blog.Slogan)).
            HasMaxLength(255).
            IsRequired();

        builder.Property(nameof(Blog.LogoUrl)).
            IsRequired();
    }
}

Most of the fluent implementations above are self-explanatory. The other classes of the project are implemented in a similar way; you can find them here in the GitHub repository.

I implemented my configurations by following the docs, which I absolutely recommend reading.

To apply the configurations, we need to override the OnModelCreating method:

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.ApplyConfiguration(new BlogConfiguration());
    modelBuilder.ApplyConfiguration(new MediumTypeConfiguration());
    modelBuilder.ApplyConfiguration(new MediumConfiguration());
    modelBuilder.ApplyConfiguration(new AuthorConfiguration());
    modelBuilder.ApplyConfiguration(new TagConfiguration());
    modelBuilder.ApplyConfiguration(new PostConfiguration());
}

Seed extension

To verify our configurations are working, we need some test data. This is where the seeding feature of EF Core comes in handy, and it helped me to improve my data model a lot. I am using the extension method approach here to implement a blog with three test posts, including all relations, constraints, and property configurations. You can find the full implementation here in the GitHub repository.

The docs on EF Core data seeding also helped me to understand and write my seed implementation.

With this extension method in place, applying the seed is just one line of code at the end of the OnModelCreating override away:

modelBuilder.Seed();
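
The real seed is a lot more extensive, but a minimal sketch of such an extension method could look like this (the Guid and the property values are just examples):

public static void Seed(this ModelBuilder modelBuilder)
{
    Guid blogId = Guid.Parse("11111111-1111-1111-1111-111111111111");

    // seed a single blog; posts, authors, tags and media follow the same
    // HasData pattern, referencing blogId for their relationships
    modelBuilder.Entity<Blog>().HasData(new Blog
    {
        BlogId = blogId,
        Name = "Test Blog",
        Slogan = "Seeded for testing",
        LogoUrl = new Uri("https://example.com/logo.png")
    });
}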

Now that we have everything together, we can finally turn to actually migrating our code to the database.

EFCore.DesignDummy library

Because I am running this whole thing on a Mac, I need to use the CLI tools for all migrations. To keep this step in its own library as well, I created a DesignDummy library, which I use for pushing the migrations to my local database.

The library requires a reference to the EFCore library as well as to the EntityModel library. On top of that, we need the Microsoft.EntityFrameworkCore.Design NuGet package.

Now that our dependencies are in place, we just need to create an implementation of the IDesignTimeDbContextFactory interface as described here in the docs:

public class BlogContextFactory : IDesignTimeDbContextFactory<BlogContext>
{
    public BlogContext CreateDbContext(string[] args)
    {
        BlogContext? instance = null;

        var optionsBuilder = new DbContextOptionsBuilder<BlogContext>();

        optionsBuilder.UseSqlServer(dbContextBuilder =>
            dbContextBuilder.MigrationsAssembly("EFCore.DesignDummy")).
            EnableSensitiveDataLogging();

        instance = new BlogContext(optionsBuilder.Options);

        return instance;
    }
}

Now let’s create our first migration and push it to the database (find the docs here). In your terminal window, change to the folder of your dummy project. Once you’re in the correct folder, create a new migration with the add command:

dotnet ef migrations add {MigrationName}

To push the migration you just created to the database, use the update command with the --connection parameter (the factory above does not hard-code a connection string, so we pass it explicitly here):

dotnet ef database update --connection 'Data Source=localhost;Initial Catalog=localDB;User ID=sa;Password=thisShouldB3Stronger!'

If all goes well, you should now be able to view your database with the seeded data:

database seeded

Conclusion

In this post, I showed you how to create the model classes for the database and their matching IEntityTypeConfiguration implementations. We learned how to create an IDesignTimeDbContextFactory and how to add migrations and push them to the database. The full code is on GitHub for your further exploration.

As always, I hope this post is helpful for some of you.

Until the next post, happy coding, everyone!


#CASBAN6: the data model explained

Preface

Initially, this post was supposed to cover the direct implementation and design of the data model for my serverless blog engine. As the data model became a bit more complex, I decided to split the data model post into two. This first one aims to explain the data model, while the second post covers the implementation with Entity Framework Core.

The data model

A picture is worth a thousand words, they say. So here is a complete picture of the data model:

CASBAN6 data model

I will go through the model table by table for the rest of this post and tell you a sentence or two about each of them.

The tables

__EFMigrationsHistory

This table just stores all the MigrationIds and is handled by Entity Framework. I recommend not touching this table.

Blogs

In theory, the database could hold more than one blog. By adding a new row to this table, you are creating a new blog in the database. The BlogId is essential for a range of other tables.

Authors

To be able to issue blog posts, our blog needs at least one author. As you may have more than one person writing for your blog, a collection of authors can be saved within this table. One author can only be assigned to one blog (at least for now; this is intentional).

Posts

The content fuelling our blog lives in the Posts table. One post can only belong to one blog. The published date is set on insert; all subsequent changes modify the LastModified column. Also, one post can only have one author. The author can be replaced by updating the row in the table.

The slug can be used to create a human-readable URL for the post (for example /posts/my-first-post instead of the PostId, which is a GUID). The slug must be unique across all blogs.

Tags

In order to group and categorize posts, I decided to go with a tags-only approach (unlike other platforms, which allow both categories and tags). Tags are unique to a blog, but can be used in multiple posts.

Media

Of course, our blog should also support media content like images, videos and other types. I opted for a URL-based approach, which makes it easier to add content from other platforms (like videos hosted on dedicated platforms). A medium can be used in several posts.

MediaTypes

To make it a bit easier to determine a medium's type, I added the MediaTypes table. It holds information about the MIME type and possibly also the encoding of a file. The uniqueness is based on the MIME type.

Mapping tables

As we already learned above, both tags and media can be used in multiple posts. At the same time, posts can have multiple media and also multiple tags. To cover these many-to-many relationships, I use two mapping tables with a composite primary key to ensure their uniqueness across the blog.
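
As a small preview of the next post, such a composite primary key can be declared in an IEntityTypeConfiguration implementation like this (the PostMediumMapping property names are assumptions for illustration):

public class PostMediumMappingConfiguration : IEntityTypeConfiguration<PostMediumMapping>
{
    public void Configure(EntityTypeBuilder<PostMediumMapping> builder)
    {
        // the composite key guarantees each post/medium pair exists only once
        builder.HasKey(mapping => new { mapping.PostId, mapping.MediumId });
    }
}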

Conclusion

In this post, I outlined the data model of my serverless blog engine. In the next post, I will show you the implementation of this model with Entity Framework Core.

Until the next post, happy coding, everyone!


#CASBAN6: How to set up a local Microsoft SQL database on macOS

Microsoft’s SQL Server cannot be installed directly on macOS like on Windows machines. Luckily, there is a not-so-complicated solution using a Docker container – provided by Microsoft themselves.

Install Docker

Obviously, the first step is to download Docker and install it on your Mac. Just head over to the Docker website and download the appropriate version. Install the app by opening the disk image and following the instructions.

Install SQL Server

After installing the Docker desktop client, head over to the Docker hub of Microsoft’s SQL Server. You can choose between SQL Server 2017, 2019 and 2022, the latter being in preview (as of publishing time of this post). To download the image, we need to open a terminal and fetch it with the pull command:

sudo docker pull mcr.microsoft.com/mssql/server:2019-latest

I am selecting 2019 here as it is closest to what Azure SQL databases use as of publishing time of this post.

Create a server instance

Microsoft makes it quite easy to create a server instance; we just need to copy the appropriate run command from the Docker hub website. I am using SQL Server Express for my testing purposes:

docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=thisShouldB3Stronger!' -e 'MSSQL_PID=Express' -p 1433:1433 --name mssql -d mcr.microsoft.com/mssql/server:2019-latest

Once you run this command without any error, type docker ps to verify the container is up and running. If all goes well, you should see something like this:

Create a database

Now that we have a running server instance, we can finally create a database for our purposes. We are using this terminal command to achieve our goal:

docker exec -i mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'thisShouldB3Stronger!' -Q 'CREATE DATABASE localDB'

With this, we log into our server and send the command to create our database. Alternatively, we could already connect using DBeaver (see below) to create the database. In both cases, we have our local database up and running by now.

Connect

There are several ways to connect to this database. The one we are going to use with Entity Framework Core is the good old connection string:

//template: Data Source=localhost;Initial Catalog=<database>;User ID=sa;Password=<password>

Data Source=localhost;Initial Catalog=localDB;User ID=sa;Password=thisShouldB3Stronger!

If you want to access your database with a GUI, I recommend using either Visual Studio Code with the Azure and SQL workload installed or the Community Edition of DBeaver.

DBeaver Community screenshot

Visual Studio Code allows connecting at the database level, while DBeaver can be used to connect at the server level as well. Both of them also support access to Azure SQL databases, which will be helpful later on.

Conclusion

Microsoft’s SQL Server is not available for macOS. Nonetheless, we are able to quickly set up a Docker container that runs MS SQL and create a local database for testing. I wrote this post for the completeness of the series.

Until the next post – happy coding, everyone!


#CASBAN6: Creating A Serverless Blog on Azure with .NET 6 (new series)

Motivation

I had been planning to run my blog without WordPress for quite some time. For one, because WordPress is really bloated as a platform. The second reason is more of a practical nature – this project gives me lots of opportunities to improve my programming skills. I already started to move my developer website away from WordPress with ASP.NET Core and Razor Pages. Eventually, I arrived at the point where I needed to implement a blog engine for the news section. So, I have two websites (including this one here) that will take advantage of the outcome of this journey.

High Level Architecture

Now that the ‘why’ is clear, let’s have a look at the ‘how’:

There are several layers in my concept. The data layer consists of a serverless MS SQL instance on Azure, on which I will work with the help of Entity Framework Core and Azure Functions for all the CRUD operations of the blog. I will use the powers of Azure API Management, which will allow me to provide a secure layer for the clients – of course, an ASP.NET Core website with Razor Pages, flanked by a .NET MAUI admin client (no web administration). Once the former two are done, I will also add a mobile client for this blog. It will be the next major update for my existing blog reader that is already in the app stores.

For comments, I will use Disqus. This way, I have a proven comment system where anyone can use his/her favorite account to participate in discussions. They also have an API, so there is a good chance that I will be able to implement Disqus in the Desktop and Mobile clients.

Last but not least, there are (for now) two open points – performance measuring/logging and notifications. I haven't decided yet how to implement these, but I guess there will be an Azure-based implementation as well (unless there are good reasons to use another service).

Open Source

Most of the software I will write and blog about in this series will be publicly available on GitHub. You can already find the repository there, including the code for the next two upcoming blog posts.

Index

I will update this blog post regularly with links to new entries of the series.

Additional note

Please note that I am working on this in my spare time. This may result in delays between the blog posts and the updates committed into the repository on GitHub.

Until the next post – happy coding, everyone!


Title Image by Roman from Pixabay


Sending push notifications to your Xamarin app from WordPress with Azure, Part II – the Function

First, let’s have a look at the lineup of this series once again:

  • Preparing your WordPress (blog/site)
  • Preparing the Azure Function and connect the Webhook (this post)
  • Preparing the Notification Hub
  • Send the notification to Android
  • Send the notification to iOS
  • Adding in Xamarin.Forms

Creating a new Azure Function in Visual Studio

The simplest approach to creating a new Azure Function (if you already have an Azure account) is to add a new project to your Xamarin solution:

After the project is loaded, double-click the .csproj file in the Solution Explorer to open it for editing. Make sure you have the following PropertyGroup and ItemGroup entries:

  <PropertyGroup>
    <TargetFramework>net461</TargetFramework>
    <AzureFunctionsVersion>v1</AzureFunctionsVersion>
  </PropertyGroup>
  <ItemGroup>
    <!--DO NOT UPDATE THE AZURE PACKAGES, IT WILL BREAK EVERYTHING!!!!-->
    <PackageReference Include="Microsoft.Azure.NotificationHubs" Version="1.0.9" />
    <PackageReference Include="Microsoft.Azure.WebJobs.Extensions.NotificationHubs" Version="1.3.0" />
    <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="1.0.31" />
    <PackageReference Include="Newtonsoft.Json" Version="9.0.1" />
  </ItemGroup>

You may notice that I put an all-caps comment into the ItemGroup entry. As I am using a v1 Function, these are the latest packages that I am able to use. They are doing their job, and they give us an easy way to bind the Function to the Azure NotificationHub, which we are going to implement in the next post. I intentionally delayed updating the whole setup to a v2 Function at this point.

Processing the Webhook payload

In order to be able to process the payload (remember, we are getting a JSON string) from our WordPress Webhook, we need to deserialize it. Let’s create the class that holds all information about it:

using Newtonsoft.Json;

namespace NewPostHandler
{
    public class PublishedPostNotification
    {
        [JsonProperty("id")]
        [JsonConverter(typeof(StringToLongConverter))]
        public long Id { get; set; }

        [JsonProperty("title")]
        public string Title { get; set; }

        [JsonProperty("status")]
        public string Status { get; set; }

        [JsonProperty("featured_media")]
        public string FeaturedMedia { get; set; }
    }
}

The class is pretty straightforward; we will use this implementation as-is for the payload we send to Android later on. The use of the StringToLongConverter is optional. For completeness, here is the implementation:

using Newtonsoft.Json;
using System;

namespace NewPostHandler
{
    public class StringToLongConverter : JsonConverter
    {
        public override bool CanConvert(Type t) => t == typeof(long) || t == typeof(long?);

        public override object ReadJson(JsonReader reader, Type t, object existingValue, JsonSerializer serializer)
        {
            if (reader.TokenType == JsonToken.Null) return default(long);
            var value = serializer.Deserialize<string>(reader);
            if (long.TryParse(value, out var l))
            {
                return l;
            }

            return default(long);
        }

        public override void WriteJson(JsonWriter writer, object untypedValue, JsonSerializer serializer)
        {
            if (untypedValue == null)
            {
                serializer.Serialize(writer, null);
                return;
            }
            var value = (long)untypedValue;
            serializer.Serialize(writer, value.ToString());
            return;
        }

        public static readonly StringToLongConverter Instance = new StringToLongConverter();
    }
}

Now that we have prepared our data transfer object, it is time to finally have a look at the processor code.

[FunctionName("HandleNewPostHook")]
public static async Task<HttpResponseMessage> Run([HttpTrigger(AuthorizationLevel.Function, "post", Route = null)]HttpRequestMessage req, TraceWriter log)
{
    _log = log;
    _log.Info("arrived at 'HandleNewPostHook' function trigger.");

    //ignoring any query parameters, only using POST body

    PublishedPostNotification result = null;

    try
    {
        _jsonSerializerSettings = new JsonSerializerSettings()
        {
            MetadataPropertyHandling = MetadataPropertyHandling.Ignore,
            DateParseHandling = DateParseHandling.None,
            Converters =
            {
                new IsoDateTimeConverter { DateTimeStyles = DateTimeStyles.AssumeUniversal },
                StringToLongConverter.Instance
            }
        };

        _jsonSerializer = JsonSerializer.Create(_jsonSerializerSettings);

        using (var stream = await req.Content.ReadAsStreamAsync())
        {
            using (var reader = new StreamReader(stream))
            {
                using (var jsonReader = new JsonTextReader(reader))
                {
                    result = _jsonSerializer.Deserialize<PublishedPostNotification>(jsonReader);
                }
            }
        }

        if (result == null)
        {
            _log.Error("There was an error processing the request (serialization result is NULL)");
            return req.CreateResponse(HttpStatusCode.BadRequest, "There was an error processing the post body");
        }

       //subject of the next post
      //await TriggerPushNotificationAsync(result);
    }
    catch (Exception ex)
    {
        _log.Error("There was an error processing the request", ex);
        return req.CreateResponse(HttpStatusCode.BadRequest, "There was an error processing the post body");
    }

    if (result.Id != default)
    {
        _log.Info($"initiated processing of published post with id {result.Id}");
        return req.CreateResponse(HttpStatusCode.OK, "Processing new published post...");
    }
    else
    {
        _log.Error("There was an error processing the request (cannot process result Id with default value)");
        return req.CreateResponse(HttpStatusCode.BadRequest, "There was an error processing (postId not valid)");
    }
}

Let’s go through the code. The first thing I want to know is if we ever enter the Function, so I log the entrance. The second step is setting up the JsonSerializer to deserialize the payload into the DTO class I created before.

There are several scenarios I am handling with different responses. Ideally, we run all the way through to the TriggerPushNotificationAsync call, followed by a jump to the ‘OK’ response if the post id received from our Webhook is valid. During testing, however, I ran into other situations as well, where I return a ‘Bad Request’ response with a hint that something went wrong.

The implementation of the TriggerPushNotificationAsync method is not shown in this post as it will be subject of the next post in this series.

Testing the code locally

One of the reasons I chose to create the Function in Visual Studio is the ability to debug it locally. If you don’t have the necessary tools installed, Visual Studio will prompt you to install them. Once they are installed, you’ll be able to follow along.

Once the service is running, we are able to test our function. If you haven’t heard of it yet, Postman is probably the easiest tool for that. Copy the function URL and paste it into the URL field in Postman. Next, add a sample JSON payload like the one below to the body (settings: raw, JSON) and hit the ‘Send’ button:
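A minimal payload could look like the following. The field names are assumptions based on my Webhook arguments (ID, title, url), so adjust them to whatever your Webhook actually sends. Note that the id is a string, which is exactly why we needed the StringToLongConverter:

{
    "ID": "12345",
    "title": "My draft post",
    "url": "https://example.com/?p=12345"
}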

If all goes well, Postman will give you a success response:

The Azure Functions CLI (the local runtime host) will also write some output:

As you can see, all of our log entries were written to the CLI, plus some additional information from Azure itself. Don’t worry about the anonymous authorization state for the moment; this is just because we are running locally. In theory, we could already publish the function to Azure now. However, as we know that we will extend the Function in the next post, we will not do this right now.
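By the way, if you prefer code over Postman, a few lines of C# in a console app do the same job. This is just a sketch: port 7071 is the local default of the Functions host, the route follows from the function name since we passed Route = null, and no function key is required while running locally:

// a quick local test client - a sketch, assuming the default local port 7071
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class LocalHookTest
{
    public static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // the payload mirrors the sample we used in Postman above
            var json = "{ \"ID\": \"12345\", \"title\": \"My draft post\", \"url\": \"https://example.com/?p=12345\" }";

            var response = await client.PostAsync(
                "http://localhost:7071/api/HandleNewPostHook",
                new StringContent(json, Encoding.UTF8, "application/json"));

            Console.WriteLine($"{(int)response.StatusCode}: {await response.Content.ReadAsStringAsync()}");
        }
    }
}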

Conclusion

As you can see, writing an Azure Function isn’t as complicated as it sounds. Visual Studio brings all the tools you need to get started pretty fast, and the ability to test the Function code locally is another big advantage.

In the next post, we will configure the NotificationHub on Azure and extend our Function to call into it and fire the notifications.

Until the next post, happy coding, everyone!

Sending push notifications to your Xamarin app from WordPress with Azure, Part I [new series]

Overview

Choosing the “right” solution for sending push notifications isn’t easy if you have a WordPress blog. There are quite a few options to choose from, and the right one for you might differ from my choice. I am using the most generic solution – a Webhook that triggers an Azure Function, which in turn fires the notification via Azure Notification Hubs. This series will grow as follows:

  • Preparing your WordPress (blog/site) (this post)
  • Preparing the Azure Function and connect the Webhook
  • Preparing the Notification Hub
  • Send the notification to Android
  • Send the notification to iOS
  • Adding in Xamarin.Forms

The app implementations are very platform-specific, but it is quite easy to integrate the post notifications into a Xamarin.Forms app (which will be the last post in this series). If you want to see the whole integration in action already, feel free to download my blog reader app.

WordPress plugins for the win

If you have a self-hosted blog like I do, you may know that the plugin ecosystem is there to help you with a lot of things that WordPress doesn’t offer out of the box. While a WordPress-hosted site has Webhook integration without an additional plugin, we need one to create such a Webhook on a self-hosted WordPress blog. The plugin I am using is simply called “Notification” and can be found here.

To install the plugin, follow these simple steps:

  • Choose “Plugins” on your WordPress dashboard
  • Select “Add New” and type in “notification”
  • Hit the “Install Now” button
  • Activate the plugin

Exploring the options

Once you have installed and activated the plugin, you will find a new entry in the dashboard menu. Let’s have a look at its options.

  • Notifications – this shows you a list of your currently active notifications
  • Add New Notification – lets you create a new notification
  • Extensions – the plugin allows you to extend your notifications with external services like Slack, Twitter or SendGrid to engage even more users. We do not need these for the webhook, however.
  • Settings – the control panel for the plugin – this is where we will be for the rest of this blog

Enabling the Webhook

On the Settings page, select the “CARRIERS” option. The plugin uses so-called carriers to send out the notifications. By default, the Email carrier is active. I do not need this one for the moment, so I deactivated it and activated the Webhook carrier instead:

Setting Post Triggers

The next step is to verify we have the trigger for posts active:

You can modify the other triggers as well, but for the moment, I am focusing just on posts. I am thinking about integrating comments in the future, which will allow even more interaction from within my app.

Add a new notification

Let’s create our first notification. Select the “Add New Notification” action, which will bring up this page:

Select the “Add New Carrier” option and add the Webhook carrier:

Next, select the Trigger for the Webhook. During development, I am using the “saving draft” option, as it allows me to easily trigger a notification without annoying anyone:

This will enable the “Merge Tags” list on the right-hand side. To create the Webhook payload, we need to add some arguments (using the “Add argument” button). Tip: you can copy the merge tag by just clicking on it and paste it into the “Value” box:

Don’t forget to activate the JSON format – we do not want it to be sent as XML. Make sure the Carrier is enabled and hit the save button on the upper right.
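Just to give you an idea of what arrives on the other end: with arguments for the post id, title and permalink, the raw JSON payload would look something like the following. This is purely illustrative – the keys depend on the argument names you chose, and in my tests the values arrive as strings:

{
    "ID": "12345",
    "title": "My draft post",
    "url": "https://example.com/?p=12345"
}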

Testing the Webhook

Now that we finished the setup of our Webhook, let’s test it. To do so, go to the “Settings” page again and select “DEBUGGING”. Check the “Enable Notification logging” box and click the “Save Changes” button:

To test the notification, just create a new blog post and save the draft. Go back to the “DEBUGGING” setting, where you will be presented with a new Notification log entry. Expanding this log entry, you will see some common data about the notification:

If you scroll a bit further down, you will see the payload of the Webhook. Sadly, you won’t get the raw JSON string, but a structured overview of the payload:

Verify that the payload contains all the data you need and adjust the settings if necessary. Once that is done, we are ready to go to the next blog post (coming soon).

Conclusion

In this post, I showed you how to create a Webhook that will trigger our upcoming Azure Function. Thanks to the “Notification” plugin, the process is pretty straightforward. In the next post, we will have a look at the Azure Function that will handle the Webhook.

Until the next post, happy coding, everyone!
