Azure

#CASBAN6: Implementing the data model using EntityFramework Core (separate libraries)

EntityModel library

When I started the project, I began by creating the classes for all the tables that I need for my blog engine. I put them into their own library to keep things clean.

The classes also use ICollection references for relationships whenever required. Let’s have a look at the Blog class:

public class Blog 
{
    public Guid BlogId { get; set; }

    public string Name { get; set; }

    public string Slogan { get; set; }

    public Uri LogoUrl { get; set; }

    public ICollection<Post> Posts { get; set; }

    public ICollection<Author> Authors { get; set; }

    public ICollection<Tag> Tags { get; set; }

    public ICollection<Medium> Media { get; set; }

}

The other classes are implemented similarly to reflect the data model I showed you in my last post. You can have a look at the other class implementations in the GitHub repo (folder: EntityModel).
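To give you an idea without copying the whole folder, a Post class along those lines could look like the following sketch (based on the data model described in the last post – names and details in the repository may differ slightly):

public class Post
{
    public Guid PostId { get; set; }

    public string Title { get; set; }

    public string Content { get; set; }

    // used for the human-readable URL; must be unique across all blogs
    public string Slug { get; set; }

    public DateTimeOffset Published { get; set; }

    public DateTimeOffset LastModified { get; set; }

    // every post belongs to exactly one blog and has exactly one author
    public Guid BlogId { get; set; }
    public Blog Blog { get; set; }

    public Guid AuthorId { get; set; }
    public Author Author { get; set; }

    public ICollection<Tag> Tags { get; set; }
    public ICollection<Medium> Media { get; set; }
}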

EFCore library

The EFCore library has three main components:

  1. BlogContext
  2. Configurations
  3. Seed extension

BlogContext

The BlogContext is straightforward and follows the pattern described here in the documentation for using a factory (spoiler: we will do that later):

public sealed class BlogContext : DbContext
{
    public BlogContext(DbContextOptions<BlogContext> options) : base(options)
    {

    }

    public DbSet<Blog> Blogs { get; set; }
    public DbSet<Post> Posts { get; set; }
    public DbSet<Author> Authors { get; set; }
    public DbSet<MSiccDev.ServerlessBlog.EntityModel.Medium> Media { get; set; }
    public DbSet<MediumType> MediaTypes { get; set; }
    public DbSet<Tag> Tags { get; set; }
}

The class declares a constructor that takes a DbContextOptions<BlogContext> parameter, which allows our factory to configure the context later on for the migrations. Of course, we also need properties for all the DbSets to be able to access them via a BlogContext instance.
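To illustrate what that means for a consumer (or for the design-time factory we will create further down), here is a minimal sketch of how the options are provided from the outside – the connection string is a placeholder, of course:

using System.Linq;
using Microsoft.EntityFrameworkCore;

var optionsBuilder = new DbContextOptionsBuilder<BlogContext>();
optionsBuilder.UseSqlServer("<your-connection-string>");

// the context receives its complete configuration via the options parameter
using BlogContext context = new BlogContext(optionsBuilder.Options);
int blogCount = context.Blogs.Count();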

Configurations

In Entity Framework Core, we can configure our tables with configurations. By implementing the IEntityTypeConfiguration interface for each of our models in a separate file, we keep our code clean and easily maintainable. Here is the implementation for the Blog table:

public class BlogConfiguration : IEntityTypeConfiguration<Blog>
{
    public void Configure(EntityTypeBuilder<Blog> builder)
    {
        builder.Property(nameof(Blog.BlogId)).
            IsRequired();

        builder.Property(nameof(Blog.BlogId)).
            ValueGeneratedOnAdd();

        builder.HasKey(blog => blog.BlogId).
            HasName($"PK_{nameof(Blog.BlogId)}");

        builder.Property(nameof(Blog.Name)).
            HasMaxLength(255).
            IsRequired();

        builder.Property(nameof(Blog.Slogan)).
            HasMaxLength(255).
            IsRequired();

        builder.Property(nameof(Blog.LogoUrl)).
            IsRequired();
    }
}

Most of the fluent calls above are self-explanatory. The other classes of the project are implemented in a similar way; you can find them here in the GitHub repository.

I implemented my configurations by following the docs, which I absolutely recommend reading.

To apply the configurations, we need to override the OnModelCreating method:

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.ApplyConfiguration(new BlogConfiguration());
    modelBuilder.ApplyConfiguration(new MediumTypeConfiguration());
    modelBuilder.ApplyConfiguration(new MediumConfiguration());
    modelBuilder.ApplyConfiguration(new AuthorConfiguration());
    modelBuilder.ApplyConfiguration(new TagConfiguration());
    modelBuilder.ApplyConfiguration(new PostConfiguration());
}
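As a side note: if you prefer not to register every configuration by hand, EF Core (2.2 and later) can also scan an assembly for all IEntityTypeConfiguration implementations – something like this should do the same job:

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // picks up all IEntityTypeConfiguration<T> implementations in the EFCore library
    modelBuilder.ApplyConfigurationsFromAssembly(typeof(BlogContext).Assembly);
}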

Seed extension

To verify our configurations are working, we need some test data. This is where the seeding feature of EF Core comes in handy, and it helped me to improve my data model a lot. I am using the extension method approach here to implement a blog with three test posts, including all relations, constraints, and property configurations. You can find the full implementation here in the Github repository.
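Without reproducing the whole file, here is a heavily shortened sketch of how such a Seed extension method looks, using EF Core's HasData API (the values are placeholders – the real implementation in the repository seeds a full blog with posts, authors, tags and media):

using System;
using Microsoft.EntityFrameworkCore;
using MSiccDev.ServerlessBlog.EntityModel;

public static class ModelBuilderExtensions
{
    public static void Seed(this ModelBuilder modelBuilder)
    {
        // keys used with HasData should be static values,
        // otherwise every new migration would regenerate the seed data
        var blogId = new Guid("10000000-0000-0000-0000-000000000001");

        modelBuilder.Entity<Blog>().HasData(new Blog
        {
            BlogId = blogId,
            Name = "Test Blog",
            Slogan = "Testing the serverless blog engine",
            LogoUrl = new Uri("https://example.com/logo.png")
        });

        // Authors, Posts, Tags and Media are seeded the same way,
        // with their foreign keys pointing to the ids created above
    }
}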

Besides the docs on EF Core data seeding, these links helped me to understand and write my seed implementation.

With this extension method in place, applying the seed is just one line of code away, at the end of the OnModelCreating override:

modelBuilder.Seed();

Now that we have everything together, we can finally turn to actually migrating our model to the database.

EFCore.DesignDummy library

Because I am running this whole thing on a Mac, I need to use the CLI tools for all migrations. To keep this step in its own library as well, I created a DesignDummy library, which I use for pushing the migrations to my local database.

The library requires a reference to the EFCore library as well as to the EntityModel library. On top of that, we need the Microsoft.EntityFrameworkCore.Design NuGet package.

Now that our dependencies are in place, we just need to create an implementation of the IDesignTimeDbContextFactory interface as described here in the docs:

public class BlogContextFactory : IDesignTimeDbContextFactory<BlogContext>
{
    public BlogContext CreateDbContext(string[] args)
    {
        BlogContext? instance = null;

        var optionsBuilder = new DbContextOptionsBuilder<BlogContext>();

        optionsBuilder.UseSqlServer(dbContextBuilder =>
            dbContextBuilder.MigrationsAssembly("EFCore.DesignDummy")).
            EnableSensitiveDataLogging();

        instance = new BlogContext(optionsBuilder.Options);

        return instance;
    }
}

Now let’s create our first migration and push it to the database (find the docs here). In your terminal window, change to the folder of your dummy project. Once you’re in the correct folder, create a new migration with the add command:

dotnet ef migrations add {MigrationName}

To push the migration you just created to the database, use the update command with the connection parameter:

dotnet ef database update --connection 'Data Source=localhost;Initial Catalog=localDB;User ID=sa;Password=thisShouldB3Stronger!'

If all goes well, you should now be able to view your database with the seeded data:

database seeded

Conclusion

In this post, I showed you how to create the model classes for the database and their matching IEntityTypeConfiguration implementations. We learned how to create an IDesignTimeDbContextFactory and how to add migrations and push them to the database. The full code is on GitHub for your further exploration.

As always, I hope this post is helpful for some of you.

Until the next post, happy coding, everyone!

#CASBAN6: the data model explained

Preface

Initially, this post should have been about the direct implementation and design of the data model for my serverless blog engine. As the data model became a bit more complex, I decided to split the data model post into two posts. The first aims to explain the data model, while the second post is for the implementation with Entity Framework Core.

The data model

A picture is worth a thousand words, they say. So here is a complete picture of the data model:

CASBAN6 data model

I will go through the model table by table for the rest of this post and tell you a sentence or two about each of them.

The tables

__EFMigrationsHistory

This table just stores all the MigrationIds and is handled by Entity Framework. I recommend not touching this table.

Blogs

In theory, the database could hold more than one blog. By adding a new row to this table, you are creating a new blog in the database. The BlogId is essential for a range of other tables.

Authors

To be able to issue blog posts, our blog needs at least one author. As you may have more than one person writing for your blog, a collection of authors can be saved within this table. One author can only be assigned to one blog (at least for now; this is intentional).

Posts

The content fuelling our blog is in the posts table. One post can only belong to one blog. The published date is set on insert; all subsequent changes modify the LastModified column. Also, one post can only have one author. The author can be replaced by updating the row in the table.

The slug can be used to create a human-readable URL for the post (instead of the PostId, which is a GUID). The slug must be unique across all blogs.
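In the EF Core configuration for the posts table, this uniqueness requirement maps to a unique index – roughly like this (a sketch, assuming a Slug property on the Post entity):

builder.HasIndex(post => post.Slug).
    IsUnique();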

Tags

In order to group and categorize posts, I decided to go with a tags-only approach (unlike other platforms, which allow both categories and tags). Tags are unique to a blog, but can be used in multiple posts.

Media

Of course, our blog should also support media content like images, videos and other types. I opted for a URL-based approach, which makes it easier to add content from other platforms (like videos hosted on dedicated platforms). A medium can be used in several posts.

MediaTypes

To make it a bit easier to determine a medium’s type, I added the MediaTypes table. It holds information about the MIME type and possibly also the encoding of a file. The uniqueness is based on the MIME type.

Mapping tables

As we already learned above, both tags and media can be used in multiple posts. At the same time, posts can have multiple media and also multiple tags. To cover these many-to-many relationships, I use two mapping tables with a composite primary key to ensure uniqueness across the blog.
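A minimal sketch of what such a mapping configuration looks like in EF Core – using a hypothetical PostTagMapping entity with PostId and TagId columns (the actual names in the repository may differ):

public class PostTagMappingConfiguration : IEntityTypeConfiguration<PostTagMapping>
{
    public void Configure(EntityTypeBuilder<PostTagMapping> builder)
    {
        // the composite primary key ensures a tag can be attached to a post only once
        builder.HasKey(mapping => new { mapping.PostId, mapping.TagId });
    }
}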

Conclusion

In this post, I outlined the data model of my serverless blog engine. In the next post, I will show you the implementation of this model with Entity Framework Core.

Until the next post, happy coding, everyone!

#CASBAN6: How to set up a local Microsoft SQL database on macOS

Microsoft’s SQL Server cannot be installed directly on macOS like on Windows machines. Luckily, there is a not-too-complicated solution using a Docker container – provided by Microsoft themselves.

Install Docker

Obviously, the first step is to download Docker and install it on your Mac. Just head over to the Docker website and download the appropriate version. Install the app by opening the disk image and following the instructions.

Install SQL Server

After installing the Docker desktop client, head over to the Docker hub page of Microsoft’s SQL Server. You can choose between SQL Server 2017, 2019 and 2022, with the latter being in preview (as of publishing time of this post). To download the image, we need to open a terminal and download it with the pull command:

sudo docker pull mcr.microsoft.com/mssql/server:2019-latest

I am selecting 2019 here as it is closest to what Azure SQL databases use as of publishing time of this post.

Create a server instance

Microsoft makes it quite easy to create a server instance; we just need to copy the appropriate run command from the Docker hub website. I am using SQL Server Express for my testing purposes:

docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=thisShouldB3Stronger!' -e 'MSSQL_PID=Express' -p 1433:1433 --name mssql -d mcr.microsoft.com/mssql/server:2019-latest

Once you run this command without any error, type in docker ps to verify the image is up and running. If all goes well, you should see something like this:

Create a database

Now that we have a running server instance, we can finally create a database for our purposes. We are using this terminal command to achieve our goal:

docker exec -i mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'thisShouldB3Stronger!' -Q 'CREATE DATABASE localDB'

With this, we are logging into our server and sending the command to create our database. Alternatively, we could already connect using DBeaver (link below) to create the database. In either case, we have our local database up and running by now.

Connect

There are several ways to connect to this database. The one we are going to use with Entity Framework Core is the good old connection string:

//template: Data Source=localhost;Initial Catalog=<database>;User ID=sa;Password=<password>

Data Source=localhost;Initial Catalog=localDB;User ID=sa;Password=thisShouldB3Stronger!
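If you want to verify the connection string from .NET before Entity Framework Core enters the stage, a few lines using the Microsoft.Data.SqlClient package are enough (just a quick sketch – in real projects the connection string should of course come from configuration, not from source code):

using System;
using Microsoft.Data.SqlClient;

const string connectionString =
    "Data Source=localhost;Initial Catalog=localDB;User ID=sa;Password=thisShouldB3Stronger!";

using var connection = new SqlConnection(connectionString);
connection.Open();
Console.WriteLine($"Connected to '{connection.Database}' (SQL Server {connection.ServerVersion})");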

If you want to access your database with a GUI, I recommend using either Visual Studio Code with the Azure and SQL workload installed or the Community Edition of DBeaver.

DBeaver Community screenshot

Visual Studio Code allows connecting at the database level, while DBeaver can be used to connect at the server level as well. Both of them also support access to Azure SQL databases, which will be helpful later on.

Conclusion

Microsoft’s SQL Server is not available for macOS. Nonetheless, we are able to quickly set up a Docker container that runs MS SQL and set up a local database for testing. I wrote this post for completeness of the series.

Useful links

Until the next post – happy coding, everyone!

#CASBAN6: Creating A Serverless Blog on Azure with .NET 6 (new series)

Motivation

I was planning to run my blog without WordPress for quite some time. For one, because WordPress is really bloated as a platform. The second reason is more of a practical nature – this project gives me lots of opportunities to improve my programming skills. I already started to move my developer website away from WordPress with ASP.NET Core and Razor Pages. Eventually, I arrived at the point where I needed to implement a blog engine for the news section. So, I have two websites (including this one here) that will take advantage of the outcome of this journey.

High Level Architecture

Now that the ‘why’ is clear, let’s have a look at the ‘how’:

There are several layers in my concept. The data layer consists of a serverless MS SQL instance on Azure, on which I will work with the help of Entity Framework Core and Azure Functions for all the CRUD operations of the blog. I will use the powers of Azure API Management, which will allow me to provide a secure layer for the clients – of course, an ASP.NET Core website with Razor Pages, flanked by a .NET MAUI admin client (no web administration). Once the former two are done, I will also add a mobile client for this blog. It will be the next major update for my existing blog reader that is already in the app stores.

For comments, I will use Disqus. This way, I have a proven comment system where anyone can use his/her favorite account to participate in discussions. They also have an API, so there is a good chance that I will be able to implement Disqus in the Desktop and Mobile clients.

Last but not least, there are (for now) two open points – performance measuring/logging and notifications. I haven’t decided yet how to implement these – but I guess there will be an Azure based implementation as well (until there are good reasons to use another service).

Open Source

Most of the software I will write and blog about in this series will be publicly available on GitHub. You can already find the repository there, including the code for the next two upcoming blog posts.

Index

I will update this blog post regularly with links to new entries of the series.

Additional note

Please note that I am working on this in my spare time. This may result in delays between the blog posts and the updates committed into the repository on GitHub. On top, I also have to split my spare time between my other side project (TwistReader) and this one (3 days a week for each). Whenever necessary, either one of the projects can take precedence over the other, so be aware – and please understand.

Until the next post – happy coding, everyone!


Title Image by Roman from Pixabay

Sending push notifications to your Xamarin app from WordPress with Azure, Part II – the Function

First, let’s have a look at the lineup of this series once again:

  • Preparing your WordPress (blog/site)
  • Preparing the Azure Function and connecting the Webhook (this post)
  • Preparing the Notification Hub
  • Send the notification to Android
  • Send the notification to iOS
  • Adding in Xamarin.Forms

Creating a new Azure Function in Visual Studio

The simplest approach to creating a new Azure Function (if you already have an Azure account) is adding a new project to your Xamarin solution:

After the project is loaded, double-click the .csproj file in the Solution Explorer to open it for editing. Make sure you have the following PropertyGroup and ItemGroup entries:

  <PropertyGroup>
    <TargetFramework>net461</TargetFramework>
    <AzureFunctionsVersion>v1</AzureFunctionsVersion>
  </PropertyGroup>
  <ItemGroup>
    <!--DO NOT UPDATE THE AZURE PACKAGES, IT WILL BREAK EVERYTHING!!!!-->
    <PackageReference Include="Microsoft.Azure.NotificationHubs" Version="1.0.9" />
    <PackageReference Include="Microsoft.Azure.WebJobs.Extensions.NotificationHubs" Version="1.3.0" />
    <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="1.0.31" />
    <PackageReference Include="Newtonsoft.Json" Version="9.0.1" />
  </ItemGroup>

You may notice that I put an all-caps comment into the ItemGroup entry. As I am using a v1 Function, these are the latest packages that I am able to use. They are doing their job and allow us to bind the Function to the Azure NotificationHub in an easy way, which we are going to implement in the next post. I intentionally delayed updating the whole setup to a v2 Function at this point.

Processing the Webhook payload

In order to be able to process the payload (remember, we are getting a JSON string) from our WordPress Webhook, we need to deserialize it. Let’s create the class that holds all information about it:

using Newtonsoft.Json;

namespace NewPostHandler
{
    public class PublishedPostNotification
    {
        [JsonProperty("id")]
        [JsonConverter(typeof(StringToLongConverter))]
        public long Id { get; set; }

        [JsonProperty("title")]
        public string Title { get; set; }

        [JsonProperty("status")]
        public string Status { get; set; }

        [JsonProperty("featured_media")]
        public string FeaturedMedia { get; set; }
    }
}

The class is pretty straightforward; we will use this implementation as-is for the payload we are sending to Android later on. The use of the StringToLongConverter is optional. For completeness, here is the implementation:

using Newtonsoft.Json;
using System;

namespace NewPostHandler
{
    public class StringToLongConverter : JsonConverter
    {
        public override bool CanConvert(Type t) => t == typeof(long) || t == typeof(long?);

        public override object ReadJson(JsonReader reader, Type t, object existingValue, JsonSerializer serializer)
        {
            if (reader.TokenType == JsonToken.Null) return default(long);
            var value = serializer.Deserialize<string>(reader);
            if (long.TryParse(value, out var l))
            {
                return l;
            }

            return default(long);
        }

        public override void WriteJson(JsonWriter writer, object untypedValue, JsonSerializer serializer)
        {
            if (untypedValue == null)
            {
                serializer.Serialize(writer, null);
                return;
            }
            var value = (long)untypedValue;
            serializer.Serialize(writer, value.ToString());
            return;
        }

        public static readonly StringToLongConverter Instance = new StringToLongConverter();
    }
}

Now that we have prepared our data transfer object, it is time to finally have a look at the processor code.

[FunctionName("HandleNewPostHook")]
public static async Task<HttpResponseMessage> Run([HttpTrigger(AuthorizationLevel.Function, "post", Route = null)]HttpRequestMessage req, TraceWriter log)
{
    _log = log;
    _log.Info("arrived at 'HandleNewPostHook' function trigger.");

    //ignoring any query parameters, only using POST body

    PublishedPostNotification result = null;

    try
    {
        _jsonSerializerSettings = new JsonSerializerSettings()
        {
            MetadataPropertyHandling = MetadataPropertyHandling.Ignore,
            DateParseHandling = DateParseHandling.None,
            Converters =
            {
                new IsoDateTimeConverter { DateTimeStyles = DateTimeStyles.AssumeUniversal },
                StringToLongConverter.Instance
            }
        };

        _jsonSerializer = JsonSerializer.Create(_jsonSerializerSettings);

        using (var stream = await req.Content.ReadAsStreamAsync())
        {
            using (var reader = new StreamReader(stream))
            {
                using (var jsonReader = new JsonTextReader(reader))
                {
                    result = _jsonSerializer.Deserialize<PublishedPostNotification>(jsonReader);
                }
            }
        }

        if (result == null)
        {
            _log.Error("There was an error processing the request (serialization result is NULL)");
            return req.CreateResponse(HttpStatusCode.BadRequest, "There was an error processing the post body");
        }

       //subject of the next post
      //await TriggerPushNotificationAsync(result);
    }
    catch (Exception ex)
    {
        _log.Error("There was an error processing the request", ex);
        return req.CreateResponse(HttpStatusCode.BadRequest, "There was an error processing the post body");
    }

    if (result.Id != default)
    {
        _log.Info($"initiated processing of published post with id {result.Id}");
        return req.CreateResponse(HttpStatusCode.OK, "Processing new published post...");
    }
    else
    {
        _log.Error("There was an error processing the request (cannot process result Id with default value)");
        return req.CreateResponse(HttpStatusCode.BadRequest, "There was an error processing (postId not valid)");
    }
}

Let’s go through the code. The first thing I want to know is if we ever enter the Function, so I log the entrance. The second step is setting up the JsonSerializer to deserialize the payload into the DTO class I created before.

There are several scenarios that I am handling by returning different responses. Ideally, we run through and arrive at the TriggerPushNotificationAsync call, followed by the ‘OK‘ response if the post id received from our Webhook is valid. During testing, however, I ran into other situations as well, where I return a ‘Bad Request‘ response with a hint that something went wrong.

The implementation of the TriggerPushNotificationAsync method is not shown in this post as it will be subject of the next post in this series.

Testing the code locally

One of the reasons I chose to start the Function in Visual Studio is its ability to debug it locally. If you don’t have the necessary tools installed, Visual Studio will prompt you to do so. After installing them, you’ll be able to follow along.

Once the service is running, we will be able to test our function. If you haven’t already heard about it, Postman will be the easiest tool for that. Copy the function URL and paste it into the URL field in Postman. Next, add a sample JSON payload to the body (settings: raw, JSON) and hit the ‘Send’ button:
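A minimal test payload that matches the DTO from above could look like this (the real Webhook sends more fields, but these are the ones the Function deserializes – the id is sent as a string, which is exactly why we added the StringToLongConverter):

{
  "id": "12345",
  "title": "My test post",
  "status": "publish",
  "featured_media": "67890"
}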

If all goes well, Postman will give you a success response:

The Azure CLI will also write some output:

As you can see, all of our log entries were written to the CLI, plus some additional information from Azure itself. Don’t worry for the moment about the anonymous authorization state, this is just because we are running locally. In theory, we could already publish the function to Azure now. As we know that we will extend the Function in the next post, however, we will not do this right now.

Conclusion

As you can see, writing an Azure Function isn’t as complicated as it sounds. Visual Studio brings all the tools you need to get started pretty fast. The ability to test the Function code locally is another big advantage that comes with Visual Studio.

In the next post, we will configure the NotificationHub on Azure and extend our Function to call into it and fire the notifications.

Until the next post, happy coding, everyone!
Sending push notifications to your Xamarin app from WordPress with Azure, Part I [new series]

Overview

Choosing the “right” solution for sending push notifications isn’t easy if you have a WordPress blog. There are quite a few options to choose from, and the right one for you might differ from my decision. I am using the most generic solution – a Webhook that triggers an Azure Function, which triggers the notification via Azure Notification Hubs. This series will grow as follows:

  • Preparing your WordPress (blog/site) (this post)
  • Preparing the Azure Function and connecting the Webhook
  • Preparing the Notification Hub
  • Send the notification to Android
  • Send the notification to iOS
  • Adding in Xamarin.Forms

The app implementations are very platform-specific, but it is quite easy to integrate the post notifications in a Xamarin.Forms app (which will be the last post in this series). If you want to see the whole integration already in action, feel free to download my blog reader app:

WordPress plugins for the win

If you have a self-hosted blog like I do, you may know that the plugin ecosystem is there to help you with a lot of things that WordPress doesn’t have out of the box. While a WordPress-hosted site has Webhook integration without an additional plugin, we need one to create such a Webhook on a self-hosted WordPress blog. The plugin I am using is simply called “Notification” and can be found here.

To install the plugin, follow these simple steps:

  • choose “Plugins” on your WordPress dashboard
  • select “Add New” and type in “notification”
  • Hit the “Install Now” button
  • Activate the plugin

Exploring the options

Once you have installed and activated the plugin, you will have a new option in the dashboard menu. Let’s have a look at the options.

  • Notifications – this shows you a list of your currently active notifications
  • Add New Notification – lets you create a new notification
  • Extensions – the plugin allows you to extend your notifications with external services like Slack, Twitter or SendGrid to engage even more users. We do not need these for the webhook, however.
  • Settings – the control panel for the plugin – this is where we will be for the rest of this post

Enabling the Webhook

On the Settings page, select the “CARRIERS” option. The plugin uses so-called carriers to send out the notifications. By default, the Email carrier is active. I do not need this one for the moment, so I deactivated it and activated the Webhook carrier instead:

Setting Post Triggers

The next step is to verify we have the trigger for posts active:

You can modify the other triggers as well, but for the moment, I am focusing just on posts. I am thinking about integrating comments in the future, which will allow even more interaction from within my app.

Add a new notification

Let’s create our first notification. Select the “Add New Notification” action, which will bring up this page:

Select the “Add New Carrier” option and add the Webhook carrier:

Next, select the trigger for the Webhook. During development, I am using the draft-saving trigger, as it allows me to easily trigger a notification without annoying anyone:

This will enable the “Merge Tags” list on the right-hand side. To create the Webhook payload, we need to add some arguments (using the “Add argument” button). Tip: you can copy the merge tag by just clicking on it and paste it into the “Value” box:

Don’t forget to activate the JSON format – we do not want it to be sent as XML. Make sure the Carrier is enabled and hit the save button on the upper right.

Testing the Webhook

Now that we finished the setup of our Webhook, let’s test it. To do so, go to the “Settings” page again and select “DEBUGGING”. Check the “Enable Notification logging” box and click the “Save Changes” button:

To test the notification, just create a new blog post and save the draft. Go back to the “DEBUGGING” setting, where you will be presented with a new Notification log entry. Expanding this log entry, you will see some common data about the notification:

If you scroll a bit further down, you will see the payload of the Webhook. Sadly, you won’t get the raw JSON string, but a structured overview of the payload:

Verify that the payload contains all the data you need and adjust the settings if necessary. Once that is done, we are ready to go to the next blog post (coming soon).

Conclusion

In this post, I showed you how to create a Webhook that will trigger our upcoming Azure Function. Thanks to the “Notification” plugin, the process is pretty straight forward. In the next post, we will have a look at the Azure Function that will handle the Webhook.

Until the next post, happy coding, everyone!

MSicc’s Blog version 1.6.0 out now for Android and iOS

Here are the new features:

Push Notifications

With version 1.6.0 of the app, you can opt-in to receive push notifications once I publish a new blog post. I use an Azure Function (v1, for the ease of bindings – at least for now), and of course, an Azure NotificationHub. The Function gets called from a WebHook (via a plugin on WordPress), which triggers it to run (the next blog posts I write will be about how I achieved this, btw.)

New Design using Xamarin.Forms Shell

I also overhauled the design of the application. Initially, it was a MasterDetail app, but I never felt happy with that. Using Xamarin.Forms.Shell, I optimized the app to only show the last 30 posts I wrote. If you need older articles, you’ll be able to search for them within the app. The new design is a “v1” and will be constantly improved along with new features.

Bugs fixed in this release
  • fixed a bug where code snippets were not correctly displayed
  • fixed a bug where the app did not refresh posts after cleaning the cache
  • other minor fixes and improvements

I hope some of you will use the application and give me some feedback.

You can download the app using these links:
iOS | Android

Until the next post, happy coding, everyone!

Create an additional SSH-login enabled user for your Azure Linux VM without third-party tools

As I am moving forward in my current Linux journey, I recently came into a situation where a second user would have been handy. So I tried a few things to create the new user and allow them to log in only via SSH.

What the h*** is SSH?

SSH stands for Secure Shell and describes a protocol for connecting securely to a remote machine over an encrypted channel. The security is provided by cryptographic keys: the server only knows the public key, while the client that wants to connect needs the matching private key. The most popular implementation is OpenSSH, which has been available as an optional feature on Windows 10 since last fall. If you want to learn more about SSH, read the Wikipedia entry as well.

Create the SSH key pair on Windows

Install the OpenSSH client

First, install the OpenSSH client on your Windows machine. Proceed as follows:

  • open the start menu and type ‘apps’
  • select ‘Apps and features’ settings page
  • select ‘Optional features’
  • click on ‘Add a feature’
  • search ‘OpenSSH client’, click on it and ‘Install’

If you are scrolling down the list of installed features, you should find the entry for the OpenSSH client.

Note: If you are not able to install this feature, it may be the right time to update your Windows installation to the latest version.

Create your SSH key pair

If your user profile does not have a folder called ‘.ssh’, it is time to create it now. Type ‘%USERPROFILE%‘ in the Windows Explorer’s address bar to get to the right folder immediately and create the folder.

Now that we have the folder OpenSSH searches for, we are already able to create our new SSH keypair. Open a command prompt and type this:

ssh-keygen -t rsa -b 4096 -C "newuser@machine.com"

This will initiate the keypair creation. The -C parameter is optional and can be anything. You’ll often find these keys created with <user at address> combinations. After OpenSSH has created your keypair in memory, it will ask for a location to save the file. If you do not enter anything, it’ll save it as id_rsa into the .ssh folder created earlier. If you want to save it to another file name, you can do so:

C:\Users\username\.ssh/id_test

Please note that the file name (without any extension!) is separated by ‘/’, not by ‘\’. If you do not respect this, it will give you an error that the file name does not exist. This happens only if you don’t use the default filename. After the files are created, OpenSSH asks you for a passphrase to protect your private key. Nowadays, every single keypair should be password protected (just my 2cts). Once the creation is complete, you’ll see something like this:

Your identification has been saved in C:\Users\msicc\.ssh/id_test.
Your public key has been saved in C:\Users\msicc\.ssh/id_test.pub.
The key fingerprint is:
SHA256:8LK33fWKXMkY5FtcN4uU1v9SyCSUqKNp2T/PurpZCRU newuser@machine.com
The key's randomart image is:
+---[RSA 4096]----+
|          E...   |
|          .o. o  |
|      .  .. o+.oo|
|       oo. oo=.o=|
|      .=S.  o.=.o|
|      =o.. . * o.|
|     .. ..o o * .|
|       . =o+ + o |
|        =o+=* ...|
+----[SHA256]-----+

As you can see, we are able to create SSH keys without any third-party application on Windows. You can now safely close the console window (e.g. by typing ‘exit‘).

Deploying the public key to the server

Of course, the whole key pair thing only makes sense if we use it to secure our client/server communication. On Linux, there would be the easy-to-use ssh-copy-id command to deploy the key. Some Windows tutorials show the scp command, but I never got that working with my Azure VM. The only way left was to deploy it manually (which is not that difficult once you know how).

The manual way

After logging in (using Azure CLI), we are going to add a new user to the gang:

sudo useradd -m newuser

This will create a new user and its home directory. If the OS asks you for a password, it is up to you and the OS settings for empty passwords whether to provide one or not. The next step is to add the user to one or more groups if necessary:

sudo usermod -aG sudo newuser

The -aG parameter of the usermod command adds the specified group(s) to the user’s groups table. If you want to add the user to more than one group, separate them with a comma (no whitespace after the comma!).

To make things a little bit easier for us, we are going to log in as the new user:

sudo su newuser

Note: If you want to proceed without logging in as the new user, you’ll need to change ~ to /home/newuser for the following commands.

To make Linux accept our prior created SSH key only for our new user, we need to create the .ssh folder and the file for allowed keys in the new user’s home directory. Let’s start with the .ssh directory:

sudo mkdir ~/.ssh
sudo chmod 0700 ~/.ssh
sudo chown newuser:newuser ~/.ssh

Let’s break it down. Obviously, the mkdir command creates the .ssh folder. The chmod command with the 0700 parameter gives full access only to our new user. Finally, the chown command makes our new user the owner of the folder.

Linux saves the allowed keys in a file called ‘authorized_keys‘, so that’s the next file we are going to create. After creation, we change the file’s access rights to 0644, which makes it read-only for all users except our new user. Execute these commands:

sudo touch ~/.ssh/authorized_keys
sudo chmod 0644 ~/.ssh/authorized_keys

We’re getting closer… earlier, OpenSSH created two files in the .ssh folder on Windows. We are going to copy the contents of the .pub file into the authorized_keys file now. You can extract the content of the .pub file with Notepad on Windows. Once you have that one in your clipboard, open the authorized_keys file (using your favorite editor):

nano ~/.ssh/authorized_keys

Paste the content of your .pub file by right-clicking on the Azure CLI window. Save the file and close it. To secure the new user account, we need to apply some additional steps.

The first one is to delete and disable the password of our new user:

sudo passwd -d -l newuser

The -d parameter completely deletes the password. The -l parameter locks the account’s password, preventing the user from setting a new one (without using sudo, that is).

Additional security measures

Now that we have our SSH public key on our server, we are safe to disable password-based login in general. To do so, we need to modify the sshd_config file of our server. Open it with the editor of your choice:

sudo nano /etc/ssh/sshd_config

Search for PasswordAuthentication and ChallengeResponseAuthentication. Uncomment these entries if necessary by removing the ‘#‘ in front of the line and set them to ‘no‘. Some tutorials that are floating around the web tell you to even set UsePAM to no, but following this recommendation always disabled the login completely for me on the Azure VM, and I always had to reset the SSH config via the Azure CLI.

Save and exit the sshd_config file. For these changes to take effect, we need to restart the SSH service on our server:

sudo service ssh restart

Wait a few seconds and then verify that the service restarted by controlling its status:

systemctl status ssh.service

That’s it, we have deployed our new SSH key to our server and taken additional security measures. There’s just one thing left: trying to log in via SSH as the new user. This is a pretty easy task. In the Azure CLI (or a local command prompt), enter the following command:

 ssh -i %USERPROFILE%\.ssh\id_test newuser@machine.com

After entering your key’s passphrase, you should be logged in just like you did with your admin user.

Bonus: add a shared directory

More often than not, you might want to create scripts or other files that are available to all users. Follow these simple steps:

sudo groupadd shared
sudo usermod -aG shared newuser

sudo mkdir -p /var/helpers
sudo chgrp -R shared /var/helpers
sudo chmod -R 2775 /var/helpers

The first two lines create a new group and assign our new user to the group. Repeat the second command for every user you want to be in that group.

Then we create the shared folder /helpers in the /var folder of our server. Utilizing the chgrp command, we give the ownership to our shared group. Last but not least, we modify the access rights once again for the /var/helpers folder. The combination 2775 means that every new file inherits the group from the folder, allowing all members of the group to read, write and execute the file, while users outside the shared group can only read and execute it.

Conclusion

As you can see, one does not always have to use third-party tools to get things done. Like I said at the beginning of this post, once you know the steps that are needed, it is pretty easy to create a new SSH key pair, create a new user and manually deploy the public key to the server. As always, I hope this post is helpful for some of you.

Helpful links

Title Image Credit (Pixabay)

Run your own Bitcoin Full Node on an Azure Linux VM

One year ago, I began my journey into the crypto and blockchain area. Recently, multiple circumstances got me thinking about running my own crypto-related server. Of course, I chose Azure for running this server (for the time being). It has been a while since I last touched Linux, so I had quite a bit to refresh and learn. In this post, I’ll show you the steps that are needed to set up an independent full node to support the Bitcoin network.

Setting up the Virtual Machine on Azure

First, we need to install the Azure CLI on our computer. We will use this one to connect to our virtual machine later on via SSH. Follow the instructions found here.

The second prerequisite is a program to generate SSH keys. You can use either the OpenSSH client shipping with Windows 10 (latest versions), the Azure CLI, or PuTTY (follow these instructions).

Create the VM

Once you have installed the CLI and your SSH keys are created, log into your Azure account. Go to the marketplace, and search for ‘ubuntu‘. Choose Ubuntu Server 18.04 LTS and hit the ‘Create‘ button in the next window.

Fill in the details of your Azure VM on the first page of the creation module:

Do not forget to set the SSH admin user, as adding one afterward is not as easy as it seems and often fails (I learned that the hard way). Also, we need to allow traffic through the default HTTP (80) and SSH (22) ports. Once you have configured everything, go to disks.

I did not select premium disks but instead went with a Standard HDD to save some money. You can change that to your needs. The important part here is to NOT use managed disks as we will need to resize the OS disk after the creation of the VM. Follow the rest of the steps in the creation wizard and create your virtual machine. Once the machine is created, you should create a DNS label (hit the ‘Configure’ link at the VM’s overview page to get to the IP settings).

Log in via SSH

Let’s try if we can log in to our Linux VM via the Azure CLI. Open the ‘Microsoft Azure Command Prompt‘ on your PC. To be able to connect to our virtual machine, we need to log in to Azure first to obtain an access token for our session:

az login 

This will open a new browser tab, where you need to log in to your account again. After that, we will be redirected back to the CLI. Once that happened, go back to the overview page of your VM and click on ‘Connect’. This will open a pane where we will see the RDP and SSH connection option. Select SSH and copy the text below ‘Login using VM local account‘:

Paste it into the Azure Command Prompt window and provide your password (you should never use a password-less SSH key). If your screen looks now similar to this, you have successfully logged in:

Now that we have verified that we are able to log in via SSH, type exit to log out again as we have one step left to perform on the virtual machine.

Resizing the OS disk

A Bitcoin full node needs to download and verify the whole blockchain. The current size of the blockchain is around 250 GB, which would never fit in the default size of the OS disk of our VM. Luckily, it is pretty easy to resize the OS disk by running some commands in the Azure CLI.

First, stop the virtual machine:

az vm stop --resource-group YourResourceGroupName --name YourVMName 

Resizing the OS disk needs the VM to be deallocated (this may take some minutes):

az vm deallocate --resource-group YourResourceGroupName --name YourVMName

Once deallocation has finished, we are able to resize the OS disk with this command:

az vm update --resource-group YourResourceGroupName --name YourVMName --set storageProfile.osDisk.diskSizeGB=1024

Once you see the new size in the returned response from Azure, we can start the VM again:

az vm start --resource-group YourResourceGroupName --name YourVMName 

Depending on the distribution you are using, you may have to perform additional steps. Ubuntu, however, mounts the new disk size without any additional action. Verifying the new disk size is pretty easy (after logging back in via SSH), as the System Information displayed after login should already reflect the change (like in the screen above).

Preparing the Bitcoin node

After completing all the steps above, we are finally able to move on with the preparations for the Bitcoin node.

Bitcoin service user

As we will run the node as a service, we need an unprivileged service account:

sudo useradd -m -s /dev/null bitcoin

Great, we just created the account, including the home directory for the service user and a default shell entry. If you want to see the system’s response, just leave out the /dev/null part when running the command.

As we are going to restrict the access to the service to the bitcoin user (and its default group, which is also bitcoin), we need to add our admin user to the group. Run the following command to do so:

sudo usermod -a -G bitcoin [admin-user]

In order to make these changes (especially the group add) persistent, we need to perform a restart before we move on. You can not only use the Azure portal or CLI, but also use this command to make the VM restart immediately:

sudo shutdown -r now

This will log you out of the current SSH session. After waiting one minute or two, just log back in to continue the preparation of the full node.

Downloading and Verifying Bitcoin binaries

The next step involves downloading the Bitcoin Core package and its signature file (as we don’t trust, but verify). Run these commands (you will need to hit enter a second time after the first download has finished); I also created a temp directory for the download and other stuff:

mkdir ~/temp
cd ~/temp
btc_version="0.18.1"
wget https://bitcoincore.org/bin/bitcoin-core-$btc_version/bitcoin-$btc_version-x86_64-linux-gnu.tar.gz
wget https://bitcoincore.org/bin/bitcoin-core-$btc_version/SHA256SUMS.asc

According to Bitcoin.org, the latest releases are signed with Wladimir J. van der Laan’s releases key, whose fingerprint we are going to use to verify the downloaded binaries. It is recommended to do this for all crypto-related binaries you’re downloading, no matter on which OS.

Let’s try to import the key of Mr. van der Laan:

gpg --receive-key 0x01EA5486DE18A882D4C2684590C8019E36C2E964

You may get an error message telling you there was a server failure. In this case, run the following command to import the key:

gpg --keyserver hkp://keyserver.ubuntu.com:80 --receive-key 0x01EA5486DE18A882D4C2684590C8019E36C2E964

If you still get errors, there might be some missing packages or other reasons for the server failure. I managed to come through with the second command more often than the first, but in the end, I had the key in my local key store.

Now let’s verify the downloaded files:

gpg --verify SHA256SUMS.asc
sha256sum --ignore-missing -c SHA256SUMS.asc

If the command tells you that the signature is good and indeed from Mr. van der Laan, everything is fine with the hash file. The second command verifies the archive we downloaded earlier and should result in ‘OK’. If not, you should immediately delete those files as they might contain malware.

Note: I have read quite a few comments on Reddit and other sites saying that we can safely ignore the gpg warning about the key not being certified with a trusted signature …

Installing Bitcoin binaries

Now that we have our bitcoin service user and have verified the Bitcoin binaries, we are finally at the point where we can install them:

tar zxf bitcoin-$btc_version-x86_64-linux-gnu.tar.gz
pushd bitcoin-$btc_version/bin; sudo cp bitcoind bitcoin-cli /usr/bin; popd;
bitcoind --version

After the version string confirms the installation, it is good practice to remove both the downloaded archive and the hash file. If you need to reinstall, perform the steps above again to verify the authenticity of the files. Run these commands to remove those files:

rm bitcoin-$btc_version-x86_64-linux-gnu.tar.gz
rm SHA256SUMS.asc

Create a service config file

Now that we have Bitcoin installed, we need to prepare a .conf file for our upcoming service:

vi bitcoin.conf  
or
nano bitcoin.conf

This will open a text editor on Linux. Enter the base config to get the RPC server of the Bitcoin daemon (needed for bitcoin-cli) activated and save the file to the temp directory (press ‘ESC’, then ‘:’ and write ‘wq‘ if you used vi; press ‘CTRL’+’X’ followed by ’Y’ in nano):

testnet=0
regtest=0
mainnet=1
# Global Options
server=1 #activating rpc
rpcconnect=127.0.0.1 #default
rpcport=8332 #default
rpcallowip=127.0.0.1/32 #default
rpcbind=127.0.0.1  #default
disablewallet=1 #keeping wallet off (atm)
daemon=1
# Options only for mainnet
[main]
# Options only for testnet
[test]
# Options only for regtest
[regtest]

Now we just need to copy that file into the /etc/bitcoin folder. If this folder does not yet exist, create it with the following command:

sudo mkdir -p /etc/bitcoin

Next, copy the bitcoin.conf file to it:

sudo cp bitcoin.conf /etc/bitcoin

Assign ownership to the bitcoin service user and make it readable for all users:

sudo chown bitcoin:bitcoin /etc/bitcoin/bitcoin.conf
sudo chmod 0664 /etc/bitcoin/bitcoin.conf

The last step is to create a directory for the daemon where we will find the PID (Process ID) file after starting the service:

sudo mkdir -p /run/bitcoind/
sudo chmod 0755 /run/bitcoind/
sudo chown bitcoin:bitcoin /run/bitcoind/

Create the Bitcoin service

Now we are able to set up the core service of our node, the Bitcoin daemon service. To get started, just copy and paste the service file found in the Bitcoin GitHub repository into a new file on your VM. Once you have that file in your temp folder, copy it over to the system’s services folder:

sudo cp bitcoind.service /lib/systemd/system

Now we need to enable the service to make it automatically starting up on reboot:

sudo systemctl enable bitcoind

And finally, we need to start the service (or reboot the machine if you want to test that part):

sudo systemctl start bitcoind

After some time, you should be able to use bitcoin-cli to get some info about your local blockchain copy:

sudo bitcoin-cli -rpccookiefile=/var/lib/bitcoind/.cookie -datadir=/var/lib/bitcoind -getinfo

Please note you need to use sudo because the owner of the service and the files is our bitcoin user we created earlier. If you don’t want to always pass the authentication cookie path, you can create a symbolic link to your current user’s .bitcoin folder:

sudo ln -s /var/lib/bitcoind/.cookie ~/.bitcoin/.cookie

Now we are able to just call sudo bitcoin-cli -getinfo. Another way of checking if the service and the daemon are running is to read the log that gets generated:

sudo tail /var/lib/bitcoind/debug.log -f 

You should see the log scrolling through as it writes new entries. Most entries will look like this:

2019-09-07T14:54:51Z UpdateTip: new best=000000000000004fa323c7ee57b4b22272c7ea757a6a5bdb53dbda73572f559d height=239240 version=0x00000002 log2_work=70.203272 tx=18819570 date='2013-06-02T08:32:35Z' progress=0.041989 cache=675.7MiB(5032783txo)

If you arrived at this point

Congratulations! You are running a Bitcoin full node (without wallet for the time being, though). It will take some time to download and verify the whole blockchain, but you are now effectively helping and securing the Bitcoin network.

This is just the first post about my journey with Bitcoin and my own node. Make sure to follow for future blog posts. As always, I hope this post will be helpful for some of you.

Helpful links

Note: this post was reposted on my Trybe.one account.

Use NuGets for your common Xamarin (Forms) code (and automate the creation process)

Internal libraries

Writing (or copy and pasting) the same code over and over again is one of those things I try to avoid when writing code. For quite some time, I already organize such code in libraries. Until last year, this required quite some work managing all libraries for each Xamarin platform I used. Luckily, the MSBuild SDK Extras extensions showed up and made everything a whole lot easier, especially after James Montemagno did a detailed explanation on how to get the most out of it for Xamarin plugins/libraries.

Getting started

Even if I repeat some of the steps of James’ post, I’ll start from scratch on the setup part here. I hope to make the whole process straightforward for everyone – that’s why I think it makes sense to show each and every step. Please make sure you are using the new .csproj type. If you need a refresher on that, you can check my post about migrating to it (if needed).

MSBuild.Sdk.Extras

The first step is pulling in MSBuild.Sdk.Extras, which will enable us to target multiple platforms in one single library. For this, we need a global.json file in the solution folder. Right click on the solution name and select ‘Open Folder in File Explorer‘, then just add a new text file and name it appropriately.

The next step is to define the version of the MSBuild.SDK.Extras library we want to use. The current version is 1.6.65, so let’s define it in the file. Just click the ‘Solution and Folders‘ button to find the file in Visual Studio:

switch to folder view

Add these lines into the file and save it:

{
  "msbuild-sdks": {
    "MSBuild.Sdk.Extras": "1.6.65"
  }
}

Modifying the project file

Switch back to the Solution view and right click on the .csproj file. Select ‘Edit [ProjectName].csproj‘. Let’s modify and add the project definitions. We’ll start right in the first line. Replace the first line to pull in the MSBuild.Sdk.Extras:

<Project Sdk="MSBuild.Sdk.Extras">

Next, we’re separating the Version tag. This will ensure that we’ll find it very quickly in future within the file:

  <!--separated for accessibility-->
  <PropertyGroup>
    <Version>1.0.0.0</Version>
  </PropertyGroup>

Now we are enabling multiple targets, in this case our Xamarin platforms. Please note that there are two separate conditional versions – one that includes UWP and one that does not. I thought I would be fine removing the non-UWP one if I include UWP, and was presented with some strange build errors that were resolved only by re-adding the deleted line. I do not remember the reason, but I made a comment in my template to not remove it – so let’s just keep it that way.

  <!--make it multi-platform library!-->
  <PropertyGroup>
    <UseFullSemVerForNuGet>false</UseFullSemVerForNuGet>
    <!--we are handling compile items ourselves below with a custom naming scheme-->
    <EnableDefaultCompileItems>false</EnableDefaultCompileItems>
    <!--KEEP ALL THREE IF YOU ADD UWP!-->
    <TargetFrameworks></TargetFrameworks>
    <TargetFrameworks Condition=" '$(OS)' == 'Windows_NT' ">netstandard2.0;MonoAndroid81;Xamarin.iOS10;uap10.0.16299;</TargetFrameworks>
    <TargetFrameworks Condition=" '$(OS)' != 'Windows_NT' ">netstandard2.0;MonoAndroid81;Xamarin.iOS10;</TargetFrameworks>
  </PropertyGroup>

Now we will add some default NuGet packages into the project and make sure our file get included only on the correct platform. We follow a simple file naming scheme (Xamarin.Essentials uses the same):

[Class].[platform].cs

This way, we are able to add all platform-specific code together with the shared entry point in a single folder. Let’s start with the shared items. These will be available on all platforms listed in the PropertyGroup above:

  <!--shared items-->
  <ItemGroup>
    <!--keeping this one ensures everything goes smooth-->
    <PackageReference Include="MSBuild.Sdk.Extras" Version="1.6.65" PrivateAssets="All" />

    <!--most commonly used (by me)-->
    <PackageReference Include="Xamarin.Forms" Version="3.4.0.1029999" />
    <PackageReference Include="Xamarin.Essentials" Version="1.0.1" />

    <!--include content, exclude obj and bin folders-->
    <None Include="**\*.cs;**\*.xml;**\*.axml;**\*.png;**\*.xaml" Exclude="obj\**\*.*;bin\**\*.*;bin;obj" />
    <Compile Include="**\*.shared.cs" />
  </ItemGroup>

The ‘**\‘ part in the Include attribute of the Compile tag ensures MSBuild also includes classes in subfolders. Now let’s add some platform-specific rules to the project:

  <ItemGroup Condition=" $(TargetFramework.StartsWith('netstandard')) ">
    <Compile Include="**\*.netstandard.cs" />
  </ItemGroup>

  <ItemGroup Condition=" $(TargetFramework.StartsWith('uap10.0')) ">
    <PackageReference Include="Microsoft.NETCore.UniversalWindowsPlatform" Version="6.1.9" />
    <Compile Include="**\*.uwp.cs" />
  </ItemGroup>

  <ItemGroup Condition=" $(TargetFramework.StartsWith('MonoAndroid')) ">
    <!--need to reference all those libs to get latest minimum Android SDK version (requirement by Google)... #sigh-->
    <PackageReference Include="Xamarin.Android.Support.Annotations" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.Compat" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.Core.Utils" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.CustomTabs" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.v4" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.Design" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.v7.AppCompat" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.v7.CardView" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.v7.Palette" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.v7.MediaRouter" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.Core.UI" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.Fragment" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.Media.Compat" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.v7.RecyclerView" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.Transition" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.Vector.Drawable" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.Vector.Drawable" Version="28.0.0.1" />
    <Compile Include="**\*.android.cs" />
  </ItemGroup>

  <ItemGroup Condition=" $(TargetFramework.StartsWith('Xamarin.iOS')) ">
    <Compile Include="**\*.ios.cs" />
  </ItemGroup>

Two side notes:

  • Do not reference version 6.2.2 of the Microsoft.NETCore.UniversalWindowsPlatform NuGet package. There seems to be a bug in it that will lead to your app being rejected from the Microsoft Store. Just stay on 6.1.9 (for the moment).
  • You may not need all of the Xamarin.Android packages, but there are a bunch of dependencies between them and other packages, so I decided to keep them all.

If you have followed along, hit the save button and close the .csproj file. Verifying everything went well is pretty easy – your solution structure should look like this:

(screenshot: multi-targeting project structure)

Before we have a look at the NuGet creation part of this post, let’s add some sample code. Just insert this into a static partial class for every platform, following the naming scheme above, and edit the code to match the platform. The .shared version of this class should stay empty (for this sample).

    // platform specific version, e.g. Hello.android.cs (the Device class comes from Xamarin.Forms)
    public static partial class Hello
    {
        public static string Name { get; set; }

        public static string Platform { get; set; }

        public static void Print()
        {
            if (!string.IsNullOrEmpty(Name) && !string.IsNullOrEmpty(Platform))
                System.Diagnostics.Debug.WriteLine($"Hello {Name} from {Platform}");
            else
                System.Diagnostics.Debug.WriteLine($"Hello unknown person from {Device.Android}");
        }
    }

Normally, this would be a Renderer or other platform specific code. You should get the idea.

Preparing NuGet package creation

We will now prepare our solution to automatically generate NuGet packages for both the DEBUG and RELEASE configurations. Once the packages are created, we will push them to a local (or network) file folder, which serves as our local NuGet server. This will fit most indie developers, who tend not to replicate a full-blown enterprise infrastructure for their DevOps needs. I will also mention how you could push the packages to an internal NuGet server as a side note (we are using a similar setup at work).

Adding NuGet Push configurations

One thing we want to make sure of is that we are not pushing packages on every compilation of our library. That’s why we need separate configurations. To add new configurations, open the Configuration Manager in Visual Studio:

In the Configuration Manager dialog, select the ‘<New…>‘ option from the ‘Active solution configuration‘ ComboBox:

Name the new configuration to fit your needs; I just use DebugNuget, which signals that we are pushing the NuGet package for distribution. I copy the settings from the Debug configuration and let Visual Studio add the new configuration to the project files within the solution. Repeat the same for the Release configuration.

The result should look like this:

Modifying the project file (again)

If you head over to your project file, you will see the Configurations tag has new entries:

  <PropertyGroup>
    <Configurations>Debug;Release;DebugNuget;ReleaseNuget</Configurations>
  </PropertyGroup>

Next, add the properties of your assembly and package:

  <!--assembly properties-->
  <PropertyGroup>
    <AssemblyName>XamarinNugets</AssemblyName>
    <RootNamespace>XamarinNugets</RootNamespace>
    <Product>XamarinNugets</Product>
    <AssemblyVersion>$(Version)</AssemblyVersion>
    <AssemblyFileVersion>$(Version)</AssemblyFileVersion>
    <NeutralLanguage>en</NeutralLanguage>
    <LangVersion>7.1</LangVersion>
  </PropertyGroup>

  <!--nuget package properties-->
  <PropertyGroup>
    <PackageId>XamarinNugets</PackageId>
    <PackageLicenseUrl>https://github.com/MSiccDevXamarinNugets</PackageLicenseUrl>
    <PackageProjectUrl>https://github.com/MSiccDevXamarinNugets</PackageProjectUrl>
    <RepositoryUrl>https://github.com/MSiccDevXamarinNugets</RepositoryUrl>

    <PackageReleaseNotes>Xamarin Nugets sample package</PackageReleaseNotes>
    <PackageTags>xamarin, windows, ios, android, xamarin.forms, plugin</PackageTags>

    <Title>Xamarin Nugets</Title>
    <Summary>Xamarin Nugets sample package</Summary>
    <Description>Xamarin Nugets sample package</Description>

    <Owners>MSiccDev Software Development</Owners>
    <Authors>MSiccDev Software Development</Authors>
    <Copyright>MSiccDev Software Development</Copyright>
  </PropertyGroup>

Configuration specific properties

Now we will add some configuration specific PropertyGroups that control whether a package will be created.

Debug and DebugNuget

  <PropertyGroup Condition=" '$(Configuration)'=='Debug' ">
    <DefineConstants>DEBUG</DefineConstants>
    <!--making this pre-release-->
    <PackageVersion>$(Version)-pre</PackageVersion>
    <!--needed for debugging!-->
    <DebugType>full</DebugType>
    <DebugSymbols>true</DebugSymbols>
  </PropertyGroup>

  <PropertyGroup Condition=" '$(Configuration)'=='DebugNuget' ">
    <DefineConstants>DEBUG</DefineConstants>
    <!--enable package creation-->
    <GeneratePackageOnBuild>true</GeneratePackageOnBuild>
    <!--making this pre-release-->
    <PackageVersion>$(Version)-pre</PackageVersion>
    <!--needed for debugging!-->
    <DebugType>full</DebugType>
    <DebugSymbols>true</DebugSymbols>
    <GenerateDocumentationFile>false</GenerateDocumentationFile>
    <!--this makes msbuild creating src folder inside the symbols package-->
    <IncludeSource>True</IncludeSource>
    <IncludeSymbols>True</IncludeSymbols>
  </PropertyGroup>

The Debug configuration enables us to step into the debug code while we are referencing the project directly during development, while the DebugNuget configuration will additionally generate a NuGet package including source and symbols. This is helpful once you find a bug in the NuGet package, because it allows us to step into that code even when we reference the NuGet package instead of the project. Both configurations add ‘-pre‘ to the version, so these packages only appear if you tick the ‘Include prerelease‘ CheckBox in the NuGet Package Manager.

Release and ReleaseNuget

  <PropertyGroup Condition=" '$(Configuration)'=='Release' ">
    <DefineConstants>RELEASE</DefineConstants>
    <PackageVersion>$(Version)</PackageVersion>
  </PropertyGroup>

  <PropertyGroup Condition=" '$(Configuration)'=='ReleaseNuget' ">
    <DefineConstants>RELEASE</DefineConstants>
    <PackageVersion>$(Version)</PackageVersion>
    <!--enable package creation-->
    <GeneratePackageOnBuild>true</GeneratePackageOnBuild>
    <!--include pdb for analytic services-->
    <DebugType>pdbonly</DebugType>
    <GenerateDocumentationFile>true</GenerateDocumentationFile>
  </PropertyGroup>

The release configuration gets by with fewer settings. We do not generate a separate symbols package here, as the .pdb file without the source will do in most cases.
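If you do want symbols for the release package as well, newer NuGet tooling can put them into a separate .snupkg symbols package. A minimal sketch, assuming your NuGet tooling and your feed actually support that format:

  <PropertyGroup Condition=" '$(Configuration)'=='ReleaseNuget' ">
    <!--optional: separate symbols package (requires newer NuGet tooling and a feed that accepts .snupkg)-->
    <IncludeSymbols>true</IncludeSymbols>
    <SymbolPackageFormat>snupkg</SymbolPackageFormat>
  </PropertyGroup>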

Adding Build Targets

We are close to finishing our implementation. Of course, we want to make sure we push only the latest packages. To ensure this, we clean all previously generated NuGet packages before we build the project/solution:

  <!--cleaning older nugets-->
  <Target Name="CleanOldNupkg" BeforeTargets="Build">
    <ItemGroup>
      <FilesToDelete Include="$(ProjectDir)$(BaseOutputPath)$(Configuration)\$(AssemblyName).*.nupkg"></FilesToDelete>
    </ItemGroup>
    <Delete Files="@(FilesToDelete)" />
    <Message Text="Old nupkg in $(ProjectDir)$(BaseOutputPath)$(Configuration) deleted." Importance="High"></Message>
  </Target>

MSBuild provides a lot of configuration options here. We are setting the BeforeTargets property of the target to Build, so whenever we Clean/Build/Rebuild, all old packages will be deleted by the Delete command. Finally, we print a message to confirm the deletion.

Pushing the packages

After completing all the steps above, we are ready to distribute our packages. In our case, we copy the packages to a local folder with the Copy command.

  <!--pushing to local folder (or network path)-->
  <Target Name="PushDebug" AfterTargets="Pack" Condition="'$(Configuration)'=='DebugNuget'">
    <ItemGroup>
      <PackageToCopy Include="$(ProjectDir)$(BaseOutputPath)$(Configuration)\$(AssemblyName).*.symbols.nupkg"></PackageToCopy>
    </ItemGroup>
    <Copy SourceFiles="@(PackageToCopy)" DestinationFolder="C:\TempLocNuget" />
    <Message Text="Copied '@(PackageToCopy)' to local Nuget folder" Importance="High"></Message>
  </Target>

  <Target Name="PushRelease" AfterTargets="Pack" Condition="'$(Configuration)'=='ReleaseNuget'">
    <ItemGroup>
      <PackageToCopy Include="$(ProjectDir)$(BaseOutputPath)$(Configuration)\$(AssemblyName).*.nupkg"></PackageToCopy>
    </ItemGroup>
    <Copy SourceFiles="@(PackageToCopy)" DestinationFolder="C:\TempLocNuget" />
    <Message Text="Copied '@(PackageToCopy)' to local Nuget folder" Importance="High"></Message>
  </Target>

Please note that the local folder could be replaced by a network path. You have to ensure the availability of that path yourself – which adds some additional work if you choose this route.
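Inside the targets above, such a Copy line could look like the following sketch – the share name is of course hypothetical, and the Exists() condition simply skips the copy when the share is not reachable:

    <!--hypothetical network share as target; Exists() skips the copy if the share is offline-->
    <Copy SourceFiles="@(PackageToCopy)" DestinationFolder="\\MYSERVER\LocalNuget" Condition="Exists('\\MYSERVER\LocalNuget')" />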

If you’re running a full NuGet server (as often happens in Enterprise environments), you can push the packages with this command (instead of the Copy command):

<Exec Command="NuGet push "$(ProjectDir)$(BaseOutputPath)$(Configuration)\$(AssemblyName).*.symbols.nupkg" [YourPublishKey] -Source [YourNugetServerUrl]" />

The result

If we now select the DebugNuget/ReleaseNuget configuration, Visual Studio will create our NuGet package and push it to our NuGet folder/server:

Let’s have a look into the NuGet package as well. Open the file location defined above and search for your package:

As you can see, the Copy command executed successfully. To inspect NuGet packages, you need the NuGet Package Explorer app. Once installed, just double click the package to view its contents. Your result should be similar to this for the DebugNuget package:

As you can see, we have both the .pdb files and the source in our package, as intended.

Conclusion

Even as an indie developer, you can take advantage of the DevOps options that Visual Studio and MSBuild provide. The MSBuild.Sdk.Extras package enables us to maintain a multi-targeting package for our Xamarin(.Forms) code. The whole process needs some setup, but once you have performed the steps above, extending your libraries is pretty straightforward.

I had been planning to write this post for quite some time, and I am happy to publish it as my contribution to #XamarinMonth (initiated by Luis Matos). As always, I hope this post is helpful for some of you. Feel free to clone and play with the full sample I uploaded to GitHub.

Until the next post, happy coding, everyone!


P.S. Feel free to download the official app for my blog (that uses a lot of what I am blogging about):
iOS | Android | Windows 10
