What I’ve learned from porting my first app ever to Android and iOS with Xamarin

What’s the app about?

The app is about fishing knots. It sounds boring to most people, but for me, this app is what made me become a developer, so I have a somewhat emotional connection to it. It was back when Windows Phone 7 was new and hot: a shiny new OS from Microsoft, but clearly lacking the loads of apps that were available on iOS and Android. Around that time, I also managed to get my fishing license in Germany.

As I had a hard time remembering how to tie fishing knots, I searched the store and found… nothing. I got very angry over that fact, partly because it meant I had to use one of the static websites of that era, but more because of that damn app gap (WP7 users will remember). So I finally decided to learn how to write code for Windows Phone, and after some intense months of self-study, I wrote my first app ever.

Why porting it?

Writing code soon became my favorite spare-time activity, effectively replacing fishing. As the years went on, I made some more apps (most of them for Windows Phone) and also managed to get employed as a developer. After some time, Satya Nadella became the CEO of Microsoft, and Windows for mobile phones was dead. All the “babies” I had created were now set to die along with Windows Phone/Mobile. Not accepting this, I made a plan to port my apps over to the remaining mobile platforms. After Facebook effectively killed my most successful app (UniShare – that’s another story, though), I stopped porting that one and started with Fishing Knots +.

Reading your own old code (may) hurt

When I started to analyze which parts of the code I could reuse, I was kind of shocked. Of course, I knew there was code I wrote back when I didn’t know better, but I had refused to look into it and refactor it (for obvious reasons, from today’s perspective). I violated a lot of best practices back then, the most prominent ones being:

  • No MVVM
  • Repeating myself over and over again
  • Monster methods with more than 100 lines in them

In the end, I did the only right thing and did not reuse a single line of my old code.

Reusing the concept without reusing old code

After making the right decision not to reuse my old codebase, I needed to abstract the concept from the old app and translate it into what I now know about best practices and MVVM. I did not immediately start with the implementation, however.

The first thing I did was draw the concept on a piece of paper. I used a no-code language in that sketch and asked my family if they understood the idea behind the app (you could also ask your non-tech friends). This approach helped me identify the top 3 features of the app:

  • Controllable animation of each knot
  • Easy-to-follow 3-step instructions for each knot
  • Read-Aloud function of the instructions

Having defined the so-called “Minimum Viable Product”, I was ready to think about the implementation(s).

The new implementation

Finding the right implementation isn’t always straightforward. The first thing I wrote was the custom control that powers the controllable animation behind the scenes. I wrote it outside of the app’s context in a separate solution, as I packed it into a NuGet package after finishing it. It also turned out to be the most complex part of the whole app. It uses a common API in Xamarin.Forms and custom renderers for Android and iOS. I had to go that route for performance reasons – which is one of the learnings I took away from the porting.

It was also clear that I would use the MVVM pattern in the new version. So I set up some basic things using my own NuGet packages, which I wrote while working on other Xamarin-based projects.

When it came to the overall structure of my app, I thought a Master/Detail implementation would be fine. However, somehow this never felt right, and so I turned to Shell (which was pretty new at the time, so I gave it a shot). In the end, I went with a more custom approach. The app uses a TabbedPage with 3 tabs: one for the animation, the second for the 3-step tutorial, and last but not least the Settings/About page. The first two pages share a custom top-down menu implementation, bound to the same ViewModel for its items and selection.
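A rough sketch of that structure could look like the following; the class names are made up for this example, and placeholder ContentPages keep it short:

using Xamarin.Forms;

// minimal sketch of the tab structure described above - page names are
// hypothetical, the real implementation is of course more elaborate
public class MainTabbedPage : TabbedPage
{
    public MainTabbedPage()
    {
        // hypothetical ViewModel shared by the first two tabs
        // for the knot menu items and the current selection
        var sharedKnotMenuViewModel = new object();

        Children.Add(new ContentPage { Title = "Animation", BindingContext = sharedKnotMenuViewModel });
        Children.Add(new ContentPage { Title = "3-Step Tutorial", BindingContext = sharedKnotMenuViewModel });
        Children.Add(new ContentPage { Title = "Settings/About" });
    }
}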

More Xamarin.Forms features I learned (to love)

Xamarin and Xamarin.Forms themselves are already powerful and have matured a lot since I used them to write my first Xamarin app for Telefónica Germany. Here is a (high-level) list of features I started to use:

  • Xamarin.Essentials – the one library that kickstarts your application – seriously! (see the sketch after this list)
  • Xamarin Forms Animations – polish the appearance of your app with some nice looking visual activity within the UI
  • Xamarin Forms Effects – easily modify/enhance existing controls without creating a full-blown custom renderer
  • Xamarin Forms VisualStateManager – makes it (sometimes) a whole lot easier to change the UI based on property changes
  • Xamarin.Forms Triggers – alternative approach to modify the UI based on property changes (but not limited to that)
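To give you a small taste of Xamarin.Essentials, here is a minimal sketch of two of its APIs; the preference key and values are made up for this example:

using Xamarin.Essentials;

public static class EssentialsSample
{
    // hypothetical helper showing two Xamarin.Essentials APIs in one place
    public static string BuildStartupInfo()
    {
        // simple cross-platform key/value storage
        var lastKnot = Preferences.Get("LastSelectedKnot", "none");

        // device information without touching any platform-specific code
        return $"{DeviceInfo.Manufacturer} {DeviceInfo.Model}, last knot: {lastKnot}";
    }
}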

The three musketeers

Because Xamarin and Xamarin.Forms are such powerful tools, you may run into situations where you need help or more information. These are my three musketeers for getting missing information, implementation help or solution ideas:

  • Microsoft Xamarin Docs – the docs for Xamarin are pretty extensive, and by reading them (even again), I often had one of these “gotcha!” moments
  • GitHub – if the docs don’t help, GitHub may. Be it in the issues of Xamarin(.Forms) or by studying the renderers, GitHub has been as helpful to me as the docs.
  • Web Search – chances are high that someone has already solved a similar problem/idea and wrote a blog post about it. I don’t blindly copy those solutions. First I read them, then I try to understand them, and finally I implement my own abstraction of them. This way, I am in a steady learning process.

Learn to understand native implementations

I guarantee you will run into situations where the musketeers do not help if you focus solely on Xamarin. Accept that Xamarin sits on top of the native code of others and does the heavy conversion work for us. Learn to read Objective-C, Swift, Java and Kotlin code and to translate it into C#. Once you find possible solutions in one of the native samples, blog posts or docs, you will see that most of them are easy to translate into Xamarin code. Do not avoid this part of Xamarin development; it will help you in the future, trust me.

Conclusion

Porting my first app ever over to Android and iOS not only provided a lot of fun but also a huge amount of learning and practice. Some of the learnings are of a behavioral nature, some are code implementations. This post is about the behavioral part – I will write about some of the implementations in my upcoming blog posts.

I hope you enjoyed reading this post. If you have questions, or have similar experiences you want to discuss, feel free to leave a comment on this post or connect with me via social media.

Until the next post, happy coding!


How to work around IDE freezes in Visual Studio 2019 when switching between Build configurations

After a lot of project unloading, deleting our custom configurations back and forth, and project file modifications, we finally found a workaround that let us continue our work, by browsing issues similar to ours in the VS feedback forums. The freeze of the solution is caused by a feature that is meant to speed up solution loading: parallel project initialization.

It seems to be an issue that some others already had with 16.2.x versions, so it may be a regression (as it is flagged as fixed).

The workaround

The workaround is pretty easy, just follow these simple steps:

  • Open Tools / Options in Visual Studio 2019 and find the Projects and Solutions node
  • Unselect ‘Allow parallel project initialization’
  • Click ‘OK’ and close the solution

After that, we need to delete the .vs folder that Visual Studio 2019 creates within the local solution folder. This folder contains a SQLite DB that backs some of the behind-the-scenes work for the parallel project initialization (and more).
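If you prefer the command line over Windows Explorer, deleting the hidden .vs folder from a command prompt opened in the solution folder could look like this (make sure Visual Studio is closed first):

rmdir /s /q .vs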

Now open the solution – switching between Build configurations should now work again.

As always, I hope this post is helpful for some of you.

Until the next, happy coding!

[Updated] #XfEffects: Xamarin.Forms Effect to change the TintColor of ImageButton’s image – (new series)

The documentation recommends using Effects when we just want to change properties on the underlying native control. I have come to love Effects, as they make my code more readable: with an Effect, I always know that there is a platform-specific implementation attached, which is not obvious when using a custom renderer. Nowadays, I always try to implement an Effect before reaching for a Renderer.


Update: I updated the sample project to also respect changes of the ImageButton‘s Source. I recently ran into the situation of changing the Source (Play/Pause) via my ViewModel based on its state, and realized that the effect needs to be reapplied in this case. Please be aware of that.


The basics

Effects work in a similar way to Renderers: you implement the definition in the Xamarin.Forms project and attach it to the control that needs the change. The PlatformEffect implementation needs to be exported to be compiled into the application. Like a Renderer, the platform implementation also supports property changes. In this new series, #XfEffects, I am going to show you some Effects that have been useful for me.

Effect declaration

Let’s turn to this post’s Effect. We will change the TintColor of an ImageButton to our needs. Let’s start with creating the class for our Effect:

    public class ImageButtonTintEffect : RoutingEffect
    {
        public ImageButtonTintEffect() : base($"XfEffects.{nameof(ImageButtonTintEffect)}")
        {
        }
    }

All Xamarin.Forms Effect implementations need to derive from the RoutingEffect class and pass the Effect‘s name to the base class’ constructor. That’s pretty much everything we have to do for the Effect class itself.

Effect extension for passing parameters

The easiest way for us to get our desired TintColor to the platform implementation is an attached BindableProperty. To be able to attach the BindableProperty, we need a static class that provides the container for the attached property:

public static class ImageButtonTintEffectExtensions
{
    public static readonly BindableProperty TintColorProperty = BindableProperty.CreateAttached("TintColor", typeof(Color), typeof(ImageButtonTintEffectExtensions), default, propertyChanged: OnTintColorPropertyChanged);
    public static Color GetTintColor(BindableObject bindable)
    {
        return (Color)bindable.GetValue(TintColorProperty);
    }
    public static void SetTintColor(BindableObject bindable, Color value)
    {
        bindable.SetValue(TintColorProperty, value);
    }
    private static void OnTintColorPropertyChanged(BindableObject bindable, object oldValue, object newValue)
    {
    }
}

Of course, we want to set the TintColor property as Xamarin.Forms.Color as this will make it pretty easy to control the color from a Style or even a ViewModel.

We want our effect to be invoked only if the default TintColor value changes. This makes sure we only run code that is necessary for our application:

private static void OnTintColorPropertyChanged(BindableObject bindable, object oldValue, object newValue)
{
    if (bindable is ImageButton current)
    {
        if ((Color)newValue != default)
        {
            if (!current.Effects.Any(e => e is ImageButtonTintEffect))
                current.Effects.Add(Effect.Resolve(nameof(ImageButtonTintEffect)));
        }
        else
        {
            if (current.Effects.Any(e => e is ImageButtonTintEffect))
            {
                var existingEffect = current.Effects.FirstOrDefault(e => e is ImageButtonTintEffect);
                current.Effects.Remove(existingEffect);
            }
        }
    }
}

Last but not least in our Xamarin.Forms application, we want to use the new Effect. This is pretty easy:

<!--import the namespace-->
xmlns:effects="clr-namespace:XfEffects.Effects"
<!--then use it like this-->
<ImageButton Margin="12,0,12,12"
    effects:ImageButtonTintEffectExtensions.TintColor="Red"
    BackgroundColor="Transparent"
    HeightRequest="48"
    HorizontalOptions="CenterAndExpand"
    Source="ic_refresh_white_24dp.png"
    WidthRequest="48">
    <ImageButton.Effects>
        <effects:ImageButtonTintEffect />
    </ImageButton.Effects>
</ImageButton>

We are using the attached property we created above to provide the TintColor to the ImageButtonTintEffect, which we need to add to the ImageButton‘s Effects List.

Android implementation

Let’s have a look at the Android-specific implementation. First, let’s add the platform class and decorate it with the proper attributes to export our effect:

using Android.Content.Res;
using Android.Graphics;
using System;
using System.ComponentModel;
using Xamarin.Forms;
using Xamarin.Forms.Platform.Android;
using AWImageButton = Android.Support.V7.Widget.AppCompatImageButton;
[assembly: ResolutionGroupName("XfEffects")]
[assembly: ExportEffect(typeof(XfEffects.Droid.Effects.ImageButtonTintEffect), nameof(XfEffects.Effects.ImageButtonTintEffect))]
namespace XfEffects.Droid.Effects
{
    public class ImageButtonTintEffect : PlatformEffect
    {
        protected override void OnAttached()
        {
            
        }
        protected override void OnDetached()
        {
        }
        protected override void OnElementPropertyChanged(PropertyChangedEventArgs args)
        {
        }
    }
}

Remember: the ResolutionGroupName needs to be defined just once per app and should not change. Similar to a custom Renderer, we also need to export the definition of the platform implementation together with the Forms implementation to make our Effect work.

Android controls like buttons have different states. Properties of Android controls, like the color, can be set based on their State attribute. Xamarin.Android implements these states in the Android.Resource.Attribute class. We define our ImageButton‘s states like this:

static readonly int[][] _colorStates =
{
    new[] { global::Android.Resource.Attribute.StateEnabled },
    new[] { -global::Android.Resource.Attribute.StateEnabled }, //disabled state
    new[] { global::Android.Resource.Attribute.StatePressed } //pressed state
};

Good to know: certain states like ‘disabled‘ are created by just adding a ‘-‘ in front of the matching state defined in the OS states list (negating it). We need this declaration to create our own ColorStateList, which will override the color of the ImageButton‘s image. Add this method to the class created above:

private void UpdateTintColor()
{
    try
    {
        if (this.Control is AWImageButton imageButton)
        {
            var androidColor = XfEffects.Effects.ImageButtonTintEffectExtensions.GetTintColor(this.Element).ToAndroid();
            var disabledColor = androidColor;
            disabledColor.A = 0x1C; // alpha = 28 (of 255) for the disabled state
            var pressedColor = androidColor;
            pressedColor.A = 0x24; // alpha = 36 (of 255) for the pressed state
            imageButton.ImageTintList = new ColorStateList(_colorStates, new[] { androidColor.ToArgb(), disabledColor.ToArgb(), pressedColor.ToArgb() });
            imageButton.ImageTintMode = PorterDuff.Mode.SrcIn;
        }
    }
    catch (Exception ex)
    {
        System.Diagnostics.Debug.WriteLine(
            $"An error occurred when setting the {typeof(XfEffects.Effects.ImageButtonTintEffect)} effect: {ex.Message}\n{ex.StackTrace}");
    }
}

This code only works on Android SDK 23 and above, as the ability to modify the A-channel of the defined color was only added then. The Xamarin.Forms ImageButton translates into an AppCompatImageButton on Android. The AppCompatImageButton has the ImageTintList property. This property is of type ColorStateList, which uses the states we defined earlier and the matching colors for those states.

Last but not least, we need to set the composition mode. If you want to get a deeper understanding of that, a great starting point is this StackOverflow question. To keep things simple, we are infusing the color into the image. The final step is to call the method in the OnAttached override as well as in the OnElementPropertyChanged override.
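A minimal sketch of those two overrides could look like this (assuming the UpdateTintColor method and the attached property from above; checking the property name keeps us from re-tinting unnecessarily):

protected override void OnAttached()
{
    this.UpdateTintColor();
}

protected override void OnElementPropertyChanged(PropertyChangedEventArgs args)
{
    base.OnElementPropertyChanged(args);

    // reapply the tint when the attached TintColor or the image source changes
    if (args.PropertyName == "TintColor" ||
        args.PropertyName == ImageButton.SourceProperty.PropertyName)
    {
        this.UpdateTintColor();
    }
}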

The result based on the sample I created looks like this:

iOS implementation

Of course, on iOS we also have to attribute the class, similar to the Android version:

using System;
using System.ComponentModel;
using UIKit;
using Xamarin.Forms;
using Xamarin.Forms.Platform.iOS;
[assembly: ResolutionGroupName("XfEffects")]
[assembly: ExportEffect(typeof(XfEffects.iOS.Effects.ImageButtonTintEffect), nameof(XfEffects.Effects.ImageButtonTintEffect))]
namespace XfEffects.iOS.Effects
{
    public class ImageButtonTintEffect : PlatformEffect
    {
        protected override void OnAttached()
        {
        }
        protected override void OnDetached()
        {
        }
        protected override void OnElementPropertyChanged(PropertyChangedEventArgs args)
        {
        }
    }
}

The underlying control of the Xamarin.Forms ImageButton on iOS is a default UIButton. The UIButton control has a UIImageView, which can be changed with the SetImage method. Based on that knowledge, we are going to implement the UpdateTintColor method:

private void UpdateTintColor()
{
    try
    {
        if (this.Control is UIButton imagebutton)
        {
            if (imagebutton.ImageView?.Image != null)
            {
                var templatedImg = imagebutton.CurrentImage.ImageWithRenderingMode(UIImageRenderingMode.AlwaysTemplate);
                //clear the image on the button
                imagebutton.SetImage(null, UIControlState.Normal);
                imagebutton.ImageView.TintColor = XfEffects.Effects.ImageButtonTintEffectExtensions.GetTintColor(this.Element).ToUIColor();
                imagebutton.TintColor = XfEffects.Effects.ImageButtonTintEffectExtensions.GetTintColor(this.Element).ToUIColor();
                imagebutton.SetImage(templatedImg, UIControlState.Normal);
            }
        }
    }
    catch (Exception ex)
    {
        System.Diagnostics.Debug.WriteLine($"An error occurred when setting the {typeof(ImageButtonTintEffect)} effect: {ex.Message}\n{ex.StackTrace}");
    }
}

Let’s review these lines of code. The first step is to extract the already attached image as a templated image from the UIButton‘s UIImageView. The second step is to clear the image from the button, using the SetImage method and passing null as the UIImage parameter. I tried to leave out this step, but it does not work if you do so.

The next step changes the TintColor for the UIButton‘s UIImageView as well as for the button itself. I was only able to get the TintColor changed once I changed both TintColor properties.

The final step is to re-attach the extracted image from step one to the UIButton‘s UIImageView, using once again the SetImage method – but this time, we are passing the extracted image as the UIImage parameter.

Of course, here too we need to call this method in the OnAttached override as well as in OnElementPropertyChanged, just like in the Android implementation above.

The result should look similar to this one:

Conclusion

It is pretty easy to implement extended functionality with a Xamarin.Forms Effect. The process is similar to creating a custom renderer. By using attached properties, you can fine-tune the usage of this code and also pass property values to the platform implementations.

Please make sure to follow along for the other Effects I will post as part of this new series. I also created a sample app to demonstrate the Effects (find it on GitHub). As always, I hope this post will be helpful for some of you.

Until the next post, happy coding, everyone!
Create an additional SSH-login enabled user for your Azure Linux VM without third-party tools

As I am moving forward on my current Linux journey, I recently came into a situation where a second user would have been handy. So I tried a few things to create the new user and to allow them to log in via SSH only.

What the h*** is SSH?

SSH stands for Secure Shell and describes a protocol for connecting securely to a remote machine with encrypted credentials. The security is provided by cryptographic keys: the server only knows the public key, while the client that wants to connect needs the matching private key. The most popular implementation is OpenSSH, which has been available as an optional feature on Windows 10 since last fall. If you want to learn more about SSH, read the Wikipedia entry as well.

Create the SSH key pair on Windows

Install the OpenSSH client

First, install the OpenSSH client on your Windows machine. Proceed as follows:

  • open the start menu and type ‘apps’
  • select ‘Apps and features’ settings page
  • select ‘Optional features’
  • click on ‘Add a feature’
  • search ‘OpenSSH client’, click on it and ‘Install’

If you are scrolling down the list of installed features, you should find the entry for the OpenSSH client.

Note: If you are not able to install this feature, it may be the right time to update your Windows installation to the latest version.

Create your SSH key pair

If your user profile does not have a folder called ‘.ssh’, it is time to create it now. Type ‘%USERPROFILE%‘ in the Windows Explorer’s address bar to get to the right folder immediately and create the folder.

Now that we have the folder OpenSSH searches for, we are already able to create our new SSH keypair. Open a command prompt and type this:

ssh-keygen -t rsa -b 4096 -C "newuser@machine.com"

This will initiate the keypair creation. The -C parameter is optional and can be anything; you’ll often find these keys created with <user at address> combinations. After OpenSSH has created your keypair in memory, it will ask for a location to save the file. If you do not enter anything, it’ll save it as id_rsa in the .ssh folder created earlier. If you want to save it to another file name, you can do so:

C:\Users\username\.ssh/id_test

Please note that the file name (without any extension!) is separated by ‘/’, not by ‘\’. If you do not respect this, you will get an error saying that the file name does not exist. This happens only if you don’t use the default file name. After the files are created, OpenSSH asks you for a passphrase to protect your private key. Nowadays, every single keypair should be password protected (just my 2cts). Once the creation is complete, you’ll see something like this:

Your identification has been saved in C:\Users\msicc\.ssh/id_test.
Your public key has been saved in C:\Users\msicc\.ssh/id_test.pub.
The key fingerprint is:
SHA256:8LK33fWKXMkY5FtcN4uU1v9SyCSUqKNp2T/PurpZCRU newuser@machine.com
The key's randomart image is:
+---[RSA 4096]----+
|          E...   |
|          .o. o  |
|      .  .. o+.oo|
|       oo. oo=.o=|
|      .=S.  o.=.o|
|      =o.. . * o.|
|     .. ..o o * .|
|       . =o+ + o |
|        =o+=* ...|
+----[SHA256]-----+

As you can see, we are able to create SSH keys without any third-party application on Windows. You can now safely close the console window (e.g. by typing ‘exit‘).

Deploying the public key to the server

Of course, the whole key pair thing only makes sense if we use it to secure our client/server communication. On Linux, there would be the easy-to-use command ssh-copy-id to deploy the key. Some Windows tutorials show the scp command, but I never got that working with my Azure VM. The only way left was to deploy it manually (which is not that difficult once you know how).

The manual way

After logging in (using Azure CLI), we are going to add a new user to the gang:

sudo useradd -m newuser

This will create a new user and their home directory. Whether the OS asks you for a password depends on you and the OS settings for empty passwords. The next step is to add the user to one or more groups, if necessary:

sudo usermod -aG sudo newuser

The -aG parameter of the usermod command adds the specified group(s) to the user’s groups table. If you want to add the user to more than one group, separate them with a comma (no whitespace after the comma!).

To make things a little bit easier for us, we are going to log in as the new user:

sudo su newuser

Note: If you want to proceed without logging in as the new user, you’ll need to change ~ to /home/newuser for the following commands.

To make Linux accept our previously created SSH key only for our new user, we need to create the .ssh folder and the file for allowed keys in the new user’s home directory. Let’s start with the .ssh directory:

sudo mkdir ~/.ssh
sudo chmod 0700 ~/.ssh
sudo chown newuser:newuser ~/.ssh

Let’s break it down. Obviously, the mkdir command creates the .ssh folder. The chmod command with the 0700 parameter gives full access only to our new user. Finally, the chown command makes our new user the owner of the directory.

Linux saves the allowed keys in a file called ‘authorized_keys‘, so that’s the next thing we are going to create. After creation, we change the file’s access rights to 0644, which makes it read-only for all users except our new user. Execute these commands:

sudo touch ~/.ssh/authorized_keys
sudo chmod 0644 ~/.ssh/authorized_keys

We’re getting closer… earlier, OpenSSH created two files in the .ssh folder on Windows. We are going to copy the contents of the .pub file into the authorized_keys file now. You can extract the content of the .pub file with Notepad on Windows. Once you have that one in your clipboard, open the authorized_keys file (using your favorite editor):

nano ~/.ssh/authorized_keys

Paste the content of your .pub file by right-clicking on the Azure CLI window. Save the file and close it. To secure the new user account, we need to apply some additional steps.

The first one is to delete and disable the password of our new user:

sudo passwd -d -l newuser

The -d parameter completely deletes the password. The -l parameter locks the state, preventing the user from setting a new password (without using sudo, that is).

Additional security measures

Now that we have our SSH public key on our server, we are safe to disable the password-based login in general. To do so, we need to modify the sshd_config file of our server. Open it with the editor of your choice:

sudo nano /etc/ssh/sshd_config

Search for PasswordAuthentication and ChallengeResponseAuthentication. Uncomment these entries if necessary by removing the ‘#‘ in front of the line, and set them to ‘no‘. Some tutorials floating around the web tell you to also set UsePAM to no, but following this recommendation always disabled the login completely for me on the Azure VM, and I had to reset the SSH config via the Azure CLI each time.
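After the change, the two relevant lines in sshd_config should read:

PasswordAuthentication no
ChallengeResponseAuthentication no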

Save and exit the sshd_config file. For these changes to take effect, we need to restart the SSH service on our server:

sudo service ssh restart

Wait a few seconds and then verify that the service restarted by checking its status:

systemctl status ssh.service

That’s it, we have deployed our new SSH key to our server and taken additional security measures. There’s just one thing left: trying to log in via SSH as the new user. This is a pretty easy task. In the Azure CLI (or a local command prompt), enter the following command:

 ssh -i %USERPROFILE%\.ssh\id_test newuser@machine.com

After entering your key’s passphrase, you should be logged in just like you did with your admin user.

Bonus: add a shared directory

More often than not, you might want to create scripts or other files that are available to all users. Follow these simple steps:

sudo groupadd shared
sudo usermod -aG shared newuser

sudo mkdir -p /var/helpers
sudo chgrp -R shared /var/helpers
sudo chmod -R 2775 /var/helpers

The first two lines create a new group and assign our new user to it. Repeat the second command for every user you want to have in that group.

Then we create the shared folder /helpers within the /var folder of our server. Utilizing the chgrp command, we give the ownership to our shared group. Last but not least, we modify the access rights once again for the /var/helpers folder. The combination 2775 means that every new file inherits the group from the folder, allowing all members of the group to read, write and execute the file, while users outside the shared group can only read and execute it.

Conclusion

As you can see, one does not always have to use third-party tools to get things done. As I said at the beginning of this post, once you know the steps that are needed, it is pretty easy to create a new SSH key pair, create a new user and manually deploy the public key to the server. As always, I hope this post is helpful for some of you.

Title Image Credit (Pixabay)

Run your own Bitcoin Full Node on an Azure Linux VM

One year ago, I began my journey into the crypto and blockchain area. Recently, multiple circumstances got me thinking about my own crypto-related server. Of course, I chose Azure for running this server (for the time being). It had been a while since I last touched Linux, so I had quite a bit to refresh and learn. In this post, I’ll show you the steps needed to set up an independent full node to support the Bitcoin network.

Setting up the Virtual Machine on Azure

First, we need to install the Azure CLI on our computer. We will use this one to connect to our virtual machine later on via SSH. Follow the instructions found here.

The second prerequisite is a program to generate SSH keys. You can use the OpenSSH client shipping with recent versions of Windows 10, the Azure CLI, or PuTTY (follow these instructions).

Create the VM

Once you have installed the CLI and created your SSH keys, log into your Azure account. Go to the marketplace, search for ‘ubuntu‘, choose Ubuntu Server 18.04 LTS and hit the ‘Create‘ button in the next window.

Fill in the details of your Azure VM on the first page of the creation module:

Do not forget to set the SSH admin user, as adding one afterward is not as easy as it seems and often fails (I learned that the hard way). Also, we need to allow traffic through the default HTTP (80) and SSH (22) ports. Once you have configured everything, go to disks.

I did not select premium disks but instead went with a Standard HDD to save some money. You can change that to your needs. The important part here is to NOT use managed disks, as we will need to resize the OS disk after the creation of the VM. Follow the rest of the steps in the creation wizard and create your virtual machine. Once the machine is created, you should create a DNS label (hit the ‘Configure’ link on the VM’s overview page to get to the IP settings).

Log in via SSH

Let’s check whether we can log in to our Linux VM via the Azure CLI. Open the ‘Microsoft Azure Command Prompt‘ on your PC. To be able to connect to our virtual machine, we need to log in to Azure first to obtain an access token for our session:

az login 

This will open a new browser tab, where you need to log in to your account again. After that, you will be redirected back to the CLI. Once that has happened, go back to the overview page of your VM and click on ‘Connect’. This will open a pane showing the RDP and SSH connection options. Select SSH and copy the text below ‘Login using VM local account‘:

Paste it into the Azure Command Prompt window and provide your password (you should never use a password-less SSH key). If your screen now looks similar to this, you have successfully logged in:

Now that we have verified that we are able to log in via SSH, type exit to log out again as we have one step left to perform on the virtual machine.

Resizing the OS disk

A Bitcoin full node needs to download and verify the whole blockchain. The current size of the blockchain is around 250 GB, which would never fit in the default size of the OS disk of our VM. Luckily, it is pretty easy to resize the OS disk by running some commands in the Azure CLI.

First, stop the virtual machine:

az vm stop --resource-group YourResourceGroupName --name YourVMName 

Resizing the OS disk needs the VM to be deallocated (this may take some minutes):

az vm deallocate --resource-group YourResourceGroupName --name YourVMName

Once deallocation has finished, we are able to resize the OS disk with this command:

az vm update --resource-group YourResourceGroupName --name YourVMName --set storageProfile.osDisk.diskSizeGB=1024

Once you see the new size in the returned response from Azure, we can start the VM again:

az vm start --resource-group YourResourceGroupName --name YourVMName 

Depending on the distribution you are using, you may have to perform additional steps. Ubuntu, however, picks up the new disk size without any additional action. Verifying the new disk size is pretty easy (after logging back in via SSH), as the system information displayed after login should already reflect the change (like in the screen above).

Preparing the Bitcoin node

After completing all the steps above, we are finally able to move on with the preparations for the Bitcoin node.

Bitcoin service user

As we will run the node as a service, we need an unprivileged service account:

sudo useradd -m -s /dev/null bitcoin

Great, we just created the account, including the home directory for the service user, and set /dev/null as its default shell entry (which prevents interactive logins). If you want to see the system’s response, just leave the /dev/null part out when running the command.

As we are going to restrict the access to the service to the bitcoin user (and its default group, which is also bitcoin), we need to add our admin user to the group. Run the following command to do so:

sudo usermod -a -G bitcoin [admin-user]

In order to make these changes (especially the group addition) persistent, we need to perform a restart before we move on. You can use not only the Azure portal or CLI, but also this command to make the VM restart immediately:

sudo shutdown -r now

This will log you out of the current SSH session. After waiting one minute or two, just log back in to continue the preparation of the full node.

Downloading and Verifying Bitcoin binaries

The next step involves downloading the Bitcoin Core package and its signature file (as we don’t trust, but verify). Run these commands (you will need to hit Enter a second time after the first download has finished). I also created a temp directory for the download and other stuff:

mkdir ~/temp
cd ~/temp
btc_version="0.18.1"
wget https://bitcoincore.org/bin/bitcoin-core-$btc_version/bitcoin-$btc_version-x86_64-linux-gnu.tar.gz
wget https://bitcoincore.org/bin/bitcoin-core-$btc_version/SHA256SUMS.asc

According to Bitcoin.org, the latest releases are signed with Wladimir J. van der Laan’s releases key, which has the fingerprint we are going to use to verify the downloaded binaries. It is recommended to do this for all crypto-related binaries you download, no matter on which OS.

Let’s try to import the key of Mr. van der Laan:

gpg --receive-key 0x01EA5486DE18A882D4C2684590C8019E36C2E964

You may get an error message telling you there was a server failure. In this case, run the following command to import the key:

gpg --keyserver hkp://keyserver.ubuntu.com:80 --receive-key 0x01EA5486DE18A882D4C2684590C8019E36C2E964

If you still get errors, there might be missing packages or other reasons for the server failure. I came through with the second command more often than with the first, but in the end, I had the key in my local key store.

Now let’s verify the downloaded files:

gpg --verify SHA256SUMS.asc
sha256sum --ignore-missing -c SHA256SUMS.asc

If the command tells you that the signature is good and indeed from Mr. van der Laan, everything is fine with the hash file. The second command verifies the archive we downloaded earlier and should result in ‘OK’. If not, you should immediately delete those files as they might contain malware.

Note: I have read quite a few comments on Reddit and other sites saying that we can safely ignore the (trust) warnings gpg may print during this verification …

Installing Bitcoin binaries

Now that we have our bitcoin service user and verified the bitcoin binaries, we are finally at the point to install them:

tar zxf bitcoin-$btc_version-x86_64-linux-gnu.tar.gz
pushd bitcoin-$btc_version/bin; sudo cp bitcoind bitcoin-cli /usr/bin; popd;
bitcoind --version

After verifying the install via the version string, it is good practice to remove both the downloaded archive and the hash file. If you need to reinstall, perform the steps above again to verify the authenticity of the files. Run these commands to remove those files:

rm bitcoin-$btc_version-x86_64-linux-gnu.tar.gz
rm SHA256SUMS.asc

Create a service config file

Now that we have Bitcoin installed, we need to prepare a .conf file for our upcoming service:

vi bitcoin.conf  
or
nano bitcoin.conf

This will open a text editor on Linux. Enter the base config below to activate the RPC server of the Bitcoin daemon (needed for bitcoin-cli) and save the file to the temp directory (press ‘ESC’ + ‘:’ and write ‘wq‘ if you used vi; press ‘CTRL’+’X’ followed by ‘Y’ in nano):

testnet=0
regtest=0
mainnet=1
# Global Options
server=1 #activating rpc
rpcconnect=127.0.0.1 #default
rpcport=8332 #default
rpcallowip=127.0.0.1/32 #default
rpcbind=127.0.0.1  #default
disablewallet=1 #keeping wallet off (atm)
daemon=1
# Options only for mainnet
[main]
# Options only for testnet
[test]
# Options only for regtest
[regtest]

Now we just need to copy that file into the /etc/bitcoin folder. If this folder does not yet exist, create it with the following command:

sudo mkdir -p /etc/bitcoin

Next, copy the bitcoin.conf file to it:

sudo cp bitcoin.conf /etc/bitcoin

Assign ownership to the bitcoin service user and make it readable for all users:

sudo chown bitcoin:bitcoin /etc/bitcoin/bitcoin.conf
sudo chmod 0664 /etc/bitcoin/bitcoin.conf

The last step is to create a directory for the daemon where we will find the PID (Process ID) file after starting the service:

sudo mkdir -p /run/bitcoind/
sudo chmod 0755 /run/bitcoind/
sudo chown bitcoin:bitcoin /run/bitcoind/

Create the Bitcoin service

Now we are able to set up the core service of our node, the Bitcoin daemon service. To get started, just copy and paste the bitcoind.service file found in Bitcoin Core’s GitHub repository into a new file on your VM. Once you have that file in your temp folder, copy it over to the system’s services folder:

sudo cp bitcoind.service /lib/systemd/system
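For orientation only, here is a heavily trimmed sketch of what such a unit file roughly looks like – please use the canonical file from the repository, which contains additional hardening options. The paths below match the ones we set up earlier in this post:

[Unit]
Description=Bitcoin daemon
After=network.target

[Service]
Type=forking
PIDFile=/run/bitcoind/bitcoind.pid
ExecStart=/usr/bin/bitcoind -daemon -pid=/run/bitcoind/bitcoind.pid -conf=/etc/bitcoin/bitcoin.conf -datadir=/var/lib/bitcoind
User=bitcoin
Group=bitcoin
Restart=on-failure

[Install]
WantedBy=multi-user.target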

Now we need to enable the service to make it automatically starting up on reboot:

sudo systemctl enable bitcoind

And finally, we need to start the service (or reboot the machine if you want to test that part):

sudo systemctl start bitcoind

After some time, you should be able to use the bitcoin-cli to get some info of your local blockchain copy:

sudo bitcoin-cli -rpccookiefile=/var/lib/bitcoind/.cookie -datadir=/var/lib/bitcoind -getinfo

Please note that you need to use sudo because the owner of the service and the files is the bitcoin user we created earlier. If you don’t want to always pass the authentication cookie path, you can create a symbolic link in your current user’s .bitcoin folder:

sudo ln -s /var/lib/bitcoind/.cookie ~/.bitcoin/.cookie

Now we are able to just call sudo bitcoin-cli -getinfo. Another way of checking whether the service and the daemon are running is to read the log that gets generated:

sudo tail /var/lib/bitcoind/debug.log -f 

You should see the log scrolling through as it writes new entries. Most entries will look like this:

2019-09-07T14:54:51Z UpdateTip: new best=000000000000004fa323c7ee57b4b22272c7ea757a6a5bdb53dbda73572f559d height=239240 version=0x00000002 log2_work=70.203272 tx=18819570 date='2013-06-02T08:32:35Z' progress=0.041989 cache=675.7MiB(5032783txo)

If you arrived at this point

Congratulations! You are running a Bitcoin full node (without a wallet for the time being, though). It will take some time to download and verify the whole blockchain, but you are now effectively helping to secure the Bitcoin network.

This is just the first post about my journey with Bitcoin and my own node. Make sure to follow for future blog posts. As always, I hope this post will be helpful for some of you.

Note: this post was reposted on my Trybe.one account.

How to host a code file on GitHub as a Gist to use in your application

What the h*** is a Gist?

In case you have never heard of Gists: a Gist is an easy-to-use way to share code files, hosted by GitHub. Everyone with a user account can use this feature, and now that the premium features are free as well (thanks to the acquisition by Microsoft), you can even share them secretly.

Where do I find my Gists?

This one is for the beginners; if you know this already, move on. Once you have logged into your GitHub account, click on your user name. This will open a menu with an option called ‘Your gists’. Once you click that one, you will see a page similar to mine (maybe with no gists in it):

gist_overview_gists

How to create a new Gist?

Well, that’s pretty easy. Just click on the ‘+’ button beside your user avatar in the top right corner:

gist_menu_add_new

This will bring up a new gist window. Enter your description and file name, fill in the content of your file (or even add more files), and hit the ‘Create public gist’ button to create your new gist. If you intend to host multiple files in your gist, please note that you will need to perform the following steps for every single file you add (as each one has its own URL).

gist_add_new

How to use this Gist in my app?

Luckily, files in GitHub repos as well as in gists can be viewed in the so-called ‘Raw’ view. You will find a corresponding button in the top right corner of every code file. Click on it, and you will see a plain-text representation (here is a sample from the one that led to this blog post; it is styled by a browser extension that makes JSON more readable – every developer should have one of those installed already, btw):

gist_raw_view

Now we are close to being able to fetch this file into our applications. If you are sure that this file will never change, just use this URL. If you know that this file is subject to future changes, you will need to perform a little trick.

Getting always the latest version of our Gist

If you analyze the URL, you will notice that there is a unique id between the ‘raw’ part and the file name:

gist_remove_this_id

This id represents the current revision of your Gist. To make sure we always get the latest version of our gist, we need to remove this id. The URL must end with ‘raw/yourfile.extension‘, as you can see here:

gist_always_latest_revision_url
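In case the screenshots are hard to read, here is a made-up example of both URL forms (user name, gist id and revision id are placeholders):

https://gist.githubusercontent.com/yourname/abc123/raw/0a1b2c3d4e5f/config.json  (pinned to one revision)
https://gist.githubusercontent.com/yourname/abc123/raw/config.json               (always the latest revision)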

This way, you can update the file and implement an update mechanism into your app that always fetches the latest revision of that file. To fetch the file content into your app, you just need to perform a GET request against that URL, without the overhead of using GitHub’s API.
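As a minimal sketch, fetching the content in C# boils down to a plain HTTP GET (the URL is the made-up placeholder from above):

using System.Net.Http;
using System.Threading.Tasks;

public static class GistLoader
{
    private static readonly HttpClient _httpClient = new HttpClient();

    // downloads the latest revision of a gist file via its revision-less raw URL
    public static Task<string> LoadLatestAsync()
    {
        const string url = "https://gist.githubusercontent.com/yourname/abc123/raw/config.json";
        return _httpClient.GetStringAsync(url);
    }
}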

Conclusion

Instead of hosting configuration or data files on a private web server, one can utilize existing infrastructure like GitHub’s. As always, I hope this post will be helpful for some of you.

Until the next post, happy coding, everyone!
How to avoid a distorted Android camera preview with ZXing.Net.Mobile [Updated]

Update: The application I used for this blog post is portrait-only, which is why I totally missed the landscape part. I have to thank Timo Partl, who pointed me to that fact on Twitter. I updated the solution/workaround part to reflect the orientation as well.


If you need to implement QR scanning into your Xamarin (Forms) app, chances are high you will be using the ZXing.Net.Mobile library (as it is the most complete solution out there). In one of my recent projects, I wanted to do exactly that. I had used the library before and thought it would be an easy play.

Distorted reality…

Reality hit me hard when I realized that the preview was totally distorted:

distorted camera preview

Even with that distortion, the library detects the QR code without problems. However, the distortion is not something you want your users to experience. So I started to investigate why this distortion happens. As I am not the only one experiencing this problem, I am showing how to easily fix the issue and the way that led me to the solution (knowing that some beginners follow my blog, too).

Searching the web …

… often brings you closer to the solution. Most of the time, you are not the only one who runs into such a problem. Searching the web let me find the GitHub issue above, showing my theory was correct. Sometimes, those GitHub issues provide a solution – this time, they did not. After not being able to find anything helpful, I decided to fork the GitHub repo and downloaded it into Visual Studio.

Debugging

Once the solution was loaded in Visual Studio, I found some samples in the repo that made debugging easy for me. I picked the case that matches my implementation best (using the ZXingScannerView). After hitting F10 (to move step by step through the code) and F11 (to jump into methods) quite often, I understood how the code works under the hood.

The cause

If you are not telling the library which resolution you want, it will select one for you based on this rudimentary rule (view the source on GitHub):

// If the user did not specify a resolution, let's try and find a suitable one
if (resolution == null)
{
    foreach (var sps in supportedPreviewSizes)
    {
        if (sps.Width >= 640 && sps.Width <= 1000 && sps.Height >= 360 && sps.Height <= 1000)
        {
            resolution = new CameraResolution
            {
                Width = sps.Width,
                Height = sps.Height
            };
            break;
        }
    }
}

Let’s break it down. This code takes the available resolutions from the (old and deprecated) Android.Hardware.Camera API and selects one that matches the ranges defined in the code – without respecting the aspect ratio of the device display. On my Nexus 5X, it always selects the resolution of 400×800. This does not match my device’s aspect ratio, but the Android OS does some stretching and squeezing to make it visible within the SurfaceView used for the preview. The result is the distortion seen above.

Note: The code above is doing exactly what it is supposed to do. However, it was last updated 3 years ago (according to GitHub). Display resolutions and aspect ratios have changed a lot during that time, so we had to arrive at this point sooner or later.

Solution/Workaround

The solution to this is pretty easy: just provide a CameraResolutionSelectorDelegate with the code setting up your ZXingScannerView. First, we need a method that returns a CameraResolution and takes a List of CameraResolution. Let’s have a look at that one first:

public CameraResolution SelectLowestResolutionMatchingDisplayAspectRatio(List<CameraResolution> availableResolutions)
{
    CameraResolution result = null;

    //a tolerance of 0.1 should not be visible to the user
    double aspectTolerance = 0.1;
    var displayOrientationHeight = DeviceDisplay.MainDisplayInfo.Orientation == DisplayOrientation.Portrait ? DeviceDisplay.MainDisplayInfo.Height : DeviceDisplay.MainDisplayInfo.Width;
    var displayOrientationWidth = DeviceDisplay.MainDisplayInfo.Orientation == DisplayOrientation.Portrait ? DeviceDisplay.MainDisplayInfo.Width : DeviceDisplay.MainDisplayInfo.Height;

    //calculating our targetRatio
    var targetRatio = displayOrientationHeight / displayOrientationWidth;
    var targetHeight = displayOrientationHeight;
    var minDiff = double.MaxValue;

    //the camera API lists all available resolutions from highest to lowest, perfect for us
    //making use of this sorting, the following code runs some comparisons to select the lowest resolution that matches the screen aspect ratio and lies within the tolerance
    //selecting the lowest makes QR detection actually faster most of the time
    foreach (var r in availableResolutions.Where(r => Math.Abs(((double)r.Width / r.Height) - targetRatio) < aspectTolerance))
    {
        //slowly going down the list to the lowest matching resolution with the correct aspect ratio
        if (Math.Abs(r.Height - targetHeight) < minDiff)
            minDiff = Math.Abs(r.Height - targetHeight);

        result = r;
    }

    return result;
}

First, we are setting up a fixed tolerance for the aspect ratio. A value of 0.1 should not be recognizable to users. The next step is calculating the target ratio. I am using the Xamarin.Essentials API here, because it saves me some code and works in Xamarin.Android-only projects as well as in Xamarin.Forms ones.

Before we select the best matching resolution, we need to note a few points:

  • lower resolutions result in faster QR detection (even with big ones)
  • preview resolutions are always presented landscape
  • the list of available resolutions is sorted from biggest to smallest

Considering these points, we are able to loop over the list of available resolutions. If the current ratio is out of our tolerance range, we ignore it and move on. By moving the minDiff double down with every iteration, we walk down the list to arrive at the lowest possible resolution that matches our display’s aspect ratio best. In the case of my Nexus 5X, this is 480×800 with an aspect ratio of ~1.66666, which matches the display aspect ratio of ~1.66111 pretty closely.

Delegating the selection call

Now that we have our calculation method in place, we need to pass it via the CameraResolutionSelectorDelegate to our MobileBarcodeScanningOptions.

If you are on Xamarin.Android, your code will look similar to this:

var options = new ZXing.Mobile.MobileBarcodeScanningOptions()
{
    PossibleFormats = new List<ZXing.BarcodeFormat>() { ZXing.BarcodeFormat.QR_CODE },
    CameraResolutionSelector = new CameraResolutionSelectorDelegate(SelectLowestResolutionMatchingDisplayAspectRatio)
};

If you are on Xamarin.Forms, you will have to use the DependencyService to get to the same result (as the method above has to be written within the Android project):

var options = new ZXing.Mobile.MobileBarcodeScanningOptions()
{
    PossibleFormats = new List<ZXing.BarcodeFormat>() { ZXing.BarcodeFormat.QR_CODE },
    CameraResolutionSelector = DependencyService.Get<IZXingHelper>().CameraResolutionSelectorDelegateImplementation
};
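Note that IZXingHelper is not part of the library; it is just a thin, self-made abstraction. A minimal sketch of the interface could look like this; the Android implementation simply returns a delegate pointing to the SelectLowestResolutionMatchingDisplayAspectRatio method shown above and is registered via the usual Xamarin.Forms Dependency assembly attribute:

using ZXing.Mobile;

// shared code: hypothetical abstraction resolved via DependencyService,
// implemented within the Android project
public interface IZXingHelper
{
    CameraResolutionSelectorDelegate CameraResolutionSelectorDelegateImplementation { get; }
}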

The result

Now that we have an updated resolution selection mechanism in place, the result is exactly what we expected, without any distortion:

matching camera preview

Remarks

In case none of the camera resolutions gets selected, the camera preview automatically uses the default resolution. In my tests with three devices, this was always the highest one. The default resolution normally matches the device’s aspect ratio. As it is the highest, it will slow down the QR detection, however.

The ZXing.Net.Mobile library uses the deprecated Android.Hardware.Camera API and the Android.FastCamera library on Android. The next step would be to migrate over to the Android.Hardware.Camera2 API, which makes the FastCamera library obsolete and is future-proof. I already had a look into that step, as I need to advance with QR scanning in two projects (one personal and one at work); however, I postponed this change.

Conclusion

For the time being, we are still able to use the deprecated mechanism to get our camera preview right. After identifying the reason for the distortion, I arrived pretty fast at a workaround/solution that should fit most use cases. As devices and their specs evolve, we are at least not left behind. I will do another write-up once I find the time to replace the deprecated API in my fork.

As always, I hope this post will be helpful for some of you.

Until the next post, happy coding, everyone!

Title Image Credit

Crypto and Blockchain projects & tools (April 2019)

NOTE: All of the following projects and tools reflect my own opinion and my own experience. This post is not investment advice or advice to use these tools. Also, your mileage may vary. Please make sure you also read the disclaimer at the end of this post. This post contains affiliate/referral links.

For Users

Mass adoption. If you keep following the crypto and blockchain space, you might have heard this term a lot. At the moment, besides gambling dApps (decentralized applications) on different blockchains, there are only a few real-world products floating around. Let’s have a look at some of them.

Presearch

Presearch is a decentralized search engine with more than 1 million users (as of writing this), which is quite impressive for a beta product, especially in this particular area. Presearch tries to disrupt the dominance of Google (which acts as the gate to the internet for more than 70% of all users).

I have been using Presearch for a few months now as my personal gate to the internet, and I enjoy the aggregation of my preferred search engines. If you look at my start page, you can see all the search providers I have chosen. From Bing and Google to GitHub as well as iTunes and Spotify, everything gets searched with a click. If you open a new tab with Presearch, you can filter the search once more. I often use the Presearch engine search itself, which just delivers the best results for me. I am helping the product evolve with this behavior; as a little bonus, I get 0.25 PRE (Presearch tokens) for the searches I perform.

Presearch

Presearch has (AI-powered) protection against cheaters in place. I am obviously not cheating, to help the product, and my honesty is rewarded with an account level of 9. They have a Chrome extension which sets Presearch as the current search engine (on desktop) just by installing it. Presearch even has a mobile app (beta, iOS), which I use from time to time, although it is currently not able to connect to your account. However, as happened with Bing and Google before, I am using their mobile website most of the time on my iPhone.

Brave Browser

If you have been following me for some time, you know that I am a fan of Microsoft. Recently, however, Microsoft turned into a more business-oriented company, leaving consumers to just go with the products of other providers. This led me to search for an alternative for my personal browser – and I ended up with Brave Browser.

Brave aims to be faster and more private than the default choices users have. Websites store all kinds of data in cookies and use trackers to collect data from users while they are browsing the web. With Brave, you get back a good amount of control over the data you are willing to share. Over time, your privacy level will grow again just by using Brave. The browser shows you how many trackers, cookies and scripts were blocked while you browse.

For a few weeks now, users have been able to receive rewards for watching ads. Users are prompted to view ads (mostly short videos) and receive some BAT, the native currency of Brave – which they can use in turn to reward content creators. This is an opt-in feature, so users always have the choice.

If you have a blog, website or YouTube channel, you can help the project to grow. Register your site/channel as a creator, and use your BAT rewards to tip others and invite them to join, too. This way, knowledge about Brave will spread across the web from all sides.

CoinPaprika

CoinPaprika is a market analysis tool, and a powerful one. I discovered the project last summer (shortly after they started). I like it “hot and spicy”, so the name alone immediately caught my attention. As I have two or three app ideas that need a service like CoinPaprika, I was happy to learn they have an open API as well, with really generous rate limits. I am contributing to the project in the form of maintaining a .NET library.

coinpaprika_client

CoinPaprika is somewhat different from other providers, as they do not rely on data from CoinMarketCap (which a lot of services in this area do). Instead, they pull in data from more than 250 exchanges on their own, resulting in over 2000 currencies they track. On top, they provide a great overview of each project (click the coin stats links above to see some samples).
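If you want to play with the API before pulling in a library, a plain HttpClient call is all you need. Here is a minimal sketch – the /v1/tickers endpoint and the JSON field names are taken from the public API documentation, so double-check them against the current docs:

using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

public static class CoinPaprikaSample
{
	static readonly HttpClient _client = new HttpClient();

	public static async Task<decimal> GetBitcoinUsdPriceAsync()
	{
		// public endpoint, no API key needed (rate limits apply)
		var json = await _client.GetStringAsync("https://api.coinpaprika.com/v1/tickers/btc-bitcoin");

		// every ticker carries a "quotes" object per quote currency
		var ticker = JObject.Parse(json);
		return ticker["quotes"]["USD"]["price"].Value<decimal>();
	}
}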

They are also working on a mobile app, which will combine their analysis tools with a non-custodial wallet (based on Trust Wallet’s core). You can get a short overview and register for beta notification at
https://coins.coinpaprika.com/.

AtomicPay

AtomicPay is a non-custodial payment service provider. While there are quite a few providers of such services floating around, AtomicPay holds the flag of decentralization pretty high. The service provides just the invoicing infrastructure, while the payment itself is only tracked by the system. The effective payment is a direct customer-to-merchant transaction – the funds move directly into the merchant's wallet. The whole infrastructure is built around Electrum and its derivations for other currencies.

atomicpay-title-image

By using common implementations like SegWit and HD (Hierarchical Deterministic) wallets, as well as avoiding address re-use, AtomicPay ensures a high level of security and privacy with a reasonable amount of initial administrative work. AtomicPay provides several integrations into existing online shopping systems, an open REST API, payment buttons for websites and more.

I am also involved in this project and currently working on the official native apps for Android and iOS. We have already published a .NET SDK, which enables you to implement crypto payments into your apps today. On our roadmap, we have a Xamarin.Forms-ready plugin and SDKs for other programming languages as well.

For Developers

Being a mature programming language, C# already has quite a few important, ready-to-use libraries for interacting with certain blockchains. Here is a list of projects I am following:
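To give you an idea of how simple interacting with a blockchain from C# can be, here is a minimal sketch using Nethereum, probably the best-known .NET library for Ethereum (the endpoint URL and the address are placeholders you would replace):

using System;
using System.Threading.Tasks;
using Nethereum.Web3;

class Program
{
	static async Task Main()
	{
		// any Ethereum JSON-RPC endpoint works here, e.g. your own node or Infura
		var web3 = new Web3("https://mainnet.infura.io/v3/<your-project-id>");

		// query the Ether balance (in Wei) of an address
		var balance = await web3.Eth.GetBalance.SendRequestAsync("<0x-address>");

		// convert from Wei to Ether for display
		Console.WriteLine(Web3.Convert.FromWei(balance.Value));
	}
}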

Wallets and Exchanges

If you want to get some cryptocurrencies, you need some starting points:

  • Where to buy?
  • Where to convert/exchange?
  • Where to store HODLs?

While there are several options out there, I have found a combination that works for me.

BitPanda

Buying cryptocurrencies via bank transfer or credit card is pretty easy on BitPanda, an Austrian provider. They accept deposits in EUR, CHF, USD and GBP. Once your fiat deposit is in your BitPanda account, you can exchange it for a variety of cryptocurrencies, including Bitcoin, Ethereum, NEO and more. If you want to sign up and get a 10 EUR bonus after your first purchase, you can use my referral link (I am getting the bonus as well).

Binance

Binance is one of the biggest centralized crypto exchanges around. They have a huge number of tradeable assets and also great liquidity. If a trading pair is available on Binance, chances are high that you will get the best price there. Their apps and website offer a good default set of indicators, so you normally won't need any additional tools. Binance is also said to be one of the few exchanges that do not fake their trading volume. If you haven't signed up to Binance yet, feel free to use my referral link.

ChangeNOW

ChangeNOW is a decentralized exchange that is connected to big centralized exchanges to get the best rates. On certain pairs, you have a fixed-rate option besides the quick-and-dirty approach that all such services provide. If you use the fixed rate, however, you will get a slightly smaller exchange value than with the default option, as it lowers the provider's risk. In my experience, the exchange rates reflect the current market prices more often than not – other providers have a bigger spread.

Transactions are executed as fast as possible. If something goes wrong and you have provided a wallet for refunds, your funds will always be safe. Additionally, you can provide your mail address to get informed once the exchange has taken place (so you won't need to constantly refresh the website).

KyberSwap

KyberSwap is another decentralized exchange, but just for Ethereum tokens (at least at the moment of writing this). It is based on the Kyber protocol, which allows fast and direct swaps of tokens (for example BAT => KNC) without the need to exchange first to Ethereum and then into the desired token. There are already quite a few dApps built on the Kyber protocol – you can see a list here. Recently, KyberSwap launched a nice and easy-to-use iOS app, which also supports a price alert feature (USD and ETH base pairs only, however).

Trust Wallet

Trust Wallet, the official non-custodial wallet of Binance, has a good reputation when it comes to storing your HODLs. It is available for both Android and iOS, with a desktop version being worked on. They are constantly adding new coins, and usage is pretty easy, even for new users. Their Telegram support group is always just a message away in case you need some help.

MyEtherWallet and MEWConnect

If you want to perform advanced operations (like cleaning pending transactions or getting your *.eth address), there is a big chance that you will be able to do it with MyEtherWallet.

They recently launched a new version of their website. However, not all functionality has been ported over (yet). If you cannot find an option on the new site, you will find it for sure on their vintage site. The most secure way to connect to MyEtherWallet is their MEWConnect app (available for iOS and Android), which transforms your phone into a hardware wallet-like device. You need to confirm transactions on the device before you can move on in the browser, adding an additional security layer.

Etherscan

Etherscan is the best-known Ethereum blockchain explorer around. It has tons of features, from viewing transactions or contracts to token details and ENS lookup. Whenever I need to verify or search something related to the Ethereum blockchain, this is my first go-to address.

O3 Wallet (NEO)

If you are searching for blockchains built with .NET, sooner or later you will run into NEO. I only recently began to explore the possibilities of the NEO blockchain and its native currency. NEO has a token ecosystem as well. If you need a cross-platform wallet, I would give the O3 wallet a try. It integrates with Switcheo (another decentralized exchange) and quite a few other dApps running on NEO (like registering your .neo address via NNS).

New projects

The crypto space never stands still. New projects are showing up almost every day. Here is a list of projects I recently started to discover. It is way too early to write a review on them, but I may do so in a follow-up post.

  • WolfpackBot (beta, crypto trading bot running on its own blockchain)
  • Bravo (write reviews, receive crypto)
  • Switcheo (decentralized exchange for NEO and ETH tokens, EOS soon)
  • Electroneum (mobile (cloud) mining on its own blockchain, use 5576A9 to get a 1% bonus on your mining rewards (5% for me))

News Sources

It is always good to know what is going on in the crypto space. Here are some reliable news sources I use:

Conclusion

Besides the hundreds (if not thousands) of gambling dApps across all blockchains, there are also interesting real-world projects one can use today. Some of them are more popular than others, but not every project is worth your attention. This post showed some of the projects I am interested in.

If you know a project/tool/dApp that is missing in this list, feel free to leave a comment below or ping me on social networks. Maybe your suggestions will make it into the next post of this kind.

Disclaimer: I am contributing to one or more crypto/blockchain projects with code written by me under the MIT License. Future contributions may contain their own (and different) disclaimer. I am not getting paid for my contributions to those projects at the moment of writing this.

Please note that none of my crypto-related posts is investment or financial advice. As cryptocurrencies are volatile and risky, you should only invest as much as you can afford to lose. Always do your own research!

Title Image Credit

Posted by msicc in Crypto&Blockchain, Editorials, 0 comments
Handle AtomicPay’s webhook with Microsoft Azure (Part 3/3) – sending push notifications with Notification Hub

Handle AtomicPay’s webhook with Microsoft Azure (Part 3/3) – sending push notifications with Notification Hub

Series overview

  1. Handle the incoming notification with an Azure Function (first post)
  2. Verify the invoice state within the Azure Function, but store the API credentials in the most secure way (second post)
  3. Send a push notification via Azure Notification Hub to all devices that have linked a merchant’s account to it (this post)

Preparations…

With Azure Notification Hub, we have one of the most powerful tools for sending push notifications out to connected devices. It is designed to handle multiple platforms, so let’s create a new Notification Hub in the Azure Portal. After logging in, select ‘Create a resource‘, followed by ‘Mobile‘ and a final click on ‘Notification Hub‘:

Azure: create a new Notification Hub

Fill in the details of the new Hub. Remember to select your already existing Resource Group and the same region that is used for your Azure Function:

Azure: new Notification Hub creation details

After about a minute, your newly created Notification Hub is ready to use. Select ‘Go to resource‘ to open it.

Azure Deployment finished

Now we are already able to connect to our first platform. For this sample, I will focus on Android. Log in to the Firebase console with your Google account. Select ‘Add project‘ and configure your Android app (or add your existing one, if you already have one like I do):

Firebase: Add or select project

After following all (self-explanatory) steps, scroll a bit down in the General tab of your Firebase project. You will find an entry like this:

Firebase: download google-services.json file

Download the ‘google-services.json‘-file. We will need it later in our Android app. Now select the ‘Cloud Messaging‘ tab. You will be presented with two keys – copy the upper one:

Firebase: copy server key

Go back to the Azure portal and select ‘Google (GCM/FCM)‘ in ‘Settings’. Paste the earlier copied key and hit the ‘Save‘-Button:

Azure: paste Firebase server key and save

Now we need to authenticate our Azure Function to the Notification Hub. Select ‘Access Policies‘ in the ‘Manage’ section of your Hub and copy the lower ConnectionString with the ‘Listen,Manage,Send‘ permission:

Azure: copy server ConnectionString

Open your Azure Function in a new tab. In the ‘Application settings‘, add a new setting and paste the ConnectionString. Add another one for the Notification Hub’s name (the code below expects the keys NotifHubConnectionString and NotifHubName):

Azure Function add Hub name and ConnectionString to Application settings

Go back to the Notification Hub and save the upper ConnectionString locally, as we will need that one in our Android application later on.

Back to code…

Now that we have prepared the Notification Hub on Azure, let’s write some code that actually sends out our notifications once our Function verified our AtomicPay invoice. In the last post, we already rewrote some of the code in preparation of the final step, our push notifications.

Triggering push notifications

Before we will be able to modify our Function code to actually send the notification, we need to install an additional NuGet package: Microsoft.Azure.WebJobs.Extensions.NotificationHubs. Please note that I was only able to get this all up and running with version 1.2.0; version 1.3.0 seems not to be compatible with v1 Functions.

Triggering push notifications can be broken down into these 3 steps:

  1. create a Hub client from the ConnectionString
  2. create the content of the notification
  3. finally send the notification

This translates into this Task inside our InvoiceVerifier class:

        private static async Task TriggerPushNotification(InvoiceInfoDetails invoiceInfoDetails, string accId)
        {
            //we need full shared access connection string (server side)
            var connectionString = ConfigurationManager.AppSettings["NotifHubConnectionString"];
            var hubName = ConfigurationManager.AppSettings["NotifHubName"];

            var hub = NotificationHubClient.CreateClientFromConnectionString(connectionString, hubName);

            var templateParams = new Dictionary<string, string>();
            templateParams["body"] = $"Received payment for invoice id: {invoiceInfoDetails.InvoiceId} ({invoiceInfoDetails.Status.ToString()})";

            var outcome = await hub.SendTemplateNotificationAsync(templateParams, $"accId:{accId}");

            _traceWriter.Info($"attempted to inform merchant {accId} of payment via push");
        }

We are loading the ConnectionString and the Hub’s name from our Function’s Application settings and create a new client connection using these two safely stored properties. To keep this sample simple, I am using a templated notification that can be used across all supported platforms. The receiver is responsible for handling this template (we’ll see how later in this post). Finally, we are sending out the notification to the native platforms, where it will be distributed (in our case, via Firebase Cloud Messaging). By including the accId:{accId} tag, we are sending the push notification only to the devices that were registered with that specific merchant account.

Of course, this nicely written method does nothing until we actually use it. Let’s update our Run method. We only need to add one line for our test in the switch that handles the returned invoiceInfoDetails:

switch (invoiceInfoDetails.Status)
{
	case AtomicPay.Entity.InvoiceStatus.Paid:
	case AtomicPay.Entity.InvoiceStatus.PaidAfterExpiry:
	case AtomicPay.Entity.InvoiceStatus.Overpaid:
	case AtomicPay.Entity.InvoiceStatus.Complete:
		log.Info($"invoice with id {invoiceInfoDetails.InvoiceId} is paid");
		await TriggerPushNotification(invoiceInfoDetails, accId);
		break;
	default:
		log.Info($"invoice with id {invoiceInfoDetails.InvoiceId} is not yet paid");
		//todo: this will trigger additional status handling in future
		break;
}

Publish your updated Azure Function. It is now connected to a Notification Hub and able to send out push notifications. Of course, push notifications ending up in no man’s land are boring, so let’s go ahead and build the receiving side.

Receiving the push notifications

Create a new Xamarin.Android app (with XAML or without, your choice). Of course, we have some additional setup to perform here as well.

Import google-services.json

Add the google-services.json we downloaded before from Firebase to your project. Set its Build Action to GoogleServicesJson in the Properties window. I needed to restart Visual Studio to be able to select this option after adding the file.

VS-google-services-build-action

Installing NuGet packages

Of course, we also need to install some additional NuGet packages. For the code below, these are the Firebase messaging library (Xamarin.Firebase.Messaging) and the Notification Hubs client for Android (Xamarin.Azure.NotificationHubs.Android).

The second step involves some changes to the AndroidManifest, giving the Firebase package some permissions and handling its intents in the application tag:

<receiver android:name="com.google.firebase.iid.FirebaseInstanceIdInternalReceiver" android:exported="false" />
<receiver android:name="com.google.firebase.iid.FirebaseInstanceIdReceiver" android:exported="true" android:permission="com.google.android.c2dm.permission.SEND">
	<intent-filter>
		<action android:name="com.google.android.c2dm.intent.RECEIVE" />
		<action android:name="com.google.android.c2dm.intent.REGISTRATION" />
		<category android:name="${applicationId}" />
	</intent-filter>
</receiver>

Next, we will add a new constants class:

public static class Constants
{
	public const string ListenConnectionString = "<Listen connection string>";
	public const string NotificationHubName = "<hub name>";
}

On the client, we are only using the ConnectionString with the lower (Listen-only) permission that we copied earlier from the Azure Notification Hub.

Connecting to Firebase

As our Azure Notification Hub sends notifications via Firebase, Google’s native messaging handler, we need a service that connects our app. The service requests a token (the Xamarin library does all that complex stuff for us) that we’ll need to actually register the device for the notifications. Add a new class called PlatformFirebaseIidService and decorate it with the Service attribute. Besides that, we need to register our interest in the com.google.firebase.INSTANCE_ID_EVENT, which will call into the OnTokenRefresh() method we will override. We’ll end up like this:

[Service]
[IntentFilter(new[] { "com.google.firebase.INSTANCE_ID_EVENT" })]
public class PlatformFirebaseIidService : FirebaseInstanceIdService
{
	const string TAG = "PlatformFirebaseIidService";
	NotificationHub _hub;

	public override void OnTokenRefresh()
	{
		var refreshedToken = FirebaseInstanceId.Instance.Token;
		Log.Debug(TAG, "FCM token: " + refreshedToken);		
	}
}

Now that we hold a fresh Firebase instance token, we can register our app for receiving push notifications. To do this, we’re adding a new method:

void SendRegistrationToServer(string token, List<string> tags = null)
{
	// Register with Notification Hubs
	_hub = new NotificationHub(Constants.NotificationHubName, Constants.ListenConnectionString, this);

	if (tags == null)
		tags = new List<string>() { };

	var templateBody = "{\"data\":{\"message\":\"$(body)\"}}";

	//this one registers a template that can be used cross platform
	//just make sure the template is the same on iOS, Windows, etc.
	var registerTemplate = _hub.RegisterTemplate(token, "defaultTemplate", templateBody, tags.ToArray());
	Log.Debug(TAG, $"Successful registration of Template {registerTemplate.RegistrationId}");
}

Let me break this method down. Of course, we need to connect to our Notification Hub, which is responsible for deciding whether our client is a valid receiver or not. If we add tags, they will get sent together with the registration; we will add the merchant’s account id to filter the receivers. As we are sending notifications using a template, we need to register for the template that gets filled by our Azure Function. One thing is left: calling this method after obtaining a token in the OnTokenRefresh override:

SendRegistrationToServer(refreshedToken, new List<string>() { "accId:<yourAccId>" });

Handling incoming firebase messages

Now that our client is registered with both Firebase and the Azure Notification Hub, of course we want it to be able to receive the pushed messages. To achieve this, we need another service. Add a new class called PlatformFirebaseMessagingService and decorate it once again with the Service attribute. This time, we are interested in the com.google.firebase.MESSAGING_EVENT intent, so let’s add this one as well. The service is responsible for parsing our payload and triggering a notification helper that actually shows the notification. Here is the code of the service:

[Service]
[IntentFilter(new[] { "com.google.firebase.MESSAGING_EVENT" })]
public class PlatformFirebaseMessagingService : FirebaseMessagingService
{
	const string TAG = "PlatformFirebaseMessagingService";

	public override void OnMessageReceived(RemoteMessage message)
	{
		Log.Debug(TAG, "From: " + message.From);
		string title = "Azure Test message";
		string body = null;

		if (message.GetNotification() != null)
		{
			//This is how most messages will be received
			body = message.GetNotification().Body;
			title = message.GetNotification().Title;
			Log.Debug(TAG, $"Notification Message Body: {body}");
		}
		else
		{
			//Only used for debugging payloads sent from the Azure portal
			body = message.Data.Values.First();
		}

		var notification = NotificationHelper.Instance.GetNotificationBuilder(title, body);
		NotificationHelper.Instance.Notify(1001, notification);

	}
}

Of course, you are curious about the NotificationHelper class, so let’s have a look. Besides being a singleton, it needs to retrieve an instance of the system’s NotificationService. As we do not have multiple notification channels in this sample, it is enough to declare both the channel name and its id in two constants. In the constructor of the class, we are initializing the channel:

public class NotificationHelper : ContextWrapper
{
	private static NotificationHelper _instance;

	public static NotificationHelper Instance => _instance ?? (_instance = new NotificationHelper(Application.Context));
	
	const string NOTIFICATION_CHANNEL_ID = "default";
	const string NOTIFICATION_CHANNEL_NAME = "notif_test_channel";

	NotificationManager _manager;
	NotificationManager Manager => _manager ?? (_manager = (NotificationManager)GetSystemService(NotificationService));

	public NotificationHelper(Context context) : base(context)
	{
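		// note (added for clarity, not in the original sample): NotificationChannel
		// requires Android 8.0 (API 26) or higher; guard this call on older versions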
		var channel = new NotificationChannel(NOTIFICATION_CHANNEL_ID, NOTIFICATION_CHANNEL_NAME, NotificationImportance.Default);
		channel.EnableVibration(true);
		channel.LockscreenVisibility = NotificationVisibility.Public;
		
		this.Manager.CreateNotificationChannel(channel);
	}
}

Creating the notifications involves the Notification.Builder class. We’re simplifying the process with this method:

public Notification.Builder GetNotificationBuilder(string title, string body)
{
	var intent = new Intent(this.ApplicationContext, typeof(MainActivity));
	intent.AddFlags(ActivityFlags.ClearTop);
	var pendingIntent = PendingIntent.GetActivity(this, 0, intent, PendingIntentFlags.OneShot);

	return new Notification.Builder(this.ApplicationContext, NOTIFICATION_CHANNEL_ID)
		.SetContentTitle(title)
		.SetContentText(body)
		.SetSmallIcon(Resource.Drawable.ic_launcher)
		.SetAutoCancel(true)
		.SetContentIntent(pendingIntent);
}

To read more about creating notifications, have a look at the docs for local (= client side) notifications.

Last but not least, we need to inform the system’s notification service to show the notification. The final helper method to this looks like this:

public void Notify(int id, Notification.Builder notificationBuilder)
{
	this.Manager.Notify(id, notificationBuilder.Build());
}

To test the notification, we have two options – one is to send a test notification via the Notification Hub, the other is to use Postman once again to trigger our Azure Function. In both cases, your result should be a notification on your device (after you deployed and ran the application without the debugger being attached).
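If you prefer code for the test send, a small console app reusing the very same NotificationHubClient call as our Function works, too. Here is a minimal sketch – the ConnectionString, hub name and account id are placeholders you need to fill in:

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.NotificationHubs;

class Program
{
	static async Task Main()
	{
		// use the full (Listen,Manage,Send) ConnectionString on the sending side
		var hub = NotificationHubClient.CreateClientFromConnectionString(
			"<full access ConnectionString>", "<hub name>");

		// fill the same template parameter our Azure Function sets
		var templateParams = new Dictionary<string, string>
		{
			["body"] = "Test notification from the console"
		};

		// send only to devices registered with this merchant account tag
		await hub.SendTemplateNotificationAsync(templateParams, "accId:<yourAccId>");
	}
}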

atomicpay-webhook-notification

Conclusion

In this last post of the series, I showed you all the steps that are needed to send out push notifications utilizing an Azure Notification Hub. It takes a bit of setup in the beginning, but the code involved is pretty easy and straightforward.

Now that the series is complete, you can have a look at the source code on Github. You need to add your own google-services.json file and your own keys as well to run the sample. As always, I hope this post, as well as this series, is helpful for some of you.

Until the next post, happy coding, everyone!
Posted by msicc in Android, Azure, Dev Stories, Xamarin, 1 comment
Use NuGets for your common Xamarin (Forms) code (and automate the creation process)

Use NuGets for your common Xamarin (Forms) code (and automate the creation process)

Internal libraries

Writing (or copy-and-pasting) the same code over and over again is one of those things I try to avoid when writing code. For quite some time now, I have been organizing such code in libraries. Until last year, this required quite some work managing all the libraries for each Xamarin platform I used. Luckily, the MSBuild SDK Extras extensions showed up and made everything a whole lot easier, especially after James Montemagno published a detailed explanation on how to get the most out of it for Xamarin plugins/libraries.

Getting started

Even if I repeat some of the steps of James’ post, I’ll start from scratch on the setup part here. I hope to make the whole process straightforward for everyone – that’s why I think it makes sense to show each and every step. Please make sure you are using the new .csproj type. If you need a refresher on that, you can check my post about migrating to it (if needed).

MSBuild.Sdk.Extras

The first step is pulling in MSBuild.Sdk.Extras, which will enable us to target multiple platforms in one single library. For this, we need a global.json file in the solution folder. Right click on the solution name and select ‘Open Folder in File Explorer‘, then just add a new text file named global.json.

The next step is to define the version of the MSBuild.SDK.Extras library we want to use. The current version is 1.6.65, so let’s define it in the file. Just click the ‘Solution and Folders‘ button to find the file in Visual Studio:

switch to folder view

Add these lines into the file and save it:

{
  "msbuild-sdks": {
    "MSBuild.Sdk.Extras": "1.6.65"
  }
}

Modifying the project file

Switch back to the Solution view and right click on the .csproj file. Select ‘Edit [ProjectName].csproj‘. Let’s modify and add the project definitions. We’ll start right in the first line. Replace the first line to pull in the MSBuild.Sdk.Extras:

<Project Sdk="MSBuild.Sdk.Extras">

Next, we’re separating out the Version tag into its own PropertyGroup. This will ensure that we’ll find it very quickly within the file in the future:

  <!--separated for accessibility-->
  <PropertyGroup>
    <Version>1.0.0.0</Version>
  </PropertyGroup>

Now we are enabling multiple targets, in this case our Xamarin platforms. Please note that there are two separate TargetFrameworks lines – one that includes UWP and one that does not. I thought I would be fine removing the non-UWP one when including UWP, but was presented with some strange build errors that were resolved only by re-adding the deleted line. I do not remember the reason, but I made a comment in my template to not remove it – so let’s just keep it that way.

  <!--make it multi-platform library!-->
  <PropertyGroup>
    <UseFullSemVerForNuGet>false</UseFullSemVerForNuGet>
    <!--we are handling compile items ourselves below with a custom naming scheme-->
    <EnableDefaultCompileItems>false</EnableDefaultCompileItems>
    <!--KEEP ALL THREE IF YOU ADD UWP!-->
    <TargetFrameworks></TargetFrameworks>
    <TargetFrameworks Condition=" '$(OS)' == 'Windows_NT' ">netstandard2.0;MonoAndroid81;Xamarin.iOS10;uap10.0.16299;</TargetFrameworks>
    <TargetFrameworks Condition=" '$(OS)' != 'Windows_NT' ">netstandard2.0;MonoAndroid81;Xamarin.iOS10;</TargetFrameworks>
  </PropertyGroup>

Now we will add some default NuGet packages to the project and make sure our files get included only on the correct platform. We follow a simple file naming scheme (Xamarin.Essentials uses the same):

[Class].[platform].cs
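
For a hypothetical Hello class, the files would look like this (the uwp one only if you target UWP):

Hello.shared.cs
Hello.android.cs
Hello.ios.cs
Hello.netstandard.cs
Hello.uwp.cs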

This way, we are able to add all platform specific code together with the shared entry point in a single folder. Let’s start with the shared items. These will be available on all platforms listed in the PropertyGroup above:

  <!--shared items-->
  <ItemGroup>
    <!--keeping this one ensures everything goes smooth-->
    <PackageReference Include="MSBuild.Sdk.Extras" Version="1.6.65" PrivateAssets="All" />

    <!--most commonly used (by me)-->
    <PackageReference Include="Xamarin.Forms" Version="3.4.0.1029999" />
    <PackageReference Include="Xamarin.Essentials" Version="1.0.1" />

    <!--include content, exclude obj and bin folders-->
    <None Include="**\*.cs;**\*.xml;**\*.axml;**\*.png;**\*.xaml" Exclude="obj\**\*.*;bin\**\*.*;bin;obj" />
    <Compile Include="**\*.shared.cs" />
  </ItemGroup>

The ‘**\‘ part in the Include property of the Compile tag ensures MSBuild also includes classes in subfolders. Now let’s add some platform specific rules to the project:

  <ItemGroup Condition=" $(TargetFramework.StartsWith('netstandard')) ">
    <Compile Include="**\*.netstandard.cs" />
  </ItemGroup>

  <ItemGroup Condition=" $(TargetFramework.StartsWith('uap10.0')) ">
    <PackageReference Include="Microsoft.NETCore.UniversalWindowsPlatform" Version="6.1.9" />
    <Compile Include="**\*.uwp.cs" />
  </ItemGroup>

  <ItemGroup Condition=" $(TargetFramework.StartsWith('MonoAndroid')) ">
    <!--need to reference all those libs to get latest minimum Android SDK version (requirement by Google)... #sigh-->
    <PackageReference Include="Xamarin.Android.Support.Annotations" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.Compat" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.Core.Utils" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.CustomTabs" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.v4" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.Design" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.v7.AppCompat" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.v7.CardView" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.v7.Palette" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.v7.MediaRouter" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.Core.UI" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.Fragment" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.Media.Compat" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.v7.RecyclerView" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.Transition" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.Vector.Drawable" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.Vector.Drawable" Version="28.0.0.1" />
    <Compile Include="**\*.android.cs" />
  </ItemGroup>

  <ItemGroup Condition=" $(TargetFramework.StartsWith('Xamarin.iOS')) ">
    <Compile Include="**\*.ios.cs" />
  </ItemGroup>

Two side notes:

  • Do not reference version 6.2.2 of the Microsoft.NETCore.UniversalWindowsPlatform NuGet. There seems to be a bug in there that will lead to rejection of your app from the Microsoft Store. Just keep 6.1.9 (for the moment).
  • You may not need all of the Xamarin.Android packages, but there are a bunch of dependencies between them and others, so I decided to keep them all.

If you have followed along, hit the save button and close the .csproj file. Verifying everything went well is pretty easy – your solution structure should look like this:

multi-targeting-project

Before we’ll have a look at the NuGet creation part of this post, let’s add some sample code. Just insert this into static partial classes with the appropriate naming scheme for every platform, and edit the code to match the platform. The .shared version of this should be empty (for this sample).

    public static partial class Hello
    {
        public static string Name { get; set; }

        public static string Platform { get; set; }

        public static void Print()
        {
            if (!string.IsNullOrEmpty(Name) && !string.IsNullOrEmpty(Platform))
                System.Diagnostics.Debug.WriteLine($"Hello {Name} from {Platform}");
            else
                // Device.Android comes from Xamarin.Forms, adjust this per platform file
                System.Diagnostics.Debug.WriteLine($"Hello unknown person from {Device.Android}");
        }
    }

Normally, this would be a Renderer or other platform specific code. You should get the idea.
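
To quickly verify the setup works, consuming the class could look like this (hypothetical values, e.g. inside an Android project referencing the library):

Hello.Name = "Xamarin Friends";
Hello.Platform = "Android";
Hello.Print(); // Debug output: Hello Xamarin Friends from Android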

Preparing NuGet package creation

We will now prepare our solution to automatically generate NuGet packages for both the DEBUG and RELEASE configurations. Once the packages are created, we will push them to a local (or network) file folder, which serves as our local NuGet server. This will fit most indie developers, who tend not to replicate a full-blown enterprise infrastructure for their DevOps needs. I will also mention how you could push the packages to an internal NuGet server as a sideline (we are using a similar setup at work).

Adding NuGet Push configurations

One thing we want to make sure is that we are not going to push packages on every compilation of our library. That’s why we need to separate configurations. To add new configurations, open the Configuration Manager in Visual Studio:

In the Configuration Manager dialog, select the ‘<New…>‘ option from the ‘Active solution configuration‘ ComboBox:

Name the new config to fit your needs; I just use DebugNuget, which signals that we are pushing the NuGet package for distribution. I am copying the settings from the Debug configuration and let Visual Studio add the configurations to the project files within the solution. Repeat the same for the Release configuration.

The result should look like this:

Modifying the project file (again)

If you head over to your project file, you will see the Configurations tag has new entries:

  <PropertyGroup>
    <Configurations>Debug;Release;DebugNuget;ReleaseNuget</Configurations>
  </PropertyGroup>

Next, add the properties of your assembly and package:

    <!--assembly properties-->
  <PropertyGroup>
    <AssemblyName>XamarinNugets</AssemblyName>
    <RootNamespace>XamarinNugets</RootNamespace>
    <Product>XamarinNugets</Product>
    <AssemblyVersion>$(Version)</AssemblyVersion>
    <AssemblyFileVersion>$(Version)</AssemblyFileVersion>
    <NeutralLanguage>en</NeutralLanguage>
    <LangVersion>7.1</LangVersion>
  </PropertyGroup>

  <!--nuget package properties-->
  <PropertyGroup>
    <PackageId>XamarinNugets</PackageId>
    <PackageLicenseUrl>https://github.com/MSiccDevXamarinNugets</PackageLicenseUrl>
    <PackageProjectUrl>https://github.com/MSiccDevXamarinNugets</PackageProjectUrl>
    <RepositoryUrl>https://github.com/MSiccDevXamarinNugets</RepositoryUrl>

    <PackageReleaseNotes>Xamarin Nugets sample package</PackageReleaseNotes>
    <PackageTags>xamarin, windows, ios, android, xamarin.forms, plugin</PackageTags>

    <Title>Xamarin Nugets</Title>
    <Summary>Xamarin Nugets sample package</Summary>
    <Description>Xamarin Nugets sample package</Description>

    <Owners>MSiccDev Software Development</Owners>
    <Authors>MSiccDev Software Development</Authors>
    <Copyright>MSiccDev Software Development</Copyright>
  </PropertyGroup>

Configuration specific properties

Now we will add some configuration specific PropertyGroups that control if a package will be created.

Debug and DebugNuget

  <PropertyGroup Condition=" '$(Configuration)'=='Debug' ">
    <DefineConstants>DEBUG</DefineConstants>
    <!--making this pre-release-->
    <PackageVersion>$(Version)-pre</PackageVersion>
    <!--needed for debugging!-->
    <DebugType>full</DebugType>
    <DebugSymbols>true</DebugSymbols>
  </PropertyGroup>

  <PropertyGroup Condition=" '$(Configuration)'=='DebugNuget' ">
    <DefineConstants>DEBUG</DefineConstants>
    <!--enable package creation-->
    <GeneratePackageOnBuild>true</GeneratePackageOnBuild>
    <!--making this pre-release-->
    <PackageVersion>$(Version)-pre</PackageVersion>
    <!--needed for debugging!-->
    <DebugType>full</DebugType>
    <DebugSymbols>true</DebugSymbols>
    <GenerateDocumentationFile>false</GenerateDocumentationFile>
    <!--this makes msbuild creating src folder inside the symbols package-->
    <IncludeSource>True</IncludeSource>
    <IncludeSymbols>True</IncludeSymbols>
  </PropertyGroup>

The Debug configuration enables us to step into the code while we are referencing the project directly during development, while the DebugNuget configuration will also generate a NuGet package including source and symbols. This is helpful once you find a bug in the NuGet package, as it allows us to step into the code even when we reference the NuGet instead of the project. Both configurations will add ‘-pre‘ to the version, making these packages appear only if you tick the ‘Include prerelease‘ CheckBox in the NuGet Package Manager.

Release and ReleaseNuget

  <PropertyGroup Condition=" '$(Configuration)'=='Release' ">
    <DefineConstants>RELEASE</DefineConstants>
    <PackageVersion>$(Version)</PackageVersion>
  </PropertyGroup>

  <PropertyGroup Condition=" '$(Configuration)'=='ReleaseNuget' ">
    <DefineConstants>RELEASE</DefineConstants>
    <PackageVersion>$(Version)</PackageVersion>
    <!--enable package creation-->
    <GeneratePackageOnBuild>true</GeneratePackageOnBuild>
    <!--include pdb for analytic services-->
    <DebugType>pdbonly</DebugType>
    <GenerateDocumentationFile>true</GenerateDocumentationFile>
  </PropertyGroup>

The Release configuration gets by with fewer settings. We do not generate a separate symbols package here, as the .pdb file without the source will do well in most cases.

Adding Build Targets

We are close to finish our implementation already. Of course, we want to make sure we push only the latest packages. To ensure this, we are cleaning all generated NuGet packages before we build the project/solution:

  <!--cleaning older nugets-->
  <Target Name="CleanOldNupkg" BeforeTargets="Build">
    <ItemGroup>
      <FilesToDelete Include="$(ProjectDir)$(BaseOutputPath)$(Configuration)\$(AssemblyName).*.nupkg"></FilesToDelete>
    </ItemGroup>
    <Delete Files="@(FilesToDelete)" />
    <Message Text="Old nupkg in $(ProjectDir)$(BaseOutputPath)$(Configuration) deleted." Importance="High"></Message>
  </Target>

MSBuild provides a lot of options to configure. We are setting the BeforeTargets property of the target to Build, so once we Clean/Build/Rebuild, all old packages will be deleted by the Delete command. Finally, we are printing a message to confirm the deletion.

Pushing the packages

After completing all these steps above, we are ready to distribute our packages. In our case, we are copying the packages to a local folder with the Copy command.

  <!--pushing to local folder (or network path)-->
  <Target Name="PushDebug" AfterTargets="Pack" Condition="'$(Configuration)'=='DebugNuget'">
    <ItemGroup>
      <PackageToCopy Include="$(ProjectDir)$(BaseOutputPath)$(Configuration)\$(AssemblyName).*.symbols.nupkg"></PackageToCopy>
    </ItemGroup>
    <Copy SourceFiles="@(PackageToCopy)" DestinationFolder="C:\TempLocNuget" />
    <Message Text="Copied '@(PackageToCopy)' to local Nuget folder" Importance="High"></Message>
  </Target>

  <Target Name="PushRelease" AfterTargets="Pack" Condition="'$(Configuration)'=='ReleaseNuget'">
    <ItemGroup>
      <PackageToCopy Include="$(ProjectDir)$(BaseOutputPath)$(Configuration)\$(AssemblyName).*.nupkg"></PackageToCopy>
    </ItemGroup>
    <Copy SourceFiles="@(PackageToCopy)" DestinationFolder="C:\TempLocNuget" />
    <Message Text="Copied '@(PackageToCopy)' to local Nuget folder" Importance="High"></Message>
  </Target>

Please note that the local folder could be replaced by a network path. You have to ensure the availability of that path yourself – which adds some additional work if you choose this route.
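
On the consuming side, the folder needs to be registered as a package source once. You can do this in Visual Studio’s NuGet settings or via the NuGet CLI – the source name is up to you, the path is the one used above:

nuget sources add -Name "LocalNuget" -Source "C:\TempLocNuget"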

If you’re running a full NuGet server (as often happens in Enterprise environments), you can push the packages with this command (instead of the Copy command):

<Exec Command="NuGet push "$(ProjectDir)$(BaseOutputPath)$(Configuration)\$(AssemblyName).*.symbols.nupkg" [YourPublishKey] -Source [YourNugetServerUrl]" />

The result

If we now select the DebugNuget/ReleaseNuget configuration, Visual Studio will create our NuGet package and push it to our Nuget folder/server:

Let’s have a look into the NuGet package as well. Open your file location defined above and search your package:

As you can see, the Copy command executed successfully. To inspect NuGet packages, you need the NuGet Package Explorer app. Once installed, just double click the package to view its contents. Your result should be similar to this for the DebugNuGet package:

As you can see, we have both the .pdb files as well as the source in our package as intended.

Conclusion

Even as an indie developer, you can take advantage of the DevOps options provided with Visual Studio and MSBuild. The MSBuild.Sdk.Extras package enables us to maintain a multi-targeting package for our Xamarin(.Forms) code. The whole process needs some setup, but once you have performed the steps above, extending your libraries is pretty straightforward.

I planned to write this post for quite some time, and I am happy with doing it as my contribution to the #XamarinMonth (initiated by Luis Matos). As always, I hope this post is helpful for some of you. Feel free to clone and play with the full sample I uploaded on Github.

Until the next post, happy coding, everyone!

Helpful links:

Title image credit

P.S. Feel free to download the official app for my blog (that uses a lot of what I am blogging about):
iOS | Android | Windows 10

Posted by msicc in Azure, Dev Stories, iOS, UWP, Xamarin, 3 comments