What I’ve learned from porting my first app ever to Android and iOS with Xamarin

What’s the app about?

The app is about fishing knots. It sounds boring to most people, but for me, this app made me become a developer, so I have a somewhat emotional connection to it. It all started back when Windows Phone 7 was new and hot: a shiny new OS from Microsoft, but clearly lacking the loads of apps that were available on iOS and Android. Around that time, I also managed to get my fishing license in Germany.

As I had a hard time remembering how to tie fishing knots, I searched the store and found… nothing. I got very angry over that fact, partly because it meant I had to use one of the static websites of the time, but more because of that damn app gap (WP7 users will remember). So I finally decided to learn how to write code for Windows Phone, and after some heavy months of self-study, I wrote my first app ever.

Why porting it?

Writing code soon became my favorite spare-time activity, effectively replacing fishing. And so the years went on: I made some more apps (most of them for Windows Phone) and also managed to get employed as a developer. After some time, Satya Nadella became the CEO of Microsoft, and Windows for mobile phones was dead. I had created all my “babies”, and they were now set to die along with Windows Phone/Mobile. Not accepting this, I started to create a plan to port my apps over to the remaining mobile platforms. After Facebook effectively killed my most successful app (UniShare – that’s another story, though), I stopped porting that one and started with Fishing Knots +.

Reading your own old code (may) hurt

When I started to analyze which parts of the code I could reuse, I was kind of shocked. Of course, I knew there was code I had written when I didn’t know any better, but I had refused to look into it and refactor it (for obvious reasons from today’s perspective). I violated a lot of best practices back then, the most prominent ones being:

  • No MVVM
  • Repeating myself over and over again
  • Monster methods with more than 100 lines

In the end, I did the only right thing and did not reuse any line of my old code.

Reusing the concept without reusing old code

After making the right decision not to use my old codebase, I needed to abstract the concept from the old app and translate it into what I know now about best practices and MVVM. I did not immediately start with the implementation, however.

The first thing I did was draw the concept on a piece of paper. I used a no-code language in that sketch and asked my family if they understood the idea behind the app (you could also ask your non-tech friends). This approach helped me to identify the top 3 features of the app:

  • Controllable animation of each knot
  • Easy-to-follow 3-step instructions for each knot
  • Read-Aloud function of the instructions

Having defined the so-called “Minimum Viable Product“, I was ready to think about the implementation(s).

The new implementation

Finding the right implementation isn’t always straightforward. The first thing I wrote was the custom control that powers the controllable animation behind the scenes. I wrote it out of context in a separate solution, as I packed it into a NuGet package after finishing it. It also turned out to be the most complex part of the whole app. It uses a common API in Xamarin.Forms and custom renderers for Android and iOS. I had to go that route for performance reasons – which is one of the learnings I took away from the porting.

It was also clear that I would use the MVVM pattern in the new version. So I set up some basic things using my own NuGet packages, which I wrote while working on other Xamarin-based projects.

When it came to the overall structure of my app, I thought a Master/Detail implementation would be fine. However, somehow this never felt right, and so I turned to Shell (which was pretty new, so I gave it a shot). In the end, I went with a more custom approach. The app uses a TabbedPage with 3 tabs: one for the animation, the second for the 3-step tutorial, and last but not least the Settings/About page. The first two pages share a custom top-down menu implementation, bound to the same ViewModel for its items and selection.

More Xamarin.Forms features I learned (to love)

Xamarin and Xamarin.Forms are already powerful and have matured a lot since I used them to write my first Xamarin app for Telefónica Germany. Here is a (high-level) list of features I started to use:

  • Xamarin.Essentials – the one library that kickstarts your application – seriously!
  • Xamarin.Forms Animations – polish the appearance of your app with some nice-looking visual activity within the UI
  • Xamarin.Forms Effects – easily modify/enhance existing controls without creating a full-blown custom renderer
  • Xamarin.Forms VisualStateManager – makes it (sometimes) a whole lot easier to change the UI based on property changes
  • Xamarin.Forms Triggers – alternative approach to modify the UI based on property changes (but not limited to that)

The three musketeers

Because Xamarin and Xamarin.Forms are such powerful tools, you may run into situations where you need help or more information. These are my three musketeers for getting missing information, implementation help, or solution ideas:

  • Microsoft Xamarin Docs – the docs for Xamarin are pretty extensive, and by reading them (even again), I often had one of these “gotcha!” moments
  • GitHub – if the docs don’t help, GitHub may. Be it in the issues of Xamarin(.Forms) or by studying the renderers, GitHub has been as helpful to me as the docs.
  • Web Search – chances are high that someone has already solved a similar problem and wrote a blog post about it. I don’t blindly copy those solutions. First I read them, then I try to understand them, and finally I implement my own abstraction of them. This way, I am in a steady learning process.

Learn to understand native implementations

I guarantee you will run into situations where the musketeers do not help as long as you focus solely on Xamarin. Accept that Xamarin sits on top of the native code of others and does the heavy conversion for us. Learn to read Objective-C, Swift, Java and Kotlin code and to translate it into C#. Once you find possible solutions in one of the native samples, blog posts or docs, you will see that most of them are easy to translate into Xamarin code. Do not avoid this part of Xamarin development – it will help you in the future, trust me.

Conclusion

Porting my first app ever over to Android and iOS has given me not only a lot of fun but also a huge amount of learning and practice. Some of those learnings are behavioral, some are code implementations. This post is about the behavioral part – I will write about some of the implementations in my upcoming blog posts.

I hope you enjoyed reading this post. If you have questions or have similar experiences and want to discuss them, feel free to leave a comment on this post or connect with me via social media.

Until the next post, happy coding!

[Updated] #XfEffects: Xamarin.Forms Effect to change the TintColor of ImageButton’s image – (new series)

The documentation recommends using Effects when we just want to change properties on the underlying native control. I have begun to love Effects as they make my code more readable. With Effects, I always know that there is a platform-specific implementation attached, while that is not obvious when using a custom renderer. Nowadays, I always try to implement an Effect before a Renderer.


Update: I updated the sample project to also respect changes to the ImageButton‘s source. I recently ran into the situation of changing the Source (Play/Pause) via my ViewModel based on its state and realized that the effect needs to be reapplied in this case. Please be aware of this.


The basics

Effects work in a similar way to Renderers: you implement the definition in the Xamarin.Forms project and attach it to the control that needs the change. The PlatformEffect implementation needs to be exported to be compiled into the application. Like a Renderer, the platform implementation also supports property changes. In this new series, #XfEffects, I am going to show you some Effects that have been useful for me.

Effect declaration

Let’s turn over to this post’s Effect. We will change the TintColor of an ImageButton to our needs. Let’s start with creating the class for our Effect:

    public class ImageButtonTintEffect : RoutingEffect
    {
        public ImageButtonTintEffect() : base($"XfEffects.{nameof(ImageButtonTintEffect)}")
        {
        }
    }

All Xamarin.Forms Effect implementations need to derive from the RoutingEffect class and pass the Effect‘s name to the base class’ constructor. That’s pretty much everything we have to do for the Effect class itself.

Effect extension for passing parameters

The easiest way for us to get our desired TintColor to the platform implementation is an attached BindableProperty. To be able to attach the BindableProperty, we need a static class that provides the container for the attached property:

public static class ImageButtonTintEffectExtensions
{
    public static readonly BindableProperty TintColorProperty = BindableProperty.CreateAttached("TintColor", typeof(Color), typeof(ImageButtonTintEffectExtensions), default, propertyChanged: OnTintColorPropertyChanged);
    public static Color GetTintColor(BindableObject bindable)
    {
        return (Color)bindable.GetValue(TintColorProperty);
    }
    public static void SetTintColor(BindableObject bindable, Color value)
    {
        bindable.SetValue(TintColorProperty, value);
    }
    private static void OnTintColorPropertyChanged(BindableObject bindable, object oldValue, object newValue)
    {
    }
}

Of course, we want the TintColor property to be of type Xamarin.Forms.Color, as this makes it pretty easy to control the color from a Style or even a ViewModel.

We want our effect to be invoked only if we change the default TintColor value. This makes sure we only run code that is necessary for our application:

private static void OnTintColorPropertyChanged(BindableObject bindable, object oldValue, object newValue)
{
    if (bindable is ImageButton current)
    {
        if ((Color)newValue != default)
        {
            if (!current.Effects.Any(e => e is ImageButtonTintEffect))
                current.Effects.Add(Effect.Resolve(nameof(ImageButtonTintEffect)));
        }
        else
        {
            if (current.Effects.Any(e => e is ImageButtonTintEffect))
            {
                var existingEffect = current.Effects.FirstOrDefault(e => e is ImageButtonTintEffect);
                current.Effects.Remove(existingEffect);
            }
        }
    }
}

Last but not least in our Xamarin.Forms application, we want to use the new Effect. This is pretty easy:

<!--import the namespace-->
xmlns:effects="clr-namespace:XfEffects.Effects"
<!--then use it like this-->
<ImageButton Margin="12,0,12,12"
    effects:ImageButtonTintEffectExtensions.TintColor="Red"
    BackgroundColor="Transparent"
    HeightRequest="48"
    HorizontalOptions="CenterAndExpand"
    Source="ic_refresh_white_24dp.png"
    WidthRequest="48">
    <ImageButton.Effects>
        <effects:ImageButtonTintEffect />
    </ImageButton.Effects>
</ImageButton>

We are using the attached property we created above to provide the TintColor to the ImageButtonTintEffect, which we need to add to the ImageButton‘s Effects List.

Android implementation

Let’s have a look at the Android-specific implementation. First, let’s add the platform class and decorate it with the proper attributes to export our effect:

using Android.Content.Res;
using Android.Graphics;
using System;
using System.ComponentModel;
using Xamarin.Forms;
using Xamarin.Forms.Platform.Android;
using AWImageButton = Android.Support.V7.Widget.AppCompatImageButton;
[assembly: ResolutionGroupName("XfEffects")]
[assembly: ExportEffect(typeof(XfEffects.Droid.Effects.ImageButtonTintEffect), nameof(XfEffects.Effects.ImageButtonTintEffect))]
namespace XfEffects.Droid.Effects
{
    public class ImageButtonTintEffect : PlatformEffect
    {
        protected override void OnAttached()
        {
            
        }
        protected override void OnDetached()
        {
        }
        protected override void OnElementPropertyChanged(PropertyChangedEventArgs args)
        {
        }
    }
}

Remember: the ResolutionGroupName needs to be defined just once per app and should not change. Similar to a custom Renderer, we also need to export the definition of the platform implementation and the Forms implementation to make our Effect work.

Android controls like buttons have different states. Properties on Android controls like the color can be set based on their State attribute. Xamarin.Android implements these states in the Android.Resource.Attribute class. We define our ImageButton‘s states like this:

static readonly int[][] _colorStates =
{
    new[] { global::Android.Resource.Attribute.StateEnabled },
    new[] { -global::Android.Resource.Attribute.StateEnabled }, //disabled state
    new[] { global::Android.Resource.Attribute.StatePressed } //pressed state
};

Good to know: certain states like ‘disabled‘ are created by just adding a ‘-‘ in front of the matching state defined in the OS states list (negating it). We need this declaration to create our own ColorStateList, which will override the color of the ImageButton‘s image. Add this method to the class created above:

private void UpdateTintColor()
{
    try
    {
        if (this.Control is AWImageButton imageButton)
        {
            var androidColor = XfEffects.Effects.ImageButtonTintEffectExtensions.GetTintColor(this.Element).ToAndroid();
            var disabledColor = androidColor;
            disabledColor.A = 0x1C; // 28 of 255
            var pressedColor = androidColor;
            pressedColor.A = 0x24; // 36 of 255
            imageButton.ImageTintList = new ColorStateList(_colorStates, new[] { androidColor.ToArgb(), disabledColor.ToArgb(), pressedColor.ToArgb() });
            imageButton.ImageTintMode = PorterDuff.Mode.SrcIn;
        }
    }
    catch (Exception ex)
    {
        System.Diagnostics.Debug.WriteLine(
            $"An error occurred when setting the {typeof(XfEffects.Effects.ImageButtonTintEffect)} effect: {ex.Message}\n{ex.StackTrace}");
    }
}

This code only works on Android SDK 23 and above, as only then was the ability to modify the alpha channel of the defined color added. The Xamarin.Forms ImageButton translates into an AppCompatImageButton on Android. The AppCompatImageButton has the ImageTintList property. This property is of type ColorStateList, which uses the states we defined earlier and the matching colors for those states.

Last but not least, we need to set the composition mode. If you want to get a deeper understanding of that, a great starting point is this StackOverflow question. To make things not too complicated, we are infusing the color into the image. The final step is to call the method in the OnAttached override as well as in the OnElementPropertyChanged override.
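For clarity, the wiring in the Android effect class could look like this – a minimal sketch using the names defined above; checking the property name in OnElementPropertyChanged is my own assumption and may need adjusting to your setup:

protected override void OnAttached()
{
    UpdateTintColor();
}

protected override void OnElementPropertyChanged(PropertyChangedEventArgs args)
{
    base.OnElementPropertyChanged(args);

    //re-apply the tint when the attached TintColor or the image source changes
    if (args.PropertyName == "TintColor" || args.PropertyName == ImageButton.SourceProperty.PropertyName)
        UpdateTintColor();
}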

The result based on the sample I created looks like this:

iOS implementation

Of course, also on iOS, we have to attribute the class, similar to the Android version:

using System;
using System.ComponentModel;
using UIKit;
using Xamarin.Forms;
using Xamarin.Forms.Platform.iOS;
[assembly: ResolutionGroupName("XfEffects")]
[assembly: ExportEffect(typeof(XfEffects.iOS.Effects.ImageButtonTintEffect), nameof(XfEffects.Effects.ImageButtonTintEffect))]
namespace XfEffects.iOS.Effects
{
    public class ImageButtonTintEffect : PlatformEffect
    {
        protected override void OnAttached()
        {
        }
        protected override void OnDetached()
        {
        }
        protected override void OnElementPropertyChanged(PropertyChangedEventArgs args)
        {
        }
    }
}

The underlying control of the Xamarin.Forms ImageButton is a default UIButton on iOS. The UIButton control has a UIImageView, which can be changed with the SetImage method. Based on that knowledge, we are going to implement the UpdateTintColor method:

private void UpdateTintColor()
{
    try
    {
        if (this.Control is UIButton imagebutton)
        {
            if (imagebutton.ImageView?.Image != null)
            {
                var templatedImg = imagebutton.CurrentImage.ImageWithRenderingMode(UIImageRenderingMode.AlwaysTemplate);
                //clear the image on the button
                imagebutton.SetImage(null, UIControlState.Normal);
                imagebutton.ImageView.TintColor = XfEffects.Effects.ImageButtonTintEffectExtensions.GetTintColor(this.Element).ToUIColor();
                imagebutton.TintColor = XfEffects.Effects.ImageButtonTintEffectExtensions.GetTintColor(this.Element).ToUIColor();
                imagebutton.SetImage(templatedImg, UIControlState.Normal);
            }
        }
    }
    catch (Exception ex)
    {
        System.Diagnostics.Debug.WriteLine($"An error occurred when setting the {typeof(ImageButtonTintEffect)} effect: {ex.Message}\n{ex.StackTrace}");
    }
}

Let’s review these lines of code. The first step is to extract the already attached image as a templated image from the UIButton‘s UIImageView. The second step is to clear the image from the button, using the SetImage method and passing null as the UIImage parameter. I tried to leave out this step, but it does not work if you do.

The next step changes the TintColor for the UIButton‘s UIImageView as well as for the button itself. I was only able to get the TintColor changed once I changed both TintColor properties.

The final step is to re-attach the extracted image from step one to the UIButton‘s UIImageView, using once again the SetImage method – but this time, we are passing the extracted image as the UIImage parameter.

Of course, also here we need to call this method in the OnAttached override as well as in OnElementPropertyChanged.

The result should look similar to this one:

Conclusion

It is pretty easy to implement extended functionality with a Xamarin.Forms Effect. The process is similar to the one of creating a custom renderer. By using attached properties you can fine-tune the usage of this code and also pass property values to the platform implementations.

Please make sure to follow along for the other Effects I will post as part of this new series. I also created a sample app to demonstrate the Effects (find it on GitHub). As always, I hope this post will be helpful for some of you.

Until the next post, happy coding, everyone!
How to host a code file on Github as Gist to use in your application

What the h*** is a Gist?

In case you have never heard of Gist: it is an easy-to-use way to share code files, hosted by GitHub. Everyone with a user account can use this feature, and now that the premium features are also free (thanks to the acquisition by Microsoft), you can even share them secretly.

Where do I find my Gists?

This one is for the beginners. If you know this already, move on. Once you have logged into your GitHub account, click on your user name. This will open a menu with an option called ‘Your gists’. Once you click that one, you will see a page similar to mine (maybe with no gists in it):

gist_overview_gists

How to create a new Gist?

Well, that’s pretty easy. You just click on the ‘+’-button beside your user avatar in the top right corner:

gist_menu_add_new

This will bring up a new gist window. Enter your description and file name, fill in the content of your file (or even add more files), and hit the ‘Create public gist’ button to create your new gist. If you intend to host multiple files in your gist, please note that you will need to repeat the following steps for every single file you add (as each one has its own url).

gist_add_new

How to use this Gist in my app?

Luckily, files in GitHub repos as well as in gists can be viewed in the so-called ‘Raw’ view. You will find a corresponding button on every code file in the top right corner. Click on it, and you will see a plain-text representation (here is a sample from the one that led to this blog post; it is styled by a browser extension that makes JSON more readable – every developer should have one of these installed already, btw):

gist_raw_view

Now we are close to being able to fetch this file into our applications. If you are sure that this file will never change, just use this url. If you know that this file is subject to future changes, you will need to perform a little trick.

Getting always the latest version of our Gist

If you analyze the url, you will notice that there is a unique id between the ‘raw’ part and the file name:

gist_remove_this_id

This id represents the current revision of your Gist. To make sure we always get the latest version of our gist, we need to remove this id. The url must end with ‘raw/yourfile.extension‘, as you can see here:

gist_always_latest_revision_url

This way, you can update the file and implement an update mechanism in your app that always fetches the latest revision of that file. To fetch the file content into your app, you just need to perform a GET request against that url, without the overhead of using GitHub’s API.
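For illustration, here is a minimal sketch of such a GET request with HttpClient (the class name and the url are placeholders for your own raw gist url):

using System.Net.Http;
using System.Threading.Tasks;

public static class GistLoader
{
    private static readonly HttpClient _client = new HttpClient();

    public static async Task<string> LoadLatestAsync(string rawGistUrl)
    {
        //a plain GET against the raw url is all we need - no GitHub API, no authentication
        var response = await _client.GetAsync(rawGistUrl);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}

//usage, e.g. during app startup:
//var json = await GistLoader.LoadLatestAsync("https://gist.githubusercontent.com/<user>/<gist-id>/raw/config.json");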

Conclusion

Instead of hosting configuration or data files on a private web server, one can utilize existing infrastructure like GitHub’s. As always, I hope this post will be helpful for some of you.

Until the next post, happy coding, everyone!
How to avoid a distorted Android camera preview with ZXing.Net.Mobile [Updated]

Update: The application I used for this blog post is portrait-only, which is why I totally missed the landscape part. I have to thank Timo Partl, who pointed me to that fact on Twitter. I updated the solution/workaround part to respect the orientation as well.


If you need to implement QR scanning in your Xamarin (Forms) app, chances are high you will be using the ZXing.Net.Mobile library (as it is the most complete solution out there). In one of my recent projects, I wanted to do exactly that. I had used the library before and thought it might be an easy play.

Distorted reality…

Reality hit me hard when I realized that the preview is totally distorted:

distorted camera preview

Even with that distortion, the library detects the QR code without problems. However, the distortion is not something you want your users to experience. So I started to investigate why this distortion happens. As I am not the only one experiencing this problem, I am showing how to easily fix the issue as well as the way that led to the solution (knowing that some beginners follow my blog, too).

Searching the web …

… often brings you closer to the solution. Most of the time, you are not the only one who runs into such a problem. Doing so led me to a GitHub issue showing that my theory was correct. Sometimes, those GitHub issues provide a solution – this time, they did not. After not being able to find anything helpful, I decided to fork the GitHub repo and download it into Visual Studio.

Debugging

Once the solution was loaded in Visual Studio, I found that there are some samples in the repo that made debugging easy for me. I found the case that matched my implementation best (using the ZXingScannerView). After hitting F10 (to move step by step through the code) and F11 (to jump into methods) quite often, I understood how the code works under the hood.

The cause

If you are not telling the library the resolution you want to have, it will select one for you based on this rudimentary rule (view source on Github):

// If the user did not specify a resolution, let's try and find a suitable one
if (resolution == null)
{
    foreach (var sps in supportedPreviewSizes)
    {
        if (sps.Width >= 640 && sps.Width <= 1000 && sps.Height >= 360 && sps.Height <= 1000)
        {
            resolution = new CameraResolution
            {
                Width = sps.Width,
                Height = sps.Height
            };
            break;
        }
    }
}

Let’s break it down. This code takes the available resolutions from the (old and deprecated) Android.Hardware.Camera API and selects one that matches the ranges defined in the code, without respecting the aspect ratio of the device display. On my Nexus 5X, it always selects the resolution of 400×800. This does not match my device’s aspect ratio, but the Android OS does some stretching and squeezing to make it visible within the SurfaceView used for the preview. The result is the distortion seen above.

Note: The code above is doing exactly what it is supposed to do. However, it was last updated 3 years ago (according to GitHub). Display resolutions and aspect ratios have changed a lot during that time, so we had to arrive at this point sooner or later.

Solution/Workaround

The solution to this is pretty easy: just provide a CameraResolutionSelectorDelegate with the code setting up your ZXingScannerView. First, we need a method that returns a CameraResolution and takes a List of CameraResolution. Let’s have a look at that one first:

public CameraResolution SelectLowestResolutionMatchingDisplayAspectRatio(List<CameraResolution> availableResolutions)
{
    CameraResolution result = null;

    //a tolerance of 0.1 should not be visible to the user
    double aspectTolerance = 0.1;
    var displayOrientationHeight = DeviceDisplay.MainDisplayInfo.Orientation == DisplayOrientation.Portrait ? DeviceDisplay.MainDisplayInfo.Height : DeviceDisplay.MainDisplayInfo.Width;
    var displayOrientationWidth = DeviceDisplay.MainDisplayInfo.Orientation == DisplayOrientation.Portrait ? DeviceDisplay.MainDisplayInfo.Width : DeviceDisplay.MainDisplayInfo.Height;

    //calculating our targetRatio
    var targetRatio = displayOrientationHeight / displayOrientationWidth;
    var targetHeight = displayOrientationHeight;
    var minDiff = double.MaxValue;

    //camera API lists all available resolutions from highest to lowest, perfect for us
    //making use of this sorting, the following code runs some comparisons to select the lowest resolution that matches the screen aspect ratio and lies within the tolerance
    //selecting the lowest actually makes QR detection faster most of the time
    foreach (var r in availableResolutions.Where(r => Math.Abs(((double)r.Width / r.Height) - targetRatio) < aspectTolerance))
    {
        //slowly going down the list to the lowest matching resolution with the correct aspect ratio
        if (Math.Abs(r.Height - targetHeight) < minDiff)
            minDiff = Math.Abs(r.Height - targetHeight);

        result = r;
    }

    return result;
}

First, we are setting up a fixed tolerance for the aspect ratio. A value of 0.1 should not be recognizable by users. The next step is calculating the target ratio. I am using the Xamarin.Essentials API here because it saves me some code and works both in Xamarin.Android-only projects as well as in Xamarin.Forms ones.

Before we are able to select the best matching resolution, we need to note a few points:

  • lower resolutions result in faster QR detection (even with big ones)
  • preview resolutions are always presented landscape
  • the list of available resolutions is sorted from biggest to smallest

Considering these points, we are able to loop over the list of available resolutions. If the current ratio is out of our tolerance range, we ignore it and move on. By lowering the minDiff value with every iteration, we move down the list and arrive at the lowest possible resolution that best matches our display’s aspect ratio. In the case of my Nexus 5X, that is 480×800 with an aspect ratio of ~1.6667, which matches the display aspect ratio of ~1.6611 pretty closely.

Delegating the selection call

Now that we have our calculating method in place, we need to pass the method via the CameraResolutionSelectorDelegate to our MobileBarcodeScanningOptions.

If you are on Xamarin.Android, your code will look similar to this:

var options = new ZXing.Mobile.MobileBarcodeScanningOptions()
{
    PossibleFormats = new List<ZXing.BarcodeFormat>() { ZXing.BarcodeFormat.QR_CODE },
    CameraResolutionSelector = new CameraResolutionSelectorDelegate(SelectLowestResolutionMatchingDisplayAspectRatio)
};

If you are on Xamarin.Forms, you will have to use the DependencyService to get to the same result (as the method above has to be written within the Android project):

var options = new ZXing.Mobile.MobileBarcodeScanningOptions()
{
    PossibleFormats = new List<ZXing.BarcodeFormat>() { ZXing.BarcodeFormat.QR_CODE },
    CameraResolutionSelector = DependencyService.Get<IZXingHelper>().CameraResolutionSelectorDelegateImplementation
};
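The IZXingHelper interface and its Android implementation are not shown above, so here is a minimal sketch of how they could look. The names match the snippet above; everything else (namespaces, the Dependency registration) is an assumption on my side:

//shared project (e.g. IZXingHelper.cs): the contract resolved via DependencyService
public interface IZXingHelper
{
    ZXing.Mobile.CameraResolutionSelectorDelegate CameraResolutionSelectorDelegateImplementation { get; }
}

//Android project (e.g. ZXingHelper.cs): the implementation, registered for the DependencyService
[assembly: Xamarin.Forms.Dependency(typeof(MyApp.Droid.ZXingHelper))]
namespace MyApp.Droid
{
    public class ZXingHelper : IZXingHelper
    {
        public ZXing.Mobile.CameraResolutionSelectorDelegate CameraResolutionSelectorDelegateImplementation =>
            SelectLowestResolutionMatchingDisplayAspectRatio;

        public ZXing.Mobile.CameraResolution SelectLowestResolutionMatchingDisplayAspectRatio(System.Collections.Generic.List<ZXing.Mobile.CameraResolution> availableResolutions)
        {
            //the implementation shown earlier in this post goes here;
            //returning null simply falls back to the default resolution (see Remarks below)
            return null;
        }
    }
}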

The result

Now that we have an updated resolution selection mechanism in place, the result is exactly what we expected, without any distortion:

matching camera preview

Remarks

In case none of the camera resolutions gets selected, the camera preview automatically uses the default resolution. In my tests with three devices, this was always the highest one. The default resolution normally matches the device’s aspect ratio; as it is the highest, however, it will slow down the QR detection.

On Android, the ZXing.Net.Mobile library uses the deprecated Android.Hardware.Camera API and the Android.FastCamera library. The next step would be to migrate over to the Android.Hardware.Camera2 API, which makes the FastCamera library obsolete and is future proof. I already had a look into that step, as I need to advance with QR scanning in two projects (one personal and one at work); however, I postponed this change.

Conclusion

For the time being, we are still able to use the deprecated mechanism to get our camera preview right. After identifying the reason for the distortion, I arrived pretty quickly at a workaround/solution that should fit most use cases. As devices and their specs are evolving, we are at least not left behind. I will do another write-up once I find the time to replace the deprecated API in my fork.

As always, I hope this post will be helpful for some of you.

Until the next post, happy coding, everyone!

Handle AtomicPay’s webhook with Microsoft Azure (Part 3/3) – sending push notifications with Notification Hub

Series overview

  1. Handle the incoming notification with an Azure Function (first post)
  2. Verify the invoice state within the Azure Function, but store the API credentials in the most secure way (second post)
  3. Send a push notification via Azure Notification Hub to all devices that have linked a merchant’s account to it (this post)

Preparations…

With Azure Notification Hub, we have one of the most powerful tools for sending push notifications out to connected devices. It is designed to handle multiple platforms, so let’s create a new Notification Hub in the Azure Portal. After logging in, select ‘Create a resource‘, followed by ‘Mobile‘ and a final click on ‘Notification Hub‘:

Azure: create a new Notification Hub

Fill in the details of the new Hub. Remember to select your already existing Resource Group and the same region that is used for your Azure Function:

Azure: new Notification Hub creation details

After about a minute, your newly created Notification Hub is ready to use. Select ‘Go to resource‘ to open it.

Azure Deployment finished

Now we are already able to connect our first platform. For this sample, I will focus on Android. Log in to the Firebase console with your Google account. Select ‘Add project‘ and configure your Android app (or select your existing one, if you already have one like I do):

Firebase: Add or select project

After following all (self-explanatory) steps, scroll a bit down in the General tab of your Firebase project. You will find an entry like this:

Firebase: download google-services.json file

Download the ‘google-services.json‘-file. We will need it later in our Android app. Now select the ‘Cloud Messaging‘ tab. You will be presented with two keys – copy the upper one:

Firebase: copy server key

Go back to the Azure portal and select ‘Google (GCM/FCM)‘ in ‘Settings‘. Paste the key copied earlier and hit the ‘Save‘ button:

Azure: paste Firebase server key and save

Now we need to authenticate our Azure Function against the Notification Hub. Select ‘Access Policies‘ in the ‘Manage’ section of your Hub and copy the lower ConnectionString with the ‘Listen,Manage,Send‘ permissions:

Azure: copy server ConnectionString

Open your Azure Function in a new tab. In the ‘Application settings‘, add a new setting and paste the ConnectionString. Add another one for the NotificationHub’s name:

Azure Function add Hub name and ConnectionString to Application settings

Go back to the Notification Hub and save the upper ConnectionString locally, as we will need that one in our Android application later on.

Back to code…

Now that we have prepared the Notification Hub on Azure, let’s write some code that actually sends out our notifications once our Function has verified our AtomicPay invoice. In the last post, we already rewrote some of the code in preparation for the final step, our push notifications.

Triggering push notifications

Before we are able to modify our Function code to actually send the notification, we need to install an additional NuGet package: Microsoft.Azure.Webjobs.Extensions.NotificationHub. Please note that I was only able to get all of this up and running with version 1.2.0; version 1.3.0 seems not to be compatible with v1 Functions.

Triggering push notifications can be broken down into these 3 steps:

  1. create a Hub client from the ConnectionString
  2. create the content of the notification
  3. finally send the notification

This translates into this Task inside our InvoiceVerifier class:

        private static async Task TriggerPushNotification(InvoiceInfoDetails invoiceInfoDetails, string accId)
        {
            //we need full shared access connection string (server side)
            var connectionString = ConfigurationManager.AppSettings["NotifHubConnectionString"];
            var hubName = ConfigurationManager.AppSettings["NotifHubName"];

            var hub = NotificationHubClient.CreateClientFromConnectionString(connectionString, hubName);

            var templateParams = new Dictionary<string, string>();
            templateParams["body"] = $"Received payment for invoice id: {invoiceInfoDetails.InvoiceId} ({invoiceInfoDetails.Status.ToString()})";

            var outcome = await hub.SendTemplateNotificationAsync(templateParams, $"accId:{accId}");

            _traceWriter.Info($"attempted to inform merchant {accId} of payment via push");
        }

We are loading the ConnectionString and the Hub’s name from our Function’s Application settings and create a new client connection using these two safely stored properties. To keep this sample simple, I am using a templated notification that can be used across all supported platforms. The receiver is responsible for the handling of this template (we’ll see how later in this post). Finally, we are sending out the notification to the native platforms, where it will be distributed (in our case, via Firebase Cloud Messaging). By including the accId:{accId} tag, we are sending the push notification only to the devices that were registered with that specific merchant account.

Of course, this nicely written method does nothing until we actually use it. Let’s update our Run method. We only need to add one line for our test in the switch that handles the returned invoiceInfoDetails:

switch (invoiceInfoDetails.Status)
{
	case AtomicPay.Entity.InvoiceStatus.Paid:
	case AtomicPay.Entity.InvoiceStatus.PaidAfterExpiry:
	case AtomicPay.Entity.InvoiceStatus.Overpaid:
	case AtomicPay.Entity.InvoiceStatus.Complete:
		log.Info($"invoice with id {invoiceInfoDetails.InvoiceId} is paid");
		await TriggerPushNotification(invoiceInfoDetails, accId);
		break;
	default:
		log.Info($"invoice with id {invoiceInfoDetails.InvoiceId} is not yet paid");
		//todo: this will trigger additional status handling in future
		break;
}

Publish your updated Azure Function. It is now connected to a Notification Hub and able to send out push notifications. Of course, push notifications ending up in no-man’s-land are boring, so let’s go ahead.

Receiving the push notifications

Create a new Xamarin.Android app (with XAML or without, your choice). Of course, also here we have some additional setup to perform.

Import google-services.json

Add the google-services.json we downloaded before from Firebase to your project. Set its Build Action to GoogleServicesJson in the Properties window. I needed to restart Visual Studio to be able to select this option after adding the file.

VS-google-services-build-action

Installing NuGet packages

Of course, we also need to install some additional NuGet packages:

The second step involves some changes to the AndroidManifest, giving the Firebase package some permissions and handling its intents in the application tag:

<receiver android:name="com.google.firebase.iid.FirebaseInstanceIdInternalReceiver" android:exported="false" />
<receiver android:name="com.google.firebase.iid.FirebaseInstanceIdReceiver" android:exported="true" android:permission="com.google.android.c2dm.permission.SEND">
	<intent-filter>
		<action android:name="com.google.android.c2dm.intent.RECEIVE" />
		<action android:name="com.google.android.c2dm.intent.REGISTRATION" />
		<category android:name="${applicationId}" />
	</intent-filter>
</receiver>

Next, we will add a new constants class:

public static class Constants
{
	public const string ListenConnectionString = "<Listen connection string>";
	public const string NotificationHubName = "<hub name>";
}

On the client, we are only using the ConnectionString with the lower (Listen) permission that we copied earlier from the Azure Notification Hub.

Connecting to Firebase

As our Azure Notification Hub sends notifications via Firebase, Google’s native messaging handler, we need a service that connects our app to it. The service requests a token (the Xamarin library does all that complex stuff for us) that we’ll need to actually register the device for the notifications. Add a new class called PlatformFirebaseIidService and decorate it with the Service attribute. Besides that, we need to register our interest in the com.google.firebase.INSTANCE_ID_EVENT, which will call into the OnTokenRefresh() method we will override. We’ll end up with this:

[Service]
[IntentFilter(new[] { "com.google.firebase.INSTANCE_ID_EVENT" })]
public class PlatformFirebaseIidService : FirebaseInstanceIdService
{
	const string TAG = "PlatformFirebaseIidService";
	NotificationHub _hub;

	public override void OnTokenRefresh()
	{
		var refreshedToken = FirebaseInstanceId.Instance.Token;
		Log.Debug(TAG, "FCM token: " + refreshedToken);		
	}
}

Now that we hold a fresh Firebase instance token, we can register our app for receiving push notifications. To do this, we’re adding a new method:

void SendRegistrationToServer(string token, List<string> tags = null)
{
	// Register with Notification Hubs
	_hub = new NotificationHub(Constants.NotificationHubName, Constants.ListenConnectionString, this);

	if (tags == null)
		tags = new List<string>() { };

	var templateBody = "{\"data\":{\"message\":\"$(body)\"}}";

	//this one registers a template that can be used cross platform
	//just make sure the template is the same on iOS, Windows, etc.
	var registerTemplate = _hub.RegisterTemplate(token, "defaultTemplate", templateBody, tags.ToArray());
	Log.Debug(TAG, $"Successful registration of Template {registerTemplate.RegistrationId}");
}

Let me break this method down. Of course, we need to connect to our Notification Hub, which is responsible for deciding whether our client is a valid receiver or not. If we add tags, they will get sent along with the registration. We will add the merchant’s account Id to filter the receivers. As we are sending notifications using a template, we need to register for the template that gets filled by our Azure Function. One thing is left: calling this method after obtaining a token in the OnTokenRefresh override:

SendRegistrationToServer(refreshedToken, new List<string>() { "accId:<yourAccId>" });
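Putting the two snippets together, the completed override could look like this (the tag value is, of course, your own merchant account id):

public override void OnTokenRefresh()
{
    var refreshedToken = FirebaseInstanceId.Instance.Token;
    Log.Debug(TAG, "FCM token: " + refreshedToken);

    //register with the Notification Hub, filtered by the merchant account id
    SendRegistrationToServer(refreshedToken, new List<string>() { "accId:<yourAccId>" });
}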

Handling incoming Firebase messages

Now that our client is registered with both Firebase and the Azure Notification Hub, of course we want it to be able to receive the pushed messages. To achieve this, we need another Service. Add a new class called PlatformFirebaseMessagingService and decorate it once again with the Service attribute. This time, we are interested in the com.google.firebase.MESSAGING_EVENT intent, so let’s add that one as well. The service is responsible for parsing our payload and triggering a notification helper that shows the notification. Here is the code of the service:

[Service]
[IntentFilter(new[] { "com.google.firebase.MESSAGING_EVENT" })]
public class PlatformFirebaseMessagingService : FirebaseMessagingService
{
	const string TAG = "PlatformFirebaseMessagingService";

	public override void OnMessageReceived(RemoteMessage message)
	{
		Log.Debug(TAG, "From: " + message.From);
		string title = "Azure Test message";
		string body = null;

		if (message.GetNotification() != null)
		{
			//This is how most messages will be received
			body = message.GetNotification().Body;
			title = message.GetNotification().Title;
			Log.Debug(TAG, $"Notification Message Body: {body}");
		}
		else
		{
			//Only used for debugging payloads sent from the Azure portal
			body = message.Data.Values.First();
		}

		var notification = NotificationHelper.Instance.GetNotificationBuilder(title, body);
		NotificationHelper.Instance.Notify(1001, notification);

	}
}

Of course, you are curious about the NotificationHelper class. Let’s have a look. Besides being a Singleton, it needs to retrieve an instance of the system’s NotificationService. As we do not have multiple notification channels in this sample, it is enough to declare both the channel name and its id in two constants. In the constructor of the class, we are initializing the channel:

public class NotificationHelper : ContextWrapper
{
	private static NotificationHelper _instance;

	public static NotificationHelper Instance => _instance ?? (_instance = new NotificationHelper(Application.Context));
	
	const string NOTIFICATION_CHANNEL_ID = "default";
	const string NOTIFICATION_CHANNEL_NAME = "notif_test_channel";

	NotificationManager _manager;
	NotificationManager Manager => _manager ?? (_manager = (NotificationManager)GetSystemService(NotificationService));

	public NotificationHelper(Context context) : base(context)
	{
		var channel = new NotificationChannel(NOTIFICATION_CHANNEL_ID, NOTIFICATION_CHANNEL_NAME, NotificationImportance.Default);
		channel.EnableVibration(true);
		channel.LockscreenVisibility = NotificationVisibility.Public;
		
		this.Manager.CreateNotificationChannel(channel);
	}
}

Creating the notifications involves the Notification.Builder class. We’re simplifying the process with this method:

public Notification.Builder GetNotificationBuilder(string title, string body)
{
	var intent = new Intent(this.ApplicationContext, typeof(MainActivity));
	intent.AddFlags(ActivityFlags.ClearTop);
	var pendingIntent = PendingIntent.GetActivity(this, 0, intent, PendingIntentFlags.OneShot);

	return new Notification.Builder(this.ApplicationContext, NOTIFICATION_CHANNEL_ID)
		.SetContentTitle(title)
		.SetContentText(body)
		.SetSmallIcon(Resource.Drawable.ic_launcher)
		.SetAutoCancel(true)
		.SetContentIntent(pendingIntent);
}

To read more about creating notifications, have a look at the docs for local (= client side) notifications.

Last but not least, we need to tell the system’s notification service to show the notification. The final helper method for this looks like this:

public void Notify(int id, Notification.Builder notificationBuilder)
{
	this.Manager.Notify(id, notificationBuilder.Build());
}

To test the notification, we have two options – one is to send a test notification via the Notification Hub, the other is to use Postman once again to trigger our Azure Function. In both cases, your result should be a notification on your device (after you deployed and ran the application without the debugger attached).

atomicpay-webhook-notification

Conclusion

In this last post of the series, I showed you all the steps needed to send out push notifications utilizing an Azure Notification Hub. It takes a bit of setup in the beginning, but the code involved is pretty easy and straightforward.

Now that the series is complete, you can have a look at the source code on GitHub. You need to add your own google-services.json file as well as your own keys to run the sample. As always, I hope this post, as well as this series, is helpful for some of you.

Until the next post, happy coding, everyone!
Use NuGets for your common Xamarin (Forms) code (and automate the creation process)

Internal libraries

Writing (or copying and pasting) the same code over and over again is one of those things I try to avoid when writing code. For quite some time, I have been organizing such code in libraries. Until last year, this required quite some work managing all the libraries for each Xamarin platform I used. Luckily, the MSBuild SDK Extras extensions showed up and made everything a whole lot easier, especially after James Montemagno did a detailed explanation of how to get the most out of it for Xamarin plugins/libraries.

Getting started

Even if I repeat some of the steps of James’ post, I’ll start from scratch on the setup part here. I hope to make the whole process straightforward for everyone – that’s why I think it makes sense to show each and every step. Please make sure you are using the new .csproj type. If you need a refresher on that, you can check my post about migrating to it (if needed).

MSBuild.Sdk.Extras

The first step is pulling in MSBuild.Sdk.Extras, which will enable us to target multiple platforms in one single library. For this, we need a global.json file in the solution folder. Right click on the solution name and select ‘Open Folder in File Explorer‘, then just add a new text file and name it appropriately.

The next step is to define the version of the MSBuild.Sdk.Extras library we want to use. The current version is 1.6.65, so let’s define it in the file. Just click the ‘Solution and Folders‘ button to find the file in Visual Studio:

switch to folder view

Add these lines into the file and save it:

{
  "msbuild-sdks": {
    "MSBuild.Sdk.Extras": "1.6.65"
  }
}

Modifying the project file

Switch back to the Solution view and right click on the .csproj file. Select ‘Edit [ProjectName].csproj‘. Let’s modify and add the project definitions. We’ll start right in the first line. Replace the first line to pull in the MSBuild.Sdk.Extras:

<Project Sdk="MSBuild.Sdk.Extras">

Next, we’re separating out the Version tag. This will ensure that we’ll find it very quickly within the file in the future:

  <!--separated for accessibility-->
  <PropertyGroup>
    <Version>1.0.0.0</Version>
  </PropertyGroup>

Now we are enabling multiple targets, in this case our Xamarin platforms. Please note that there are two separate versions – one that includes UWP and one that does not. I thought I would be fine removing the non-UWP one when including UWP, and was presented with some strange build errors that were resolved only by re-adding the deleted line. I do not remember the reason, but I made a comment in my template to not remove it – so let’s just keep it that way.

  <!--make it multi-platform library!-->
  <PropertyGroup>
    <UseFullSemVerForNuGet>false</UseFullSemVerForNuGet>
    <!--we are handling compile items ourselves below with a custom naming scheme-->
    <EnableDefaultCompileItems>false</EnableDefaultCompileItems>
    <!--KEEP ALL THREE IF YOU ADD UWP!-->
    <TargetFrameworks></TargetFrameworks>
    <TargetFrameworks Condition=" '$(OS)' == 'Windows_NT' ">netstandard2.0;MonoAndroid81;Xamarin.iOS10;uap10.0.16299;</TargetFrameworks>
    <TargetFrameworks Condition=" '$(OS)' != 'Windows_NT' ">netstandard2.0;MonoAndroid81;Xamarin.iOS10;</TargetFrameworks>
  </PropertyGroup>

Now we will add some default NuGet packages to the project and make sure our files get included only on the correct platform. We follow a simple file naming scheme (Xamarin.Essentials uses the same):

[Class].[platform].cs
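Following that scheme, a single folder in the library could look like this (a hypothetical example):

Hello\
    Hello.shared.cs      //compiled for every target
    Hello.android.cs     //compiled only for MonoAndroid
    Hello.ios.cs         //compiled only for Xamarin.iOS
    Hello.netstandard.cs //compiled only for netstandard2.0
    Hello.uwp.cs         //compiled only for uap10.0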

This way, we are able to add all platform-specific code together with the shared entry point in a single folder. Let’s start with the shared items. These will be available on all platforms listed in the PropertyGroup above:

  <!--shared items-->
  <ItemGroup>
    <!--keeping this one ensures everything goes smooth-->
    <PackageReference Include="MSBuild.Sdk.Extras" Version="1.6.65" PrivateAssets="All" />

    <!--most commonly used (by me)-->
    <PackageReference Include="Xamarin.Forms" Version="3.4.0.1029999" />
    <PackageReference Include="Xamarin.Essentials" Version="1.0.1" />

    <!--include content, exclude obj and bin folders-->
    <None Include="**\*.cs;**\*.xml;**\*.axml;**\*.png;**\*.xaml" Exclude="obj\**\*.*;bin\**\*.*;bin;obj" />
    <Compile Include="**\*.shared.cs" />
  </ItemGroup>

The ‘**\‘ part in the Include attribute of the Compile tag ensures MSBuild also includes classes in subfolders. Now let’s add some platform-specific rules to the project:

  <ItemGroup Condition=" $(TargetFramework.StartsWith('netstandard')) ">
    <Compile Include="**\*.netstandard.cs" />
  </ItemGroup>

  <ItemGroup Condition=" $(TargetFramework.StartsWith('uap10.0')) ">
    <PackageReference Include="Microsoft.NETCore.UniversalWindowsPlatform" Version="6.1.9" />
    <Compile Include="**\*.uwp.cs" />
  </ItemGroup>

  <ItemGroup Condition=" $(TargetFramework.StartsWith('MonoAndroid')) ">
    <!--need to reference all those libs to get latest minimum Android SDK version (requirement by Google)... #sigh-->
    <PackageReference Include="Xamarin.Android.Support.Annotations" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.Compat" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.Core.Utils" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.CustomTabs" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.v4" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.Design" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.v7.AppCompat" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.v7.CardView" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.v7.Palette" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.v7.MediaRouter" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.Core.UI" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.Fragment" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.Media.Compat" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.v7.RecyclerView" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.Transition" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.Vector.Drawable" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.Vector.Drawable" Version="28.0.0.1" />
    <Compile Include="**\*.android.cs" />
  </ItemGroup>

  <ItemGroup Condition=" $(TargetFramework.StartsWith('Xamarin.iOS')) ">
    <Compile Include="**\*.ios.cs" />
  </ItemGroup>

Two side notes:

  • Do not reference version 6.2.2 of the Microsoft.NETCore.UniversalWindowsPlatform NuGet. There seems to be a bug in there that will lead to rejection of your app from the Microsoft Store. Just keep 6.1.9 (for the moment).
  • You may not need all of the Xamarin.Android packages, but there are a bunch of dependencies between them and others, so I decided to keep them all.

If you have followed along, hit the save button and close the .csproj file. Verifying everything went well is pretty easy – your solution structure should look like this:

multi-targeting-project

Before we’ll have a look at the NuGet creation part of this post, let’s add some sample code. Just insert this into static partial classes with the appropriate naming scheme for every platform and edit the code to match the platform. The .shared version of this should be empty (for this sample).

    public static partial class Hello
    {
        public static string Name { get; set; }

        public static string Platform { get; set; }

        public static void Print()
        {
            if (!string.IsNullOrEmpty(Name) && !string.IsNullOrEmpty(Platform))
                System.Diagnostics.Debug.WriteLine($"Hello {Name} from {Platform}");
            else
                System.Diagnostics.Debug.WriteLine($"Hello unknown person from {Device.Android}");
        }
    }

Normally, this would be a Renderer or other platform specific code. You should get the idea.
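To see the multi-targeting in action, the consuming app just calls the shared entry point – a trivial sketch with hypothetical values:

//e.g. somewhere in the app referencing the library (or the NuGet package)
Hello.Name = "World";
Hello.Platform = "Android";
Hello.Print(); //writes the greeting to the debug output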

Preparing NuGet package creation

We will now prepare our solution to automatically generate NuGet packages for both the DEBUG and RELEASE configurations. Once the packages are created, we will push them to a local (or network) file folder, which serves as our local NuGet server. This will fit most indie developers, who tend not to replicate a full-blown enterprise infrastructure for their DevOps needs. I will also mention, as a side note, how you could push the packages to an internal NuGet server (we are using a similar setup at work).
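To consume the packages from that folder, one option is to register it as a package source, for example with a nuget.config entry like this (the path is just a placeholder):

<configuration>
  <packageSources>
    <add key="LocalPackages" value="C:\NuGet\LocalPackages" />
  </packageSources>
</configuration>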

Adding NuGet Push configurations

One thing we want to make sure of is that we are not going to push packages on every compilation of our library. That’s why we need separate configurations. To add new configurations, open the Configuration Manager in Visual Studio:

In the Configuration Manager dialog, select the ‘<New…>‘ option from the ‘Active solution configuration‘ ComboBox:

Name the new config to fit your needs; I just use DebugNuget, which signals that we are pushing the NuGet package for distribution. I am copying the settings from the Debug configuration and letting Visual Studio add the configurations to the project files within the solution. Repeat the same for the Release configuration.

The result should look like this:

Modifying the project file (again)

If you head over to your project file, you will see the Configurations tag has new entries:

  <PropertyGroup>
    <Configurations>Debug;Release;DebugNuget;ReleaseNuget</Configurations>
  </PropertyGroup>

Next, add the properties of your assembly and package:

  <!--assembly properties-->
  <PropertyGroup>
    <AssemblyName>XamarinNugets</AssemblyName>
    <RootNamespace>XamarinNugets</RootNamespace>
    <Product>XamarinNugets</Product>
    <AssemblyVersion>$(Version)</AssemblyVersion>
    <AssemblyFileVersion>$(Version)</AssemblyFileVersion>
    <NeutralLanguage>en</NeutralLanguage>
    <LangVersion>7.1</LangVersion>
  </PropertyGroup>

  <!--nuget package properties-->
  <PropertyGroup>
    <PackageId>XamarinNugets</PackageId>
    <PackageLicenseUrl>https://github.com/MSiccDevXamarinNugets</PackageLicenseUrl>
    <PackageProjectUrl>https://github.com/MSiccDevXamarinNugets</PackageProjectUrl>
    <RepositoryUrl>https://github.com/MSiccDevXamarinNugets</RepositoryUrl>

    <PackageReleaseNotes>Xamarin Nugets sample package</PackageReleaseNotes>
    <PackageTags>xamarin, windows, ios, android, xamarin.forms, plugin</PackageTags>

    <Title>Xamarin Nugets</Title>
    <Summary>Xamarin Nugets sample package</Summary>
    <Description>Xamarin Nugets sample package</Description>

    <Owners>MSiccDev Software Development</Owners>
    <Authors>MSiccDev Software Development</Authors>
    <Copyright>MSiccDev Software Development</Copyright>
  </PropertyGroup>

Configuration specific properties

Now we will add some configuration specific PropertyGroups that control if a package will be created.

Debug and DebugNuget

  <PropertyGroup Condition=" '$(Configuration)'=='Debug' ">
    <DefineConstants>DEBUG</DefineConstants>
    <!--making this pre-release-->
    <PackageVersion>$(Version)-pre</PackageVersion>
    <!--needed for debugging!-->
    <DebugType>full</DebugType>
    <DebugSymbols>true</DebugSymbols>
  </PropertyGroup>

  <PropertyGroup Condition=" '$(Configuration)'=='DebugNuget' ">
    <DefineConstants>DEBUG</DefineConstants>
    <!--enable package creation-->
    <GeneratePackageOnBuild>true</GeneratePackageOnBuild>
    <!--making this pre-release-->
    <PackageVersion>$(Version)-pre</PackageVersion>
    <!--needed for debugging!-->
    <DebugType>full</DebugType>
    <DebugSymbols>true</DebugSymbols>
    <GenerateDocumentationFile>false</GenerateDocumentationFile>
    <!--this makes msbuild creating src folder inside the symbols package-->
    <IncludeSource>True</IncludeSource>
    <IncludeSymbols>True</IncludeSymbols>
  </PropertyGroup>

The Debug configuration enables us to step into the code while we reference the project directly during development, while the DebugNuget configuration will additionally generate a NuGet package including source and symbols. This is helpful once you hit a bug in the NuGet package, as it allows us to step into that code even when we reference the NuGet instead of the project. Both configurations add ‘-pre‘ to the version, making these packages appear only if you tick the ‘Include prerelease‘ CheckBox in the NuGet Package Manager.
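All of these snippets assume that $(Version) resolves to an actual version number. If you do not set it elsewhere (for example from your CI system), a simple property in the project file does the job – the value below is just a placeholder:

  <PropertyGroup>
    <Version>1.0.0</Version>
  </PropertyGroup>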

Release and ReleaseNuget

  <PropertyGroup Condition=" '$(Configuration)'=='Release' ">
    <DefineConstants>RELEASE</DefineConstants>
    <PackageVersion>$(Version)</PackageVersion>
  </PropertyGroup>

  <PropertyGroup Condition=" '$(Configuration)'=='ReleaseNuget' ">
    <DefineConstants>RELEASE</DefineConstants>
    <PackageVersion>$(Version)</PackageVersion>
    <!--enable package creation-->
    <GeneratePackageOnBuild>true</GeneratePackageOnBuild>
    <!--include pdb for analytic services-->
    <DebugType>pdbonly</DebugType>
    <GenerateDocumentationFile>true</GenerateDocumentationFile>
  </PropertyGroup>

The Release configuration gets along with fewer settings. We do not generate a separate symbols package here, as the .pdb file without the source will do in most cases.

Adding Build Targets

We are close to finishing our implementation. Of course, we want to make sure we push only the latest packages. To ensure this, we clean all previously generated NuGet packages before we build the project/solution:

  <!--cleaning older nugets-->
  <Target Name="CleanOldNupkg" BeforeTargets="Build">
    <ItemGroup>
      <FilesToDelete Include="$(ProjectDir)$(BaseOutputPath)$(Configuration)\$(AssemblyName).*.nupkg"></FilesToDelete>
    </ItemGroup>
    <Delete Files="@(FilesToDelete)" />
    <Message Text="Old nupkg in $(ProjectDir)$(BaseOutputPath)$(Configuration) deleted." Importance="High"></Message>
  </Target>

MSBuild provides a lot of configuration options here. We are setting the BeforeTargets property of the target to Build, so every time we Clean/Build/Rebuild, all old packages are deleted by the Delete command. Finally, we print a message to confirm the deletion.

Pushing the packages

After completing all these steps above, we are ready to distribute our packages. In our case, we are copying the packages to a local folder with the Copy command.

  <!--pushing to local folder (or network path)-->
  <Target Name="PushDebug" AfterTargets="Pack" Condition="'$(Configuration)'=='DebugNuget'">
    <ItemGroup>
      <PackageToCopy Include="$(ProjectDir)$(BaseOutputPath)$(Configuration)\$(AssemblyName).*.symbols.nupkg"></PackageToCopy>
    </ItemGroup>
    <Copy SourceFiles="@(PackageToCopy)" DestinationFolder="C:\TempLocNuget" />
    <Message Text="Copied '@(PackageToCopy)' to local Nuget folder" Importance="High"></Message>
  </Target>

  <Target Name="PushRelease" AfterTargets="Pack" Condition="'$(Configuration)'=='ReleaseNuget'">
    <ItemGroup>
      <PackageToCopy Include="$(ProjectDir)$(BaseOutputPath)$(Configuration)\$(AssemblyName).*.nupkg"></PackageToCopy>
    </ItemGroup>
    <Copy SourceFiles="@(PackageToCopy)" DestinationFolder="C:\TempLocNuget" />
    <Message Text="Copied '@(PackageToCopy)' to local Nuget folder" Importance="High"></Message>
  </Target>

Please note that the local folder could be replaced by a network path. You have to ensure the availability of that path yourself, which adds some additional work if you choose this route.
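On the consuming side, the folder just needs to be registered as a package source. A minimal nuget.config next to the consuming solution could look like the following sketch (it uses the folder from the targets above – adjust the path to your own folder or network share):

  <?xml version="1.0" encoding="utf-8"?>
  <configuration>
    <packageSources>
      <add key="LocalNuget" value="C:\TempLocNuget" />
    </packageSources>
  </configuration>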

If you’re running a full NuGet server (as often happens in Enterprise environments), you can push the packages with this command (instead of the Copy command):

<Exec Command="NuGet push &quot;$(ProjectDir)$(BaseOutputPath)$(Configuration)\$(AssemblyName).*.symbols.nupkg&quot; [YourPublishKey] -Source [YourNugetServerUrl]" />

The result

If we now select the DebugNuget/ReleaseNuget configuration, Visual Studio will create our NuGet package and push it to our Nuget folder/server:

Let’s have a look into the NuGet package as well. Open your file location defined above and search your package:

As you can see, the Copy command executed successfully. To inspect NuGet packages, you need the NuGet Package Explorer app. Once installed, just double click the package to view its contents. Your result should be similar to this for the DebugNuGet package:

As you can see, we have both the .pdb files as well as the source in our package as intended.

Conclusion

Even as an indie developer, you can take advantage of the DevOps options provided with Visual Studio and MSBuild. The MSBuild.Sdk.Extras package enables us to maintain a multi-targeting package for our Xamarin(.Forms) code. The whole process needs some setup, but once you have performed the steps above, extending your libraries is straightforward.

I planned to write this post for quite some time, and I am happy with doing it as my contribution to the #XamarinMonth (initiated by Luis Matos). As always, I hope this post is helpful for some of you. Feel free to clone and play with the full sample I uploaded on Github.

Until the next post, happy coding, everyone!

Helpful links:

Title image credit

P.S. Feel free to download the official app for my blog (that uses a lot of what I am blogging about):
iOS | Android | Windows 10

Posted by msicc in Azure, Dev Stories, iOS, UWP, Xamarin, 3 comments
Handle AtomicPay’s webhook with Microsoft Azure (Part 2/3) – invoice verification with Azure KeyVault

Handle AtomicPay’s webhook with Microsoft Azure (Part 2/3) – invoice verification with Azure KeyVault

Series overview

  1. Handle the incoming notification with an Azure Function (first post)
  2. Verify the invoice state within the Azure Function, but store the API credentials in the most secure way (this post)
  3. Send a push notification via Azure Notificationhub to all devices that have linked a merchant’s account to it (third post)

Storing credentials securely

When we consume web services, we need to authenticate our applications against them/their API. In the past, I often saw (and in fact, also wrote) samples that ignore the security of those credentials and provide them directly in code, leaving additional research to the reader (if you are new to that topic) and making the samples incomplete. As the goal of this post is to verify the state of a received invoice against the AtomicPay API, the first step is to provide a secure mechanism for storing our API credentials on Azure.

Get your AtomicPay API keys

If you haven’t signed up for an AtomicPay account, you should probably do it now. Once you have logged in, open the menu and select ‘API Integration‘:

AtomicPay menu select API Integration

On this page, you find your pre-generated API-Keys. Should your keys be compromised, you will be able to regenerate your keys here as well:

AtomicPay API Integration page

Introducing Azure KeyVault

According to the Azure KeyVault documentation, it is

a tool for securely storing and accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, or certificates. A vault is a logical group of secrets.

As we are dealing with a public/private key credential combo, Azure KeyVault fits our goal to securely store the credentials perfectly. Since November last year, Azure Functions are able to use the KeyVault via their AppSettings. This is what the first half of this post is about.

Creating a new KeyVault

Login to the Azure Portal and go to the Marketplace. Search for ‘Key Vault‘ there and select it from the results:

Azure Marketplace search for Key Vault

You will be prompted with the creation menu. Give it a unique name and select your existing resource group (which will make sure we are in the correct location automatically). Once done, click on the create button:

Azure create new key vault menu

It will take a minute or two until the deployment is done. From the notification, you’ll get a direct link to the resource:

In the overview menu, select ‘Secrets’, where we will store our credentials:

Azure KeyVault menu - select Secrets

If you are wondering why we are using the ‘Secrets‘ option over ‘Keys‘ – the keys are already generated by AtomicPay, and the import function wants a backup file. I did not have a deeper look into this option as the ‘Secrets‘ option offers all I need.

The next step is to add a secret entry for AtomicPay’s account id, public and private key:

Azure KeyVault add secret

Once you have added the Application ID and your keys into ‘Secrets‘, click on ‘Overview‘. On the right-hand side, you’ll see an entry called ‘DNS Name’. Hover with the mouse to reveal the copy button and copy it:

Azure KeyVault Url Copy

Now go to your Azure Function and select ‘Application settings‘. Scroll a bit down and add a new setting. You can name it whatever you want, I just used ‘KeyVaultUri‘. Paste the url you have copied earlier into the value field. Do not forget to hit the ‘Save‘-button after that:

Azure Function - Application settings

Now we are just one step away from being able to use the KeyVault in our code. Go back to ‘Overview‘ on your Function and click on the ‘Platform features‘ tab. Select ‘Identity‘ there:

Azure Function select Identity

We need to enable the ‘System assigned‘ managed identity feature to allow the Function to interact with the Azure KeyVault from our code. This will add our Function to an Azure AD instance that handles the access rights for us:

Azure function enable System assigned identity

Now we have everything in place to use the KeyVault to retrieve the API credentials we need for invoice verification.

Retrieving credentials in our code

Before we start to rewrite our function code, we need to install two NuGet packages into our Function project. Right click on the project and select ‘Manage NuGet Packages…‘. Search for the two packages that provide the KeyVaultClient and the AzureServiceTokenProvider used in the code below (Microsoft.Azure.KeyVault and Microsoft.Azure.Services.AppAuthentication) and install them.

Refactoring the initial code

Whenever possible, we should take the chance to refactor our code. I did so while adding the authentication layer for this sample. First, we move the code that gets the payload off the webhook into its own method:

        private static async Task<WebhookInvoiceInfo> GetPaymentPayload(HttpRequestMessage requestMessage)
        {
            _traceWriter.Info("trying to get payload object from request...");
            WebhookInvoiceInfo result = null;
            _jsonSerializerSettings = new JsonSerializerSettings()
            {
                MetadataPropertyHandling = MetadataPropertyHandling.Ignore,
                DateParseHandling = DateParseHandling.None,
                Converters ={

                        new IsoDateTimeConverter { DateTimeStyles = DateTimeStyles.AssumeUniversal },
                        StringToLongConverter.Instance,
                        StringToDecimalConverter.Instance,
                        StringToInvoiceStatusConverter.Instance,
                        StringToInvoiceStatusExceptionConverter.Instance
                }
            };

            _jsonSerializer = JsonSerializer.Create(_jsonSerializerSettings);

            using (var stream = await requestMessage.Content.ReadAsStreamAsync())
            {
                using (var reader = new StreamReader(stream))
                {
                    using (var jsonReader = new JsonTextReader(reader))
                    {
                        result = _jsonSerializer.Deserialize<WebhookInvoiceInfo>(jsonReader);
                    }
                }
            }

            return result;
        }

Next, we are going to write another method that retrieves the credentials from the Azure KeyVault and initializes the AtomicPay SDK properly:

        private static async Task<bool> InitAtomicPay()
        {
            bool isAuthenticated = false;

            //initialize Azure Service Token Provider and authenticate this function against the KeyVault
            var serviceTokenProvider = new AzureServiceTokenProvider();
            var keyVaultClient = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(serviceTokenProvider.KeyVaultTokenCallback));

            //getting the key vault url
            var vaultBaseUrl = ConfigurationManager.AppSettings["KeyVaultUri"];

            //retrieving the appId and secrets
            var accId = await keyVaultClient.GetSecretAsync(vaultBaseUrl, "atomicpay-accId");
            var accPBK = await keyVaultClient.GetSecretAsync(vaultBaseUrl, "atomicpay-accPBK");
            var accPVK = await keyVaultClient.GetSecretAsync(vaultBaseUrl, "atomicpay-accPVK");

            //initialize AtomicPay SDK and return the result
            await AtomicPay.Config.Current.Init(accId.Value, accPBK.Value, accPVK.Value);

            if (!AtomicPay.Config.Current.IsInitialized)
            {
                isAuthenticated = false;
                _traceWriter.Info("failed to authenticate with AtomicPay");
            }
            else
            {
                isAuthenticated = true;
                _traceWriter.Info("successfully authenticated with AtomicPay.");
            }

            return isAuthenticated;
        }

Now that we authenticated against the API, we are finally able to verify the state of the invoice sent by the webhook. Let’s put everything together:

        //renamed this function (not particularly necessary)
        [FunctionName("InvoiceVerifier")]
        public static async Task<HttpResponseMessage> Run([HttpTrigger(AuthorizationLevel.Function, "post", Route = null)]HttpRequestMessage req, TraceWriter log)
        {
            //enable other methods to write to log
            _traceWriter = log;
            _traceWriter.Info($"arrived at function trigger for 'InvoiceVerifier'...");

            //retrieving the payload
            var payload = await GetPaymentPayload(req);

            //if payload is null, there is something wrong
            if (payload != null)
            {
                //authenticate the Function against AtomicPay
                var isAuthenticated = await InitAtomicPay();
                if (isAuthenticated)
                {
                    AtomicPay.Entity.InvoiceInfoDetails invoiceInfoDetails = null;

                    _traceWriter.Info($"trying to verify invoice id {payload.InvoiceId} ...");

                    //verifying the invoice 
                    using (var atomicPayClient = new AtomicPay.AtomicPayClient())
                    {
                        var invoiceObj = await atomicPayClient.GetInvoiceByIdAsync(payload?.InvoiceId);

                        if (invoiceObj == null)
                            return req.CreateResponse(HttpStatusCode.BadRequest, $"there was an error getting invoice with id {payload?.InvoiceId}. Response was null");

                        if (invoiceObj.IsError)
                            return req.CreateResponse(HttpStatusCode.BadRequest, $"there was an error getting invoice with id {payload?.InvoiceId}. Message from AtomicPay: {invoiceObj.Value.Message}");
                        else
                            invoiceInfoDetails = invoiceObj.Value?.Result?.FirstOrDefault();
                    }

                    //this will be the point where we will trigger the push notification in the last part
                    switch (invoiceInfoDetails.Status)
                    {
                        case AtomicPay.Entity.InvoiceStatus.Paid:
                        case AtomicPay.Entity.InvoiceStatus.PaidAfterExpiry:
                        case AtomicPay.Entity.InvoiceStatus.Overpaid:
                        case AtomicPay.Entity.InvoiceStatus.Complete:
                            log.Info($"invoice with id {invoiceInfoDetails.InvoiceId} is paid");
                            break;
                        default:
                            log.Info($"invoice with id {invoiceInfoDetails.InvoiceId} is not yet paid");
                            break;
                    }
                }
                else
                {
                    //return here, otherwise we would fall through to the OK response below
                    return req.CreateResponse(HttpStatusCode.Unauthorized, "failed to authenticate with AtomicPay");
                }
            }
            else
                //same here: report the bad payload instead of returning OK
                return req.CreateResponse(HttpStatusCode.BadRequest, "there was an error getting the payload from request body");

            return req.CreateResponse(HttpStatusCode.OK, $"verified received invoice with id: '{payload?.InvoiceId}'");
        }

Save the changes to your code. After that, right click on the project and hit ‘Publish’ to update the function on Azure. Once you have published the updated code, you should be able to test your Function once again with Postman and receive a result like this:

Postman response from updated function

Conclusion

As you can see, Microsoft Azure provides a convenient way to store credentials and use them in your Azure Functions, which makes the whole process a lot more secure. This post also showed how easy it is to integrate the AtomicPay .NET SDK.

In the third and last post of this series, we will connect our Function to an Azure Notificationhub to send push notifications to all devices that have registered with a certain merchant account. As always, I hope this post was helpful for some of you.

Until the next post, happy coding, everyone!

Title image credit

Posted by msicc in Azure, Crypto&Blockchain, Dev Stories, Xamarin, 0 comments
Handle AtomicPay’s webhook with Microsoft Azure (Part 1/3)

Handle AtomicPay’s webhook with Microsoft Azure (Part 1/3)

In case you missed the announcement of the SDK, here is a quick way to read it now:

What is a webhook?

Webhooks are gaining popularity, and a lot of services and websites provide them for easier interaction with other websites and services. Most commonly, webhooks are triggered by an event and send a request to the subscribers of the webhook. AtomicPay provides such a webhook for invoices and their payment states.

Why Azure?

There are several reasons, the main one being that Azure is my personal cloud service of choice. Besides that, Azure provides two additional features I am a fan of, namely Azure KeyVault for credential handling and Azure Notificationhub for pushing notifications to a broad range of platforms. This new series will be split into three parts:

  1. Handle the incoming notification with an Azure Function (this post)
  2. Verify the invoice state within the Azure Function, but store the API credentials in the most secure way (second post)
  3. Send a push notification via Azure Notificationhub to all devices that have linked a merchant’s account to it (third post)

Getting started

First, login to your Azure account (or create a new one). On the left menu, select ‘Create a resource’ and search for ‘Serverless Function App’:

Azure Portal create new resource

In the next step, you’ll assign a name for your Function, create a new resource group and assign or create a storage account:

Azure new Function App settings

Once deployment is finished, click the ‘Go to resource’ button in the portal notification:

Azure notification deployment succeeded

Next step is to add the code for our function. Click on the ‘+’ sign besides ‘Functions’:

add new function

Now you have to decide how you want to move on in development. I chose Visual Studio because we need some NuGet packages later, and I enjoy having IntelliSense while writing code. Just follow the instructions on screen to get your project up and running.

select ide for writing function code

Make sure you are selecting the v1 (.NET Framework) version of the project. This is needed because we want to connect to a Notificationhub later (in the third post), and as of now, v2 does not support this (according to the docs).

VS new project dialog for Azure Function

With a click on ‘OK’, Visual Studio creates the new project for you.

Seeing some code… finally!

If you look at the freshly generated code in the Function class, you may immediately be reminded of something. You’re right: Azure Functions are based on ASP.NET, so there are some similarities. Let’s have a look at the parameters of the Run method (you can change the method name, as the FunctionName attribute already declares the method as a function).

The first parameter is the HttpTriggerAttribute, which defines the AuthorizationLevel, the allowed methods and an optional route. The only change that I made to the default attribute parameters is to remove the “get” method, because AtomicPay’s Webhook sends a POST request.

The second parameter is the HttpRequestMessage that contains the request sent from AtomicPay. If you have been working with APIs or other web requests already, this should be familiar. We will work with this part as the sample continues.

Last but not least, we have a logging provider that enables us to write to the Azure Function’s log (which can be really useful if you’re searching an error). We will also use this as the sample continues.
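For reference, the generated template boils down to something like this (shortened here, and with the “get” method already removed as described above):

        [FunctionName("Function1")]
        public static async Task<HttpResponseMessage> Run([HttpTrigger(AuthorizationLevel.Function, "post", Route = null)]HttpRequestMessage req, TraceWriter log)
        {
            //the TraceWriter writes into the Azure Function's log
            log.Info("C# HTTP trigger function processed a request.");

            //the webhook's payload arrives via req.Content - we will deserialize it in the next step
            var body = await req.Content.ReadAsStringAsync();
            log.Info($"received body with length {body?.Length ?? 0}");

            return req.CreateResponse(HttpStatusCode.OK);
        }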

AtomicPay webhook payload

In order to be able to handle the payload from AtomicPay, it is helpful to have a payload sample. Luckily, AtomicPay provides a detailed documentation for the webhook (as it does for the API). The payload looks like this:

    {
      "invoice_id": "DBVZzHMxjjfdRZYeignEZC",
      "order_id": "1235425",
      "fiat_price": "1.00",
      "fiat_currency": "USD",
      "payment_currency": "BTC",
      "payment_rate": "4,192.00",
      "payment_address": "bc1qmtyax97phenvvs3sdg5r45kdcphd",
      "payment_total": "0.00023855",
      "payment_paid": "0.00023855",
      "payment_due": "0.00000000",
      "payment_txid": "14edecbf114f2b2e73cf7470916122286506de5",
      "payment_confirmation": "3",
      "status": "confirmed",
      "statusException": "null"
    }

Depending on the cryptocurrency of the invoice, your function will be triggered multiple times – every time the status updates. As you probably want to take different actions for different states, it makes sense to create a class based on the JSON to deserialize into and work with:

    public class WebhookInvoiceInfo
    {
        [JsonProperty("invoice_id")]
        public string InvoiceId { get; set; }

        [JsonProperty("order_id")]
        public string OrderId { get; set; }

        [JsonProperty("fiat_price")]
        [JsonConverter(typeof(StringToDecimalConverter))]
        public decimal FiatPrice { get; set; }

        [JsonProperty("fiat_currency")]
        public string FiatCurrency { get; set; }

        [JsonProperty("payment_currency")]
        public string PaymentCurrency { get; set; }

        [JsonProperty("payment_rate")]
        public string PaymentRate { get; set; }

        [JsonProperty("payment_address")]
        public string PaymentAddress { get; set; }

        [JsonProperty("payment_total")]
        [JsonConverter(typeof(StringToDecimalConverter))]
        public decimal PaymentTotal { get; set; }

        [JsonProperty("payment_paid")]
        [JsonConverter(typeof(StringToDecimalConverter))]
        public decimal PaymentPaid { get; set; }

        [JsonProperty("payment_due")]
        [JsonConverter(typeof(StringToDecimalConverter))]
        public decimal PaymentDue { get; set; }

        [JsonProperty("payment_txid")]
        public string PaymentTxid { get; set; }

        [JsonProperty("payment_confirmation")]
        [JsonConverter(typeof(StringToLongConverter))]
        public long PaymentConfirmation { get; set; }

        [JsonProperty("status")]
        [JsonConverter(typeof(StringToInvoiceStatusConverter))]
        public InvoiceStatus Status { get; set; }

        [JsonProperty("statusException")]
        [JsonConverter(typeof(StringToInvoiceStatusExceptionConverter))]
        public InvoiceStatusException StatusException { get; set; }
    }

You may have noticed that I attached converters to some of the properties above. They are needed to convert the values into their correct types, as the API generally sends just strings. The class and its converters are part of the AtomicPay .NET SDK (find it on NuGet or GitHub), which we will also use in the second part of this blog series. Add the library to the project in your preferred way.
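Just to give you an idea of what such a converter does, here is a rough sketch of a string-to-decimal converter (this is not the SDK’s actual implementation, only an illustration of the concept):

    using System;
    using System.Globalization;
    using Newtonsoft.Json;

    //sketch of a string-to-decimal converter - the real one ships with the AtomicPay .NET SDK
    public class StringToDecimalConverter : JsonConverter
    {
        public static readonly StringToDecimalConverter Instance = new StringToDecimalConverter();

        public override bool CanConvert(Type objectType) => objectType == typeof(decimal) || objectType == typeof(string);

        public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
        {
            //the API sends numeric values as strings like "1.00" or "4,192.00"
            var raw = reader.Value?.ToString().Replace(",", string.Empty);
            return decimal.TryParse(raw, NumberStyles.Any, CultureInfo.InvariantCulture, out var value) ? value : 0m;
        }

        public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
            => writer.WriteValue(((decimal)value).ToString(CultureInfo.InvariantCulture));
    }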

Working with the webhook’s payload

For this first part of the series, we are just going to write log entries based on the status of the incoming webhook. Just add these lines to the Run method of your function:

        [FunctionName("Function1")]
        public static async Task<HttpResponseMessage> Run([HttpTrigger(AuthorizationLevel.Function, "post", Route = null)]HttpRequestMessage req, TraceWriter log)
        {
            log.Info($"arrived at function trigger for AtomicPay webhook.");
            log.Info("trying to get payload object from request...");
            WebhookInvoiceInfo result = null;
            _jsonSerializerSettings = new JsonSerializerSettings()
            {
                MetadataPropertyHandling = MetadataPropertyHandling.Ignore,
                DateParseHandling = DateParseHandling.None,
                Converters ={

                        new IsoDateTimeConverter { DateTimeStyles = DateTimeStyles.AssumeUniversal },
                        StringToLongConverter.Instance,
                        StringToDecimalConverter.Instance,
                        StringToInvoiceStatusConverter.Instance,
                        StringToInvoiceStatusExceptionConverter.Instance
                }
            };

            _jsonSerializer = JsonSerializer.Create(_jsonSerializerSettings);

            using (var stream = await req.Content.ReadAsStreamAsync())
            {
                using (var reader = new StreamReader(stream))
                {
                    using (var jsonReader = new JsonTextReader(reader))
                    {
                        result = _jsonSerializer.Deserialize<WebhookInvoiceInfo>(jsonReader);
                    }
                }
            }

            if (result != null)
                log.Info($"received payload for invoice id: {result.InvoiceId} with status {result.Status.ToString()}");
            else
                log.Info("there was an error getting the payload from the request body");

            return result == null ?
                req.CreateResponse(HttpStatusCode.BadRequest, "there was an error getting the payload from the request body") :
                req.CreateResponse(HttpStatusCode.OK, "OK");
        }

We read the request content from the HttpRequestMessage and deserialize it. If there is no payload or something went wrong, our result will be null. We write a log entry that indicates whether we received a payload and what the status of the invoice within it is. In the end, we return the appropriate response to the POST request.

Testing the function locally

Another big advantage of using Visual Studio to write the Azure Function is the ability to debug locally. If you haven’t done so already, you’ll need to install the Azure Functions Core Tools (CLI) – Visual Studio will prompt you to do so. Once that is done, we are able to debug locally:

Visual Studio debugging Azure function

Now that our local service is running, let’s test the function with Postman (this is one of my test invoices):

Postman request to local API test

As we can see, everything went smoothly and our Function is working as expected:

API log local test

Now that we have verified our Function is able to run, we are ready to publish it.

Publishing the Function to Azure

To initiate the process, right click on the project name and select ‘Publish’. As we have already defined our project in the Azure Portal, we are using the ‘Select Existing’ option in the next step:

Publish to Azure Step 1

I am not selecting the ‘run from package file’ option here. Click ‘Publish’ to continue. You will be prompted with a selection window once again (you may have to login to your account first). Select your created project:

Publish to Azure Step 2

Confirm your selection with ‘OK’. After doing some additional preparation steps, Visual Studio will allow you to click the ‘Publish’ button:

Publish to Azure Step 3

Once you have published your function, visit the Azure Portal once again and select your published function. The portal will show you the function.json declaration file. In the upper part of the website, you will find options to ‘Save‘, ‘Run‘ and ‘</> Get function URL‘. Click on the last one to view your function’s url:

Azure portal get function url

I am pretty sure you noticed the code parameter in the url. This parameter is needed as authorization with every call to your Azure Function. To test your function on Azure, copy the url and paste it into Postman (replacing the localhost url we used for local testing). You should get the same result in Postman – optimally, an OK response.
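If you prefer testing from code instead of Postman, a few lines of HttpClient do the same job. This is just a hypothetical helper – pass it the function url you copied (including the code parameter) and the sample payload shown earlier:

    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    //hypothetical smoke test helper - not part of the Function project itself
    public static class FunctionSmokeTest
    {
        public static async Task<string> PostSamplePayloadAsync(string functionUrl, string payloadJson)
        {
            using (var client = new HttpClient())
            {
                var content = new StringContent(payloadJson, Encoding.UTF8, "application/json");
                var response = await client.PostAsync(functionUrl, content);

                //returns the Function's answer, e.g. "OK" or one of the error messages
                return await response.Content.ReadAsStringAsync();
            }
        }
    }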

Conclusion

If you followed along and your function is up and running – congratulations! You just created a basic “serverless” (aka running on someone else’s server) function. In the next post in this series, we will initialize the AtomicPay SDK properly, after we added our API credentials to another feature running on Azure – the KeyVault. As always, I hope this post is helpful for some of you.

Until the next post, happy coding, everyone!

Title image credit

Posted by msicc in Azure, Crypto&Blockchain, Dev Stories, Xamarin, 1 comment
A faster way to add image assets to your Xamarin.iOS project in Visual Studio 2017

A faster way to add image assets to your Xamarin.iOS project in Visual Studio 2017

While Visual Studio has an editor that should help with adding new image assets to the project, it is pretty slow when adding a bunch of new images. If you have a lot of images to add, it even crashes once in a while, which can be annoying. Because I was in the process of adding more than a hundred images while porting my first app ever from Windows Phone to iOS and Android, I searched for a faster way – and found it.

What’s going on under the hood?

When you add an imageset to your assets structure, Visual Studio does quite some work. These are the steps that are done:

  1. creation of a folder for the imageset in the Assets folder
  2. creation of a Contents.json file
  3. modification of the Contents.json file
  4. modification of the .csproj file

This takes some time for every image, and Visual Studio seems to be quite busy with these steps. After analyzing the way imagesets get added, I immediately recognized that I would be faster doing that work manually.

How to add the assets – step by step

  1. right click on the project in Solution Explorer and select ‘Open Folder in File Explorer‘ and find the ‘Assets‘ folder  open_folder_in_file_explorer
  2. create a new folder in this format: ‘[yourassetname].imageset‘
  3. add your image to the folder
  4. create a new file with the name Contents.json   add_image_and_contents_file
  5. open the file (I use Notepad++ for such operations) and add this minimum required JSON to it:
    {
      "images": [
        {
          "scale": "1x",
          "idiom": "universal",
          "filename": "[yourimage].png"
        },
        {
          "scale": "2x",
          "idiom": "universal",
          "filename": "[yourimage].png"
        },
        {
          "scale": "3x",
          "idiom": "universal",
          "filename": "[yourimage].png"
        },
        {
          "idiom": "universal"
        }
      ],
      "properties": {
        "template-rendering-intent": ""
      },
      "info": {
        "version": 1,
        "author": ""
      }
    }
  6. go back to Visual Studio, right click on the project again and select ‘Unload Project‘ unload_project
  7. right click again and select ‘Edit [yourprojectname].iOS.csproj‘ edit_ios_csproj
  8. find the ItemGroup with the Assets
  9. inside the ItemGroup, add your imageset with these two entries:
    <ImageAsset Include="Assets.xcassets\[yourassetname].imageset\[yourimage].png">
      <Visible>false</Visible>
    </ImageAsset>
    <ImageAsset Include="Assets.xcassets\[yourassetname].imageset\Contents.json">
      <Visible>false</Visible>
    </ImageAsset>   
    
  10. close the file and reload the project by selecting it from the context menu in Solution Explorer

If you followed these steps, your assets should be visible immediately:
assets_with_added_imagesets

I did not measure the time exactly, but I felt significantly faster adding all those images that way. Your mileage may vary, depending on the power of your dev machine.

Conclusion

Features like the Asset Manager for iOS in Visual Studio are nice to have, but this one has some serious performance problems (most of them are already known). By taking over the process manually, you can be a lot faster than the built-in feature. That said, like always, I hope this post is helpful for some of you.

Happy coding, everyone!

Posted by msicc in Dev Stories, iOS, Xamarin, 1 comment
Xamarin Forms, the MVVMLight Toolkit and I: Command Chaining

Xamarin Forms, the MVVMLight Toolkit and I: Command Chaining

The problem

Sometimes we want to invoke a method on a control that is only accessible in code. Due to the abstraction of our MVVM application, the ViewModel has no access to the methods that would be available if we accessed the control directly in code. There are several approaches to solve this problem. In one of my recent projects, I needed to invoke a method of a custom control, which should be routed into the platform renderers I wrote for that control. I remembered that I had read quite a few times about command chaining for such cases and tried to implement it. In the beginning, it may sound weird to do this, but the more often I see this technique, the more I like it.

Simple Demo control

For demo purposes, I created this really simple Xamarin.Forms user control:

<?xml version="1.0" encoding="UTF-8"?>
<ContentView xmlns="http://xamarin.com/schemas/2014/forms" 
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             x:Class="XfMvvmLight.Controls.CommandChainingDemoControl">
  <ContentView.Content>
      <StackLayout HorizontalOptions="FillAndExpand" VerticalOptions="FillAndExpand">
            <Label x:Name="LabelFilledFromBehind" Margin="12" FontSize="Large" />   
        </StackLayout>
  </ContentView.Content>
</ContentView>

As you can see, there is just a label without text. We will write the necessary code to fill this label with some text just by invoking a method in the code behind. To be able to do so, we need a BindableProperty (once again) to get our foot into the door of the control:

public static BindableProperty DemoCommandProperty = BindableProperty.Create(nameof(DemoCommand), typeof(ICommand), typeof(CommandChainingDemoControl), null, BindingMode.OneWayToSource);

public ICommand DemoCommand
{
    get => (ICommand)GetValue(DemoCommandProperty);
    set => SetValue(DemoCommandProperty, value);
}

The implementation is pretty straightforward. We have done this already during this series, so you should be familiar with it if you were following along. One thing, however, is different. For this BindableProperty, we are using BindingMode.OneWayToSource. By doing so, we are basically making it a read-only property, which sends its changes only down to the ViewModel (the source). If we did not do this, the ViewModel could change the property, which we do not want here.

Now that we have the BindableProperty in place, we need to create an instance of the Command that will be sent down to the ViewModel. We do this as soon as the control is instantiated, in the constructor:

public CommandChainingDemoControl()
{
    InitializeComponent();

    this.DemoCommand = new Command(() =>
    {
        FillFromBehind();
    });
}

private void FillFromBehind()
{
    this.LabelFilledFromBehind.Text = "Text was empty, but we used command chaining to show this text inside a control.";
}

That’s all we need to do in the code behind.

ViewModel

For this demo, I created a new page and a corresponding ViewModel in the demo project. Here is the very basic ViewModel code:

using System.Windows.Input;
using GalaSoft.MvvmLight.Command;

namespace XfMvvmLight.ViewModel
{
    public class CommandChainingDemoViewModel : XfNavViewModelBase
    {
        private ICommand _invokeDemoCommand;
        private RelayCommand _demo1Command;

        public CommandChainingDemoViewModel()
        {
        }

        public ICommand InvokeDemoCommand { get => _invokeDemoCommand; set => Set(ref _invokeDemoCommand, value); }

        public RelayCommand Demo1Command => _demo1Command ?? (_demo1Command = new RelayCommand(() =>
        {
            this.InvokeDemoCommand?.Execute(null);
        }));
    }
}

As you can see, the ViewModel includes two commands. One is the pure ICommand implementation that gets its value from the OneWayToSource binding. We are not using MVVMLight’s RelayCommand here to avoid casting between types, which always led to an exception when I first tested the implementation. The second command is bound to a button on the CommandChainingDemoPage and will be the trigger to execute the InvokeDemoCommand.

Final steps

The final steps are just a few simple ones. We need to connect the InvokeDemoCommand to the user control we created earlier, and we need to bind the Demo1Command to the corresponding button in the view. This is the page’s code after doing so:

<?xml version="1.0" encoding="utf-8" ?>
<baseCtrl:XfNavContentPage
    xmlns:baseCtrl="clr-namespace:XfMvvmLight.BaseControls" xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             xmlns:ctrl="clr-namespace:XfMvvmLight.Controls;assembly=XfMvvmLight"
             x:Class="XfMvvmLight.View.CommandChainingDemoPage" RegisteredPageKey="{Binding CommandChainingDemoPageKey, Source=Locator}">
    <ContentPage.BindingContext>
        <Binding Path="CommandChainingDemoVm" Source="{StaticResource Locator}" />
    </ContentPage.BindingContext>

    <Grid>
        <Grid.RowDefinitions>
            <RowDefinition Height="*"/>
            <RowDefinition Height="Auto"/>
        </Grid.RowDefinitions>

        <ctrl:CommandChainingDemoControl Grid.Row="0" DemoCommand="{Binding InvokeDemoCommand, Mode=OneWayToSource}" Margin="12"></ctrl:CommandChainingDemoControl>

        <Button Text="Execute Command Chaining" Command="{Binding Demo1Command}" Margin="12" Grid.Row="1" />

    </Grid>
</baseCtrl:XfNavContentPage>

One thing to point out is that we are specifying the OneWayToSource binding here once again. It should work with the default binding mode as well, but I recommend doing it like I did, as it makes the code easier to understand for others (and of course yourself). That’s all – we now have a working command chain that invokes a method inside the user control from our ViewModel.

Conclusion

Command chaining can be a convenient way to invoke actions on controls that are otherwise not reachable due to the abstraction of layers in MVVM. Once you get the concept, it is pretty easy to implement. This technique is also usable outside of Xamarin.Forms, so do not hesitate to use it elsewhere. Just remember the needed steps:

  • create a user control (or a derived one if you need to call a method on framework controls)
  • add a BindableProperty/DependencyProperty and set its default binding mode to OneWayToSource
  • instantiate the BindableProperty/DependencyProperty inside the constructor of the user control
  • pass the method call/code into the Action part of the newly created Command instance
  • create the commands in the ViewModel
  • connect the Commands to your final view implementation

Like I wrote earlier, I came across this (again) when I was writing a custom Xamarin.Forms control with renderers, where I had to invoke methods inside the renderer from my ViewModel. Other techniques that I saw to solve this are Messengers (be it the one from MVVMLight or the Xamarin.Forms MessagingCenter implementation) or the good old Boolean switch implementation (which also uses a BindableProperty/DependencyProperty). I decided to use the command chaining approach as it is pretty elegant in my eyes and not that complicated to implement.
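Just to illustrate the first alternative: with MVVMLight’s Messenger, the ViewModel would send a message and the control’s code behind would register for it – roughly like this sketch (the message text is made up for the example):

    //in the ViewModel (using GalaSoft.MvvmLight.Messaging) - ask whoever is listening to fill the label
    Messenger.Default.Send(new NotificationMessage("FillLabel"));

    //in the user control's constructor - react to the request
    Messenger.Default.Register<NotificationMessage>(this, message =>
    {
        if (message.Notification == "FillLabel")
            FillFromBehind();
    });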

The series’ sample project is updated and available here on Github. Like always, I hope this post is useful for some of you.

Happy coding, everyone!


all articles of this series

title image credit

Posted by msicc in Android, Dev Stories, iOS, UWP, Xamarin, 7 comments