Handle AtomicPay’s webhook with Microsoft Azure (Part 2/3) – invoice verification with Azure KeyVault

Series overview

  1. Handle the incoming notification with an Azure Function (first post)
  2. Verify the invoice state within the Azure Function, but store the API credentials in the most secure way (this post)
  3. Send a push notification via Azure Notificationhub to all devices that have linked a merchant’s account to it (third post)

Storing credentials securely

When we consume web services, we need to authenticate our applications against them or their API. In the past, I often saw (and, in fact, also wrote) samples that ignore the security of those credentials and provide them directly in code, leaving additional research to the reader (if you are new to the topic) and making the samples incomplete. As the goal of this post is to verify the state of a received invoice against the AtomicPay API, the first step is to provide a secure mechanism for storing our API credentials on Azure.

Get your AtomicPay API keys

If you haven’t signed up for an AtomicPay account, you should probably do it now. Once you have logged in, open the menu and select ‘API Integration‘:

AtomicPay menu select API Integration

On this page, you will find your pre-generated API keys. Should your keys ever be compromised, you can regenerate them here as well:

AtomicPay API Integration page

Introducing Azure KeyVault

According to the Azure KeyVault documentation, it is

a tool for securely storing and accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, or certificates. A vault is a logical group of secrets.

As we are dealing with a public/private key credential combo, Azure KeyVault is a perfect fit for our goal of storing the credentials securely. Since November last year, Azure Functions have been able to use the KeyVault via their AppSettings. This is what the first half of this post is about.

Creating a new KeyVault

Login to the Azure Portal and go to the Marketplace. Search for ‘Key Vault‘ there and select it from the results:

Azure Marketplace search for Key Vault

You will be prompted with the creation menu. Give it a unique name and select your existing resource group (which will make sure we are in the correct location automatically). Once done, click on the create button:

Azure create new key vault menu

It will take a minute or two until the deployment is done. From the notification, you’ll get a direct link to the resource:

In the overview menu, select ‘Secrets’, where we will store our credentials:

Azure KeyVault menu - select Secrets

If you are wondering why we are using the ‘Secrets‘ option over ‘Keys‘ – the keys are already generated by AtomicPay, and the import function wants a backup file. I did not have a deeper look into this option as the ‘Secrets‘ option offers all I need.

The next step is to add a secret entry each for AtomicPay’s account id, public key and private key (the code later in this post expects the secrets to be named ‘atomicpay-accId‘, ‘atomicpay-accPBK‘ and ‘atomicpay-accPVK‘):

Azure KeyVault add secret

Once you have added the Application ID and your keys into ‘Secrets‘, click on ‘Overview‘. On the right-hand side, you’ll see an entry called ‘DNS Name’. Hover with the mouse to reveal the copy button and copy it:

Azure KeyVault Url Copy

Now go to your Azure Function and select ‘Application settings‘. Scroll down a bit and add a new setting. You can name it whatever you want; I just used ‘KeyVaultUri‘, which is also the name the code below reads from the AppSettings. Paste the url you copied earlier into the value field. Do not forget to hit the ‘Save‘-button after that:

Azure Function - Application settings

Now we are just one step away from being able to use the KeyVault in our code. Go back to ‘Overview‘ on your Function and click on the ‘Platform features‘ tab. Select ‘Identity‘ there:

Azure Function select Identity

We need to enable the ‘System assigned‘ managed identity feature to allow the Function to interact with the Azure KeyVault from our code. This registers our Function in Azure AD, which then handles the access rights for us:

Azure function enable System assigned identity

Now we have everything in place to use the KeyVault to retrieve the API credentials we need for invoice verification.

Retrieving credentials in our code

Before we start to rewrite our function code, we need to install two NuGet packages into our Function project. Right click on the project and select ‘Manage NuGet Packages…‘. Search for these two packages and install them:
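
Judging from the code below, the types we need are AzureServiceTokenProvider and KeyVaultClient; assuming they come from the usual packages (Microsoft.Azure.Services.AppAuthentication and Microsoft.Azure.KeyVault), the using directives for the following snippets would be:

//namespaces brought in by the two NuGet packages (package names assumed, see above)
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;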

Refactoring the initial code

Whenever possible, we should take the chance to refactor our code. I did so while adding the authentication layer for this sample. First, we move the code that gets the payload off the webhook into its own method:

        private static async Task<WebhookInvoiceInfo> GetPaymentPayload(HttpRequestMessage requestMessage)
        {
            _traceWriter.Info("trying to get payload object from request...");
            WebhookInvoiceInfo result = null;
            _jsonSerializerSettings = new JsonSerializerSettings()
            {
                MetadataPropertyHandling = MetadataPropertyHandling.Ignore,
                DateParseHandling = DateParseHandling.None,
                Converters ={

                        new IsoDateTimeConverter { DateTimeStyles = DateTimeStyles.AssumeUniversal },
                        StringToLongConverter.Instance,
                        StringToDecimalConverter.Instance,
                        StringToInvoiceStatusConverter.Instance,
                        StringToInvoiceStatusExceptionConverter.Instance
                }
            };

            _jsonSerializer = JsonSerializer.Create(_jsonSerializerSettings);

            using (var stream = await requestMessage.Content.ReadAsStreamAsync())
            {
                using (var reader = new StreamReader(stream))
                {
                    using (var jsonReader = new JsonTextReader(reader))
                    {
                        result = _jsonSerializer.Deserialize<WebhookInvoiceInfo>(jsonReader);
                    }
                }
            }

            return result;
        }

Next, we are going to write another method that retrieves the credentials from the Azure KeyVault and initializes the AtomicPay SDK properly:

        private static async Task<bool> InitAtomicPay()
        {
            bool isAuthenticated = false;

            //initialize Azure Service Token Provider and authenticate this function against the KeyVault
            var serviceTokenProvider = new AzureServiceTokenProvider();
            var keyVaultClient = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(serviceTokenProvider.KeyVaultTokenCallback));

            //getting the key vault url
            var vaultBaseUrl = ConfigurationManager.AppSettings["KeyVaultUri"];

            //retrieving the appId and secrets
            var accId = await keyVaultClient.GetSecretAsync(vaultBaseUrl, "atomicpay-accId");
            var accPBK = await keyVaultClient.GetSecretAsync(vaultBaseUrl, "atomicpay-accPBK");
            var accPVK = await keyVaultClient.GetSecretAsync(vaultBaseUrl, "atomicpay-accPVK");

            //initialize AtomicPay SDK and return the result
            await AtomicPay.Config.Current.Init(accId.Value, accPBK.Value, accPVK.Value);

            if (!AtomicPay.Config.Current.IsInitialized)
            {
                isAuthenticated = false;
                _traceWriter.Info("failed to authenticate with AtomicPay");
            }
            else
            {
                isAuthenticated = true;
                _traceWriter.Info("successfully authenticated with AtomicPay.");
            }

            return isAuthenticated;
        }

Now that we have authenticated against the API, we are finally able to verify the state of the invoice sent by the webhook. Let’s put everything together:

        //renamed this function (not particularly necessary)
        [FunctionName("InvoiceVerifier")]
        public static async Task<HttpResponseMessage> Run([HttpTrigger(AuthorizationLevel.Function, "post", Route = null)]HttpRequestMessage req, TraceWriter log)
        {
            //enable other methods to write to log
            _traceWriter = log;
            _traceWriter.Info($"arrived at function trigger for 'InvoiceVerifier'...");

            //retrieving the payload
            var payload = await GetPaymentPayload(req);

            //if payload is null, there is something wrong
            if (payload != null)
            {
                //authenticate the Function against AtomicPay
                var isAuthenticated = await InitAtomicPay();
                if (isAuthenticated)
                {
                    AtomicPay.Entity.InvoiceInfoDetails invoiceInfoDetails = null;

                    _traceWriter.Info($"trying to verify invoice id {payload.InvoiceId} ...");

                    //verifying the invoice 
                    using (var atomicPayClient = new AtomicPay.AtomicPayClient())
                    {
                        var invoiceObj = await atomicPayClient.GetInvoiceByIdAsync(payload?.InvoiceId);

                        if (invoiceObj == null)
                            return req.CreateResponse(HttpStatusCode.BadRequest, $"there was an error getting invoice with id {payload?.InvoiceId}. Response was null");

                        if (invoiceObj.IsError)
                            return req.CreateResponse(HttpStatusCode.BadRequest, $"there was an error getting invoice with id {payload?.InvoiceId}. Message from AtomicPay: {invoiceObj.Value.Message}");
                        else
                            invoiceInfoDetails = invoiceObj.Value?.Result?.FirstOrDefault();
                    }

                    if (invoiceInfoDetails == null)
                        return req.CreateResponse(HttpStatusCode.BadRequest, $"could not load details for invoice with id {payload?.InvoiceId}");

                    //this will be the point where we will trigger the push notification in the last part
                    switch (invoiceInfoDetails.Status)
                    {
                        case AtomicPay.Entity.InvoiceStatus.Paid:
                        case AtomicPay.Entity.InvoiceStatus.PaidAfterExpiry:
                        case AtomicPay.Entity.InvoiceStatus.Overpaid:
                        case AtomicPay.Entity.InvoiceStatus.Complete:
                            log.Info($"invoice with id {invoiceInfoDetails.InvoiceId} is paid");
                            break;
                        default:
                            log.Info($"invoice with id {invoiceInfoDetails.InvoiceId} is not yet paid");
                            break;
                    }
                }
                else
                {
                    return req.CreateResponse(HttpStatusCode.Unauthorized, "failed to authenticate with AtomicPay");
                }
            }
            else
                return req.CreateResponse(HttpStatusCode.BadRequest, "there was an error getting the payload from request body");

            return req.CreateResponse(HttpStatusCode.OK, $"verified received invoice with id: '{payload?.InvoiceId}'");
        }

Save the changes to your code. After that, right click on the project and hit ‘Publish’ to update the function on Azure. Once you have published the updated code, you should be able to test your Function once again with Postman and receive a result like this:

Postman response from updated function

Conclusion

As you can see, Microsoft Azure provides a convenient way to store credentials and use them in your Azure Functions. This approach makes the whole process significantly more secure. This post also showed how easy it is to implement the AtomicPay .NET SDK.

In the third and last post of this series, we will connect our function to an Azure Notificationhub to send push notifications to all devices that have linked a certain merchant account. As always, I hope this post was helpful for some of you as well.

Until the next post, happy coding, everyone!

Title image credit

Handle AtomicPay’s webhook with Microsoft Azure (Part 1/3)

In case you missed the announcement of the SDK, here is a quick way to read it now:

What is a webhook?

Webhooks are gaining popularity, and a lot of services and websites provide them for easier interaction with other websites and services. Most commonly, webhooks are triggered by an event and send a request to the webhook’s subscribers. AtomicPay provides such a webhook for invoices and their payment state.

Why Azure?

There are several reasons, the main one being that Azure is my personal cloud service of choice. Besides that, Azure provides two additional features I am a fan of: Azure KeyVault for credential handling and Azure Notificationhub for pushing notifications to a broad range of platforms. This new series will be split into three parts:

  1. Handle the incoming notification with an Azure Function (this post)
  2. Verify the invoice state within the Azure Function, but store the API credentials in the most secure way (second post)
  3. Send a push notification via Azure Notificationhub to all devices that have linked a merchant’s account to it (third post)

Getting started

First, login to your Azure account (or create a new one). On the left menu, select ‘Create a resource’ and search for ‘Serverless Function App’:

Azure Portal create new resource

In the next step, you’ll assign a name for your Function, create a new resource group and assign or create a storage account:

Azure new Function App settings

Once deployment is finished, click the ‘Go to resource’ button in the portal notification:

Azure notification deployment succeeded

The next step is to add the code for our function. Click on the ‘+’ sign beside ‘Functions’:

add new function

Now you have to decide how you want to continue development. I chose Visual Studio because we need some NuGet packages later, and I enjoy having IntelliSense while writing code. Just follow the on-screen instructions to get your project up and running.

select ide for writing function code

Make sure you are selecting the v1 (.NET Framework) version of the project. This is needed because we want to connect to a Notificationhub later (in the third post), and as of now, v2 does not support this (according to the docs).

VS new project dialog for Azure Function

With a click on ‘OK’, Visual Studio creates the new project for you.

Seeing some code… finally!

If you look at the freshly generated code in the Function class, it may immediately remind you of something. You’re right: Azure Functions are based on ASP.NET, so there are some similarities. Let’s have a look at the parameters of the Run method (you can change the method name, since the FunctionName attribute already declares it as the function’s entry point).

The first parameter is the HttpTriggerAttribute, which defines the AuthorizationLevel, the allowed methods and an optional route. The only change that I made to the default attribute parameters is to remove the “get” method, because AtomicPay’s Webhook sends a POST request.

The second parameter is the HttpRequestMessage that contains the request sent from AtomicPay. If you have been working with APIs or other web requests already, this should be familiar. We will work with this part as the sample continues.

Last but not least, we have a logging provider that enables us to write to the Azure Function’s log (which can be really useful when you are hunting down an error). We will also use this as the sample continues.
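
For reference, after removing the "get" method, the generated v1 template code looks roughly like this (a simplified sketch, not the exact generated code):

        [FunctionName("Function1")]
        public static async Task<HttpResponseMessage> Run([HttpTrigger(AuthorizationLevel.Function, "post", Route = null)]HttpRequestMessage req, TraceWriter log)
        {
            //req is the incoming request from AtomicPay, log writes to the Function's log
            log.Info("C# HTTP trigger function processed a request.");

            return req.CreateResponse(HttpStatusCode.OK);
        }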

AtomicPay webhook payload

In order to be able to handle the payload from AtomicPay, it is helpful to have a payload sample. Luckily, AtomicPay provides detailed documentation for the webhook (as it does for the API). The payload looks like this:

{
    "invoice_id": "DBVZzHMxjjfdRZYeignEZC",
    "order_id": "1235425",
    "fiat_price": "1.00",
    "fiat_currency": "USD",
    "payment_currency": "BTC",
    "payment_rate": "4,192.00",
    "payment_address": "bc1qmtyax97phenvvs3sdg5r45kdcphd",
    "payment_total": "0.00023855",
    "payment_paid": "0.00023855",
    "payment_due": "0.00000000",
    "payment_txid": "14edecbf114f2b2e73cf7470916122286506de5",
    "payment_confirmation": "3",
    "status": "confirmed",
    "statusException": "null"
}

Depending on the cryptocurrency of the invoice, your function will be triggered multiple times – every time the status updates. As you probably want different actions for different states, it makes sense to create a class based on the JSON that we can deserialize into and work with:

    public class WebhookInvoiceInfo
    {
        [JsonProperty("invoice_id")]
        public string InvoiceId { get; set; }

        [JsonProperty("order_id")]
        public string OrderId { get; set; }

        [JsonProperty("fiat_price")]
        [JsonConverter(typeof(StringToDecimalConverter))]
        public decimal FiatPrice { get; set; }

        [JsonProperty("fiat_currency")]
        public string FiatCurrency { get; set; }

        [JsonProperty("payment_currency")]
        public string PaymentCurrency { get; set; }

        [JsonProperty("payment_rate")]
        public string PaymentRate { get; set; }

        [JsonProperty("payment_address")]
        public string PaymentAddress { get; set; }

        [JsonProperty("payment_total")]
        [JsonConverter(typeof(StringToDecimalConverter))]
        public decimal PaymentTotal { get; set; }

        [JsonProperty("payment_paid")]
        [JsonConverter(typeof(StringToDecimalConverter))]
        public decimal PaymentPaid { get; set; }

        [JsonProperty("payment_due")]
        [JsonConverter(typeof(StringToDecimalConverter))]
        public decimal PaymentDue { get; set; }

        [JsonProperty("payment_txid")]
        public string PaymentTxid { get; set; }

        [JsonProperty("payment_confirmation")]
        [JsonConverter(typeof(StringToLongConverter))]
        public long PaymentConfirmation { get; set; }

        [JsonProperty("status")]
        [JsonConverter(typeof(StringToInvoiceStatusConverter))]
        public InvoiceStatus Status { get; set; }

        [JsonProperty("statusException")]
        [JsonConverter(typeof(StringToInvoiceStatusExceptionConverter))]
        public InvoiceStatusException StatusException { get; set; }
    }

You may have noticed that I have converters attached to some of the properties above. They are needed to convert the values into their correct types, as the API generally sends just strings. The class and its converters are part of the AtomicPay .NET SDK (find it on NuGet or GitHub), which we will also use in the second part of this blog series. Add the library to the project in your preferred way.
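
To give you an idea of what such a converter does, here is a minimal sketch of a string-to-decimal converter (the SDK ships its own implementation – this one is for illustration only and uses Newtonsoft.Json and System.Globalization):

    //illustrative sketch only - the AtomicPay .NET SDK contains the real implementation
    public class StringToDecimalConverter : JsonConverter
    {
        public static readonly StringToDecimalConverter Instance = new StringToDecimalConverter();

        public override bool CanConvert(Type objectType) =>
            objectType == typeof(decimal) || objectType == typeof(decimal?);

        public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
        {
            //the API sends numbers as strings, e.g. "0.00023855" or "4,192.00"
            var stringValue = reader.Value?.ToString();
            return decimal.TryParse(stringValue, NumberStyles.Any, CultureInfo.InvariantCulture, out var parsed)
                ? parsed
                : 0m;
        }

        public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer) =>
            writer.WriteValue(value == null ? null : ((decimal)value).ToString(CultureInfo.InvariantCulture));
    }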

Working with the webhook’s payload

For this first part of the series, we are just going to write log entries based on the status of the incoming webhook. Just add these lines to the Run method of your function:

        [FunctionName("Function1")]
        public static async Task<HttpResponseMessage> Run([HttpTrigger(AuthorizationLevel.Function, "post", Route = null)]HttpRequestMessage req, TraceWriter log)
        {
            log.Info($"arrived at function trigger for AtomicPay webhook.");
            log.Info("trying to get payload object from request...");
            WebhookInvoiceInfo result = null;
            _jsonSerializerSettings = new JsonSerializerSettings()
            {
                MetadataPropertyHandling = MetadataPropertyHandling.Ignore,
                DateParseHandling = DateParseHandling.None,
                Converters ={

                        new IsoDateTimeConverter { DateTimeStyles = DateTimeStyles.AssumeUniversal },
                        StringToLongConverter.Instance,
                        StringToDecimalConverter.Instance,
                        StringToInvoiceStatusConverter.Instance,
                        StringToInvoiceStatusExceptionConverter.Instance
                }
            };

            _jsonSerializer = JsonSerializer.Create(_jsonSerializerSettings);

            using (var stream = await req.Content.ReadAsStreamAsync())
            {
                using (var reader = new StreamReader(stream))
                {
                    using (var jsonReader = new JsonTextReader(reader))
                    {
                        result = _jsonSerializer.Deserialize<WebhookInvoiceInfo>(jsonReader);
                    }
                }
            }

            if (result != null)
                log.Info($"received payload for invoice id: {result.InvoiceId} with status {result.Status.ToString()}");
            else
                log.Info("there was an error getting the payload from the request body");

            return result == null ?
                req.CreateResponse(HttpStatusCode.BadRequest, "there was an error getting the payload from the request body") :
                req.CreateResponse(HttpStatusCode.OK, "OK");
        }

We are reading the request content from the HttpRequestMessage and deserializing it. If there is no payload or something went wrong, our result will be null. We write a log entry that indicates whether we received a payload and, if so, the status of the invoice within it. In the end, we return the appropriate response to the POST request.

Testing the function locally

Another big advantage of using Visual Studio to write the Azure Function is the ability to debug locally. If you haven’t done so already, you’ll need to install the Azure Functions Core Tools (CLI) – Visual Studio will prompt you to do so. Once that is done, we are able to debug locally:

Visual Studio debugging Azure function

Now that our local service is running, let’s test the function with Postman (this is one of my test invoices):

Postman request to local API test

As we can see, everything went smoothly and our Function is working as expected:

API log local test

Now that we have verified our Function is able to run, we are ready to publish it.

Publishing the Function to Azure

To initiate the process, right click on the project name and select ‘Publish’. As we have already defined our project in the Azure Portal, we are using the ‘Select Existing’ option in the next step:

Publish to Azure Step 1

I am not selecting the ‘run from package file’ option here. Click ‘Publish’ to continue. You will be prompted with a selection window once again (you may have to login to your account first). Select your created project:

Publish to Azure Step 2

Confirm your selection with ‘OK’. After some additional preparation steps, Visual Studio will allow you to click the ‘Publish’ button:

Publish to Azure Step 3

Once you have published your function, visit the Azure Portal once again. Select your published function. The portal will show you the function.json declaration file. In the upper part of the website, you will find options to ‘Save‘, ‘Run‘ and ‘ </> Get function URL ‘. Click on the last one to view your function’s url:

Azure portal get function url

I am pretty sure you noticed the code parameter in the URL. This parameter is needed as an authorization parameter with every call to your Azure Function. To test your function on Azure, copy the URL and paste it into Postman (replacing the localhost URL we used for local testing). You should get the same result in Postman – ideally, an OK response.
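
If you prefer testing from code instead of Postman, a quick sketch using HttpClient could look like this (the URL, including the code parameter, is a placeholder you need to replace with your own values; uses System.Net.Http and System.Text):

        //hypothetical smoke test - replace the placeholders with your function's URL and key
        private const string FunctionUrl = "https://<your-function-app>.azurewebsites.net/api/<your-function-name>?code=<your-function-key>";

        public static async Task<string> SendTestPayloadAsync(string jsonPayload)
        {
            using (var client = new HttpClient())
            {
                var content = new StringContent(jsonPayload, Encoding.UTF8, "application/json");
                var response = await client.PostAsync(FunctionUrl, content);
                return await response.Content.ReadAsStringAsync();
            }
        }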

Conclusion

If you followed along and your function is up and running – congratulations! You just created a basic “serverless” (aka running on someone else’s server) function. In the next post in this series, we will initialize the AtomicPay SDK properly, after we added our API credentials to another feature running on Azure – the KeyVault. As always, I hope this post is helpful for some of you.

Until the next post, happy coding, everyone!

Title image credit

A faster way to add image assets to your Xamarin.iOS project in Visual Studio 2017

While Visual Studio has an editor that is supposed to help with adding new image assets to the project, it is pretty slow when adding a bunch of new images. If you have a lot of images to add, it even crashes once in a while, which can be annoying. Because I was in the process of adding more than a hundred images while porting my first app ever from Windows Phone to iOS and Android, I searched for a faster way – and found it.

What’s going on under the hood?

When you add an imageset to your assets structure, Visual Studio does quite some work. These are the steps it performs:

  1. creation of a folder for the imageset in the Assets folder
  2. creation of a Contents.json file
  3. modification of the Contents.json file
  4. modification of the .csproj file

This takes some time for every image, and Visual Studio seems to be quite busy with these steps. After analyzing the way imagesets get added, I immediately realized that I am faster than Visual Studio if I do that work manually.

How to add the assets – step by step

  1. right click on the project in Solution Explorer and select ‘Open Folder in File Explorer‘ and find the ‘Assets‘ folder  open_folder_in_file_explorer
  2. create a new folder in this format: ‘[yourassetname].imageset‘
  3. add your image to the folder
  4. create a new file with the name Contents.json   add_image_and_contents_file
  5. open the file (I use Notepad++ for such operations) and add this minimum required JSON to it:
    {
      "images": [
        {
          "scale": "1x",
          "idiom": "universal",
          "filename": "[yourimage].png"
        },
        {
          "scale": "2x",
          "idiom": "universal",
          "filename": "[yourimage].png"
        },
        {
          "scale": "3x",
          "idiom": "universal",
          "filename": "[yourimage].png"
        },
        {
          "idiom": "universal"
        }
      ],
      "properties": {
        "template-rendering-intent": ""
      },
      "info": {
        "version": 1,
        "author": ""
      }
    }
  6. go back to Visual Studio, right click on the project again and select ‘Unload Project‘ unload_project
  7. right click again and select ‘Edit [yourprojectname].iOS.csproj‘ edit_ios_csproj
  8. find the ItemGroup with the Assets
  9. inside the ItemGroup, add your imageset with these two entries:
    <ImageAsset Include="Assets.xcassets\[yourassetname].imageset\[yourimage].png">
      <Visible>false</Visible>
    </ImageAsset>
    <ImageAsset Include="Assets.xcassets\[yourassetname].imageset\Contents.json">
      <Visible>false</Visible>
    </ImageAsset>   
    
  10. close the file and reload the project by selecting it from the context menu in Solution Explorer

If you followed these steps, your assets should be visible immediately:
assets_with_added_imagesets

I did not measure the time exactly, but I felt I was significantly faster by adding all those images that way. Your mileage may vary, depending on the power of your dev machine.

Conclusion

Features like the asset manager for iOS in Visual Studio are nice to have, but some of them have serious performance problems (most of which are already known). By taking over the process manually, one can be a lot faster than the built-in feature. That said, as always, I hope this post is helpful for some of you.

Happy coding, everyone!

Xamarin Forms, the MVVMLight Toolkit and I: Command Chaining

The problem

Sometimes we want to invoke a method on a control that is only available in code. Due to the abstraction of our MVVM application, the ViewModel has no access to all those methods we could call if we accessed the control directly in code. There are several approaches to solve this problem. In one of my recent projects, I needed to invoke a method of a custom control, which should be routed into the platform renderers I wrote for that custom control. I remembered that I had read about command chaining for such cases quite a few times and tried to implement it. In the beginning, it may sound weird to do this, but the more often I see this technique, the more I like it.

Simple Demo control

For demo purposes, I created this really simple Xamarin.Forms user control:

<?xml version="1.0" encoding="UTF-8"?>
<ContentView xmlns="http://xamarin.com/schemas/2014/forms" 
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             x:Class="XfMvvmLight.Controls.CommandChainingDemoControl">
  <ContentView.Content>
      <StackLayout HorizontalOptions="FillAndExpand" VerticalOptions="FillAndExpand">
            <Label x:Name="LabelFilledFromBehind" Margin="12" FontSize="Large" />   
        </StackLayout>
  </ContentView.Content>
</ContentView>

As you can see, there is just a label without text. We will write the necessary code to fill this label with some text just by invoking a method in the code behind. To be able to do so, we need a BindableProperty (once again) to get our foot into the door of the control:

public static BindableProperty DemoCommandProperty = BindableProperty.Create(nameof(DemoCommand), typeof(ICommand), typeof(CommandChainingDemoControl), null, BindingMode.OneWayToSource);

public ICommand DemoCommand
{
    get => (ICommand)GetValue(DemoCommandProperty);
    set => SetValue(DemoCommandProperty, value);
}

The implementation is pretty straightforward. We have done this already during this series, so you should be familiar with it if you have been following along. One thing, however, is different. For this BindableProperty, we are using BindingMode.OneWayToSource. By doing so, we are basically making it a read-only property, which sends its changes only down to the ViewModel (the source). If we did not do this, the ViewModel could change the property, which we do not want here.

Now that we have the BindableProperty in place, we need to create an instance of the Command that will be sent down to the ViewModel. We do this as soon as the control is instantiated, in the constructor:

public CommandChainingDemoControl()
{
    InitializeComponent();

    this.DemoCommand = new Command(() =>
    {
        FillFromBehind();
    });
}

private void FillFromBehind()
{
    this.LabelFilledFromBehind.Text = "Text was empty, but we used command chaining to show this text inside a control.";
}

That’s all we need to do in the code behind.

ViewModel

For this demo, I created a new page and a corresponding ViewModel in the demo project. Here is the very basic ViewModel code:

using System.Windows.Input;
using GalaSoft.MvvmLight.Command;

namespace XfMvvmLight.ViewModel
{
    public class CommandChainingDemoViewModel : XfNavViewModelBase
    {
        private ICommand _invokeDemoCommand;
        private RelayCommand _demo1Command;

        public CommandChainingDemoViewModel()
        {
        }

        public ICommand InvokeDemoCommand { get => _invokeDemoCommand; set => Set(ref _invokeDemoCommand, value); }

        public RelayCommand Demo1Command => _demo1Command ?? (_demo1Command = new RelayCommand(() =>
        {
            this.InvokeDemoCommand?.Execute(null);
        }));
    }
}

As you can see, the ViewModel includes two Commands. One is the pure ICommand implementation that gets its value from the OneWayToSource binding. We are not using MVVMLight’s RelayCommand here to avoid casting between types, which always led to an exception when I first tested the implementation. The second command is bound to a button in the CommandChainingDemoPage and triggers the execution of the InvokeDemoCommand.

Final steps

The final steps are just a few simple ones. We need to connect the InvokeDemoCommand to the user control we created earlier and bind the Demo1Command to the corresponding button in the view. This is the page’s code after doing so:

<?xml version="1.0" encoding="utf-8" ?>
<baseCtrl:XfNavContentPage
    xmlns:baseCtrl="clr-namespace:XfMvvmLight.BaseControls" xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             xmlns:ctrl="clr-namespace:XfMvvmLight.Controls;assembly=XfMvvmLight"
             x:Class="XfMvvmLight.View.CommandChainingDemoPage" RegisteredPageKey="{Binding CommandChainingDemoPageKey, Source=Locator}">
    <ContentPage.BindingContext>
        <Binding Path="CommandChainingDemoVm" Source="{StaticResource Locator}" />
    </ContentPage.BindingContext>

    <Grid>
        <Grid.RowDefinitions>
            <RowDefinition Height="*"/>
            <RowDefinition Height="Auto"/>
        </Grid.RowDefinitions>

        <ctrl:CommandChainingDemoControl Grid.Row="0" DemoCommand="{Binding InvokeDemoCommand, Mode=OneWayToSource}" Margin="12"></ctrl:CommandChainingDemoControl>

        <Button Text="Execute Command Chaining" Command="{Binding Demo1Command}" Margin="12" Grid.Row="1" />

    </Grid>
</baseCtrl:XfNavContentPage>

One thing to point out is that we are specifying the OneWayToSource binding here once again. It should work with the default binding as well, but I recommend doing it like I did, which makes the code easier to understand for others (and of course for yourself). That’s all – we now have a working command chain that invokes a method inside the user control from our ViewModel.

Conclusion

Command chaining can be a convenient way to invoke actions on controls that are otherwise not reachable due to the abstraction of layers in MVVM. Once you get the concept, it is pretty easy to implement. This technique is also usable outside of Xamarin.Forms, so do not hesitate to use it elsewhere. Just remember the needed steps:

  • create a user control (or a derived one if you need to call a method on framework controls)
  • add a BindableProperty/DependencyProperty and set its default binding mode to OneWayToSource
  • create the Command instance inside the constructor of the user control and assign it to the BindableProperty/DependencyProperty
  • pass the method call/code into the Action part of the newly created Command instance
  • create the commands in the ViewModel
  • connect the Commands to your final view implementation

Like I wrote earlier, I came across this (again) when I was writing a custom Xamarin.Forms control with renderers, where I had to invoke methods inside the renderer from my ViewModel. Other techniques that I saw to solve this are using Messengers (be it the one from MVVMLight or the Xamarin.Forms Messenger implementation) or the good old Boolean switch implementation (which also uses a BindableProperty/DependencyProperty). I decided to use the command chaining approach as it is pretty elegant in my eyes and not that complicated to implement.

The series’ sample project is updated and available here on Github. Like always, I hope this post is useful for some of you.

Happy coding, everyone!


all articles of this series

title image credit

Xamarin Forms, the MVVMLight Toolkit and I: migrating the Forms project and MVVMLight to .NET Standard

When I started this series, Xamarin and Xamarin.Forms did not fully support .NET Standard. The sample project for this series still has a portable class library, and while I wanted to blog on another topic, I got reminded that I never updated the project. This post will be all about the change to .NET Standard, focusing on the Xamarin.Forms project as well as the MVVMLight library, which in the meantime is also available as a .NET Standard version.

Step-by-Step

I have already done the necessary conversion steps quite a few times, and to get you through the conversion very quickly, I will show you a set of screenshots. Where applicable, I will also provide some XML snippets that you can copy and paste.

Step 1

step1_unload project

The first step is to unload the project file. To do so, select your Xamarin.Forms project in Solution Explorer and right-click on it to bring up the context menu. From there, select "Unload Project".

Step 2

step2_edit_csproj

Next, right-click the selected Xamarin.Forms project to bring up the context menu once again. From there, select "Edit [PROJECTNAME].csproj" to open the file.

Step 3

step3_replace_all

Now that the file is open, press “CTRL + A”, followed by “DEL”. Seriously, just do it. Once the file is empty, copy these lines and paste them into your now empty file:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>

</Project>

Step 4

step4_open_packages_config

This step is not strictly necessary, but I want to point one thing out. The old PCL project type listed quite a bunch of libraries in the packages.config file. These are all NuGet packages necessary to make the app actually run (they get pulled in as dependencies during the install of a NuGet package). Now that we are converting, we are getting rid of the packages.config file. You can delete it right away in the Solution Explorer. We will "install" the needed packages (marked with the arrows) in the next step.

Step 5

step5_changed_csproj

For “installing” NuGet packages into the converted project, we are adding a new ItemGroup to the XML-File. The Package System.ValueTuple is referenced – because in this series’ sample project we have some code in there that uses it. The absolute minimum you need to get it running is:

<!--
Template for Nuget Packages:
<ItemGroup>
  <PackageReference Include="" Version="" />
</ItemGroup>
-->

<ItemGroup>
  <!--MvvmLight has changed, so do we!-->
  <PackageReference Include="MvvmLightLibsStd10" Version="5.4.1" />
  <!--needed references-->
  <PackageReference Include="Xamarin.Forms" Version="3.1.0.583944" />
</ItemGroup>

If you have other NuGet packages, just add them into this item group. They will get installed in the correct version if you follow this tutorial until the end.

You might have noticed that the MVVMLight package I am inserting here is not the same as before. This is absolutely true, but for a reason. Laurent Bugnion has published the .NET Standard version quite some time ago. If you want to read his blog post, you can find it here.

The second change I want to outline is the DebugType settings. These are not set that way if you create a new project that is already .NET Standard. In order to enable debugging of your Xamarin.Forms code, however, you absolutely should add these lines as well:

<!--these enable debugging in the Forms project-->
<PropertyGroup Condition=" '$(Configuration)'=='Debug' ">
  <DebugType>full</DebugType>
  <DebugSymbols>true</DebugSymbols>
  <GenerateDocumentationFile>false</GenerateDocumentationFile>
</PropertyGroup>
<PropertyGroup Condition=" '$(Configuration)'=='Release' ">
  <DebugType>pdbonly</DebugType>
  <GenerateDocumentationFile>true</GenerateDocumentationFile>
</PropertyGroup>

These properties provide the debug symbols and the pdb files that a lot of analytics services depend on. Simply put, copy and paste them into your project. Hit the "Save" button and close the file.

Step 6

step6_delete_obj

Now that we have changed the project file, our already downloaded packages and created dll-files are no longer valid. To make sure we are able to compile, we need to delete the content of the obj folder in the Folder View of the Solution Explorer in Visual Studio.

Step 7

step7_delete_bin

Do the same with the content of the bin folder and switch back to the Solution View by hitting the Folder View button again.

Step 8

step8_reload_project

Now we are finally able to reload the project as the base conversion is already done.

Step 9

step9_fix errors

Trying to be optimistic and hitting the Rebuild button will result in these (and maybe even more) errors. The first one can be solved very quickly, while the second one is only the first in a row of errors.

Step 10

step10_delete_properties_folder

To solve the assembly attributes error, just go to the Folder View again and select the Properties folder. Bring up the context menu by right-clicking on it and select the delete option. Confirm the deletion to get rid of the folder. The new project style generates the assembly information from the project file during build, which is what causes the errors.

Step 11

step11_unneeded_reference

Now let’s face the next error. As the MVVMLight .NET Standard version no longer relies on the CommonServiceLocator like before, we are able to remove this reference from our ViewModelLocator.

Step 12

step12_unneeded_instantiation

Of course, we now can also remove the instantiation call for the removed ServiceLocator.

Step 13

step13_replace_servicelocator_calls

In the ViewModel instance references, replace ServiceLocator.Current with SimpleIoc.Default. Hit the save button again. You might have more errors to fix; do so, and finally save your ViewModelLocator.
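
To illustrate the result, here is a minimal sketch of a migrated ViewModelLocator (MainViewModel is just a placeholder name – your registrations will differ):

//minimal sketch - MainViewModel stands in for your own ViewModels
using GalaSoft.MvvmLight.Ioc;

public class ViewModelLocator
{
    public ViewModelLocator()
    {
        //before: ServiceLocator.SetLocatorProvider(() => SimpleIoc.Default);
        //now we register directly against SimpleIoc:
        SimpleIoc.Default.Register<MainViewModel>();
    }

    //before: ServiceLocator.Current.GetInstance<MainViewModel>()
    public MainViewModel Main => SimpleIoc.Default.GetInstance<MainViewModel>();
}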

Step 14

step14_rebuild_succeeded

After all the work is done, we are now able to compile the .NET Standard version of our Xamarin.Forms project. If you fixed all of your errors in the step before, you should achieve a similar result to mine and get a success message.

Final steps

Now that the Xamarin.Forms project builds, you might want to try to build all other projects in the solution as well. The changed structure of the Xamarin.Forms project also has an impact on the platform projects, which is why I absolutely recommend deleting the contents of their bin and obj folders, too. This will save you a lot of work getting things to compile again.

As always, I hope this post is helpful for some of you. If you have any questions, feel free to ping me on my social accounts or write a comment below. The next post in this series will again involve more code, so stay tuned – it will be out soon.

Please find the updated sample project here on GitHub.

Happy coding, everyone!

title image credit

Xamarin.Forms, Akavache and I: ensuring protection of sensitive data

Recap

Some of you might remember my posts about encryption for Android, iOS and Windows 10. If not, take a look here:

Xamarin Android: asymmetric encryption without any user input or hardcoded values

How to perform asymmetric encryption without user input/hardcoded values with Xamarin iOS

Using the built-in UWP data protection for data encryption

It is no coincidence that I wrote these three posts before starting with this Akavache series, as we’ll use those techniques to protect sensitive data with Akavache. So you might want to have a look at them first before you read on.

Creating a secure blob cache in Akavache

Akavache has a special type for saving sensitive data – based on the interface ISecureBlobCache. The first step is to extend the IBlobCacheInstanceHelper interface we implemented in the first post of this series:

    public interface IBlobCacheInstanceHelper
    {
        void Init();

        IBlobCache LocalMachineCache { get; set; }

        ISecureBlobCache SecretLocalMachineCache { get; set; }
    }

Of course, all three platform implementations of the IBlobCacheInstanceHelper interface need to be updated as well. The code to add is the same for all three platforms:

public ISecureBlobCache SecretLocalMachineCache { get; set; }     

private void GetSecretLocalMachineCache()
{
    var secretCache = new Lazy<ISecureBlobCache>(() =>
                                                 {
                                                     _filesystemProvider.CreateRecursive(_filesystemProvider.GetDefaultSecretCacheDirectory()).SubscribeOn(BlobCache.TaskpoolScheduler).Wait();
                                                     return new SQLiteEncryptedBlobCache(Path.Combine(_filesystemProvider.GetDefaultSecretCacheDirectory(), "secret.db"), new PlatformCustomAkavacheEncryptionProvider(), BlobCache.TaskpoolScheduler);
                                                 });

    this.SecretLocalMachineCache = secretCache.Value;
}

As we will use the same name for all platform implementations, that’s already all we have to do here.

Platform specific encryption provider

Implementing the platform-specific code is nothing new. Way before I used Akavache, others had already implemented solutions. The main issue is that there is no platform implementation for Android and iOS (and maybe others). My solution is inspired by this blog post by Kent Boogaart, which is (as far as I can see) also broadly accepted among the community. The only thing I disliked about it was the requirement for a password – which would either be something reversible or cause a (maybe) bad user experience.

Akavache provides the IEncryptionProvider interface, which contains two methods: one for encryption, the other for decryption. Both methods work with byte[] for input and output, so you should know how to convert your data to and from that.
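
As a quick reminder, converting a string to byte[] and back could look like this (UTF-8 here – pick whatever encoding or serialization fits your data):

//example only: string <-> byte[] conversion via UTF-8
byte[] rawBytes = System.Text.Encoding.UTF8.GetBytes("my sensitive value");
string roundTripped = System.Text.Encoding.UTF8.GetString(rawBytes);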

Implementing the IEncryptionProvider interface

The implementation of Akavache’s encryption interface is following the same principle on all three platforms.

  • provide a reference to the internal TaskpoolScheduler in the constructor
  • get an instance of our platform specific encryption provider
  • get or create keys (Android and iOS)
  • provide helper methods that perform encryption/decryption

Let’s have a look at the platform implementations. I will show the full class implementation for each platform and comment on them afterwards.

Android

[assembly: Xamarin.Forms.Dependency(typeof(PlatformCustomAkavacheEncryptionProvider))]
namespace XfAkavacheAndI.Android.PlatformImplementations
{
    public class PlatformCustomAkavacheEncryptionProvider : IEncryptionProvider
    {
        private readonly IScheduler _scheduler;

        private static readonly string KeyStoreName = $"{BlobCache.ApplicationName.ToLower()}_secureStore";

        private readonly PlatformEncryptionKeyHelper _encryptionKeyHelper;

        private const string TRANSFORMATION = "RSA/ECB/PKCS1Padding";
        private IKey _privateKey = null;
        private IKey _publicKey = null;

        public PlatformCustomAkavacheEncryptionProvider()
        {
            _scheduler = BlobCache.TaskpoolScheduler ?? throw new ArgumentNullException(nameof(_scheduler), "Scheduler is null");

            _encryptionKeyHelper = new PlatformEncryptionKeyHelper(Application.Context, KeyStoreName);
            GetOrCreateKeys();
        }

        public IObservable<byte[]> DecryptBlock(byte[] block)
        {
            if (block == null)
            {
                throw new ArgumentNullException(nameof(block), "block cannot be null");
            }

            return Observable.Start(() => Decrypt(block), _scheduler);
        }

        public IObservable<byte[]> EncryptBlock(byte[] block)
        {
            if (block == null)
            {
                throw new ArgumentNullException(nameof(block), "block cannot be null");
            }

            return Observable.Start(() => Encrypt(block), _scheduler);
        }


        private void GetOrCreateKeys()
        {
            if (!_encryptionKeyHelper.KeysExist())
                _encryptionKeyHelper.CreateKeyPair();

            _privateKey = _encryptionKeyHelper.GetPrivateKey();
            _publicKey = _encryptionKeyHelper.GetPublicKey();
        }


        public byte[] Encrypt(byte[] rawBytes)
        {
            if (_publicKey == null)
            {
                throw new ArgumentNullException(nameof(_publicKey), "Public key cannot be null");
            }

            var cipher = Cipher.GetInstance(TRANSFORMATION);
            cipher.Init(CipherMode.EncryptMode, _publicKey);

            return cipher.DoFinal(rawBytes);
        }

        public byte[] Decrypt(byte[] encyrptedBytes)
        {
            if (_privateKey == null)
            {
                throw new ArgumentNullException(nameof(_privateKey), "Private key cannot be null");
            }

            var cipher = Cipher.GetInstance(TRANSFORMATION);
            cipher.Init(CipherMode.DecryptMode, _privateKey);

            return cipher.DoFinal(encyrptedBytes);
        }
    }
}

As you can see, I am getting Akavache’s internal TaskpoolScheduler in the constructor, as initially stated. Then, for this sample, I am using RSA encryption. The helper methods pretty much implement the same code as in the post about my KeyStore implementation. The only thing left to do is to use these methods in the EncryptBlock and DecryptBlock implementations, which is done asynchronously via Observable.Start.

iOS

[assembly: Xamarin.Forms.Dependency(typeof(PlatformCustomAkavacheEncryptionProvider))]
namespace XfAkavacheAndI.iOS.PlatformImplementations
{
    public class PlatformCustomAkavacheEncryptionProvider : IEncryptionProvider
    {
        private readonly IScheduler _scheduler;

        private readonly PlatformEncryptionKeyHelper _encryptionKeyHelper;


        private SecKey _privateKey = null;
        private SecKey _publicKey  = null;

        public PlatformCustomAkavacheEncryptionProvider()
        {
            _scheduler = BlobCache.TaskpoolScheduler ??
                         throw new ArgumentNullException(nameof(_scheduler), "Scheduler is null");

            _encryptionKeyHelper = new PlatformEncryptionKeyHelper(BlobCache.ApplicationName.ToLower());
            GetOrCreateKeys();
        }

        public IObservable<byte[]> DecryptBlock(byte[] block)
        {
            if (block == null)
            {
                throw new ArgumentNullException(nameof(block), "block can't be null");
            }

            return Observable.Start(() => Decrypt(block), _scheduler);
        }

        public IObservable<byte[]> EncryptBlock(byte[] block)
        {
            if (block == null)
            {
                throw new ArgumentNullException(nameof(block), "block can't be null");
            }

            return Observable.Start(() => Encrypt(block), _scheduler);
        }


        private void GetOrCreateKeys()
        {
            if (!_encryptionKeyHelper.KeysExist())
                _encryptionKeyHelper.CreateKeyPair();

            _privateKey = _encryptionKeyHelper.GetPrivateKey();
            _publicKey = _encryptionKeyHelper.GetPublicKey();
        }

        private byte[] Encrypt(byte[] rawBytes)
        {
            if (_publicKey == null)
            {
                throw new ArgumentNullException(nameof(_publicKey), "Public key cannot be null");
            }

            var code = _publicKey.Encrypt(SecPadding.PKCS1, rawBytes, out var encryptedBytes);

            return code == SecStatusCode.Success ? encryptedBytes : null;
        }

        private byte[] Decrypt(byte[] encyrptedBytes)
        {
            if (_privateKey == null)
            {
                throw new ArgumentNullException(nameof(_privateKey), "Private key cannot be null");
            }

            var code = _privateKey.Decrypt(SecPadding.PKCS1, encyrptedBytes, out var decryptedBytes);

            return code == SecStatusCode.Success ? decryptedBytes : null;
        }

    }
}

The iOS implementation follows the same schema as the Android one. However, iOS uses the KeyChain, which makes the encryption helper methods themselves different.

UWP

[assembly: Xamarin.Forms.Dependency(typeof(PlatformCustomAkavacheEncryptionProvider))]
namespace XfAkavacheAndI.UWP.PlatformImplementations
{
    public class PlatformCustomAkavacheEncryptionProvider : IEncryptionProvider
    {
        private readonly IScheduler _scheduler;

        private string _localUserDescriptor = "LOCAL=user";
        private string _localMachineDescriptor = "LOCAL=machine";

        public bool UseForAllUsers { get; set; } = false;

        public PlatformCustomAkavacheEncryptionProvider()
        {
            _scheduler = BlobCache.TaskpoolScheduler ??
                         throw new ArgumentNullException(nameof(_scheduler), "Scheduler is null");
        }

        public IObservable<byte[]> EncryptBlock(byte[] block)
        {
            if (block == null)
            {
                throw new ArgumentNullException(nameof(block), "block can't be null");
            }

            return Observable.Start(() => Encrypt(block).GetAwaiter().GetResult(), _scheduler);
        }

        public IObservable<byte[]> DecryptBlock(byte[] block)
        {
            if (block == null)
            {
                throw new ArgumentNullException(nameof(block), "block can't be null");
            }

            return Observable.Start(() => Decrypt(block).GetAwaiter().GetResult(), _scheduler);
        }


        public async Task<byte[]> Encrypt(byte[] data)
        {
            var provider = new DataProtectionProvider(UseForAllUsers ? _localMachineDescriptor : _localUserDescriptor);

            var contentBuffer = CryptographicBuffer.CreateFromByteArray(data);
            var contentInputStream = new InMemoryRandomAccessStream();
            var protectedContentStream = new InMemoryRandomAccessStream();

            //storing data in the stream
            IOutputStream outputStream = contentInputStream.GetOutputStreamAt(0);
            var dataWriter = new DataWriter(outputStream);
            dataWriter.WriteBuffer(contentBuffer);
            await dataWriter.StoreAsync();
            await dataWriter.FlushAsync();

            //reopening in input mode
            IInputStream encodingInputStream = contentInputStream.GetInputStreamAt(0);

            IOutputStream protectedOutputStream = protectedContentStream.GetOutputStreamAt(0);
            await provider.ProtectStreamAsync(encodingInputStream, protectedOutputStream);
            await protectedOutputStream.FlushAsync();

            //verify that encryption happened
            var inputReader = new DataReader(contentInputStream.GetInputStreamAt(0));
            var protectedReader = new DataReader(protectedContentStream.GetInputStreamAt(0));

            await inputReader.LoadAsync((uint)contentInputStream.Size);
            await protectedReader.LoadAsync((uint)protectedContentStream.Size);

            var inputBuffer = inputReader.ReadBuffer((uint)contentInputStream.Size);
            var protectedBuffer = protectedReader.ReadBuffer((uint)protectedContentStream.Size);

            if (!CryptographicBuffer.Compare(inputBuffer, protectedBuffer))
            {
               return protectedBuffer.ToArray();
            }
            else
            {
                return null;
            }
        }

        public async Task<byte[]> Decrypt(byte[] encryptedBytes)
        {
            var provider = new DataProtectionProvider();

            var encryptedContentBuffer = CryptographicBuffer.CreateFromByteArray(encryptedBytes);
            var contentInputStream = new InMemoryRandomAccessStream();
            var unprotectedContentStream = new InMemoryRandomAccessStream();

            IOutputStream outputStream = contentInputStream.GetOutputStreamAt(0);
            var dataWriter = new DataWriter(outputStream);
            dataWriter.WriteBuffer(encryptedContentBuffer);
            await dataWriter.StoreAsync();
            await dataWriter.FlushAsync();

            IInputStream decodingInputStream = contentInputStream.GetInputStreamAt(0);

            IOutputStream protectedOutputStream = unprotectedContentStream.GetOutputStreamAt(0);
            await provider.UnprotectStreamAsync(decodingInputStream, protectedOutputStream);
            await protectedOutputStream.FlushAsync();

            DataReader reader2 = new DataReader(unprotectedContentStream.GetInputStreamAt(0));
            await reader2.LoadAsync((uint)unprotectedContentStream.Size);
            IBuffer unprotectedBuffer = reader2.ReadBuffer((uint)unprotectedContentStream.Size);

            return unprotectedBuffer.ToArray();
        }
    }   
}

Last but not least, we also have an implementation for Windows applications. It uses the DataProtection API, which handles all the key management and lets us focus on the encryption itself. As the API is asynchronous, I am using the .GetAwaiter().GetResult() Task extensions to make it compatible with Observable.Start.

Conclusion

Using the implementations above paired with our instance helper makes it easy to protect data in our apps. With all those data breach scandals and law changes around, this is one possible, secure way to handle sensitive data, as there are no hardcoded values or any user interaction involved.

For a better understanding of all that code, I made a sample project available that has all the referenced and mentioned classes implemented. Feel free to fork it and play with it (or even give me some feedback). For using the implementations, please refer to my post about common usages I wrote a few days ago. The only difference is that you would use SecretLocalMachineCache instead of LocalMachineCache for sensitive data.

As always, I hope this post is helpful for some of you.

Until the next post, happy coding!


P.S. Feel free to download my official app for msicc.net, which – of course – uses the implementations above:
iOS Android Windows 10

Posted by msicc in Android, Dev Stories, iOS, UWP, Xamarin, 3 comments
Xamarin.Forms, Akavache and I: storing, retrieving and deleting data

Xamarin.Forms, Akavache and I: storing, retrieving and deleting data

Caching always has the same job: provide frequently used data in very little time. As I mentioned in the first post of this series, Akavache is my first choice because it is fast. It also provides a very easy way to interact with it (once one gets used to Reactive Extensions). The code I am showing here lives in the Forms project, but it can also be called from the platform projects thanks to the interface we defined earlier.

Enabling async support

First things first: we should write our code asynchronously, which is why we need to enable async support by adding using System.Reactive.Linq; to the using statements in our class. This one is not so obvious, and I read a lot of questions on the web where this was the simple solution. So now you know, let's go ahead.
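
For reference, this is all it takes at the top of the class:

//enables awaiting Akavache's IObservable-based methods
using System.Reactive.Linq;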

Simple case

The most simple case of storing data is just throwing data with a key into the underlying database:

//getting a reference to the cache instance
var cache = SimpleIoc.Default.GetInstance<IBlobCacheInstanceHelper>().LocalMachineCache;
var dataToSave = "this is a simple string to save into the database";
await cache.InsertObject<string>("YourKeyHere", dataToSave);

Of course, we need a reference to the IBlobCache instance we already have in place. I am saving a simple string here for demo purposes, but you can also save more complex types like a list of blog posts into the cache. Akavache uses Json.NET, which serializes the data into a valid JSON string that can be saved. Similarly, it is very easy to get the data deserialized from the database:

var dataFromCache = await cache.GetObject<string>("YourKeyHere");

That’s it. For things like storing Boolean values, simple strings (unencrypted), dates etc., this might already be everything you need.

Caching data from the web

Of course it wouldn’t be necessary to implement an advanced library if we would have only this scenario. More often, we are fetching data from the web and need to save it in our apps. There are several reasons to do this, with saving (mobile) data volume and performance being the two major reasons.

Akavache provides a bunch of very useful extensions. The most prominent one I am using is the GetOrFetchObject<T> method. A typical implementation looks like this:

var postsCache = await cache.GetOrFetchObject<List<BlogPost>>(feedName,
    async () =>
    {
        var newPosts = await _postsHandler.GetPostsAsync(BaseUrl, 20, 20, 1, feedName.ToCategoryId()).ConfigureAwait(false);

        await cache.InsertObject<List<BlogPost>>(feedName, newPosts);

        return newPosts;
    });

The GetOrFetchObject<T> method's minimum parameters are the key of the cache entry and an asynchronous function that is executed when there is no data in the cache. In the sample above, it loads the latest 20 posts from a WordPress blog (utilizing my WordPressReader lib) and saves them into the cache instance before returning the downloaded data. The method has an optional DateTimeOffset parameter, which may be interesting if you need to expire the saved data after some time (see the sketch below).
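
As a hedged sketch (the one-hour window is just an assumption for illustration), passing that optional expiration parameter could look like this:

//let the cached posts expire after one hour via the optional absoluteExpiration parameter
var postsCache = await cache.GetOrFetchObject<List<BlogPost>>(feedName,
    () => _postsHandler.GetPostsAsync(BaseUrl, 20, 20, 1, feedName.ToCategoryId()),
    DateTimeOffset.Now.AddHours(1));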

Saving images/documents from the web

If you need to download files, be it images or other documents, from the web, Akavache provides another helper extension:

byte[] bytes = await cache.DownloadUrl("YourFileKeyHere", url);

Personally, I am loading all files with this method, even though there are some special image loading methods available as well (see the readme in Akavache's repo). The main reason I am doing so is that, until now, I have always had a platform-specific implementation for such cases – mainly for performance reasons. In one of the following blog posts, you will see such an implementation for image caching using a custom renderer on each platform.

Deleting data from the cache

When working with caches, you cannot avoid situations where data needs to be removed manually from the cache:

//delete a single entry by key:
cache.Invalidate("KeyToDelete");

//delete all entries with the same type:
cache.InvalidateAllObjects<BlogPost>();

//delete all entries
cache.InvalidateAll();

If you want to continue with some other action after deletion completes, you can use the Subscribe method to invoke this action:

cache.InvalidateAll().Subscribe(x => YourMethodToInvoke());

Conclusion

Even though Akavache provides more methods to store and retrieve data, the ones I mentioned above are those that I use frequently and without problems in my Xamarin.Forms applications, while still being able to invoke them in platform-specific code as well. If you want to have a look at the other available methods, follow the link above to Akavache's GitHub repo. As always, I hope this blog post is helpful for some of you.

Until the next post, happy coding, everyone!

Posted by msicc in Android, Dev Stories, iOS, UWP, Xamarin, 1 comment
Xamarin.Forms, Akavache and I: Initial setup (new series)

Xamarin.Forms, Akavache and I: Initial setup (new series)

Caching is never a trivial task. Sometimes, we can use built-in storage, but more often, it takes quite some time when we are storing a large amount of data (e.g. large datasets or long JSON strings). I tried quite a few approaches, including:

  • built-in storage
  • self handled files
  • plugins that use one or all of the above
  • Akavache (which uses SQLite under the hood)

Why Akavache wins

Well, the major reason is quite simple. It is fast. Really fast. At least compared to the other options. You may not notice the difference until you are using a background task that relies on the cached data, or until you try to truly optimize the startup performance of your Xamarin Android app. Those two were the reason for me to switch, because once implemented, it handles both jobs perfectly. Because it is so fast, quite a number of apps use it. Bonus: there are a lot of tips on StackOverflow as well as on GitHub, as it is already used by a lot of developers.

Getting your projects ready

Well, as so often, it all starts with the installation of NuGet packages. As I am trying to follow good practices wherever I can, I am using .NET Standard whenever possible. The latest stable version of Akavache works only partially in .NET Standard projects, so I recommend using the latest alpha (at the time of this post) in your .NET Standard project (even if Visual Studio keeps telling you that a pre-release dependency is not a good idea). If you are using package references in your project files, there might be some additional work to get everything to build and run smoothly, especially in a Xamarin.Android project.

Your mileage may vary, but in my experience, you should install the following dependencies and Akavache separately:

After installing these packages in your Xamarin.Forms and platform projects, we are ready for the next step.

Initializing Akavache

Basically, you should be able to use Akavache in a very simple way, by just defining the application name like this during application initialization:

BlobCache.ApplicationName = "MyAkavachePoweredApp";

You can do this assignment in your platform projects as well as in your Xamarin.Forms project; both ways will work. Just remember to do it – it is also a required step to get my code working.

There are static properties like BlobCache.LocalMachine one can use to cache data (shown below for reference). However, once your app uses an advanced library like Akavache, it is very likely that the complexity of your app will force you into a more complex scenario. In my case, the usage of a scheduled job on Android was the reason why I am doing the initialization on my own. The scheduled job starts a process for the application, and the job updates data in the cache that the application uses. There were several cases where the standard initialization did not work, so I decided to turn the special case into the standard case. The following code also works in simple scenarios, but keeps the door open for more complex ones as well. The second reason why I did my own implementation is the MVVM structure of my apps.
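
Just for comparison – and only as a minimal sketch, not as part of the implementation used in the rest of this post – the static route would look roughly like this:

//simplest possible usage via Akavache's built-in static instance
BlobCache.ApplicationName = "MyAkavachePoweredApp";
await BlobCache.LocalMachine.InsertObject("someKey", "some value");
var value = await BlobCache.LocalMachine.GetObject<string>("someKey");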

IBlobCacheInstanceHelper rules them all

Like often when we want to use platform implementations, all starts with an interface that dictates the functionality. Let’s start with this simple one:

public interface IBlobCacheInstanceHelper
{
    void Init();
    IBlobCache LocalMachineCache { get; set; }
}

We are defining our own IBlobCache instance, which we will initialize with the Init() method on each platform. Let's have a look at the platform implementation:

[assembly: Xamarin.Forms.Dependency(typeof(PlatformBlobCacheInstanceHelper))]
namespace [YOURNAMESPACEHERE]
{
    public class PlatformBlobCacheInstanceHelper : IBlobCacheInstanceHelper
    {
        private IFilesystemProvider _filesystemProvider;

        public PlatformBlobCacheInstanceHelper() { }

        public void Init()
        {
            _filesystemProvider = Locator.Current.GetService<IFilesystemProvider>();
            GetLocalMachineCache();
        }

        public IBlobCache LocalMachineCache { get; set; }

        private void GetLocalMachineCache()
        {

            var localCache = new Lazy<IBlobCache>(() =>
            {
                //make sure the cache directory exists before creating the SQLite-backed cache
                _filesystemProvider.CreateRecursive(_filesystemProvider.GetDefaultLocalMachineCacheDirectory())
                                   .SubscribeOn(BlobCache.TaskpoolScheduler)
                                   .Wait();

                return new SQLitePersistentBlobCache(
                    Path.Combine(_filesystemProvider.GetDefaultLocalMachineCacheDirectory(), "blobs.db"),
                    BlobCache.TaskpoolScheduler);
            });

            this.LocalMachineCache = localCache.Value;
        }

        //TODO: implement other cache types if necessary at some point
    }
}

Let me explain what this code does.

As SQLite, which is powering Akavache, is file based, we need to provide a file path. The Init() method assigns Akavache’s internal IFileSystemProviderinterface to the internal member. After getting an instance via Splat’s Locator, we can now use it to get the file path and create the .db-file for our local cache. The GetLocalMachineCache()method is basically a copy of Akavache’s internal registration. It lazily creates an instance of BlobCache through the IBlobCacheinterface. The create instance is then passed to the LocalMachineCacheproperty, which we will use later on. Finally, we will be using the DependencyServiceof Xamarin.Forms to get an instance of our platform implementation, which is why we need to define the Dependency attribute as well.

Note: you can name the file whatever you want. If you are already using Akavache and want to change the instance handling, you should keep the original names used by Akavache. This way, your users will not lose any data.

This implementation can be used in your Android, iOS and UWP projects within your Xamarin.Forms app. If you are wondering why I do this separately for every platform, you are right – so far, there is no need to do it that way. The code above would also work solely in your Xamarin.Forms project. However, once you come to the point where you need encrypted data in your cache, the implementation will change on every platform. That will be the topic of a future blog post.

If you have been reading my series about MVVMLight, you may guess the next step already. This is how I initialize the platform implementation within my ViewModelLocator:

//register:
var cacheInstanceHelper = DependencyService.Get<IBlobCacheInstanceHelper>();
if (!SimpleIoc.Default.IsRegistered<IBlobCacheInstanceHelper>())
     SimpleIoc.Default.Register<IBlobCacheInstanceHelper>(()=> cacheInstanceHelper);

//initialize:
//cacheInstanceHelper.Init();
//or
SimpleIoc.Default.GetInstance<IBlobCacheInstanceHelper>().Init();

So that’s it, we are now ready to use our local cache powered by Akavache within our Xamarin.Forms project. In the next post, we will have a look on how to use akavache for storing and retrieving data.

Until then, happy coding, everyone!

Posted by msicc in Android, Dev Stories, iOS, UWP, Xamarin, 1 comment
Using the built-in UWP data protection for data encryption

Using the built-in UWP data protection for data encryption

DataProtectionProvider

The UWP has an easy-to-use helper class for performing encryption tasks on data. Before you can use the DataProtectionProvider class, you have to decide at which level you want the encryption to happen. The level is derived from the protection descriptor, which internally derives the encryption material from the specified user data. For non-enterprise apps, you can choose between four levels (passed in as the protectionDescriptor parameter of the constructor):

  • “LOCAL=user”
  • “LOCAL=machine”
  • “WEBCREDENTIALS=MyPasswordName”
  • “WEBCREDENTIALS=MyPasswordName,myweb.com”

If you want the encryption to apply only to the currently logged-in user, use the first descriptor; for a machine-wide implementation, use the second one in the list above. The web credential descriptors are intended to be used within the browser/web view. Personally, I have never used them, which is why this post focuses on the local implementations.
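
The Encrypt method below references two descriptor fields and a UseForAllUsers switch; as a minimal sketch (the exact wiring is my assumption, only the names are taken from the snippet), they could be declared like this:

//assumption: backing fields for the two local protection descriptors used in Encrypt below
private readonly string _localUserDescriptor = "LOCAL=user";
private readonly string _localMachineDescriptor = "LOCAL=machine";

//switch between user-level and machine-wide protection
public bool UseForAllUsers { get; set; }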

Encrypting data

There are two ways to encrypt data with the DataProtectionProvider. You can encrypt a string or a stream. My implementation always uses the stream variant. This way, I can encrypt whole files if needed as well as simple strings with the same method. Here is the encryption Task:

public async Task<byte[]> Encrypt(byte[] data)
{
    var provider = new DataProtectionProvider(UseForAllUsers ? _localMachineDescriptor : _localUserDescriptor);

    var contentBuffer = CryptographicBuffer.CreateFromByteArray(data);
    var contentInputStream = new InMemoryRandomAccessStream();
    var protectedContentStream = new InMemoryRandomAccessStream();

    //storing data in the stream
    IOutputStream outputStream = contentInputStream.GetOutputStreamAt(0);
    var dataWriter = new DataWriter(outputStream);
    dataWriter.WriteBuffer(contentBuffer);
    await dataWriter.StoreAsync();
    await dataWriter.FlushAsync();

    //reopening in input mode
    IInputStream encodingInputStream = contentInputStream.GetInputStreamAt(0);

    IOutputStream protectedOutputStream = protectedContentStream.GetOutputStreamAt(0);
    await provider.ProtectStreamAsync(encodingInputStream, protectedOutputStream);
    await protectedOutputStream.FlushAsync();

    //verify that encryption happened
    var inputReader = new DataReader(contentInputStream.GetInputStreamAt(0));
    var protectedReader = new DataReader(protectedContentStream.GetInputStreamAt(0));

    await inputReader.LoadAsync((uint)contentInputStream.Size);
    await protectedReader.LoadAsync((uint)protectedContentStream.Size);

    var inputBuffer = inputReader.ReadBuffer((uint)contentInputStream.Size);
    var protectedBuffer = protectedReader.ReadBuffer((uint)protectedContentStream.Size);

    if (!CryptographicBuffer.Compare(inputBuffer, protectedBuffer))
    {
       return protectedBuffer.ToArray();
    }
    else
    {
        return null;
    }
}

First, I am selecting the protection descriptor based on a boolean property. The passed-in byte array is then converted to an IBuffer, which in turn makes it easier to write it into the InMemoryRandomAccessStream that we will encrypt. After writing the data into this stream, we need to reopen it in input mode, which allows us to call the ProtectStreamAsync method. By comparing the two streams, we finally verify that encryption happened.

Decrypting data

Of course it is very likely we want to decrypt data we encrypted with the above method at some point. We are basically reversing the process above with the following Task:

public async Task<byte[]> Decrypt(byte[] encryptedBytes)
{
    var provider = new DataProtectionProvider();

    var encryptedContentBuffer = CryptographicBuffer.CreateFromByteArray(encryptedBytes);
    var contentInputStream = new InMemoryRandomAccessStream();
    var unprotectedContentStream = new InMemoryRandomAccessStream();

    IOutputStream outputStream = contentInputStream.GetOutputStreamAt(0);
    var dataWriter = new DataWriter(outputStream);
    dataWriter.WriteBuffer(encryptedContentBuffer);
    await dataWriter.StoreAsync();
    await dataWriter.FlushAsync();

    IInputStream decodingInputStream = contentInputStream.GetInputStreamAt(0);

    IOutputStream protectedOutputStream = unprotectedContentStream.GetOutputStreamAt(0);
    await provider.UnprotectStreamAsync(decodingInputStream, protectedOutputStream);
    await protectedOutputStream.FlushAsync();

    DataReader reader2 = new DataReader(unprotectedContentStream.GetInputStreamAt(0));
    await reader2.LoadAsync((uint)unprotectedContentStream.Size);
    IBuffer unprotectedBuffer = reader2.ReadBuffer((uint)unprotectedContentStream.Size);

    return unprotectedBuffer.ToArray();
}

For decryption, we do not need to specify the protection descriptor again; the OS will find the right one on its own. While I have read that this sometimes causes problems, I haven't run into any so far. Once again, we are creating InMemoryRandomAccessStream instances to perform the decryption on. Finally, after we have unprotected our data with the UnprotectStreamAsync method, we read the data back into a byte array.

Usage

The usage of these two helpers is once again pretty straightforward. Let's have a look at data encryption:

var textToCrypt = "this is just a plain test text that will be encrypted and decrypted";

//async method
var encrypted = await Encrypt(Encoding.UTF8.GetBytes(textToCrypt));
//non async method:
//var encrypted = Encrypt(Encoding.UTF8.GetBytes(textToCrypt)).GetAwaiter().GetResult();

Decryption of encoded data is done like this:

//async method
var decrypted = await Decrypt(encrypted);

//non async method
//var decrypted = Decrypt(encrypted).GetAwaiter().GetResult();

//getting back the string:
var decryptedString = Encoding.UTF8.GetString(decrypted);

Conclusion

Using the built-in DataProtection API makes it very easy to protect sensitive data within a UWP app. You can use it on either a string or a stream; it is up to you whether you follow my way of always using a stream or make it dependent on your scenario. If you want to have more control over the key size, the encryption method used or any other detail, I wrote a post about doing everything on my own a while back (find it here). Even though it is from 2015 and based on WinRT (Win 8.1), the APIs are still alive and the post should help you get started.
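
If you do not want to go through streams at all, the provider also exposes buffer-based ProtectAsync/UnprotectAsync methods. The following is only a rough sketch of that route under my own assumptions (the helper names and the "LOCAL=user" descriptor are mine), not part of the implementation above:

//sketch: buffer-based protection as an alternative to the stream approach
//uses Windows.Security.Cryptography, Windows.Security.Cryptography.DataProtection,
//Windows.Storage.Streams and System.Runtime.InteropServices.WindowsRuntime
public static async Task<byte[]> ProtectBufferAsync(byte[] data)
{
    var provider = new DataProtectionProvider("LOCAL=user");
    IBuffer protectedBuffer = await provider.ProtectAsync(CryptographicBuffer.CreateFromByteArray(data));
    return protectedBuffer.ToArray();
}

public static async Task<byte[]> UnprotectBufferAsync(byte[] protectedData)
{
    //no descriptor needed here, the OS derives it from the protected data
    var provider = new DataProtectionProvider();
    IBuffer unprotectedBuffer = await provider.UnprotectAsync(CryptographicBuffer.CreateFromByteArray(protectedData));
    return unprotectedBuffer.ToArray();
}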

You can have a look at Android encryption here, while you'll find the iOS post here.

As always, I hope this post will be helpful for some of you. If you have any questions/feedback, feel free to comment below or connect with me via my social networks.

Happy coding, everyone!

Posted by msicc in Dev Stories, UWP, Xamarin, 3 comments
How to perform asymmetric encryption without user input/hardcoded values with Xamarin iOS

How to perform asymmetric encryption without user input/hardcoded values with Xamarin iOS

I am not repeating all the initial explanations of why you should use this way of encryption to secure sensitive data in your app(s), as I already did this in the post on how to do that on Android.

iOS KeyChain

The KeyChain API is the most important security element on iOS (and macOS). Interacting with it is not as difficult as one might think, once you get the concept of the different usage scenarios it supports. In our case, we want to create a public/private key pair for use within our app. Pretty much like on Android, we want no user input and no hardcoded values to get this pair.

Preparing key (pair) creation

On iOS, things are traditionally a little bit more complex than on other platforms. This is also true for things like encryption. The first step is to prepare the RSA parameters we want to use for encryption. However, that turned out to be a bit challenging, because we need to pass in some keys and values that live in the native iOS security library, and Xamarin does not fully expose them. Luckily, there is at least a Xamarin API that helps us extract those values. I found this SO post helpful to understand what is needed for the creation of the key pair. I adapted some of the snippets into my own helper class; the same goes for the IosConstants class:

    internal class IosConstants
    {
        private static IosConstants _instance;

        public static IosConstants Instance => _instance ?? (_instance = new IosConstants());

        public readonly NSString KSecAttrKeyType;
        public readonly NSString KSecAttrKeySize;
        public readonly NSString KSecAttrKeyTypeRSA;
        public readonly NSString KSecAttrIsPermanent;
        public readonly NSString KSecAttrApplicationTag;
        public readonly NSString KSecPrivateKeyAttrs;
        public readonly NSString KSecClass;
        public readonly NSString KSecClassKey;
        public readonly NSString KSecPaddingPKCS1;
        public readonly NSString KSecAccessibleWhenUnlocked;
        public readonly NSString KSecAttrAccessible;

        public IosConstants()
        {
            var handle = Dlfcn.dlopen(Constants.SecurityLibrary, 0);

            try
            {
                KSecAttrApplicationTag = Dlfcn.GetStringConstant(handle, "kSecAttrApplicationTag");
                KSecAttrKeyType = Dlfcn.GetStringConstant(handle, "kSecAttrKeyType");
                KSecAttrKeyTypeRSA = Dlfcn.GetStringConstant(handle, "kSecAttrKeyTypeRSA");
                KSecAttrKeySize = Dlfcn.GetStringConstant(handle, "kSecAttrKeySizeInBits");
                KSecAttrIsPermanent = Dlfcn.GetStringConstant(handle, "kSecAttrIsPermanent");
                KSecPrivateKeyAttrs = Dlfcn.GetStringConstant(handle, "kSecPrivateKeyAttrs");
                KSecClass = Dlfcn.GetStringConstant(handle, "kSecClass");
                KSecClassKey = Dlfcn.GetStringConstant(handle, "kSecClassKey");
                KSecPaddingPKCS1 = Dlfcn.GetStringConstant(handle, "kSecPaddingPKCS1");
                KSecAccessibleWhenUnlocked = Dlfcn.GetStringConstant(handle, "kSecAttrAccessibleWhenUnlocked");
                KSecAttrAccessible = Dlfcn.GetStringConstant(handle, "kSecAttrAccessible");

            }
            finally
            {
                Dlfcn.dlclose(handle);
            }
        }
    }

This class picks out the values we need to create the RSA parameters that will be passed to the KeyChain API later. Now we have everything in place to create those parameters with this helper method:

private NSDictionary CreateRsaParams()
{
    IList<object> keys = new List<object>();
    IList<object> values = new List<object>();

    //creating the private key params
    keys.Add(IosConstants.Instance.KSecAttrApplicationTag);
    keys.Add(IosConstants.Instance.KSecAttrIsPermanent);
    keys.Add(IosConstants.Instance.KSecAttrAccessible);

    values.Add(NSData.FromString(_keyName, NSStringEncoding.UTF8));
    values.Add(NSNumber.FromBoolean(true));
    values.Add(IosConstants.Instance.KSecAccessibleWhenUnlocked);

    NSDictionary privateKeyAttributes = NSDictionary.FromObjectsAndKeys(values.ToArray(), keys.ToArray());

    keys.Clear();
    values.Clear();

    //creating the keychain entry params
    //no need for public key params, as it will be created from the private key once it is needed
    keys.Add(IosConstants.Instance.KSecAttrKeyType);
    keys.Add(IosConstants.Instance.KSecAttrKeySize);
    keys.Add(IosConstants.Instance.KSecPrivateKeyAttrs);

    values.Add(IosConstants.Instance.KSecAttrKeyTypeRSA);
    values.Add(NSNumber.FromInt32(this.KeySize));
    values.Add(privateKeyAttributes);

    return NSDictionary.FromObjectsAndKeys(values.ToArray(), keys.ToArray());
}

In order to use the SecKey API to create a random key for us, we need to pass in an NSDictionary that holds the private key attributes and is attached to a parent NSDictionary that holds it together with some other configuration values for the KeyChain API. If you want, you could also create an NSDictionary for the public key, but that is not needed for my implementation, as I request it later from the private key (we'll have a look at that as well).

Finally, let the OS create a private key

Now that we have all our parameters in place, we can create a new private key by calling the SecKey.CreateRandomKey() method:

public bool CreatePrivateKey()
{
    Delete();
    var keyParams = CreateRsaParams();

    SecKey.CreateRandomKey(keyParams, out var keyCreationError);

    if (keyCreationError != null)
    {
        Debug.WriteLine($"{keyCreationError.LocalizedFailureReason}\n{keyCreationError.LocalizedDescription}");
    }

    return keyCreationError == null;
}

Like on Android, it is a good idea to call the Delete() method before creating a new key (I'll show you that method later). This makes sure your app uses just one key with the specified name. After that, we create a new random key with the help of the OS. Because we specified it to be an RSA private key before, it will be exactly that. If there is an error, we return false and print it to the Debug console.

Retrieving the private key

Now that we have created the new private key, we can retrieve it like this:

public SecKey GetPrivateKey()
{
    var privateKey = SecKeyChain.QueryAsConcreteType(
        new SecRecord(SecKind.Key)
        {
            ApplicationTag = NSData.FromString(_keyName, NSStringEncoding.UTF8),
            KeyType = SecKeyType.RSA,
            Synchronizable = _shouldSyncAcrossDevices
        },
        out var code);

    return code == SecStatusCode.Success ? privateKey as SecKey : null;
}

We are using the QueryAsConcreteType method to find our existing key in the Keychain. If the OS does not find the key, we are returning null. In this case, we would need to create a new key.
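
The usage sample further below also calls a KeysExist() method that is not shown in this post. As a minimal sketch (my assumption – the full class in the linked Gist may implement it differently), it could simply be built on top of GetPrivateKey():

//assumption: convenience check used in the usage sample below
public bool KeysExist()
{
    //the public key is derived from the private key, so checking the private key is enough
    return GetPrivateKey() != null;
}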

Retrieving the public key for encryption

Of course, we need a public key if we want to encrypt our data. Here is how to get this public key from the private key:

public SecKey GetPublicKey()
{
    return GetPrivateKey()?.GetPublicKey();
}

Really, that’s it. Even if we are not creating a public key explicitly when we are creating our private key, we are getting a valid public key for encryption from the GetPublicKeymethod, called on the private key instance.

Deleting the key pair

As I mentioned earlier, sometimes we need to delete our encryption key(s). This little helper method does the job for us:

public bool Delete()
{
    var findExisting = new SecRecord(SecKind.Key)
    {
        ApplicationTag = NSData.FromString(_keyName, NSStringEncoding.UTF8),
        KeyType = SecKeyType.RSA,
        Synchronizable = _shouldSyncAcrossDevices
    };

    SecStatusCode code = SecKeyChain.Remove(findExisting);

    return code == SecStatusCode.Success;
}

This time, we are searching for a SecRecord of kind Key and calling the Remove method of the SecKeyChain API. Based on the status code, we finally return a bool that indicates whether we were successful. Note: when creating a new key, I currently do not care about this status and just create a new one in my helper class. If we delete the key from another place, we will probably want to work with that status code.

As I did with the Android version, I did not create a demo project, but you can have a look at the full class in this Gist on GitHub.

Usage

Now that we have our helper in place, we are able to encrypt and decrypt data in a very easy way. First, we need to obtain a private and a public key:

var helper = new PlatformEncryptionKeyHelper("testKeyHelper");

if (!helper.KeysExist())
{
    helper.CreateKeyPair();
}

var privKey = helper.GetPrivateKey();
var pubKey = helper.GetPublicKey();

The encryption method needs to be called directly on the public key instance:

var textToCrypt = "this is just a plain test text that will be encrypted and decrypted";
pubKey.Encrypt(SecPadding.PKCS1, Encoding.UTF8.GetBytes(textToCrypt), out var encBytes);

For getting the plain value back, we need to call the decryption method on the private key instance:

privKey.Decrypt(SecPadding.PKCS1, encBytes, out var decBytes);
var decrypted = Encoding.UTF8.GetString(decBytes);

It may make sense to wrap these calls into helper methods, but you could also just use them like I did for demo purposes. Just remember to always use the same padding method; otherwise, you will not get any value back from the encrypted byte array.

Once again, if you need to encrypt data in a Xamarin.Forms project, just extract an interface from the class or match it to the interface you may already have extracted from the Android version. As I stated before, every developer should use the right tools to encrypt data really securely in their apps. With this post, you now also have a starting point for your own Xamarin iOS implementation.

As always, I hope this will be helpful for some of you. In my next post, we will have a look at the OS-provided options for encryption and decryption on Windows 10 (UWP).

Until then, happy coding, everyone!

Posted by msicc in Dev Stories, iOS, Xamarin, 1 comment