
Sending push notifications to your Xamarin app from WordPress with Azure, Part II – the Function


First, let’s have a look at the lineup of this series once again:

  • Preparing your WordPress (blog/site)
  • Preparing the Azure Function and connect the Webhook (this post)
  • Preparing the Notification Hub
  • Send the notification to Android
  • Send the notification to iOS
  • Adding in Xamarin.Forms

Creating a new Azure Function in Visual Studio

The simplest approach to creating a new Azure Function (if you already have an Azure account) is to add a new project to your Xamarin solution:

After the project is loaded, double-click the .csproj file in the Solution Explorer to open it for editing. Make sure it contains the following package references:

    <PackageReference Include="Microsoft.Azure.NotificationHubs" Version="1.0.9" />
    <PackageReference Include="Microsoft.Azure.WebJobs.Extensions.NotificationHubs" Version="1.3.0" />
    <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="1.0.31" />
    <PackageReference Include="Newtonsoft.Json" Version="9.0.1" />

You may notice that I made an all-caps comment on the second entry. As I am using a v1 Function, these are the latest packages that I am able to use. They do their job and give us an easy way to bind the Function to the Azure NotificationHub, which we are going to implement in the next post. I intentionally delayed updating the whole setup to a v2 Function at this point.

Processing the Webhook payload

In order to be able to process the payload (remember, we are getting a JSON string) from our WordPress Webhook, we need to deserialize it. Let’s create the class that holds all information about it:

using Newtonsoft.Json;

namespace NewPostHandler
{
    public class PublishedPostNotification
    {
        public long Id { get; set; }

        public string Title { get; set; }

        public string Status { get; set; }

        public string FeaturedMedia { get; set; }
    }
}

The class is pretty straightforward; we will use this implementation as-is for the payload we are sending to Android later on. The use of the StringToLongConverter is optional. For completeness, here is the implementation:

using Newtonsoft.Json;
using System;

namespace NewPostHandler
{
    public class StringToLongConverter : JsonConverter
    {
        public override bool CanConvert(Type t) => t == typeof(long) || t == typeof(long?);

        public override object ReadJson(JsonReader reader, Type t, object existingValue, JsonSerializer serializer)
        {
            if (reader.TokenType == JsonToken.Null) return default(long);
            var value = serializer.Deserialize<string>(reader);
            if (long.TryParse(value, out var l))
                return l;

            return default(long);
        }

        public override void WriteJson(JsonWriter writer, object untypedValue, JsonSerializer serializer)
        {
            if (untypedValue == null)
            {
                serializer.Serialize(writer, null);
                return;
            }

            var value = (long)untypedValue;
            serializer.Serialize(writer, value.ToString());
        }

        public static readonly StringToLongConverter Instance = new StringToLongConverter();
    }
}
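If you opt to use the converter, one way to wire it up is to decorate the Id property of the DTO with the corresponding Newtonsoft.Json attributes. A minimal sketch – the JSON property name is an assumption on my side, adjust it to the payload your Webhook actually sends:

    // on the Id property of PublishedPostNotification:
    [JsonProperty("id")]
    [JsonConverter(typeof(StringToLongConverter))]
    public long Id { get; set; }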

Now that we have prepared our data transfer object, it is time to finally have a look at the processor code.

public static async Task<HttpResponseMessage> Run([HttpTrigger(AuthorizationLevel.Function, "post", Route = null)]HttpRequestMessage req, TraceWriter log)
{
    _log = log;
    _log.Info("arrived at 'HandleNewPostHook' function trigger.");

    //ignoring any query parameters, only using POST body

    PublishedPostNotification result = null;

    try
    {
        _jsonSerializerSettings = new JsonSerializerSettings()
        {
            MetadataPropertyHandling = MetadataPropertyHandling.Ignore,
            DateParseHandling = DateParseHandling.None,
            Converters =
            {
                new IsoDateTimeConverter { DateTimeStyles = DateTimeStyles.AssumeUniversal },
            }
        };

        _jsonSerializer = JsonSerializer.Create(_jsonSerializerSettings);

        using (var stream = await req.Content.ReadAsStreamAsync())
        using (var reader = new StreamReader(stream))
        using (var jsonReader = new JsonTextReader(reader))
        {
            result = _jsonSerializer.Deserialize<PublishedPostNotification>(jsonReader);
        }

        if (result == null)
        {
            _log.Error("There was an error processing the request (serialization result is NULL)");
            return req.CreateResponse(HttpStatusCode.BadRequest, "There was an error processing the post body");
        }

        //subject of the next post
        //await TriggerPushNotificationAsync(result);
    }
    catch (Exception ex)
    {
        _log.Error("There was an error processing the request", ex);
        return req.CreateResponse(HttpStatusCode.BadRequest, "There was an error processing the post body");
    }

    if (result.Id != default)
    {
        _log.Info($"initiated processing of published post with id {result.Id}");
        return req.CreateResponse(HttpStatusCode.OK, "Processing new published post...");
    }
    else
    {
        _log.Error("There was an error processing the request (cannot process result Id with default value)");
        return req.CreateResponse(HttpStatusCode.BadRequest, "There was an error processing (postId not valid)");
    }
}

Let’s go through the code. The first thing I want to know is if we ever enter the Function, so I log the entrance. The second step is setting up the JsonSerializer to deserialize the payload into the DTO class I created before.

There are several scenarios that I am handling, each returning a different response. Ideally, we run through and arrive at the TriggerPushNotificationAsync call, followed by a jump to the ‘OK’ response if the post id received from our Webhook is valid. During testing, however, I ran into other situations as well, where I return a ‘Bad Request’ response with a hint that something went wrong.

The implementation of the TriggerPushNotificationAsync method is not shown in this post as it will be subject of the next post in this series.

Testing the code locally

One of the reasons I chose to start the Function in Visual Studio is its ability to debug it locally. If you don’t have the necessary tools installed, Visual Studio will prompt you to install them. Once they are installed, you’ll be able to follow along.

Once the service is running, we will be able to test our function. If you haven’t heard about it yet, Postman is probably the easiest tool for that. Copy the function URL and paste it into the URL field in Postman. Next, add a sample JSON payload to the body (settings: raw, JSON) and hit the ‘Send’ button:
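A minimal test body could look like this – note that the exact property names depend on the arguments you configured for the Webhook in Part I, so treat the names below as assumptions that simply match the DTO above:

    {
        "id": "1234",
        "title": "My test post",
        "status": "publish",
        "featured_media": "https://yourblog.example/wp-content/uploads/sample.png"
    }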

If all goes well, Postman will give you a success response:

The Azure Functions CLI (the local Functions host console) will also write some output:

As you can see, all of our log entries were written to the CLI, plus some additional information from Azure itself. Don’t worry about the anonymous authorization state for the moment; this is just because we are running locally. In theory, we could already publish the function to Azure now. As we know that we will extend the Function in the next post, however, we will not do this right now.


As you can see, writing an Azure Function isn’t as complicated as it sounds. Visual Studio brings all the tools you need to get started pretty fast. The ability to test the Function code locally is another big advantage that comes with Visual Studio.

In the next post, we will configure the NotificationHub on Azure and extend our Function to call into it and fire the notifications.

Until the next post, happy coding, everyone!
Sending push notifications to your Xamarin app from WordPress with Azure, Part I [new series]



Choosing the “right” solution for sending push notifications isn’t easy if you have a WordPress blog. There are quite a bunch of options to choose from, and the right one for you might differ from my decision. I am using the most generic solution – a Webhook that triggers an Azure Function, which triggers the notification via Azure Notification Hubs. This series will grow as follows:

  • Preparing your WordPress (blog/site) (this post)
  • Preparing the Azure Function and connect the Webhook
  • Preparing the Notification Hub
  • Send the notification to Android
  • Send the notification to iOS
  • Adding in Xamarin.Forms

The app implementations are very platform-specific, but it is quite easy to integrate the post notifications in a Xamarin.Forms app (which will be the last post in this series). If you want to see the whole integration already in action, feel free to download my blog reader app:

WordPress plugins for the win

If you have a self-hosted blog like I do, you may know that the plugin ecosystem is there to help you with a lot of things that WordPress doesn’t have out of the box. While a WordPress-hosted site has Webhook integration without an additional plugin, we need one to create such a Webhook on a self-hosted WordPress blog. The plugin I am using is simply called “Notification” and can be found here.

To install the plugin, follow these simple steps:

  • choose “Plugins” on your WordPress dashboard
  • select “Add New” and type in “notification”
  • Hit the “Install Now” button
  • Activate the plugin

Exploring the options

Once you have installed and activated the plugin, you will have a new option in the dashboard menu. Let’s have a look at the options.

  • Notifications – this shows you a list of your currently active notifications
  • Add New Notification – lets you create a new notification
  • Extensions – the plugin allows you to extend your notifications with external services like Slack, Twitter or SendGrid to engage even more users. We do not need these for the webhook, however.
  • Settings – the control panel for the plugin – this is where we will be for the rest of this blog post

Enabling the Webhook

On the Settings page, select the “CARRIERS” option. The plugin uses so-called carriers to send out the notifications. By default, the Email carrier is active. I do not need this one for the moment, so I deactivated it and activated the Webhook carrier instead:

Setting Post Triggers

The next step is to verify we have the trigger for posts active:

You can modify the other triggers as well, but for the moment, I am focusing just on posts. I am thinking about integrating comments in the future, which will allow even more interaction from within my app.

Add a new notification

Let’s create our first notification. Select the “Add New Notification” action, which will bring up this page:

Select the “Add New Carrier” option and add the Webhook carrier:

Next, select the Trigger for the Webhook. During development, I am using the saving draft option as it allows me to easily trigger a notification without annoying anyone:

This will enable the “Merge Tags” list on the right-hand side. To create the Webhook payload, we need to add some arguments (using the “Add argument” button). Tip: you can copy the merge tag by just clicking on it and paste it into the “Value” box:

Don’t forget to activate the JSON format – we do not want it to be sent as XML. Make sure the Carrier is enabled and hit the save button on the upper right.

Testing the Webhook

Now that we finished the setup of our Webhook, let’s test it. To do so, go to the “Settings” page again and select “DEBUGGING”. Check the “Enable Notification logging” box and click the “Save Changes” button:

To test the notification, just create a new blog post and save the draft. Go back to the “DEBUGGING” setting, where you will be presented with a new Notification log entry. Expanding this log entry, you will see some common data about the notification:

If you scroll a bit further down, you will see the payload of the Webhook. Sadly, you won’t get the raw JSON string, but a structured overview of the payload:

Verify that the payload contains all the data you need and adjust the settings if necessary. Once that is done, we are ready to go to the next blog post (coming soon).


In this post, I showed you how to create a Webhook that will trigger our upcoming Azure Function. Thanks to the “Notification” plugin, the process is pretty straight forward. In the next post, we will have a look at the Azure Function that will handle the Webhook.

Until the next post, happy coding, everyone!

MSicc’s Blog version 1.6.0 out now for Android and iOS


Here are the new features:

Push Notifications

With version 1.6.0 of the app, you can opt-in to receive push notifications once I publish a new blog post. I use an Azure Function (v1, for the ease of bindings – at least for now), and of course, an Azure NotificationHub. The Function gets called from a WebHook (via a plugin on WordPress), which triggers it to run (the next blog posts I write will be about how I achieved this, btw.)

New Design using Xamarin.Forms Shell

I also overhauled the design of the application. Initially, it was a MasterDetail app, but I never felt happy with that. Using Xamarin.Forms.Shell, I optimized the app to only show the last 30 posts I wrote. If you need older articles, you’ll be able to search for them within the app. The new design is a “v1” and will be constantly improved along with new features.

Bugs fixed in this release
  • fixed a bug where code snippets were not correctly displayed
  • fixed a bug where the app did not refresh posts after cleaning the cache
  • other minor fixes and improvements

I hope some of you will use the application and give me some feedback.

You can download the app using these links:
iOS | Android

Until the next post, happy coding, everyone!

Create an additional SSH-login enabled user for your Azure Linux VM without third-party tools


As I am moving forward in my current Linux journey, I recently came into a situation where a second user would have been handy. So I tried a few things to create the new user and allow the new user to only log in via SSH.

What the h*** is SSH?

SSH stands for Secure Shell and describes a protocol for logging into a remote machine over an encrypted connection. The security is provided by cryptographic keys, where the server only knows the public key and the client that wants to connect needs the matching private key. The most popular implementation is OpenSSH, which is available as an additional feature on Windows 10 since last fall. If you want to learn more about SSH, read the Wikipedia entry as well.

Create the SSH key pair on Windows

Install the OpenSSH client

First, install the OpenSSH client on your Windows machine. Proceed as follows:

  • open the start menu and type ‘apps’
  • select ‘Apps and features’ settings page
  • select ‘Optional features’
  • click on ‘Add a feature’
  • search ‘OpenSSH client’, click on it and ‘Install’

If you are scrolling down the list of installed features, you should find the entry for the OpenSSH client.

Note: If you are not able to install this feature, it may be the right time to update your Windows installation to the latest version.

Create your SSH key pair

If your user profile does not have a folder called ‘.ssh’, it is time to create it now. Type ‘%USERPROFILE%‘ in the Windows Explorer’s address bar to get to the right folder immediately and create the folder.
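If you prefer the command line, you can also create the folder from a command prompt:

mkdir %USERPROFILE%\.ssh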

Now that we have the folder OpenSSH searches for, we are already able to create our new SSH keypair. Open a command prompt and type this:

ssh-keygen -t rsa -b 4096 -C ""

This will initiate the keypair creation. The -C parameter is optional and can be anything. You’ll find these keys often created with <user at address> combinations. After OpenSSH has created your keypair in memory, it will ask for a location to save the file. If you do not enter anything, it’ll save it as id_rsa into the .ssh folder created earlier. If you want to save it to another file name, you can do so:


Please note that the path to the file name (without any extension!) is separated by ‘/’, not by ‘\’. If you do not respect this, it will give you an error that the file name does not exist. This happens only if you don’t use the default filename. After the files are created, OpenSSH asks you for a passphrase to protect your private key. Nowadays, every single keypair should be password protected (just my 2cts). Once the creation is complete, you’ll see something like this:

Your identification has been saved in C:\Users\msicc\.ssh/id_test.
Your public key has been saved in C:\Users\msicc\.ssh/
The key fingerprint is:
The key's randomart image is:
+---[RSA 4096]----+
|          E...   |
|          .o. o  |
|      .  .. o+.oo|
|       oo. oo=.o=|
|      .=S.  o.=.o|
|      =o.. . * o.|
|     .. ..o o * .|
|       . =o+ + o |
|        =o+=* ...|

As you can see, we are able to create SSH keys without any third-party application on Windows. You can now safely close the console window (e.g. by typing ‘exit‘).

Deploying the public key to the server

Of course, the whole key pair only makes sense if we use it to secure our client/server communication. On Linux, there is the easy-to-use ssh-copy-id command to deploy the key. Some Windows tutorials show the scp command, but I never got that working with my Azure VM. The only way left was to deploy the key manually (which is not that difficult once you know how).

The manual way

After logging in (using Azure CLI), we are going to add a new user to the gang:

sudo useradd -m newuser

This will create a new user and its home directory. Whether the OS asks you for a password depends on its settings for empty passwords; providing one is up to you. The next step is to add the user to one or more groups if necessary:

sudo usermod -aG sudo newuser

The -aG parameter of the usermod command adds the specified group(s) to the user’s groups table. If you want to add the user to more than one group, separate them with a comma (no whitespace after the comma!).
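As an example, adding the new user to the sudo group and to a hypothetical developers group in one go would look like this:

sudo usermod -aG sudo,developers newuser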

To make things a little bit easier for us, we are going to log in as the new user:

sudo su newuser

Note: If you want to proceed without logging in as the new user, you’ll need to change ~ to /home/newuser for the following commands.

To make Linux accept our prior created SSH key only for our new user, we need to create the .ssh folder and the file for allowed keys in the new user’s home directory. Let’s start with the .ssh directory:

sudo mkdir ~/.ssh
sudo chmod 0700 ~/.ssh
sudo chown newuser:newuser ~/.ssh

Let’s break it down. Obviously, the mkdir command creates the .ssh folder. The chmod command with the 0700 parameter gives full access only to our new user. Finally, the chown command makes our new user the owner of the folder.

Linux saves the allowed keys in a file called ‘authorized_keys‘, so that’s the next thing we are going to create. After creation, we change the file’s access rights to 0644, which makes it read-only for all users except our new user. Execute these commands:

sudo touch ~/.ssh/authorized_keys
sudo chmod 0644 ~/.ssh/authorized_keys

We’re getting closer… earlier, OpenSSH created two files in the .ssh folder on Windows. We are going to copy the contents of the .pub file into the authorized_keys file now. You can extract the content of the .pub file with Notepad on Windows. Once you have that one in your clipboard, open the authorized_keys file (using your favorite editor):

nano ~/.ssh/authorized_keys

Paste the content of your .pub file by right-clicking on the Azure CLI window. Save the file and close it. To secure the new user account, we need to apply some additional steps.

The first one is to delete and disable the password of our new user:

sudo passwd -d -l newuser

The -d parameter completely deletes the password. The -l parameter locks the state, preventing the user from setting a new password (without using sudo, that is).

Additional security measures

Now that we have our SSH public key on our server, we are safe to disable password-based login in general. To do so, we need to modify the sshd_config file of our server. Open it with the editor of your choice:

sudo nano /etc/ssh/sshd_config

Search for PasswordAuthentication and ChallengeResponseAuthentication. Uncomment these entries if necessary by removing the ‘#‘ in front of the line and set them to ‘no‘. Some tutorials that are floating around the web tell you to even set UsePAM to ‘no‘, but following this recommendation always disabled the login completely for me on the Azure VM, and I always had to reset the SSH config via the Azure CLI.
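After the change, the two relevant entries in sshd_config should look like this:

PasswordAuthentication no
ChallengeResponseAuthentication no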

Save and exit the sshd_config file. For these changes to take effect, we need to restart the SSH service on our server:

sudo service ssh restart

Wait a few seconds and then verify that the service restarted by checking its status:

systemctl status ssh.service

That’s it, we have deployed our new SSH key to our server and taken additional security measures. There’s just one thing left: trying to log in via SSH as the new user. This is a pretty easy task. In the Azure CLI (or a local command prompt), enter the following command:

 ssh -i %USERPROFILE%\.ssh\id_test

After entering your key’s passphrase, you should be logged in just like you did with your admin user.

Bonus: add a shared directory

More often than not, you might want to create scripts or other files that are available to all users. Follow these simple steps:

sudo groupadd shared
sudo usermod -aG shared newuser

sudo mkdir -p /var/helpers
sudo chgrp -R shared /var/helpers
sudo chmod -R 2775 /var/helpers

The first two lines create a new group and assign our new user to the group. Repeat the second command for every user you want to be in that group.

Then we create the shared folder /helpers in the /var folder of our server. Utilizing the chgrp command, we give group ownership to our shared group. Last but not least, we modify the access rights once again for the /var/helpers folder. The combination 2775 means that every new file inherits the group from the folder, allowing all members of the group to read, write and execute the file, while users outside the shared group can only read and execute it.
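You can verify the result with ls – the ‘s’ in the group permissions indicates the setgid bit set by the 2775 mask (owner, size and date will of course differ on your machine):

ls -ld /var/helpers
# drwxrwsr-x 2 root shared 4096 Sep  7 14:54 /var/helpers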


As you can see, one does not always have to use third-party tools to get things done. Like I said at the beginning of this post, once you know the steps that are needed, it is pretty easy to create a new SSH key pair, create a new user and manually deploy the public key to the server. As always, I hope this post is helpful for some of you.


Run your own Bitcoin Full Node on an Azure Linux VM


One year ago, I began my journey in the crypto and blockchain area. Recently, multiple circumstances made me think about my own crypto-related server. Of course, I chose Azure for running this server (for the time being). It has been a while since I last touched Linux, so I had quite a bit to refresh and learn. In this post, I’ll show you the steps that are needed to set up an independent full node to support the Bitcoin network.

Setting up the Virtual Machine on Azure

First, we need to install the Azure CLI on our computer. We will use this one to connect to our virtual machine later on via SSH. Follow the instructions found here.

The second prerequisite is a program to generate SSH keys. You can use either the OpenSSH client shipping with Windows 10 (latest versions), use the Azure CLI or PuTTY (follow these instructions).

Create the VM

Once you have installed the CLI and your SSH keys are created, log into your Azure account. Go to the marketplace, and search for ‘ubuntu‘. Choose Ubuntu Server 18.04 LTS and hit the ‘Create‘ button in the next window.

Fill in the details of your Azure VM on the first page of the creation module:

Do not forget to set the SSH admin user, as adding one afterward is not as easy as it seems and often also fails (I had a hard time learning that). Also, we need to allow traffic through the default HTTP (80) and SSH (22) ports. Once you have configured everything, go to disks.

I did not select premium disks but instead went with a Standard HDD to save some money. You can change that to your needs. The important part here is to NOT use managed disks as we will need to resize the OS disk after the creation of the VM. Follow the rest of the steps in the creation wizard and create your virtual machine. Once the machine is created, you should create a DNS label (hit the ‘Configure’ link at the VM’s overview page to get to the IP settings).

Log in via SSH

Let’s try if we can log in to our Linux VM via the Azure CLI. Open the ‘Microsoft Azure Command Prompt‘ on your PC. To be able to connect to our virtual machine, we need to log in to Azure first to obtain an access token for our session:

az login 

This will open a new browser tab, where you need to log in to your account again. After that, we will be redirected back to the CLI. Once that happened, go back to the overview page of your VM and click on ‘Connect’. This will open a pane where we will see the RDP and SSH connection option. Select SSH and copy the text below ‘Login using VM local account‘:

Paste it into the Azure Command Prompt window and provide your password (you should never use a password-less SSH key). If your screen looks now similar to this, you have successfully logged in:

Now that we have verified that we are able to log in via SSH, type exit to log out again as we have one step left to perform on the virtual machine.

Resizing the OS disk

A Bitcoin full node needs to download and verify the whole blockchain. The current size of the blockchain is around 250 GB, which would never fit in the default size of the OS disk of our VM. Luckily, it is pretty easy to resize the OS disk by running some commands in the Azure CLI.

First, stop the virtual machine:

az vm stop --resource-group YourResourceGroupName --name YourVMName 

Resizing the OS disk needs the VM to be deallocated (this may take some minutes):

az vm deallocate --resource-group YourResourceGroupName --name YourVMName

Once deallocation has finished, we are able to resize the OS disk with this command:

az vm update --resource-group YourResourceGroupName --name YourVMName --set storageProfile.osDisk.diskSizeGB=1024

Once you see the new size in the returned response from Azure, we can start the VM again:

az vm start --resource-group YourResourceGroupName --name YourVMName 

Depending on the distribution you are using, you may have to perform additional steps. Ubuntu, however, mounts the new disk size without any additional action. Verifying the new disk size is pretty easy (after logging back in via SSH), as the System Information displayed after login should already reflect the change (like in the screen above).

Preparing the Bitcoin node

After completing all the steps above, we are finally able to move on with the preparations for the Bitcoin node.

Bitcoin service user

As we will run the node as a service, we need an unprivileged service account:

sudo useradd -m -s /dev/null bitcoin

Great, we just created the account, including the home directory for the service user and its default shell entry. If you want to see the system’s response, just leave the /dev/null part out when running the command.

As we are going to restrict the access to the service to the bitcoin user (and its default group, which is also bitcoin), we need to add our admin user to the group. Run the following command to do so:

sudo usermod -a -G bitcoin [admin-user]

In order to make these changes (especially the group add) persistent, we need to perform a restart before we move on. You can not only use the Azure portal or CLI, but also use this command to make the VM restart immediately:

sudo shutdown -r now

This will log you out of the current SSH session. After waiting one minute or two, just log back in to continue the preparation of the full node.

Downloading and Verifying Bitcoin binaries

The next step involves downloading the bitcoin core package and its valid signature files (as we don’t trust, but verify). Run these two commands (you will need to hit enter a second time after the first download finished). I also created a temp directory for the download and other stuff.

mkdir ~/temp
cd ~/temp
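The downloads themselves are two wget calls – a minimal sketch, assuming the standard bitcoincore.org download layout and an example version in the $btc_version variable (the install commands further below reuse this variable, so adjust it to the current release):

btc_version=0.18.1
wget https://bitcoincore.org/bin/bitcoin-core-$btc_version/bitcoin-$btc_version-x86_64-linux-gnu.tar.gz
wget https://bitcoincore.org/bin/bitcoin-core-$btc_version/SHA256SUMS.asc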

The latest releases are signed with Wladimir J. van der Laan’s release key, and we are going to verify the downloaded binaries against its fingerprint. It is recommended to do this for all crypto-related binaries you’re downloading, no matter on which OS.

Let’s try to import the key of Mr. van der Laan:

gpg --receive-key 0x01EA5486DE18A882D4C2684590C8019E36C2E964

You may get an error message telling you there was a server failure. In this case, run the following command to import the key:

gpg --keyserver hkp:// --receive-key 0x01EA5486DE18A882D4C2684590C8019E36C2E964

If you still get errors, there might be some missing packages or other reasons for the server failure. I managed to come through with the second command more often than the first, but in the end, I had the key in my local key store.

Now let’s verify the downloaded files:

gpg --verify SHA256SUMS.asc
sha256sum --ignore-missing -c SHA256SUMS.asc

If the command tells you that the signature is good and indeed from Mr. van der Laan, everything is fine with the hash file. The second command verifies the archive we downloaded earlier and should result in ‘OK’. If not, you should immediately delete those files as they might contain malware.

Note: gpg will most likely also warn you that the key is not certified with a trusted signature. I have read quite a few comments on reddit and other sites that we can safely ignore this warning, as long as the key fingerprint matches the one we imported above.

Installing Bitcoin binaries

Now that we have our bitcoin service user and verified the bitcoin binaries, we are finally at the point to install them:

tar zxf bitcoin-$btc_version-x86_64-linux-gnu.tar.gz
pushd bitcoin-$btc_version/bin; sudo cp bitcoind bitcoin-cli /usr/bin; popd;
bitcoind --version

After getting the install verification via the version string, it is a good practice to remove both the binaries and the hash file. If you need to reinstall, perform the steps above again to verify the authenticity of the files. Run these commands to remove those files:

rm bitcoin-$btc_version-x86_64-linux-gnu.tar.gz
rm SHA256SUMS.asc

Create a service config file

Now that we have Bitcoin installed, we need to prepare a .conf file for our upcoming service:

vi bitcoin.conf   # or: nano bitcoin.conf

This will open a text editor on Linux. Enter the base config to get the RPC server of the Bitcoin daemon (needed for bitcoin-cli) activated and save the file to the temp directory (press ‘ESC’ and type ‘:wq‘ if you used vi; press ‘CTRL’+’X’ followed by ‘Y’ in nano):

# Global Options
server=1 #activating rpc
rpcconnect= #default
rpcport=8332 #default
rpcallowip= #default
rpcbind=  #default
disablewallet=1 #keeping wallet off (atm)
# Options only for mainnet
# Options only for testnet
# Options only for regtest

Now we just need to copy that file into the /etc/bitcoin folder. If this folder does not yet exist, create it with the following command:

sudo mkdir -p /etc/bitcoin

Next, copy the bitcoin.conf file to it:

sudo cp bitcoin.conf /etc/bitcoin

Assign ownership to the bitcoin service user and make it readable for all users:

sudo chown bitcoin:bitcoin /etc/bitcoin/bitcoin.conf
sudo chmod 0664 /etc/bitcoin/bitcoin.conf

The last step is to create a directory for the daemon where we will find the PID (Process ID) file after starting the service:

sudo mkdir -p /run/bitcoind/
sudo chmod 0755 /run/bitcoind/
sudo chown bitcoin:bitcoin /run/bitcoind/

Create the Bitcoin service

Now we are able to set up the core service of our node, the Bitcoin daemon service. In order to start, just copy and paste the bitcoind.service file found in the Bitcoin Core GitHub repository into a new file on your VM. Once you have that file in your temp folder, copy it over to the system’s services folder:

sudo cp bitcoind.service /lib/systemd/system

Now we need to enable the service to make it start up automatically after a reboot:

sudo systemctl enable bitcoind

And finally, we need to start the service (or reboot the machine if you want to test that part):

sudo systemctl start bitcoind

After some time, you should be able to use the bitcoin-cli to get some info of your local blockchain copy:

sudo bitcoin-cli -rpccookiefile=/var/lib/bitcoind/.cookie -datadir=/var/lib/bitcoind -getinfo

Please note you need to use sudo because the owner of the service and the files is our bitcoin user we created earlier. If you don’t want to always pass the authentication cookie path, you can create a symbolic link to your current user’s .bitcoin folder:

sudo ln -s /var/lib/bitcoind/.cookie ~/.bitcoin/.cookie

Now we are able to just call sudo bitcoin-cli -getinfo. Another way of checking if the service and the daemon are running is to read the log that gets generated:

sudo tail /var/lib/bitcoind/debug.log -f 

You should see the log scrolling through as it writes new entries. The most entries will look like this:

2019-09-07T14:54:51Z UpdateTip: new best=000000000000004fa323c7ee57b4b22272c7ea757a6a5bdb53dbda73572f559d height=239240 version=0x00000002 log2_work=70.203272 tx=18819570 date='2013-06-02T08:32:35Z' progress=0.041989 cache=675.7MiB(5032783txo)

If you arrived at this point

Congratulations! You are running a Bitcoin full node (without wallet for the time being, though). It will take some time to download and verify the whole blockchain, but you are now effectively helping and securing the Bitcoin network.

This is just the first post about my journey with Bitcoin and my own node. Make sure to follow for future blog posts. As always, I hope this post will be helpful for some of you.


Note: this post was reposted on my account.

Use NuGets for your common Xamarin (Forms) code (and automate the creation process)


Internal libraries

Writing (or copy and pasting) the same code over and over again is one of those things I try to avoid when writing code. For quite some time, I already organize such code in libraries. Until last year, this required quite some work managing all libraries for each Xamarin platform I used. Luckily, the MSBuild SDK Extras extensions showed up and made everything a whole lot easier, especially after James Montemagno did a detailed explanation on how to get the most out of it for Xamarin plugins/libraries.

Getting started

Even if I repeat some of the steps of James’ post, I’ll start from scratch on the setup part here. I hope to make the whole process straightforward for everyone – that’s why I think it makes sense to show each and every step. Please make sure you are using the new .csproj type; if you need a refresher on that, you can check my post about migrating to it.


The first step is pulling in MSBuild.Sdk.Extras, which will enable us to target multiple platforms in one single library. For this, we need a global.json file in the solution folder. Right click on the solution name and select ‘Open Folder in File Explorer‘, then just add a new text file and name it appropriately.

The next step is to define the version of the MSBuild.SDK.Extras library we want to use. The current version is 1.6.65, so let’s define it in the file. Just click the ‘Solution and Folders‘ button to find the file in Visual Studio:

switch to folder view

Add these lines into the file and save it:

  "msbuild-sdks": {
    "MSBuild.Sdk.Extras": "1.6.65"

Modifying the project file

Switch back to the Solution view and right click on the .csproj file. Select ‘Edit [ProjectName].csproj‘. Let’s modify and add the project definitions. We’ll start right in the first line. Replace the first line to pull in the MSBuild.Sdk.Extras:

<Project Sdk="MSBuild.Sdk.Extras">

Next, we’re separating the Version tag. This will ensure that we’ll find it very quickly in future within the file:

  <PropertyGroup>
    <!--separated for accessibility-->
    <Version>1.0.0</Version> <!--your library version-->
  </PropertyGroup>

Now we are enabling multiple targets, in this case our Xamarin platforms. Please note that there are two separate entries – one that includes UWP and one that does not. I thought I would be fine removing the non-UWP one when including UWP, but was presented with some strange build errors that were resolved only by re-adding the deleted line. I do not remember the reason, but I made a comment in my template to not remove it – so let’s just keep it that way.

  <!--make it multi-platform library!-->
  <PropertyGroup>
    <!--we are handling compile items ourselves below with a custom naming scheme-->
    <EnableDefaultCompileItems>false</EnableDefaultCompileItems>
    <TargetFrameworks Condition=" '$(OS)' == 'Windows_NT' ">netstandard2.0;MonoAndroid81;Xamarin.iOS10;uap10.0.16299;</TargetFrameworks>
    <TargetFrameworks Condition=" '$(OS)' != 'Windows_NT' ">netstandard2.0;MonoAndroid81;Xamarin.iOS10;</TargetFrameworks>
  </PropertyGroup>

Now we will add some default NuGet packages into the project and make sure our files get included only on the correct platform. We follow a simple file naming scheme (Xamarin.Essentials uses the same):
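For example, a class split across platforms ends up in files like these (matching the Compile includes below):

Hello.shared.cs       // shared code, compiled for every target
Hello.android.cs      // MonoAndroid only
Hello.ios.cs          // Xamarin.iOS only
Hello.uwp.cs          // UWP only
Hello.netstandard.cs  // netstandard2.0 only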


This way, we are able to add all platform-specific code together with the shared entry point in a single folder. Let’s start with the shared items. These will be available on all platforms listed in the PropertyGroup above:

  <!--shared items-->
  <ItemGroup>
    <!--keeping this one ensures everything goes smooth-->
    <PackageReference Include="MSBuild.Sdk.Extras" Version="1.6.65" PrivateAssets="All" />

    <!--most commonly used (by me)-->
    <PackageReference Include="Xamarin.Forms" Version="" />
    <PackageReference Include="Xamarin.Essentials" Version="1.0.1" />

    <!--include content, exclude obj and bin folders-->
    <None Include="**\*.cs;**\*.xml;**\*.axml;**\*.png;**\*.xaml" Exclude="obj\**\*.*;bin\**\*.*;bin;obj" />
    <Compile Include="**\*.shared.cs" />
  </ItemGroup>

The ‘**\‘ part in the Include property of the Compile tag ensures MSBuild also includes classes in subfolders. Now let’s add some platform-specific rules to the project:

  <ItemGroup Condition=" $(TargetFramework.StartsWith('netstandard')) ">
    <Compile Include="**\*.netstandard.cs" />
  </ItemGroup>

  <ItemGroup Condition=" $(TargetFramework.StartsWith('uap10.0')) ">
    <PackageReference Include="Microsoft.NETCore.UniversalWindowsPlatform" Version="6.1.9" />
    <Compile Include="**\*.uwp.cs" />
  </ItemGroup>

  <ItemGroup Condition=" $(TargetFramework.StartsWith('MonoAndroid')) ">
    <!--need to reference all those libs to get latest minimum Android SDK version (requirement by Google)... #sigh-->
    <PackageReference Include="Xamarin.Android.Support.Annotations" Version="" />
    <PackageReference Include="Xamarin.Android.Support.Compat" Version="" />
    <PackageReference Include="Xamarin.Android.Support.Core.Utils" Version="" />
    <PackageReference Include="Xamarin.Android.Support.CustomTabs" Version="" />
    <PackageReference Include="Xamarin.Android.Support.v4" Version="" />
    <PackageReference Include="Xamarin.Android.Support.Design" Version="" />
    <PackageReference Include="Xamarin.Android.Support.v7.AppCompat" Version="" />
    <PackageReference Include="Xamarin.Android.Support.v7.CardView" Version="" />
    <PackageReference Include="Xamarin.Android.Support.v7.Palette" Version="" />
    <PackageReference Include="Xamarin.Android.Support.v7.MediaRouter" Version="" />
    <PackageReference Include="Xamarin.Android.Support.Core.UI" Version="" />
    <PackageReference Include="Xamarin.Android.Support.Fragment" Version="" />
    <PackageReference Include="Xamarin.Android.Support.Media.Compat" Version="" />
    <PackageReference Include="Xamarin.Android.Support.v7.RecyclerView" Version="" />
    <PackageReference Include="Xamarin.Android.Support.Transition" Version="" />
    <PackageReference Include="Xamarin.Android.Support.Vector.Drawable" Version="" />
    <Compile Include="**\*.android.cs" />
  </ItemGroup>

  <ItemGroup Condition=" $(TargetFramework.StartsWith('Xamarin.iOS')) ">
    <Compile Include="**\*.ios.cs" />
  </ItemGroup>

Two side notes:

  • Do not reference version 6.2.2 of the Microsoft.NETCore.UniversalWindowsPlatform NuGet. There seems to be a bug in there that will lead to rejection of your app from the Microsoft Store. Just keep 6.1.9 (for the moment).
  • You may not need all of the Xamarin.Android packages, but there are a bunch of dependencies between them and others, so I decided to keep them all.

If you have followed along, hit the save button and close the .csproj file. Verifying everything went well is pretty easy – your solution structure should look like this:


Before we’ll have a look at the NuGet creation part of this post, let’s add some sample code. Just insert this into static partial classes with the appropriate naming scheme for every platform and edit the code to match the platform. The .shared version of this should be empty (for this sample).

    public static partial class Hello
    {
        public static string Name { get; set; }

        public static string Platform { get; set; }

        public static void Print()
        {
            if (!string.IsNullOrEmpty(Name) && !string.IsNullOrEmpty(Platform))
                System.Diagnostics.Debug.WriteLine($"Hello {Name} from {Platform}");
            else
                System.Diagnostics.Debug.WriteLine($"Hello unknown person from {Device.Android}");
        }
    }

Normally, this would be a Renderer or other platform specific code. You should get the idea.

Preparing NuGet package creation

We will now prepare our solution to automatically generate NuGet packages both for DEBUG and RELEASE configurations. Once the packages are created, we will push it to a local (or network) file folder, which serves as our local NuGet-Server. This will fit for most Indie-developers – which tend to not replicate a full blown enterprise infrastructure for their DevOps needs. I will also mention how you could push the packages to an internal NuGet server on a sideline (we are using a similar setup at work).

Adding NuGet Push configurations

One thing we want to make sure is that we are not going to push packages on every compilation of our library. That’s why we need to separate configurations. To add new configurations, open the Configuration Manager in Visual Studio:

In the Configuration Manager dialog, select the ‘<New…>‘ option from the ‘Active solution configuration‘ ComboBox:

Name the new config to fit your needs; I just use DebugNuget, which signals that we are pushing the NuGet package for distribution. I copy the settings from the Debug configuration and let Visual Studio add the configurations to the project files within the solution. Repeat the same for the Release configuration.

The result should look like this:

Modifying the project file (again)

If you head over to your project file, you will see the Configurations tag has new entries:


Next, add the properties of your assembly and package:

  <PropertyGroup>
    <!--assembly properties-->

    <!--nuget package properties-->
    <PackageReleaseNotes>Xamarin Nugets sample package</PackageReleaseNotes>
    <PackageTags>xamarin, windows, ios, android, xamarin.forms, plugin</PackageTags>

    <Title>Xamarin Nugets</Title>
    <Summary>Xamarin Nugets sample package</Summary>
    <Description>Xamarin Nugets sample package</Description>

    <Owners>MSiccDev Software Development</Owners>
    <Authors>MSiccDev Software Development</Authors>
    <Copyright>MSiccDev Software Development</Copyright>
  </PropertyGroup>

Configuration specific properties

Now we will add some configuration specific PropertyGroups that control if a package will be created.

Debug and DebugNuget

  <PropertyGroup Condition=" '$(Configuration)'=='Debug' ">
    <!--making this pre-release-->
    <!--needed for debugging!-->
  </PropertyGroup>

  <PropertyGroup Condition=" '$(Configuration)'=='DebugNuget' ">
    <!--enable package creation-->
    <!--making this pre-release-->
    <!--needed for debugging!-->
    <!--this makes msbuild creating src folder inside the symbols package-->
  </PropertyGroup>

The Debug configuration enables us to step into the Debug code while we are referencing the project directly during development, while the DebugNuget configuration will also generate a NuGet package including Source and Symbols. This is helpful once you find a bug in the NuGet package and allows us to step into this code also if we reference the NuGet instead of the project. Both configurations will add ‘-pre‘ to the version, making these packages only appear if you tick the ‘Include prerelease‘ CheckBox in the NuGet Package Manager.
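For orientation, a DebugNuget PropertyGroup along these lines does the job for me – treat it as a sketch, the exact properties and values may differ in your own template:

  <PropertyGroup Condition=" '$(Configuration)'=='DebugNuget' ">
    <GeneratePackageOnBuild>true</GeneratePackageOnBuild>  <!--enable package creation-->
    <Version>$(Version)-pre</Version>                       <!--making this pre-release-->
    <DebugType>full</DebugType>                             <!--needed for debugging!-->
    <IncludeSource>true</IncludeSource>                     <!--creates the src folder inside the symbols package-->
    <IncludeSymbols>true</IncludeSymbols>
  </PropertyGroup>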

Release and ReleaseNuget

  <PropertyGroup Condition=" '$(Configuration)'=='Release' ">
  </PropertyGroup>

  <PropertyGroup Condition=" '$(Configuration)'=='ReleaseNuget' ">
    <!--enable package creation-->
    <!--include pdb for analytic services-->
  </PropertyGroup>

The release configuration gets along with fewer settings. We do not generate a separate symbols package here, as the .pdb file without the source will do well in most cases.
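Again just a sketch, the ReleaseNuget counterpart can be as small as this:

  <PropertyGroup Condition=" '$(Configuration)'=='ReleaseNuget' ">
    <GeneratePackageOnBuild>true</GeneratePackageOnBuild>  <!--enable package creation-->
    <DebugType>pdbonly</DebugType>                         <!--include pdb for analytic services-->
  </PropertyGroup>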

Adding Build Targets

We are close to finish our implementation already. Of course, we want to make sure we push only the latest packages. To ensure this, we are cleaning all generated NuGet packages before we build the project/solution:

  <!--cleaning older nugets-->
  <Target Name="CleanOldNupkg" BeforeTargets="Build">
    <ItemGroup>
      <FilesToDelete Include="$(ProjectDir)$(BaseOutputPath)$(Configuration)\$(AssemblyName).*.nupkg"></FilesToDelete>
    </ItemGroup>
    <Delete Files="@(FilesToDelete)" />
    <Message Text="Old nupkg in $(ProjectDir)$(BaseOutputPath)$(Configuration) deleted." Importance="High"></Message>
  </Target>

MSBuild provides a lot of options to configure. We are setting the BeforeTargets property of the target to Build, so once we Clean/Build/Rebuild, all old packages will be deleted by the Delete command. Finally, we are printing a message to confirm the deletion.

Pushing the packages

After completing all these steps above, we are ready to distribute our packages. In our case, we are copying the packages to a local folder with the Copy command.

  <!--pushing to local folder (or network path)-->
  <Target Name="PushDebug" AfterTargets="Pack" Condition="'$(Configuration)'=='DebugNuget'">
    <ItemGroup>
      <PackageToCopy Include="$(ProjectDir)$(BaseOutputPath)$(Configuration)\$(AssemblyName).*.symbols.nupkg"></PackageToCopy>
    </ItemGroup>
    <Copy SourceFiles="@(PackageToCopy)" DestinationFolder="C:\TempLocNuget" />
    <Message Text="Copied '@(PackageToCopy)' to local Nuget folder" Importance="High"></Message>
  </Target>

  <Target Name="PushRelease" AfterTargets="Pack" Condition="'$(Configuration)'=='ReleaseNuget'">
    <ItemGroup>
      <PackageToCopy Include="$(ProjectDir)$(BaseOutputPath)$(Configuration)\$(AssemblyName).*.nupkg"></PackageToCopy>
    </ItemGroup>
    <Copy SourceFiles="@(PackageToCopy)" DestinationFolder="C:\TempLocNuget" />
    <Message Text="Copied '@(PackageToCopy)' to local Nuget folder" Importance="High"></Message>
  </Target>

Please note that the local folder could be replaced by a network path. You have to ensure the availability of that path – which adds in some additional work if you choose this route.

If you’re running a full NuGet server (as often happens in Enterprise environments), you can push the packages with this command (instead of the Copy command):

<Exec Command="NuGet push &quot;$(ProjectDir)$(BaseOutputPath)$(Configuration)\$(AssemblyName).*.symbols.nupkg&quot; [YourPublishKey] -Source [YourNugetServerUrl]" />

The result

If we now select the DebugNuget/ReleaseNuget configuration, Visual Studio will create our NuGet package and push it to our Nuget folder/server:

Let’s have a look into the NuGet package as well. Open your file location defined above and search your package:

As you can see, the Copy command executed successfully. To inspect NuGet packages, you need the NuGet Package Explorer app. Once installed, just double click the package to view its contents. Your result should be similar to this for the DebugNuGet package:

As you can see, we have both the .pdb files as well as the source in our package as intended.


Even as an Indie developer, you can take advantage of the DevOps options provided with Visual Studio and MSBuild. The MSBuild.Sdk.Extras package enables us to maintain a multi-targeting package for our Xamarin(.Forms) code. The whole process needs some setup, but once you have performed the steps above, extending your libraries is just fast forward.

I planned to write this post for quite some time, and I am happy with doing it as my contribution to the #XamarinMonth (initiated by Luis Matos). As always, I hope this post is helpful for some of you. Feel free to clone and play with the full sample I uploaded on Github.

Until the next post, happy coding, everyone!


P.S. Feel free to download the official app for my blog (that uses a lot of what I am blogging about):
iOS | Android | Windows 10


Getting productive with WAMS: How to handle erroneous push channels


As I wrote already in my former article ‘Getting productive with WAMS- about the mpns object (push data to the user’s Windows Phone)’, I needed to add some more detailed error handling to the mpns object on my Mobile Service.

There are two error codes that appear frequently: 404 (Not Found) and 412 (Precondition Failed).

The 404 error code

happens when the push channel gets invalid. Reasons for that can be an uninstall or reinstall of the app, or also a hard reset of the user’s device. In this case, we are following the recommendation of Microsoft to stop sending push notifications via this channel.

There might be several ways, but I prefer to work with SQL queries.  This is how I delete those channels within error: function(error):

if (error.statusCode === 404) {
	// 'channel' is the push channel record currently being processed
	var sqlDelInvalidChannel = "DELETE from pushChannel WHERE id = " + channel.id;
	mssql.query(sqlDelInvalidChannel, {
		success: function() {
			console.log("deleted invalid push channel with id: " + channel.id);
		},
		error: function(err) {
			console.log("there was a problem deleting push channel with id: " + channel.id + ", " + err);
		}
	});
}

As you can see, this is a very simple approach to get rid of those invalid channels.

The 412 error code

needs some more advanced handling. Microsoft recommends sending the push notification as normal, but the error code’s documentation recommends a delay of 61 minutes before resending. Also, with some research on the web, I found out that often the 412 will turn into a 404 after those 61 minutes (at least I found a lot of developers stating this). This is why I went with a different approach. I am going to wait those 61 minutes and not send a push notification to those devices.

For that, I do a simple trick. I am saving the time the error shows up in my push channel table, as well as the  time after those 61 minutes. On top, I use a Boolean to determine if the channel is in the delay phase.

Here is the simple code for that, again within error: function(error):

if (error.statusCode === 412) {
	var t = new Date();
	var tnow = t.getTime();
	var tnow61 = tnow + 3660000;

	var sqlSave412TimeStamp = "UPDATE pushChannel SET Found412Time=" + tnow + ", EndOf412Hour=" + tnow61 + ", IsPushDelayed = 'true' WHERE id=" + channel.id;

	// execute the update (same mssql pattern as in the 404 handler)
	mssql.query(sqlSave412TimeStamp, {
		error: function(err) {
			console.log("there was a problem saving the 412 timestamp for push channel with id: " + channel.id + ", " + err);
		}
	});
}


Now if my script runs the next time over this push channel, I need to check if the push channel is within the delay phase. Here’s the code:

if (channel.IsPushDelayed === true) {
	var t = new Date();
	var tnow = t.getTime();
	var EndofHour = channel.EndOf412Hour;
	var tdelay = EndofHour - tnow;

	if (tdelay > 0) {
		console.log("push delivery on id: " + channel.id + " is delayed for: " + (Math.floor(tdelay / 1000 / 60)) + " minutes");
	} else if (tdelay < 0) {
		var sqlDelete412TimeStamp = "UPDATE pushChannel SET Found412Time=0 , EndOf412Hour= 0, IsPushDelayed = 'false' WHERE id=" + channel.id;

		// reset the values (same mssql pattern as in the 404 handler)
		mssql.query(sqlDelete412TimeStamp, {
			error: function(err) {
				console.log("there was a problem resetting the 412 values for push channel with id: " + channel.id + ", " + err);
			}
		});
	}
}

As you can see, I am checking the Boolean that I added before. If the delay is still running, I am writing a log entry. If the time has passed already, I am resetting those time values to 0 and the Boolean to false.

The script will now check if the push channel is still a 412 or has turned into a 404, and so everything starts over again.

Other error codes

There might be other error codes as well. I did not see any other than those two in my logs, but in case another one shows up, I simply added this code to report it:

 console.error("error in Toast Push Channel: " + channel.twitterScreenName, error.statusCode, error);

This way, you can easily handle push channel errors in your Mobile Service.

If you have other error codes in your logs, check this list from Microsoft to determine what you should do:

Note: there might be better ways to handle those errors. If you are using such a way, feel free to leave a comment with your approach.

Otherwise, I hope this post will be helpful for some of you.

Happy coding!


Getting productive with WAMS: How to call Twitter REST API 1.1 from a scheduled script


Like I promised in my first post about Windows Azure Mobile Services, I will show you how to call the Twitter REST API 1.1 from a scheduled script. The documentation of the http request object only uses Twitter API 1.0 (which is no longer available).

First, you will need a Consumer key and a Consumer secret for your app. Just go to Twitter’s developer portal, register with your Twitter account and then add a new application.

The second thing you will need is the so-called Access token and Access token secret. Both are user dependent; without them, Twitter will give you an error that your app is not authorized to use this account for anything on Twitter.

There are several ways to obtain these values. As I am registering the user within my phone app, I am uploading these values from phone and store it in my Mobile Services database.

To generate the request, we need several additional pieces of data for our call to Twitter:

  • a timestamp for the oAuth Header and the signature string
  • a random number to secure the request (= nonce)
  • an oAuth signature (signed array of the user’s data)
  • a HMAC encoded Hash string

This data is used for our request to Twitter.

Let’s start with the “simple” things:

generate Timestamp:

//generating the timestamp for the OAuth Header and signature string
var timestamp  = new Date() / 1000;
timestamp = Math.round(timestamp);

generate nonce

function generateNonce() {
    var code = "";
    for (var i = 0; i < 20; i++) {
        code += Math.floor(Math.random() * 9).toString();
    }
    return code;
}

oAuth signature

//generating the oAuth signature array for the Twitter request
function generateOAuthSignature(method, url, data) {
    //remove query string parameters
    var index = url.indexOf('?');
    if (index > 0)
        url = url.substring(0, url.indexOf('?'));

    var signingToken = encodeURIComponent(ConsumerSecret) + "&" + encodeURIComponent(twitterAccessTokenSecret);

    //collect the parameter names (oauth_signature itself is not part of the signature base string)
    var keys = [];
    for (var d in data) {
        if (d != 'oauth_signature') {
            //console.log('data:', d);
            keys.push(d);
        }
    }
    //OAuth requires the parameters to be sorted alphabetically
    keys.sort();

    var output = "GET&" + encodeURIComponent(url) + "&";
    var params = "";
    keys.forEach(function (k) {
        params += "&" + encodeURIComponent(k) + "=" + encodeURIComponent(data[k]);
    });
    params = encodeURIComponent(params.substring(1));

    return hashString(signingToken, output + params, "base64");
}

generate the HMAC encoded hash string

//generate Hash-string, encoded in HMAC-SHA1 as required by Twitter's API v1.1
//(crypto needs to be required globally in the script: var crypto = require('crypto');)
function hashString(key, str, encoding) {
    //console.log('basestring:', str);
    var hmac = crypto.createHmac("sha1", key);
    hmac.update(str);
    return hmac.digest(encoding);
}

Now that we have prepared all of these functions, we are ready to call the Twitter API. In this example, we are fetching the user’s profile data:

function requestToTwitter() {

    //the url declaration has to be in this function to make the request work!
    //declaring it in another function would cause an error 401 from Twitter's API
    url = '' + twitterId;

    //generate data for sending the request to Twitter
    //this is the data used in the signature string as well as in the Authorization header
    var oAuthData = { oauth_consumer_key: ConsumerKey, oauth_nonce: nonce, oauth_signature: null, oauth_signature_method: "HMAC-SHA1", oauth_timestamp: timestamp, oauth_token: twitterAccessToken, oauth_version: "1.0" };
    var sigData = {};
    for (var k in oAuthData) {
        sigData[k] = oAuthData[k];
    }
    sigData['user_id'] = twitterId;

    var sig = generateOAuthSignature('GET', url, sigData);
    oAuthData.oauth_signature = sig;

    var oAuthHeader = "";
    for (k in oAuthData) {
        oAuthHeader += "," + encodeURIComponent(k) + "=\"" + encodeURIComponent(oAuthData[k]) + "\"";
    }
    oAuthHeader = oAuthHeader.substring(1);
    //very important to not miss the space after OAuth!
    authHeader = 'OAuth ' + oAuthHeader;

    var reqOptions = {
        uri: url,
        headers: { 'Accept': 'application/json', 'Authorization': authHeader }
    };

    var httpRequest = require('request');
    httpRequest(reqOptions, callback);
}


var callback = function (err, response, body) {
    //console.log("in requestToTwitter = callback");
    if (err) {
        console.log("error from twitter callback: " + err);
    } else if (response.statusCode !== 200) {
        console.log("from twitter callback " + response.statusCode + " response: " + response.body);
    } else {
        //store the values we need from the returned profile
        var userProfile = JSON.parse(body);
        UserIdFromTwitter = userProfile.id;
        twitterScreenName = userProfile.screen_name;
    }
};

You may have noticed that there are several variables that are not declared within these functions. Just declare them globally in your scheduled script.
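To give you an idea, here is a minimal sketch of those global declarations, using the same names as in the functions above; the key, token and id values are of course placeholders you need to replace with your own data:

//Node.js module needed for the HMAC-SHA1 hashing
var crypto = require('crypto');

//app credentials from the Twitter developer portal (placeholder values)
var ConsumerKey = "YOUR_CONSUMER_KEY";
var ConsumerSecret = "YOUR_CONSUMER_SECRET";

//user dependent values, e.g. read from your Mobile Services database (placeholder values)
var twitterAccessToken = "USER_ACCESS_TOKEN";
var twitterAccessTokenSecret = "USER_ACCESS_TOKEN_SECRET";
var twitterId = "USER_TWITTER_ID";

//values generated for each request (see the helper functions above)
var timestamp = Math.round(new Date() / 1000);
var nonce = generateNonce();

//variables that get filled by the request and its callback
var url;
var authHeader;
var UserIdFromTwitter;
var twitterScreenName;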

You can read more about the OAuth authorization process in Twitter’s developer documentation.

There are more services out there that use the OAuth process, so you should be able to adapt this for other requests, for example to Pocket (formerly Read It Later) and others.

As always, I hope this post was helpful for some of you.

Happy coding!

Posted by msicc in Azure, Dev Stories, 0 comments

Getting productive with WAMS: how to set up a timeout before running code in an interval


I truly love Windows Azure Mobile Services, as it provides an easy way to connect Windows Phone apps to the cloud and also manages Live Tiles and Toast Notifications via Push. This way, we don’t impact the user’s battery life by running background agents.

However, scheduled scripts can only run at predefined intervals, with 15 minutes being the shortest.

It may happen that you want to run your code at shorter intervals, as I am currently doing in mine.

The functions: setTimeout() and setInterval()

The difference between those two is pretty simple:

  • setTimeout() runs the passed function only once, after the time in milliseconds has elapsed
  • setInterval() runs the passed function repeatedly at the interval set in milliseconds, until it gets stopped

Knowing this, it is pretty simple to set up each of those two functions on its own:

setTimeout(function () {
    //code to run
}, timeInMilliseconds);

setInterval(function () {
    //code to run
}, timeInMilliseconds);

I highly recommend assigning the interval to a variable. If you don’t, the interval will run forever, as you will never be able to stop it!
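Just to illustrate this (a minimal sketch with a made-up interval of five seconds): keep the value returned by setInterval() in a variable, and you can stop the interval later with clearInterval():

//keep a handle to the interval so it can be stopped later
var myInterval = setInterval(function () {
    //code to run every 5 seconds
}, 5000);

//later, when the work is done:
clearInterval(myInterval);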

But what if we need first a timeout and then an interval?

That is the problem I was trying to solve over the last two days. I was playing around with all kinds of code combinations to achieve this. And I found a solution.

We need to set up a few things to achieve that goal:

  • global variables for the number of runs and the interval
  • a function that only proves if the interval is still valid to run or needs to be stopped
  • our function that should be run in an interval

In my example, I want the code to be executed five times; after that, the interval should be stopped (cleared). I achieve this with a pretty simple if/else clause, as you can see:

if (NumberOfRuns < 6) {
    doSomething();
    console.log("interval ran " + NumberOfRuns + " times");
} else {
    //stopping the interval
    clearInterval(Interval);
    console.log("interval stopped!");
}

As this is a sample script, I use the function doSomething() as the code to be executed. It represents your own code that should run while the interval is still valid. The only important thing is to count up the number of runs every time the function is hit.

function doSomething() {
    //counting up the number of runs with every call of the function
    NumberOfRuns++;

    //your code runs here
}

And in our starting point (the function that has the same name as your script), we finally declare the timeout and start counting (the timeout duration below is just an example value):

   //setting NumberOfRuns to 1, as after the timeout the first run starts
   NumberOfRuns = 1;

   //setting a timeout that calls the function which proves if the interval is still valid
   setTimeout(function () {
       //the interval has to be defined in this scope, otherwise it will not be accepted
       //we also need a variable for the interval to be able to stop the interval
       Interval = setInterval(RunInterval, 5000);
       console.log("timeout is over");
   }, 10000); //example value: wait 10 seconds before the interval starts

First, we start our counter at 1, as the code will be executed for the first time right after the timeout. I also added some logging calls so you can verify that everything is running fine.

The global variable “Interval” has to be assigned within the timeout, otherwise we will not be able to set (and later stop) the interval for our function.

Once you have figured out all the points above, it is pretty easy to implement this in your running script.

If you want to play around with this and see what the log looks like, here is the full WAMS scheduler script:

//declaring global variables
var NumberOfRuns;
var Interval;

//function to execute in the interval
function doSomething() {
    //counting up the number of runs with every call of the function
    NumberOfRuns++;

    //your code runs here
}

//this function proves if the interval is still valid
function RunInterval() {
    if (NumberOfRuns < 6) {
        doSomething();
        console.log("interval ran " + NumberOfRuns + " times");
    } else {
        //stopping the interval
        clearInterval(Interval);
        console.log("interval stopped!");
    }
}

//main script function (start)
function TestIntervalScript() {
    //setting NumberOfRuns to 1, as after the timeout the first run starts
    NumberOfRuns = 1;

    //setting a timeout that calls the function which proves if the interval is still valid
    setTimeout(function () {
        //the interval has to be defined in this scope, otherwise it will not be accepted
        //we also need a variable for the interval to be able to stop the interval
        Interval = setInterval(RunInterval, 5000);
        console.log("timeout is over");
    }, 10000); //example value: wait 10 seconds before the interval starts
}

And here is a screenshot of the resulting log output:

[Screenshot: log output of the scheduler script]

As I am still pretty new to JavaScript, there might also be other ways to achieve this. Feel free to leave a comment if you have anything to add or improve on this code.

And as always, I hope this is helpful for some of you when playing around with Windows Azure Mobile Services.

Happy coding!

Posted by msicc in Azure, Dev Stories, 0 comments

Getting productive with WAMS: respect time zone offset for every single user


In my second post about WAMS I will show you how to respect the time zone of every user.

If you have users from all over the world, they are all in different time zones. Your Mobile Service scripts always run at UTC time, so every user gets the same date & time if you send them push notifications or update a Live Tile, for example. Users don’t want to calculate the time zone differences themselves, so we need to handle that for them.

In JavaScript, a point in time is represented as the number of milliseconds since 01/01/1970 00:00 UTC (the Unix epoch). Knowing this, it is fairly easy to show users their local date and time.
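Just to illustrate what we are working with, this is how you get that millisecond value in JavaScript:

//milliseconds since 01/01/1970 00:00 UTC
var millisecondsSinceEpoch = new Date().getTime();
console.log(millisecondsSinceEpoch);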

Let’s have a look at the Windows Phone code.

To get the local time zone in our Windows Phone app, we only need three lines of code:

TimeZoneInfo localZone = TimeZoneInfo.Local;
DateTime localTime = DateTime.Now;
TimeSpan offsetToUTC = localZone.GetUtcOffset(localTime);

As you can see, we first get the local time zone. Then we ask it for the offset to UTC at the current DateTime, which gives us a TimeSpan. To make this TimeSpan usable in our Azure Mobile Service, which runs JavaScript, we convert it to milliseconds, a value that means the same in every programming language.

useritemLookUp.TimezoneOffset = offsetToUTC.TotalMilliseconds;

This is the final line of code, which updates the user’s row in our SQL table (for example).

Let’s have a look at the Azure code.

The code is similar to our Windows Phone part. First, we need to fetch the time zone offset from our SQL table:

var sql = "select * from users";

mssql.query(sql, {
    success: function (results) {
        if (results.length > 0) {
            for (var i = 0; i < results.length; i++) {
                userResult = {
                    TimeZoneOffset: results[i].TimezoneOffset
                };

                TimeZoneOffset = userResult.TimeZoneOffset;

                //further per-user processing goes here
            }
        }
    }
});

This way, we can run through our whole table in an Azure script and calculate the correct time for each user, which is pretty easy to achieve:

var d = new Date();
var locald = new Date(d.getTime() + TimeZoneOffset);

These two lines generate the local time for the specific user entry. You don’t have to worry about whether a user is ahead of or behind UTC; the addition always yields the correct local time.
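Just to make that concrete, here is a small sketch with two made-up offset values (they are only examples, not values from my table):

var d = new Date();

//example offsets in milliseconds, as they would be stored per user
var offsetCentralEurope = 2 * 60 * 60 * 1000;   //a user at UTC+2
var offsetEastCoastUS = -4 * 60 * 60 * 1000;    //a user at UTC-4

console.log(new Date(d.getTime() + offsetCentralEurope)); //local time for the UTC+2 user
console.log(new Date(d.getTime() + offsetEastCoastUS));   //local time for the UTC-4 user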

If you want to use it, for example, to show the updated time to your users, you can format it as I described earlier in this post.

That’s all about respecting local time for your users with Windows Azure and Windows Phone.

Happy coding everyone!

Posted by msicc in Azure, Dev Stories, 0 comments