Azure

Create an additional SSH-login enabled user for your Azure Linux VM without third-party tools

As I am moving forward on my current Linux journey, I recently ran into a situation where a second user would have been handy. So I tried a few things to create a new user and allow that user to log in only via SSH.

What the h*** is SSH?

SSH stands for Secure SHell and describes a protocol for connecting securely to a remote machine over an encrypted channel. The security is provided by cryptographic keys: the server only knows the public key, and the client that wants to connect needs the matching private key. The most popular implementation is OpenSSH, which has been available as an optional feature on Windows 10 since last fall. If you want to learn more about SSH, read the Wikipedia entry as well.

Create the SSH key pair on Windows

Install the OpenSSH client

First, install the OpenSSH client on your Windows machine. Proceed as follows:

  • open the start menu and type ‘apps’
  • select ‘Apps and features’ settings page
  • select ‘Optional features’
  • click on ‘Add a feature’
  • search ‘OpenSSH client’, click on it and ‘Install’

If you scroll down the list of installed features, you should now find an entry for the OpenSSH client.

Note: If you are not able to install this feature, it may be the right time to update your Windows installation to the latest version.

Create your SSH key pair

If your user profile does not have a folder called ‘.ssh’, it is time to create it now. Type ‘%USERPROFILE%‘ in the Windows Explorer’s address bar to get to the right folder immediately and create the folder.

Now that we have the folder OpenSSH searches for, we are already able to create our new SSH keypair. Open a command prompt and type this:

ssh-keygen -t rsa -b 4096 -C "newuser@machine.com"

This will initiate the keypair creation. The -C parameter adds a comment to the key; it is optional and can be anything. You'll often find these keys created with <user at address> combinations. After OpenSSH has created your keypair in memory, it will ask for a location to save the file. If you do not enter anything, it'll save it as id_rsa into the .ssh folder created earlier. If you want to save it to another file name, you can do so:

C:\Users\username\.ssh/id_test

Please note that the file name (without any extension!) is separated by '/', not by '\'. If you do not respect this, it will give you an error that the file name does not exist. This happens only if you don't use the default filename. After the files are created, OpenSSH asks you for a passphrase to protect your private key. Nowadays, every single keypair should be password protected (just my 2cts). Once the creation is complete, you'll see something like this:

Your identification has been saved in C:\Users\msicc\.ssh/id_test.
Your public key has been saved in C:\Users\msicc\.ssh/id_test.pub.
The key fingerprint is:
SHA256:8LK33fWKXMkY5FtcN4uU1v9SyCSUqKNp2T/PurpZCRU newuser@machine.com
The key's randomart image is:
+---[RSA 4096]----+
|          E...   |
|          .o. o  |
|      .  .. o+.oo|
|       oo. oo=.o=|
|      .=S.  o.=.o|
|      =o.. . * o.|
|     .. ..o o * .|
|       . =o+ + o |
|        =o+=* ...|
+----[SHA256]-----+

As you can see, we are able to create SSH keys without any third-party application on Windows. You can now safely close the console window (e.g. by typing 'exit').

Deploying the public key to the server

Of course, the whole key pair only makes sense if we use it to secure our client/server communication. On Linux, there is the easy-to-use ssh-copy-id command to deploy the key. Some Windows tutorials show the scp command, but I never got that working with my Azure VM. The only way left was to deploy it manually (which is not that difficult once you know how).
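If you want to script it anyway, a one-liner from the Windows command prompt often does the trick – a sketch, assuming your existing admin user can already log in via SSH (it appends the key to that account; for a different user, we still need the manual steps below):

type %USERPROFILE%\.ssh\id_test.pub | ssh adminuser@machine.com "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"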

The manual way

After logging in (using Azure CLI), we are going to add a new user to the gang:

sudo useradd -m newuser

This will create a new user and their home directory. If the OS asks you for a password, it is up to you and the OS settings for empty passwords whether to provide one or not. The next step is to add the user to one or more groups if necessary:

sudo usermod -aG sudo newuser

The -aG parameter of the usermod command appends the specified group(s) to the user's list of groups. If you want to add the user to more than one group, separate them with a comma (no whitespace after the comma!).

To make things a little bit easier for us, we are going to switch to the new user:

sudo su newuser

Note: If you want to proceed without logging in as the new user, you’ll need to change ~ to /home/newuser for the following commands.

To make Linux accept our previously created SSH key only for our new user, we need to create the .ssh folder and the file for allowed keys in the new user's home directory. Let's start with the .ssh directory:

sudo mkdir ~/.ssh
sudo chmod 0700 ~/.ssh
sudo chown newuser:newuser ~/.ssh

Let's break it down. Obviously, the mkdir command creates the .ssh folder. The chmod command with the 0700 parameter gives full access only to our new user. Finally, the chown command makes our new user the owner of the folder.

Linux saves the allowed keys in a file called 'authorized_keys', so that's the next file we are going to create. After creation, we change the file's access rights to 0644, which makes it readable by all users but writable only by our new user. Execute these commands:

sudo touch ~/.ssh/authorized_keys
sudo chmod 0644 ~/.ssh/authorized_keys

We’re getting closer… earlier, OpenSSH created two files in the .ssh folder on Windows. We are going to copy the contents of the .pub file into the authorized_keys file now. You can extract the content of the .pub file with Notepad on Windows. Once you have that one in your clipboard, open the authorized_keys file (using your favorite editor):

nano ~/.ssh/authorized_keys

Paste the content of your .pub file by right-clicking on the Azure CLI window. Save the file and close it. To secure the new user account, we need to apply some additional steps.

The first one is to delete and disable the password of our new user:

sudo passwd -d -l newuser

The -d parameter completely deletes the password. The -l parameter locks the account's password, preventing the user from setting a new one (without using sudo, that is).

Additional security measures

Now that we have our SSH public key on the server, we are safe to disable password-based login in general. To do so, we need to modify the sshd_config file of our server. Open it with the editor of your choice:

sudo nano /etc/ssh/sshd_config

Search for PasswordAuthentication and ChallengeResponseAuthentication. Uncomment these entries if necessary by removing the '#' in front of the line and set both to 'no'. Some tutorials floating around the web tell you to also set UsePAM to no, but following this recommendation always disabled the login completely for me on the Azure VM, and I had to reset the SSH config via the Azure CLI every time.
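After the edit, the two relevant lines should read like this (no leading '#'):

PasswordAuthentication no
ChallengeResponseAuthentication no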

Save and exit the sshd_config file. For these changes to take effect, we need to restart the SSH service on our server:

sudo service ssh restart

Wait a few seconds and then verify that the service restarted by checking its status:

systemctl status ssh.service

That's it – we have deployed our new SSH key to our server and taken additional security measures. There's just one thing left: trying to log in via SSH as the new user. This is a pretty easy task. In the Azure CLI (or a local command prompt), enter the following command:

 ssh -i %USERPROFILE%\.ssh\id_test newuser@machine.com

After entering your passphrase, you should be logged in just like you were with your admin user.

Bonus: add a shared directory

More often than not, you might want to create scripts or other files that are available to all users. Follow these simple steps:

sudo groupadd shared
sudo usermod -aG shared newuser

sudo mkdir -p /var/helpers
sudo chgrp -R shared /var/helpers
sudo chmod -R 2775 /var/helpers

The first two lines create a new group and assign our new user to the group. Repeat the second command for every user you want to be in that group.

Then we create the shared folder /helpers in the /var folder of our server. Utilizing the chgrp command, we give group ownership to our shared group. Last but not least, we modify the access rights once again for the /var/helpers folder. The combination 2775 sets the setgid bit on the directory, so every new file created inside inherits the group from the folder; all members of the shared group can read and write in the directory, while users outside the group can only read and traverse it.
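A quick way to verify the group inheritance (assuming you are logged in as the new user) is to create a file and inspect its group:

touch /var/helpers/test.txt
ls -l /var/helpers/test.txt
# the group column should show 'shared' instead of the user's own group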

Conclusion

As you can see, one does not always have to use third-party tools to get things done. Like I said at the beginning of this post, once you know the steps that are needed, it is pretty easy to create a new SSH key pair, create a new user and manually deploy the public key to the server. As always, I hope this post is helpful for some of you.

Run your own Bitcoin Full Node on an Azure Linux VM

One year ago, I began my journey into the crypto and blockchain area. Recently, multiple circumstances got me thinking about my own crypto-related server. Of course, I chose Azure for running this server (for the time being). It has been a while since I last touched Linux, so I had quite a bit to refresh and learn. In this post, I'll show you the steps needed to set up an independent full node to support the Bitcoin network.

Setting up the Virtual Machine on Azure

First, we need to install the Azure CLI on our computer. We will use this one to connect to our virtual machine later on via SSH. Follow the instructions found here.

The second prerequisite is a program to generate SSH keys. You can use the OpenSSH client shipping with recent versions of Windows 10, the Azure CLI, or PuTTY (follow these instructions).

Create the VM

Once you have installed the CLI and your SSH keys are created, log into your Azure account. Go to the marketplace, and search for ‘ubuntu‘. Choose Ubuntu Server 18.04 LTS and hit the ‘Create‘ button in the next window.

Fill in the details of your Azure VM on the first page of the creation module:

Do not forget to set the SSH admin user, as adding one afterward is not as easy as it seems and often fails (I learned that the hard way). Also, we need to allow traffic through the default HTTP (80) and SSH (22) ports. Once you have configured everything, go to disks.

I did not select premium disks but instead went with a Standard HDD to save some money. You can change that to your needs. The important part here is to NOT use managed disks, as we will need to resize the OS disk after the creation of the VM. Follow the rest of the steps in the creation wizard and create your virtual machine. Once the machine is created, you should create a DNS label (hit the 'Configure' link on the VM's overview page to get to the IP settings).
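If you prefer the command line over the portal, a sketch of an equivalent setup with the Azure CLI could look like this (resource group, VM name and user name are placeholders):

az group create --name CryptoRG --location westeurope
az vm create --resource-group CryptoRG --name btc-node --image UbuntuLTS --admin-username youradminuser --generate-ssh-keys
az vm open-port --resource-group CryptoRG --name btc-node --port 80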

Log in via SSH

Let's see if we can log in to our Linux VM via the Azure CLI. Open the 'Microsoft Azure Command Prompt' on your PC. To be able to connect to our virtual machine, we need to log in to Azure first to obtain an access token for our session:

az login 

This will open a new browser tab, where you need to log in to your account again. After that, you will be redirected back to the CLI. Once that has happened, go back to the overview page of your VM and click on 'Connect'. This will open a pane showing the RDP and SSH connection options. Select SSH and copy the text below 'Login using VM local account':

Paste it into the Azure Command Prompt window and provide your passphrase (you should never use a passphrase-less SSH key). If your screen now looks similar to this, you have successfully logged in:

Now that we have verified that we are able to log in via SSH, type exit to log out again as we have one step left to perform on the virtual machine.

Resizing the OS disk

A Bitcoin full node needs to download and verify the whole blockchain. The current size of the blockchain is around 250 GB, which would never fit in the default size of the OS disk of our VM. Luckily, it is pretty easy to resize the OS disk by running some commands in the Azure CLI.

First, stop the virtual machine:

az vm stop --resource-group YourResourceGroupName --name YourVMName 

Resizing the OS disk requires the VM to be deallocated (this may take a few minutes):

az vm deallocate --resource-group YourResourceGroupName --name YourVMName

Once deallocation has finished, we are able to resize the OS disk with this command:

az vm update --resource-group YourResourceGroupName --name YourVMName --set storageProfile.osDisk.diskSizeGB=1024

Once you see the new size in the returned response from Azure, we can start the VM again:

az vm start --resource-group YourResourceGroupName --name YourVMName 

Depending on the distribution you are using, you may have to perform additional steps. Ubuntu, however, picks up the new disk size without any additional action. Verifying the new disk size is pretty easy (after logging back in via SSH), as the system information displayed after login should already reflect the change (like in the screen above).
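If you want to double-check from inside the VM, these two commands show the new size as well:

df -h /
lsblk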

Preparing the Bitcoin node

After completing all the steps above, we are finally able to move on with the preparations for the Bitcoin node.

Bitcoin service user

As we will run the node as a service, we need an unprivileged service account:

sudo useradd -m -s /dev/null bitcoin

Great, we just created the account, including the home directory for the service user (-m) and a shell entry (-s). Setting the shell to /dev/null prevents anyone from logging in interactively as the service user.

As we are going to restrict the access to the service to the bitcoin user (and its default group, which is also bitcoin), we need to add our admin user to the group. Run the following command to do so:

sudo usermod -a -G bitcoin [admin-user]

In order to make these changes (especially the group membership) effective, we need to log out and back in – or simply restart before we move on. Besides the Azure portal or CLI, you can also use this command to make the VM restart immediately:

sudo shutdown -r now

This will log you out of the current SSH session. After waiting one minute or two, just log back in to continue the preparation of the full node.

Downloading and Verifying Bitcoin binaries

The next step involves downloading the Bitcoin Core package and its signature file (as we don't trust, but verify). I created a temp directory for the download and other stuff; run these commands:

mkdir ~/temp
cd ~/temp
btc_version="0.18.1"
wget https://bitcoincore.org/bin/bitcoin-core-$btc_version/bitcoin-$btc_version-x86_64-linux-gnu.tar.gz
wget https://bitcoincore.org/bin/bitcoin-core-$btc_version/SHA256SUMS.asc

According to Bitcoin.org, the latest releases are signed with Wladimir J. van der Laan's release key, whose fingerprint we are going to verify the downloaded binaries against. It is recommended to do this for all crypto-related binaries you download, no matter on which OS.

Let’s try to import the key of Mr. van der Laan:

gpg --receive-key 0x01EA5486DE18A882D4C2684590C8019E36C2E964

You may get an error message telling you there was a server failure. In this case, run the following command to import the key:

gpg --keyserver hkp://keyserver.ubuntu.com:80 --receive-key 0x01EA5486DE18A882D4C2684590C8019E36C2E964

If you still get errors, there might be some missing packages or other reasons for the server failure. I managed to come through with the second command more often than the first, but in the end, I had the key in my local key store.

Now let’s verify the downloaded files:

gpg --verify SHA256SUMS.asc
sha256sum --ignore-missing -c SHA256SUMS.asc

If the command tells you that the signature is good and indeed from Mr. van der Laan, everything is fine with the hash file. The second command verifies the archive we downloaded earlier and should result in ‘OK’. If not, you should immediately delete those files as they might contain malware.

Note: gpg will most likely warn you that the key is not certified with a trusted signature. I have read quite a few comments on reddit and other sites saying that this particular warning can be safely ignored here, since we checked the key's fingerprint ourselves.

Installing Bitcoin binaries

Now that we have our bitcoin service user and verified the bitcoin binaries, we are finally at the point where we can install them:

tar zxf bitcoin-$btc_version-x86_64-linux-gnu.tar.gz
pushd bitcoin-$btc_version/bin; sudo cp bitcoind bitcoin-cli /usr/bin; popd;
bitcoind --version

After getting the install verification via the version string, it is a good practice to remove both the binaries and the hash file. If you need to reinstall, perform the steps above again to verify the authenticity of the files. Run these commands to remove those files:

rm bitcoin-$btc_version-x86_64-linux-gnu.tar.gz
rm SHA256SUMS.asc

Create a service config file

Now that we have Bitcoin installed, we need to prepare a .conf file for our upcoming service:

vi bitcoin.conf  
or
nano bitcoin.conf

This will open a text editor on Linux. Enter the base config to activate the RPC server of the Bitcoin daemon (needed for bitcoin-cli) and save the file to the temp directory (press 'ESC', then type ':wq' if you used vi; press 'CTRL'+'X' followed by 'Y' on nano):

testnet=0
regtest=0
mainnet=1
# Global Options
server=1 #activating rpc
rpcconnect=127.0.0.1 #default
rpcport=8332 #default
rpcallowip=127.0.0.1/32 #default
rpcbind=127.0.0.1  #default
disablewallet=1 #keeping wallet off (atm)
daemon=1
# Options only for mainnet
[main]
# Options only for testnet
[test]
# Options only for regtest
[regtest]

Now we just need to copy that file into the /etc/bitcoin folder. If this folder does not yet exist, create it with the following command:

sudo mkdir -p /etc/bitcoin

Next, copy the bitcoin.conf file to it:

sudo cp bitcoin.conf /etc/bitcoin

Assign ownership to the bitcoin service user and make it readable for all users:

sudo chown bitcoin:bitcoin /etc/bitcoin/bitcoin.conf
sudo chmod 0664 /etc/bitcoin/bitcoin.conf

The last step is to create a directory for the daemon where we will find the PID (Process ID) file after starting the service:

sudo mkdir -p /run/bitcoind/
sudo chmod 0755 /run/bitcoind/
sudo chown bitcoin:bitcoin /run/bitcoind/

Create the Bitcoin service

Now we are able to set up the core service of our node, the Bitcoin daemon service. To get started, just copy and paste the bitcoind.service file found in Bitcoin's GitHub repository into a new file on your VM. Once you have that file in your temp folder, copy it over to the system's services folder:

sudo cp bitcoind.service /lib/systemd/system
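For orientation, here is a trimmed-down sketch of what such a unit file contains, using the paths from this post (the official bitcoind.service from the Bitcoin Core repository is more complete and should be preferred):

[Unit]
Description=Bitcoin daemon
After=network.target

[Service]
ExecStart=/usr/bin/bitcoind -daemon -conf=/etc/bitcoin/bitcoin.conf -pid=/run/bitcoind/bitcoind.pid -datadir=/var/lib/bitcoind
Type=forking
PIDFile=/run/bitcoind/bitcoind.pid
User=bitcoin
Group=bitcoin

[Install]
WantedBy=multi-user.target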

Now we need to enable the service to make it start automatically on reboot:

sudo systemctl enable bitcoind

And finally, we need to start the service (or reboot the machine if you want to test that part):

sudo systemctl start bitcoind

After some time, you should be able to use the bitcoin-cli to get some info of your local blockchain copy:

sudo bitcoin-cli -rpccookiefile=/var/lib/bitcoind/.cookie -datadir=/var/lib/bitcoind -getinfo

Please note you need to use sudo because the owner of the service and its files is the bitcoin user we created earlier. If you don't want to pass the authentication cookie path every time, you can create a symbolic link in your current user's .bitcoin folder:

mkdir -p ~/.bitcoin    # make sure the target folder exists before linking
sudo ln -s /var/lib/bitcoind/.cookie ~/.bitcoin/.cookie

Now we are able to just call sudo bitcoin-cli -getinfo. Another way of checking whether the service and the daemon are running is to read the log that gets generated:

sudo tail /var/lib/bitcoind/debug.log -f 

You should see the log scrolling through as new entries are written. Most entries will look like this:

2019-09-07T14:54:51Z UpdateTip: new best=000000000000004fa323c7ee57b4b22272c7ea757a6a5bdb53dbda73572f559d height=239240 version=0x00000002 log2_work=70.203272 tx=18819570 date='2013-06-02T08:32:35Z' progress=0.041989 cache=675.7MiB(5032783txo)

If you arrived at this point

Congratulations! You are running a Bitcoin full node (without a wallet for the time being, though). It will take some time to download and verify the whole blockchain, but you are now effectively helping to secure the Bitcoin network.

This is just the first post about my journey with Bitcoin and my own node. Make sure to follow for future blog posts. As always, I hope this post will be helpful for some of you.

Note: this post was reposted on my Trybe.one account.

Handle AtomicPay’s webhook with Microsoft Azure (Part 3/3) – sending push notifications with Notification Hub

Series overview

  1. Handle the incoming notification with an Azure Function (first post)
  2. Verify the invoice state within the Azure Function, but store the API credentials in the most secure way (second post)
  3. Send a push notification via Azure Notification Hub to all devices that have linked a merchant’s account to it (this post)

Preparations…

With Azure Notification Hub, we have one of the most powerful tools for sending push notifications out to connected devices. It is designed to handle multiple platforms, so let’s create a new Notification Hub in the Azure Portal. After logging in, select ‘Create a resource‘, followed by ‘Mobile‘ and a final click on ‘Notification Hub‘:

Azure: create a new Notification Hub

Fill in the details of the new Hub. Remember to select your already existing Resource Group and the same region that is used for your Azure Function:

Azure: new Notification Hub creation details

After about a minute, your newly created Notification Hub is ready to use. Select ‘Go to resource‘ to open it.

Azure Deployment finished

Now we are already able to connect our first platform. For this sample, I will focus on Android. Log in to the Firebase console with your Google account. Select 'Add project' and configure your Android app (or select your existing one, if you already have one like I do):

Firebase: Add or select project

After following all (self-explanatory) steps, scroll a bit down in the General tab of your Firebase project. You will find an entry like this:

Firebase: download google-services.json file

Download the ‘google-services.json‘-file. We will need it later in our Android app. Now select the ‘Cloud Messaging‘ tab. You will be presented with two keys – copy the upper one:

Firebase: copy server key

Go back to the Azure portal and select 'Google (GCM/FCM)' under 'Settings'. Paste the key you copied earlier and hit the 'Save' button:

Azure: paste Firebase server key and save

Now we need to authenticate our Azure Function against the Notification Hub. Select 'Access Policies' in the 'Manage' section of your Hub and copy the lower connection string with the 'Listen,Manage,Send' permissions:

Azure: copy server ConnectionString

Open your Azure Function in a new tab. In the 'Application settings', add a new setting and paste the connection string. Add another one for the Notification Hub's name:

Azure Function add Hub name and ConnectionString to Application settings

Go back to the Notification Hub and save the upper connection string locally, as we will need that one in our Android application later on.

Back to code…

Now that we have prepared the Notification Hub on Azure, let's write some code that actually sends out notifications once our Function has verified an AtomicPay invoice. In the last post, we already rewrote some of the code in preparation for the final step, our push notifications.

Triggering push notifications

Before we can modify our Function code to actually send the notification, we need to install an additional NuGet package: Microsoft.Azure.WebJobs.Extensions.NotificationHubs. Please note that I was only able to get this up and running with version 1.2.0; version 1.3.0 seems not to be compatible with v1 Functions.

Triggering push notifications can be broken down into these 3 steps:

  1. create a Hub client from the ConnectionString
  2. create the content of the notification
  3. finally send the notification

This translates into this Task inside our InvoiceVerifier class:

        private static async Task TriggerPushNotification(InvoiceInfoDetails invoiceInfoDetails, string accId)
        {
            //we need full shared access connection string (server side)
            var connectionString = ConfigurationManager.AppSettings["NotifHubConnectionString"];
            var hubName = ConfigurationManager.AppSettings["NotifHubName"];

            var hub = NotificationHubClient.CreateClientFromConnectionString(connectionString, hubName);

            var templateParams = new Dictionary<string, string>();
            templateParams["body"] = $"Received payment for invoice id: {invoiceInfoDetails.InvoiceId} ({invoiceInfoDetails.Status.ToString()})";

            var outcome = await hub.SendTemplateNotificationAsync(templateParams, $"accId:{accId}");

            _traceWriter.Info($"attempted to inform merchant {accId} of payment via push");
        }

We are loading the connection string and the Hub's name from our Function's Application settings and create a new client connection using these two safely stored properties. To keep this sample simple, I am using a templated notification that can be used across all supported platforms. The receiver is responsible for handling this template (we'll see how later in this post). Finally, we are sending out the notification to the native platforms, where it will be distributed (in our case, via Firebase Cloud Messaging). By including the accId:{accId} tag, we are sending the push notification only to devices that were registered with that specific merchant account.

Of course, this nicely written method does nothing until we actually use it. Let's update our Run method: we only need to add one line for our test in the switch that handles the returned invoiceInfoDetails:

switch (invoiceInfoDetails.Status)
{
	case AtomicPay.Entity.InvoiceStatus.Paid:
	case AtomicPay.Entity.InvoiceStatus.PaidAfterExpiry:
	case AtomicPay.Entity.InvoiceStatus.Overpaid:
	case AtomicPay.Entity.InvoiceStatus.Complete:
		log.Info($"invoice with id {invoiceInfoDetails.InvoiceId} is paid");
		await TriggerPushNotification(invoiceInfoDetails, accId);
		break;
	default:
		log.Info($"invoice with id {invoiceInfoDetails.InvoiceId} is not yet paid");
		//todo: this will trigger additional status handling in future
		break;
}

Publish your updated Azure Function. Our Azure Function is now connected to a Notification Hub and is able to send out push notifications. Of course, push notifications ending up in no man's land are boring, so let's go ahead.

Receiving the push notifications

Create a new Xamarin.Android app (with XAML or without, your choice). Of course, we have some additional setup to perform here as well.

Import google-services.json

Add the google-services.json we downloaded before from Firebase to your project. Set its Build Action to GoogleServicesJson in the Properties window. I needed to restart Visual Studio to be able to select this option after adding the file.

VS-google-services-build-action

Installing NuGet packages

Of course, we also need to install some additional NuGet packages. The code below relies on the Firebase Cloud Messaging bindings and the Azure Notification Hubs client for Xamarin.Android (for example, the Xamarin.Firebase.Messaging and Xamarin.Azure.NotificationHubs.Android packages).

The second step involves some changes to the AndroidManifest, granting the Firebase package some permissions and handling its intents in the application tag:

<receiver android:name="com.google.firebase.iid.FirebaseInstanceIdInternalReceiver" android:exported="false" />
<receiver android:name="com.google.firebase.iid.FirebaseInstanceIdReceiver" android:exported="true" android:permission="com.google.android.c2dm.permission.SEND">
	<intent-filter>
		<action android:name="com.google.android.c2dm.intent.RECEIVE" />
		<action android:name="com.google.android.c2dm.intent.REGISTRATION" />
		<category android:name="${applicationId}" />
	</intent-filter>
</receiver>

Next, we will add a new constants class:

public static class Constants
{
	public const string ListenConnectionString = "<Listen connection string>";
	public const string NotificationHubName = "<hub name>";
}

On the client, we are only using the lower permission ConnectionString we copied earlier from the Azure Notification Hub.

Connecting to Firebase

As our Azure Notification Hub sends notifications via Firebase, Google's native messaging handler, we need a service that connects our app. The service requests a token (the Xamarin library does all that complex stuff for us) that we'll need to actually register the device for the notifications. Add a new class called PlatformFirebaseIidService and decorate it with the Service attribute. Besides that, we need to register our interest in the com.google.firebase.INSTANCE_ID_EVENT, which will call into the OnTokenRefresh() method we will override. We'll end up like this:

[Service]
[IntentFilter(new[] { "com.google.firebase.INSTANCE_ID_EVENT" })]
public class PlatformFirebaseIidService : FirebaseInstanceIdService
{
	const string TAG = "PlatformFirebaseIidService";
	NotificationHub _hub;

	public override void OnTokenRefresh()
	{
		var refreshedToken = FirebaseInstanceId.Instance.Token;
		Log.Debug(TAG, "FCM token: " + refreshedToken);		
	}
}

Now that we hold a fresh Firebase instance token, we can register our app for receiving push notifications. To do this, we're adding a new method:

void SendRegistrationToServer(string token, List<string> tags = null)
{
	// Register with Notification Hubs
	_hub = new NotificationHub(Constants.NotificationHubName, Constants.ListenConnectionString, this);

	if (tags == null)
		tags = new List<string>() { };

	var templateBody = "{\"data\":{\"message\":\"$(body)\"}}";

	//this one registers a template that can be used cross platform
	//just make sure the template is the same on iOS, Windows, etc.
	var registerTemplate = _hub.RegisterTemplate(token, "defaultTemplate", templateBody, tags.ToArray());
	Log.Debug(TAG, $"Successful registration of Template {registerTemplate.RegistrationId}");
}

Let me break this method down. Of course, we need to connect to our Notification Hub, which is responsible for deciding whether our client is a valid receiver or not. If we add tags, they will get sent together with the registration. We will add the merchant's account id to filter the receivers. As we are sending notifications using a template, we need to register for the template that gets filled by our Azure Function. One thing is left: calling this method after obtaining a token in the OnTokenRefresh override:

SendRegistrationToServer(refreshedToken, new List<string>() { "accId:<yourAccId>" });

Handling incoming Firebase messages

Now that our client is registered with both Firebase and the Azure Notification Hub, of course we want it to be able to receive the pushed messages. To achieve this, we need another service. Add a new class called PlatformFirebaseMessagingService and decorate it once again with the Service attribute. This time, we are interested in the com.google.firebase.MESSAGING_EVENT intent, so let's add that one as well. The service is responsible for parsing our payload and triggering a notification helper that shows the notification. Here is the code of the service:

[Service]
[IntentFilter(new[] { "com.google.firebase.MESSAGING_EVENT" })]
public class PlatformFirebaseMessagingService : FirebaseMessagingService
{
	const string TAG = "PlatformFirebaseMessagingService";

	public override void OnMessageReceived(RemoteMessage message)
	{
		Log.Debug(TAG, "From: " + message.From);
		string title = "Azure Test message";
		string body = null;

		if (message.GetNotification() != null)
		{
			//This is how most messages will be received
			body = message.GetNotification().Body;
			title = message.GetNotification().Title;
			Log.Debug(TAG, $"Notification Message Body: {body}");
		}
		else
		{
			//Only used for debugging payloads sent from the Azure portal
			body = message.Data.Values.First();
		}

		var notification = NotificationHelper.Instance.GetNotificationBuilder(title, body);
		NotificationHelper.Instance.Notify(1001, notification);

	}
}

Of course, you are curious about the NotificationHelper class. Let's have a look. Besides being a singleton, it needs to retrieve an instance of the system's NotificationService. As we do not have multiple notification channels in this sample, it is enough to declare both the channel name and its id in two constants. In the constructor of the class, we initialize the channel:

public class NotificationHelper : ContextWrapper
{
	private static NotificationHelper _instance;

	public static NotificationHelper Instance => _instance ?? (_instance = new NotificationHelper(Application.Context));
	
	const string NOTIFICATION_CHANNEL_ID = "default";
	const string NOTIFICATION_CHANNEL_NAME = "notif_test_channel";

	NotificationManager _manager;
	NotificationManager Manager => _manager ?? (_manager = (NotificationManager)GetSystemService(NotificationService));

	public NotificationHelper(Context context) : base(context)
	{
		var channel = new NotificationChannel(NOTIFICATION_CHANNEL_ID, NOTIFICATION_CHANNEL_NAME, NotificationImportance.Default);
		channel.EnableVibration(true);
		channel.LockscreenVisibility = NotificationVisibility.Public;
		
		this.Manager.CreateNotificationChannel(channel);
	}
}

Creating the notifications involves the Notification.Builder class. We're simplifying the process for us with this method:

public Notification.Builder GetNotificationBuilder(string title, string body)
{
	var intent = new Intent(this.ApplicationContext, typeof(MainActivity));
	intent.AddFlags(ActivityFlags.ClearTop);
	var pendingIntent = PendingIntent.GetActivity(this, 0, intent, PendingIntentFlags.OneShot);

	return new Notification.Builder(this.ApplicationContext, NOTIFICATION_CHANNEL_ID)
		.SetContentTitle(title)
		.SetContentText(body)
		.SetSmallIcon(Resource.Drawable.ic_launcher)
		.SetAutoCancel(true)
		.SetContentIntent(pendingIntent);
}

To read more about creating notifications, have a look at the docs for local (= client-side) notifications.

Last but not least, we need to tell the system's notification service to show the notification. The final helper method for this looks like this:

public void Notify(int id, Notification.Builder notificationBuilder)
{
	this.Manager.Notify(id, notificationBuilder.Build());
}

To test the notification, we have two options – one is to send a test notification via the Notification Hub, the other is to use Postman once again to trigger our Azure Function. In both cases, the result should be a notification on your device (after you deployed and ran the application without the debugger attached).
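If you prefer the command line over Postman, a curl call against the deployed Function does the same job – a sketch, with host, function key and payload file as placeholders:

curl -X POST "https://yourfunctionapp.azurewebsites.net/api/InvoiceVerifier?code=yourFunctionKey" -H "Content-Type: application/json" -d @sample-invoice.json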

atomicpay-webhook-notification

Conclusion

In this last post of the series, I showed you all the steps needed to send out push notifications utilizing an Azure Notification Hub. It takes a bit of setup in the beginning, but the code involved is pretty easy and straightforward.

Now that the series is complete, you can have a look at the source code on Github. You need to add your own google-services.json file and your own keys as well to run the sample. As always, I hope this post, as well as this series, is helpful for some of you.

Until the next post, happy coding, everyone!

Handle AtomicPay’s webhook with Microsoft Azure (Part 2/3) – invoice verification with Azure KeyVault

Series overview

  1. Handle the incoming notification with an Azure Function (first post)
  2. Verify the invoice state within the Azure Function, but store the API credentials in the most secure way (this post)
  3. Send a push notification via Azure Notificationhub to all devices that have linked a merchant’s account to it (third post)

Storing credentials securely

When we consume web services, we need to authenticate our applications against them or their API. In the past, I often saw (and, in fact, also wrote) samples that ignore the security of those credentials and hard-code them, leaving additional research to the reader (if you are new to the topic) and making the samples incomplete. As the goal of this post is to verify the state of a received invoice against the AtomicPay API, the first step is to provide a secure mechanism for storing our API credentials on Azure.

Get your AtomicPay API keys

If you haven’t signed up for an AtomicPay account, you should probably do it now. Once you have logged in, open the menu and select ‘API Integration‘:

AtomicPay menu select API Integration

On this page, you find your pre-generated API keys. Should your keys ever be compromised, you can regenerate them here as well:

AtomicPay API Integration page

Introducing Azure KeyVault

According to the Azure KeyVault documentation, it is

a tool for securely storing and accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, or certificates. A vault is a logical group of secrets.

As we are dealing with a public/private key credential combo, Azure KeyVault perfectly fits our goal of storing the credentials securely. Since November last year, Azure Functions have been able to use the KeyVault via their AppSettings. This is what the first half of this post is about.

Creating a new KeyVault

Log in to the Azure Portal and go to the Marketplace. Search for 'Key Vault' there and select it from the results:

Azure Marketplace search for Key Vault

You will be prompted with the creation menu. Give it a unique name and select your existing resource group (which will make sure we are in the correct location automatically). Once done, click on the create button:

Azure create new key vault menu

It will take a minute or two until the deployment is done. From the notification, you’ll get a direct link to the resource:

In the overview menu, select ‘Secrets’, where we will store our credentials:

Azure KeyVault menu - select Secrets

If you are wondering why we are using the ‘Secrets‘ option over ‘Keys‘ – the keys are already generated by AtomicPay, and the import function wants a backup file. I did not have a deeper look into this option as the ‘Secrets‘ option offers all I need.

The next step is to add a secret entry for AtomicPay’s account id, public and private key:

Azure KeyVault add secret

Once you have added the Application ID and your keys into ‘Secrets‘, click on ‘Overview‘. On the right-hand side, you’ll see an entry called ‘DNS Name’. Hover with the mouse to reveal the copy button and copy it:

Azure KeyVault Url Copy

Now go to your Azure Function and select ‘Application settings‘. Scroll a bit down and add a new setting. You can name it whatever you want, I just used ‘KeyVaultUri‘. Paste the url you have copied earlier into the value field. Do not forget to hit the ‘Save‘-button after that:

Azure Function - Application settings

Now we are just one step away from being able to use the KeyVault in our code. Go back to 'Overview' on your Function and click on the 'Platform features' tab. Select 'Identity' there:

Azure Function select Identity

We need to enable the ‘System assigned‘ managed identity feature to allow the Function to interact with the Azure KeyVault from our code. This will add our Function to an Azure AD instance that handles the access rights for us:

Azure function enable System assigned identity

One more thing worth checking: the Key Vault itself must grant this new identity at least the 'Get' permission on secrets (via the Key Vault's 'Access policies' section). With that in place, we have everything ready to use the KeyVault to retrieve the API credentials we need for invoice verification.

Retrieving credentials in our code

Before we start to rewrite our Function code, we need to install two NuGet packages into our Function project. Right-click on the project and select 'Manage NuGet Packages…'. Search for these two packages and install them – the code below uses AzureServiceTokenProvider and KeyVaultClient, which ship in the Microsoft.Azure.Services.AppAuthentication and Microsoft.Azure.KeyVault packages, respectively.

Refactoring the initial code

Whenever possible, we should take the chance to refactor our code. I have done so while adding the authentication layer for this sample. First, we move the code that gets the payload off the webhook into its own method:

        private static async Task<WebhookInvoiceInfo> GetPaymentPayload(HttpRequestMessage requestMessage)
        {
            _traceWriter.Info("trying to get payload object from request...");
            WebhookInvoiceInfo result = null;
            _jsonSerializerSettings = new JsonSerializerSettings()
            {
                MetadataPropertyHandling = MetadataPropertyHandling.Ignore,
                DateParseHandling = DateParseHandling.None,
                Converters ={

                        new IsoDateTimeConverter { DateTimeStyles = DateTimeStyles.AssumeUniversal },
                        StringToLongConverter.Instance,
                        StringToDecimalConverter.Instance,
                        StringToInvoiceStatusConverter.Instance,
                        StringToInvoiceStatusExceptionConverter.Instance
                }
            };

            _jsonSerializer = JsonSerializer.Create(_jsonSerializerSettings);

            using (var stream = await requestMessage.Content.ReadAsStreamAsync())
            {
                using (var reader = new StreamReader(stream))
                {
                    using (var jsonReader = new JsonTextReader(reader))
                    {
                        result = _jsonSerializer.Deserialize<WebhookInvoiceInfo>(jsonReader);
                    }
                }
            }

            return result;
        }

Next, we are going to write another method to retrieve the credentials from the Azure KeyVault and initialize the AtomicPay SDK properly:

        private static async Task<bool> InitAtomicPay()
        {
            bool isAuthenticated = false;

            //initialize Azure Service Token Provider and authenticate this function against the KeyVault
            var serviceTokenProvider = new AzureServiceTokenProvider();
            var keyVaultClient = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(serviceTokenProvider.KeyVaultTokenCallback));

            //getting the key vault url
            var vaultBaseUrl = ConfigurationManager.AppSettings["KeyVaultUri"];

            //retrieving the appId and secrets
            var accId = await keyVaultClient.GetSecretAsync(vaultBaseUrl, "atomicpay-accId");
            var accPBK = await keyVaultClient.GetSecretAsync(vaultBaseUrl, "atomicpay-accPBK");
            var accPVK = await keyVaultClient.GetSecretAsync(vaultBaseUrl, "atomicpay-accPVK");

            //initialize AtomicPay SDK and return the result
            await AtomicPay.Config.Current.Init(accId.Value, accPBK.Value, accPVK.Value);

            if (!AtomicPay.Config.Current.IsInitialized)
            {
                isAuthenticated = false;
                _traceWriter.Info("failed to authenticate with AtomicPay");
            }
            else
            {
                isAuthenticated = true;
                _traceWriter.Info("successfully authenticated with AtomicPay.");
            }

            return isAuthenticated;
        }

Now that we authenticated against the API, we are finally able to verify the state of the invoice sent by the webhook. Let’s put everything together:

        //renamed this function (not particularly necessary)
        [FunctionName("InvoiceVerifier")]
        public static async Task<HttpResponseMessage> Run([HttpTrigger(AuthorizationLevel.Function, "post", Route = null)]HttpRequestMessage req, TraceWriter log)
        {
            //enable other methods to write to log
            _traceWriter = log;
            _traceWriter.Info($"arrived at function trigger for 'InvoiceVerifier'...");

            //retrieving the payload
            var payload = await GetPaymentPayload(req);

            //if payload is null, there is something wrong
            if (payload != null)
            {
                //authenticate the Function against AtomicPay
                var isAuthenticated = await InitAtomicPay();
                if (isAuthenticated)
                {
                    AtomicPay.Entity.InvoiceInfoDetails invoiceInfoDetails = null;

                    _traceWriter.Info($"trying to verify invoice id {payload.InvoiceId} ...");

                    //verifying the invoice 
                    using (var atomicPayClient = new AtomicPay.AtomicPayClient())
                    {
                        var invoiceObj = await atomicPayClient.GetInvoiceByIdAsync(payload?.InvoiceId);

                        if (invoiceObj == null)
                            return req.CreateResponse(HttpStatusCode.BadRequest, $"there was an error getting invoice with id {payload?.InvoiceId}. Response was null");

                        if (invoiceObj.IsError)
                            return req.CreateResponse(HttpStatusCode.BadRequest, $"there was an error getting invoice with id {payload?.InvoiceId}. Message from AtomicPay: {invoiceObj.Value.Message}");
                        else
                            invoiceInfoDetails = invoiceObj.Value?.Result?.FirstOrDefault();
                    }

                    //this will be the point where we will trigger the push notification in the last part
                    switch (invoiceInfoDetails.Status)
                    {
                        case AtomicPay.Entity.InvoiceStatus.Paid:
                        case AtomicPay.Entity.InvoiceStatus.PaidAfterExpiry:
                        case AtomicPay.Entity.InvoiceStatus.Overpaid:
                        case AtomicPay.Entity.InvoiceStatus.Complete:
                            log.Info($"invoice with id {invoiceInfoDetails.InvoiceId} is paid");
                            break;
                        default:
                            log.Info($"invoice with id {invoiceInfoDetails.InvoiceId} is not yet paid");
                            break;
                    }
                }
                else
                {
                    //return early so we don't fall through to the OK response below
                    return req.CreateResponse(HttpStatusCode.Unauthorized, "failed to authenticate with AtomicPay");
                }
            }
            else
                //return early so we don't fall through to the OK response below
                return req.CreateResponse(HttpStatusCode.BadRequest, "there was an error getting the payload from request body");

            return req.CreateResponse(HttpStatusCode.OK, $"verified received invoice with id: '{payload?.InvoiceId}'");
        }

Save the changes to your code. After that, right click on the project and hit ‘Publish’ to update the function on Azure. Once you have published the updated code, you should be able to test your Function once again with Postman and receive a result like this:

Postman response from updated function

Conclusion

As you can see, Microsoft Azure provides a convenient way to store credentials and use them in your Azure Functions. This approach makes the whole process a lot more secure. This post also showed how easy it is to integrate the AtomicPay .NET SDK.

In the third and last post of this series, we will connect our Function to an Azure Notification Hub to send push notifications to all devices that have registered with a certain merchant account. As always, I hope this post was helpful for some of you.

Until the next post, happy coding, everyone!

Handle AtomicPay’s webhook with Microsoft Azure (Part 1/3)

In case you missed the announcement of the AtomicPay .NET SDK, you may want to read that post first.

What is a webhook?

Webhooks are gaining popularity, and a lot of services and websites provide them for easier interaction with other websites and services. Most commonly, webhooks are triggered by an event and send a request to the subscribers of the webhook. AtomicPay provides such a webhook for invoices and their payment state.

Why Azure?

There are several reasons, the main one being that Azure is my personal cloud service of choice. Besides that, Azure provides two additional features I am a fan of, namely Azure KeyVault for credential handling and Azure Notification Hub for pushing notifications to a broad range of platforms. This new series will be split into three parts:

  1. Handle the incoming notification with an Azure Function (this post)
  2. Verify the invoice state within the Azure Function, but store the API credentials in the most secure way (second post)
  3. Send a push notification via Azure Notificationhub to all devices that have linked a merchant’s account to it (third post)

Getting started

First, log in to your Azure account (or create a new one). In the left menu, select 'Create a resource' and search for 'Serverless Function App':

Azure Portal create new resource

In the next step, you’ll assign a name for your Function, create a new resource group and assign or create a storage account:

Azure new Function App settings

Once deployment is finished, click the ‘Go to resource’ button in the portal notification:

Azure notification deployment succeeded

The next step is to add the code for our function. Click on the '+' sign beside 'Functions':

add new function

Now you have to decide how you want to move on with development. I chose Visual Studio, because we need some NuGet packages later and I enjoy having IntelliSense while writing code. Just follow the instructions on screen to get your project up and running.

select ide for writing function code

Make sure you select the v1 (.NET Framework) version of the project. This is needed because we want to connect to a Notification Hub later (in the third post), and as of now, v2 does not support this (according to the docs).

VS new project dialog for Azure Function

With a click on ‘OK’, Visual Studio creates the new project for you.

Seeing some code… finally!

If you look at the just-generated code in the Function class, you may immediately be reminded of something. You're right: Azure Functions are based on ASP.NET, so there are some similarities. Let's have a look at the parameters of the Run method (you can change the method name, as the FunctionName attribute already declares the method as such).

The first parameter is the HttpTriggerAttribute, which defines the AuthorizationLevel, the allowed methods and an optional route. The only change that I made to the default attribute parameters is to remove the “get” method, because AtomicPay’s Webhook sends a POST request.

The second parameter is the HttpRequestMessage that contains the request sent from AtomicPay. If you have been working with APIs or other web requests already, this should be familiar. We will work with this part as the sample continues.

Last but not least, we have a logging provider that enables us to write to the Azure Function's log (which can be really useful if you're hunting down an error). We will also use this as the sample continues.

AtomicPay webhook payload

In order to be able to handle the payload from AtomicPay, it is helpful to have a payload sample. Luckily, AtomicPay provides detailed documentation for the webhook (as it does for the API). The payload looks like this:

 {
"invoice_id":"DBVZzHMxjjfdRZYeignEZC",
"order_id":"1235425",
"fiat_price":"1.00",
"fiat_currency":"USD",
"payment_currency":"BTC",
"payment_rate":"4,192.00",
"payment_address":"bc1qmtyax97phenvvs3sdg5r45kdcphd",
"payment_total":"0.00023855",
"payment_paid":"0.00023855",
"payment_due":"0.00000000",
"payment_txid":"14edecbf114f2b2e73cf7470916122286506de5",
"payment_confirmation":"3",
"status":"confirmed",
"statusException":"null"
} 

Depending on the cryptocurrency of the invoice, your function will be triggered multiple times – every time the status updates. As you probably want different actions for different states, it makes sense to create a class based on the JSON for deserialization and further processing:

    public class WebhookInvoiceInfo
    {
        [JsonProperty("invoice_id")]
        public string InvoiceId { get; set; }

        [JsonProperty("order_id")]
        public string OrderId { get; set; }

        [JsonProperty("fiat_price")]
        [JsonConverter(typeof(StringToDecimalConverter))]
        public decimal FiatPrice { get; set; }

        [JsonProperty("fiat_currency")]
        public string FiatCurrency { get; set; }

        [JsonProperty("payment_currency")]
        public string PaymentCurrency { get; set; }

        [JsonProperty("payment_rate")]
        public string PaymentRate { get; set; }

        [JsonProperty("payment_address")]
        public string PaymentAddress { get; set; }

        [JsonProperty("payment_total")]
        [JsonConverter(typeof(StringToDecimalConverter))]
        public decimal PaymentTotal { get; set; }

        [JsonProperty("payment_paid")]
        [JsonConverter(typeof(StringToDecimalConverter))]
        public decimal PaymentPaid { get; set; }

        [JsonProperty("payment_due")]
        [JsonConverter(typeof(StringToDecimalConverter))]
        public decimal PaymentDue { get; set; }

        [JsonProperty("payment_txid")]
        public string PaymentTxid { get; set; }

        [JsonProperty("payment_confirmation")]
        [JsonConverter(typeof(StringToLongConverter))]
        public long PaymentConfirmation { get; set; }

        [JsonProperty("status")]
        [JsonConverter(typeof(StringToInvoiceStatusConverter))]
        public InvoiceStatus Status { get; set; }

        [JsonProperty("statusException")]
        [JsonConverter(typeof(StringToInvoiceStatusExceptionConverter))]
        public InvoiceStatusException StatusException { get; set; }
    }

You may have noticed that some of the properties above have converters attached. They are needed to convert the values into their correct types, as the API generally sends just strings. The class and its converters are part of the AtomicPay .NET SDK (find it on NuGet or GitHub), which we will also use in the second part of this blog series. Add the library to the project in your preferred way.

Working with the webhook’s payload

For this first part of the series, we are just going to write log entries based on the status of the incoming webhook. Just add these lines to the Run method of your function:

        [FunctionName("Function1")]
        public static async Task<HttpResponseMessage> Run([HttpTrigger(AuthorizationLevel.Function, "post", Route = null)]HttpRequestMessage req, TraceWriter log)
        {
            log.Info($"arrived at function trigger for AtomicPay webhook.");
            log.Info("trying to get payload object from request...");
            WebhookInvoiceInfo result = null;
            _jsonSerializerSettings = new JsonSerializerSettings()
            {
                MetadataPropertyHandling = MetadataPropertyHandling.Ignore,
                DateParseHandling = DateParseHandling.None,
                Converters ={

                        new IsoDateTimeConverter { DateTimeStyles = DateTimeStyles.AssumeUniversal },
                        StringToLongConverter.Instance,
                        StringToDecimalConverter.Instance,
                        StringToInvoiceStatusConverter.Instance,
                        StringToInvoiceStatusExceptionConverter.Instance
                }
            };

            _jsonSerializer = JsonSerializer.Create(_jsonSerializerSettings);

            using (var stream = await req.Content.ReadAsStreamAsync())
            {
                using (var reader = new StreamReader(stream))
                {
                    using (var jsonReader = new JsonTextReader(reader))
                    {
                        result = _jsonSerializer.Deserialize<WebhookInvoiceInfo>(jsonReader);
                    }
                }
            }

            if (result != null)
                log.Info($"received payload for invoice id: {result.InvoiceId} with status {result.Status.ToString()}");
            else
                log.Info("there was an error getting the payload from the request body");

            return result == null ?
                req.CreateResponse(HttpStatusCode.BadRequest, "there was an error getting the payload from the request body") :
                req.CreateResponse(HttpStatusCode.OK, "OK");
        }

We are reading the request content from the HttpRequestMessage and deserializing it. If there is no payload or something went wrong, our result will be null. We write a log entry indicating whether we received a payload, including the status of the invoice within it. In the end, we return the appropriate response to the POST request.

Testing the function locally

Another big advantage of using Visual Studio to write the Azure Function is the ability to debug locally. If you haven't done so yet, you'll need to install the Azure Functions Core Tools (CLI) – Visual Studio will prompt you to do so. Once that is done, we are able to debug locally:

Visual Studio debugging Azure function

Now that our local service is running, let’s test the function with Postman (this is one of my test invoices):

Postman request to local API test
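If you don't have a test invoice at hand, a minimal request body based on the properties we declared above could look like the following sketch. All values here are made-up placeholders, and a real AtomicPay payload contains more fields:

{
    "payment_txid": "0000000000000000000000000000000000000000000000000000000000000000",
    "payment_confirmation": "6",
    "status": "paid",
    "statusException": "none"
}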

As we can see, everything went smoothly and our Function is working as expected:

API log local test

Now that we have verified our Function is able to run, we are ready to publish it.

Publishing the Function to Azure

To initiate the process, right-click on the project name and select ‘Publish’. As we have already defined our project in the Azure Portal, we are using the ‘Select Existing’ option in the next step:

Publish to Azure Step 1

I am not selecting the ‘run from package file’ option here. Click ‘Publish’ to continue. You will be prompted with a selection window once again (you may have to log in to your account first). Select your created project:

Publish to Azure Step 2

Confirm your selection with ‘OK’. After some additional preparation steps, Visual Studio will allow you to click the ‘Publish’ button:

Publish to Azure Step 3

Once you have published your function, visit the Azure Portal once again and select your published function. The portal will show you the function.json declaration file. In the upper part of the website, you will find options to ‘Save‘, ‘Run‘ and ‘</> Get function URL‘. Click on the last one to view your function’s URL:

Azure portal get function url

I am pretty sure you noticed the code parameter in the URL. This parameter is needed as an authorization parameter with every call to your Azure Function. To test your function on Azure, copy the URL and paste it into Postman (replacing the localhost URL we used for local testing). You should get the same result – optimally, an OK response.
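If you prefer calling the published function from code instead of Postman, a quick C# sketch could look like this. The host name, function name, key and request body below are all hypothetical placeholders – take the real values from the ‘Get function URL’ dialog:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class WebhookTester
{
    private static readonly HttpClient _client = new HttpClient();

    public static async Task CallFunctionAsync()
    {
        // placeholder values - replace with the real ones from 'Get function URL'
        var functionUrl = "https://yourfunctionapp.azurewebsites.net/api/Function1?code=YOUR_FUNCTION_KEY";

        // minimal placeholder body; a real AtomicPay webhook payload contains more fields
        var payload = "{ \"status\": \"paid\" }";

        using (var content = new StringContent(payload, Encoding.UTF8, "application/json"))
        {
            var response = await _client.PostAsync(functionUrl, content);
            Console.WriteLine($"{(int)response.StatusCode} {response.ReasonPhrase}");
        }
    }
}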

Conclusion

If you followed along and your function is up and running – congratulations! You just created a basic “serverless” (aka running on someone else’s server) function. In the next post of this series, we will initialize the AtomicPay SDK properly, after adding our API credentials to another feature running on Azure – the KeyVault. As always, I hope this post is helpful for some of you.

Until the next post, happy coding, everyone!

Title image credit


Book review: Learning Windows Azure Mobile Services for Windows 8 and Windows Phone 8 (Geoff Webber-Cross)

During the last months, I used the little spare time I had when I wasn’t in the mood for programming to read Geoff’s latest book and dive deeper into Azure Mobile Services. Geoff is well known in the community for his Azure experience, and I absolutely recommend following him! I am really glad he asked me to review his book, and I need to apologize that it took so long to get this review up.

The book itself is very well structured, built around a real, working XAML-based game that runs on both Windows 8 and Windows Phone 8 and connects them to one single Mobile Service.

Even if you are completely new to Azure, you will quickly get things done, as the whole book is full of step-by-step instructions. Let’s have a quick look at what you will learn while reading this book:

  1. Prepare your Azure account and set up your first Mobile Service
  2. Bring your Mobile Service to life and connect Visual Studio
  3. Secure your users’ data
  4. Create your own API endpoints
  5. Use Git via the console for remote development
  6. Manage push notifications for both Windows and Windows Phone apps
  7. Use the advantages of the Notification Hub
  8. Best practices for app development – some very useful general tips!

I already use a Mobile Service with my Windows Phone app TweeCoMinder. I have also started a Windows 8 version of that app, which basically only needs to be connected to my existing Azure Mobile Service to be finished.


While reading Geoff’s book, I learned how I can achieve this effectively and also improve my code for handling the push notifications on both systems. The book is an absolutely worthy investment if you are looking into Azure and Mobile Services, and it has a lot of sample code that can be reused in your own applications.

As this is my first book review ever, feel free to leave your feedback in the comments.

You can buy the book right here.

Happy coding, everyone!


Getting productive with WAMS: How to call Twitter REST API 1.1 from a scheduled script


As I promised in my first post about Windows Azure Mobile Services, I will show you how to call the Twitter REST API 1.1 from a scheduled script. The documentation of the http request object only uses Twitter API 1.0 (which is no longer available).

First, you will need a Consumer key and a Consumer secret for your app. Just go to dev.twitter.com, register with your Twitter account and then add a new application.

The second thing you will need is the so-called Access token and Access token secret. Both are user dependent; without them, Twitter will return an error saying that your app is not authorized to use this account for anything on Twitter.

There are several ways to obtain these values. As I am registering the user within my phone app, I am uploading these values from the phone and storing them in my Mobile Services database.

To build the request to Twitter, we need several additional pieces of data:

  • a timestamp for the OAuth header and the signature string
  • a random number to secure the request (= nonce)
  • an OAuth signature (generated from the sorted, encoded request parameters)
  • an HMAC-SHA1 encoded hash string

All of this data goes into our request to Twitter.

Let’s start with the “simple” things:

Generate the timestamp:

//generating the timestamp for the OAuth Header and signature string
var timestamp  = new Date() / 1000;
timestamp = Math.round(timestamp);

Generate the nonce:

function generateNonce() {
    var code = "";
    for (var i = 0; i < 20; i++) {
        code += Math.floor(Math.random() * 9).toString();
    }
    return code;
}

Generate the OAuth signature:

//generating the OAuth signature for the Twitter request
function generateOAuthSignature(method, url, data) {
    //remove query string parameters from the url
    var index = url.indexOf('?');
    if (index > 0)
        url = url.substring(0, index);

    //the signing key is built from the consumer secret and the token secret
    var signingToken = encodeURIComponent(ConsumerSecret) + "&" + encodeURIComponent(twitterAccessTokenSecret);

    //collect all parameters except the signature itself...
    var keys = [];
    for (var d in data) {
        if (d != 'oauth_signature') {
            keys.push(d);
        }
    }

    //...and sort them alphabetically, as required by OAuth
    keys.sort();
    var output = method + "&" + encodeURIComponent(url) + "&";
    var params = "";
    keys.forEach(function (k) {
        params += "&" + encodeURIComponent(k) + "=" + encodeURIComponent(data[k]);
    });
    params = encodeURIComponent(params.substring(1));

    return hashString(signingToken, output + params, "base64");
}

Generate the HMAC-SHA1 hash string:

//generate the hash string, HMAC-SHA1 encoded as required by Twitter's API v1.1
function hashString(key, str, encoding) {
    var hmac = crypto.createHmac("sha1", key);
    hmac.update(str);
    return hmac.digest(encoding);
}

Now that we have all of these functions in place, we are ready to call the Twitter API. In this example, we are requesting the user’s profile data:

function requestToTwitter() {
    //the url declaration has to be in this function to make the request work!
    //declaring it in another function would cause an error 401 from Twitter's API
    url = 'https://api.twitter.com/1.1/users/show.json?user_id=' + twitterId;

    //generate the data for the request to Twitter
    //this data is used in the signature string as well as in the Authorization header
    var oAuthData = { oauth_consumer_key: ConsumerKey, oauth_nonce: nonce, oauth_signature: null, oauth_signature_method: "HMAC-SHA1", oauth_timestamp: timestamp, oauth_token: twitterAccessToken, oauth_version: "1.0" };

    var sigData = {};
    for (var k in oAuthData) {
        sigData[k] = oAuthData[k];
    }
    sigData['user_id'] = twitterId;

    var sig = generateOAuthSignature('GET', url, sigData);
    oAuthData.oauth_signature = sig;

    //build the Authorization header from the signed data
    var oAuthHeader = "";
    for (k in oAuthData) {
        oAuthHeader += "," + encodeURIComponent(k) + "=\"" + encodeURIComponent(oAuthData[k]) + "\"";
    }
    oAuthHeader = oAuthHeader.substring(1);

    //very important: do not miss the space after 'OAuth'!
    authHeader = 'OAuth ' + oAuthHeader;

    var reqOptions = {
        uri: url,
        headers: { 'Accept': 'application/json', 'Authorization': authHeader }
    };

    var httpRequest = require('request');
    httpRequest(reqOptions, callback);
}

var callback = function (err, response, body) {
    if (err) {
        console.log(err);
    } else if (response.statusCode !== 200) {
        console.log("from twitter callback " + response.statusCode + " response: " + response.body);
    } else {
        var userProfile = JSON.parse(body);
        UserIdFromTwitter = userProfile.id;
        twitterScreenName = userProfile.screen_name;
    }
};

You may have noticed that there are several variables that are not declared within these functions. Just declare them globally at the top of your scheduled script – a sketch follows below.
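For reference, here is a minimal sketch of what those global declarations could look like. The key, secret and token values are placeholders you need to replace with your own; the timestamp is already declared in the snippet further up:

//placeholders - replace with your app's values from dev.twitter.com
var ConsumerKey = 'YOUR_CONSUMER_KEY';
var ConsumerSecret = 'YOUR_CONSUMER_SECRET';

//per-user values, e.g. loaded from your Mobile Services database
var twitterAccessToken = 'USER_ACCESS_TOKEN';
var twitterAccessTokenSecret = 'USER_ACCESS_TOKEN_SECRET';
var twitterId = '12345'; //the numeric Twitter user id

//used by the hashString function
var crypto = require('crypto');

//generated once per request
var nonce = generateNonce();

//set inside requestToTwitter and the callback
var url;
var authHeader;
var UserIdFromTwitter;
var twitterScreenName;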

You can read more about the OAuth authorization process at http://oauth.net/.

There are more services out there that use OAuth, so you should be able to adapt this for other requests, like getPocket.com (formerly Read It Later) and others.

As always, I hope this post was helpful for some of you.

Happy coding!
