
Sending push notifications to your Xamarin app from WordPress with Azure, Part II – the Function

First, let’s have a look at the lineup of this series once again:

  • Preparing your WordPress (blog/site)
  • Preparing the Azure Function and connecting the Webhook (this post)
  • Preparing the Notification Hub
  • Send the notification to Android
  • Send the notification to iOS
  • Adding in Xamarin.Forms

Creating a new Azure Function in Visual Studio

The simplest approach to creating a new Azure Function (if you already have an Azure account) is to add a new Functions project to your Xamarin solution:

After the project is loaded, double-click the .csproj file in the Solution Explorer to open it for editing. Make sure you have the following PropertyGroup and ItemGroup entries:

  <PropertyGroup>
    <TargetFramework>net461</TargetFramework>
    <AzureFunctionsVersion>v1</AzureFunctionsVersion>
  </PropertyGroup>
  <ItemGroup>
    <!--DO NOT UPDATE THE AZURE PACKAGES, IT WILL BREAK EVERYTHING!!!!-->
    <PackageReference Include="Microsoft.Azure.NotificationHubs" Version="1.0.9" />
    <PackageReference Include="Microsoft.Azure.WebJobs.Extensions.NotificationHubs" Version="1.3.0" />
    <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="1.0.31" />
    <PackageReference Include="Newtonsoft.Json" Version="9.0.1" />
  </ItemGroup>

You may notice that I put an all-caps comment into the ItemGroup. As I am using a v1 Function, these are the latest packages I am able to use. They do their job and give us an easy way to bind the Function to the Azure NotificationHub, which we are going to implement in the next post. I intentionally delayed updating the whole setup to a v2 Function at this point.

Processing the Webhook payload

In order to be able to process the payload (remember, we are getting a JSON string) from our WordPress Webhook, we need to deserialize it. Let’s create the class that holds all information about it:

using Newtonsoft.Json;

namespace NewPostHandler
{
    public class PublishedPostNotification
    {
        [JsonProperty("id")]
        [JsonConverter(typeof(StringToLongConverter))]
        public long Id { get; set; }

        [JsonProperty("title")]
        public string Title { get; set; }

        [JsonProperty("status")]
        public string Status { get; set; }

        [JsonProperty("featured_media")]
        public string FeaturedMedia { get; set; }
    }
}

The class is pretty straightforward; we will use this implementation as-is for the payload we send to Android later on. The use of the StringToLongConverter is optional. For completeness, here is its implementation:

using Newtonsoft.Json;
using System;

namespace NewPostHandler
{
    public class StringToLongConverter : JsonConverter
    {
        public override bool CanConvert(Type t) => t == typeof(long) || t == typeof(long?);

        public override object ReadJson(JsonReader reader, Type t, object existingValue, JsonSerializer serializer)
        {
            if (reader.TokenType == JsonToken.Null) return default(long);
            var value = serializer.Deserialize<string>(reader);
            if (long.TryParse(value, out var l))
            {
                return l;
            }

            return default(long);
        }

        public override void WriteJson(JsonWriter writer, object untypedValue, JsonSerializer serializer)
        {
            if (untypedValue == null)
            {
                serializer.Serialize(writer, null);
                return;
            }
            var value = (long)untypedValue;
            serializer.Serialize(writer, value.ToString());
            return;
        }

        public static readonly StringToLongConverter Instance = new StringToLongConverter();
    }
}

Now that we have prepared our data transfer object (DTO), it is time to finally have a look at the processor code.

[FunctionName("HandleNewPostHook")]
public static async Task<HttpResponseMessage> Run([HttpTrigger(AuthorizationLevel.Function, "post", Route = null)]HttpRequestMessage req, TraceWriter log)
{
    _log = log;
    _log.Info("arrived at 'HandleNewPostHook' function trigger.");

    //ignoring any query parameters, only using POST body

    PublishedPostNotification result = null;

    try
    {
        _jsonSerializerSettings = new JsonSerializerSettings()
        {
            MetadataPropertyHandling = MetadataPropertyHandling.Ignore,
            DateParseHandling = DateParseHandling.None,
            Converters =
            {
                new IsoDateTimeConverter { DateTimeStyles = DateTimeStyles.AssumeUniversal },
                StringToLongConverter.Instance
            }
        };

        _jsonSerializer = JsonSerializer.Create(_jsonSerializerSettings);

        using (var stream = await req.Content.ReadAsStreamAsync())
        {
            using (var reader = new StreamReader(stream))
            {
                using (var jsonReader = new JsonTextReader(reader))
                {
                    result = _jsonSerializer.Deserialize<PublishedPostNotification>(jsonReader);
                }
            }
        }

        if (result == null)
        {
            _log.Error("There was an error processing the request (serialization result is NULL)");
            return req.CreateResponse(HttpStatusCode.BadRequest, "There was an error processing the post body");
        }

        //subject of the next post
        //await TriggerPushNotificationAsync(result);
    }
    catch (Exception ex)
    {
        _log.Error("There was an error processing the request", ex);
        return req.CreateResponse(HttpStatusCode.BadRequest, "There was an error processing the post body");
    }

    if (result.Id != default)
    {
        _log.Info($"initiated processing of published post with id {result.Id}");
        return req.CreateResponse(HttpStatusCode.OK, "Processing new published post...");
    }
    else
    {
        _log.Error("There was an error processing the request (cannot process result Id with default value)");
        return req.CreateResponse(HttpStatusCode.BadRequest, "There was an error processing (postId not valid)");
    }
}

Let’s go through the code. The first thing I want to know is whether we ever enter the Function, so I log the entry. The second step is setting up the JsonSerializer to deserialize the payload into the DTO class I created before.

There are several scenarios that I handle with different responses. Ideally, we run through and arrive at the TriggerPushNotificationAsync call, followed by the ‘OK’ response if the post id received from our Webhook is valid. During testing, however, I also ran into other situations, in which I return a ‘Bad Request’ response with a hint that something went wrong.

The implementation of the TriggerPushNotificationAsync method is not shown in this post as it will be subject of the next post in this series.

Testing the code locally

One of the reasons I chose to create the Function in Visual Studio is the ability to debug it locally. If you don’t have the necessary tools installed, Visual Studio will prompt you to do so. After installing them, you’ll be able to follow along.

Once the service is running, we are able to test our function. If you haven’t heard about it yet, Postman is the easiest tool for that. Copy the function URL and paste it into the URL field in Postman. Next, add a sample JSON payload to the body (settings: raw, JSON) and hit the ‘Send’ button.
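A minimal body matching the PublishedPostNotification DTO could look like this (the values are made up – note that the numbers arrive as strings here, which is exactly why we added the StringToLongConverter):

{
    "id": "1234",
    "title": "Hello from the Webhook",
    "status": "publish",
    "featured_media": "5678"
}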

If all goes well, Postman will give you a success response:

The Azure CLI will also write some output:

As you can see, all of our log entries were written to the CLI, plus some additional information from Azure itself. Don’t worry about the anonymous authorization state for the moment; this is just because we are running locally. In theory, we could already publish the function to Azure now. As we know that we will extend the Function in the next post, however, we will hold off for now.

Conclusion

As you can see, writing an Azure Function isn’t as complicated as it sounds. Visual Studio brings all the tools you need to get started pretty fast. The ability to test the Function code locally is another big advantage that comes with Visual Studio.

In the next post, we will configure the NotificationHub on Azure and extend our Function to call into it and fire the notifications.

Until the next post, happy coding, everyone!
Posted by msicc in Android, Azure, Dev Stories, iOS, Xamarin, 1 comment
Sending push notifications to your Xamarin app from WordPress with Azure, Part I [new series]

Overview

Choosing the “right” solution for sending push notifications isn’t easy if you have a WordPress blog. There are quite a few options to choose from, and the right one for you might differ from my choice. I am using the most generic solution – a Webhook that triggers an Azure Function, which in turn fires the notification via Azure Notification Hubs. This series will grow as follows:

  • Preparing your WordPress (blog/site) (this post)
  • Preparing the Azure Function and connecting the Webhook
  • Preparing the Notification Hub
  • Send the notification to Android
  • Send the notification to iOS
  • Adding in Xamarin.Forms

The app implementations are very platform-specific, but it is quite easy to integrate the post notifications into a Xamarin.Forms app (which will be the last post in this series). If you want to see the whole integration in action already, feel free to download my blog reader app.

WordPress plugins for the win

If you have a self-hosted blog like I do, you may know that the plugin ecosystem is there to help you with a lot of things that WordPress doesn’t provide out of the box. While a WordPress-hosted site has Webhook integration without an additional plugin, we need one to create such a Webhook on a self-hosted WordPress blog. The plugin I am using is simply called “Notification” and can be found here.

To install the plugin, follow these simple steps:

  • choose “Plugins” on your WordPress dashboard
  • select “Add New” and type in “notification”
  • Hit the “Install Now” button
  • Activate the plugin

Exploring the options

Once you have installed and activated the plugin, you will have a new option in the dashboard menu. Let’s have a look at the options.

  • Notifications – this shows you a list of your currently active notifications
  • Add New Notification – lets you create a new notification
  • Extensions – the plugin allows you to extend your notifications with external services like Slack, Twitter or SendGrid to engage even more users. We do not need these for the webhook, however.
  • Settings – the control panel for the plugin – this is where we will spend the rest of this blog post

Enabling the Webhook

On the Settings page, select the “CARRIERS” option. The plugin uses so-called carriers to send out the notifications. By default, the Email carrier is active. I do not need that one for the moment, so I deactivated it and activated the Webhook carrier instead:

Setting Post Triggers

The next step is to verify we have the trigger for posts active:

You can modify the other triggers as well, but for the moment, I am focusing just on posts. I am thinking about integrating comments in the future, which will allow even more interaction from within my app.

Add a new notification

Let’s create our first notification. Select the “Add New Notification” action, which will bring up this page:

Select the “Add New Carrier” option and add the Webhook carrier:

Next, select the Trigger for the Webhook. During development, I am using the “saving draft” option, as it allows me to easily trigger a notification without annoying anyone:

This will enable the “Merge Tags” list on the right-hand side. To create the Webhook payload, we need to add some arguments (using the “Add argument” button). Tip: you can copy a merge tag by just clicking on it and pasting it into the “Value” box:

Don’t forget to activate the JSON format – we do not want it to be sent as XML. Make sure the Carrier is enabled and hit the save button on the upper right.

Testing the Webhook

Now that we finished the setup of our Webhook, let’s test it. To do so, go to the “Settings” page again and select “DEBUGGING”. Check the “Enable Notification logging” box and click the “Save Changes” button:

To test the notification, just create a new blog post and save the draft. Go back to the “DEBUGGING” setting, where you will be presented with a new Notification log entry. Expanding this log entry, you will see some common data about the notification:

If you scroll a bit further down, you will see the payload of the Webhook. Sadly, you won’t get the raw JSON string, but a structured overview of the payload:

Verify that the payload contains all the data you need and adjust the settings if necessary. Once that is done, we are ready to go to the next blog post (coming soon).

Conclusion

In this post, I showed you how to create a Webhook that will trigger our upcoming Azure Function. Thanks to the “Notification” plugin, the process is pretty straightforward. In the next post, we will have a look at the Azure Function that will handle the Webhook.

Until the next post, happy coding, everyone!

Posted by msicc in Android, Azure, Dev Stories, iOS, Xamarin, 1 comment
MSicc’s Blog version 1.6.0 out now for Android and iOS

Here are the new features:

Push Notifications

With version 1.6.0 of the app, you can opt in to receive push notifications once I publish a new blog post. I use an Azure Function (v1, for the ease of bindings – at least for now) and, of course, an Azure NotificationHub. The Function gets called from a Webhook (via a plugin on WordPress), which triggers it to run. The next blog posts I write will be about how I achieved this, btw.

New Design using Xamarin.Forms Shell

I also overhauled the design of the application. Initially, it was a MasterDetail app, but I never felt happy with that. Using Xamarin.Forms.Shell, I optimized the app to only show the last 30 posts I wrote. If you need older articles, you’ll be able to search for them within the app. The new design is a “v1” and will be constantly improved along with new features.

Bugs fixed in this release
  • fixed a bug where code snippets were not correctly displayed
  • fixed a bug where the app did not refresh posts after cleaning the cache
  • other minor fixes and improvements

I hope some of you will use the application and give me some feedback.

You can download the app using these links:
iOS | Android

Until the next post, happy coding, everyone!

Posted by msicc in Android, Azure, Dev Stories, iOS, Xamarin, 0 comments
Create an additional SSH-login enabled user for your Azure Linux VM without third-party tools

As I am moving forward in my current Linux journey, I recently came into a situation where a second user would have been handy. So I tried a few things to create the new user and allow that user to log in via SSH only.

What the h*** is SSH?

SSH stands for Secure Shell and is a protocol for connecting securely to a remote machine over an encrypted channel. The security is provided by cryptographic key pairs, where the server only knows the public key and the client that wants to connect needs the matching private key. The most popular implementation is OpenSSH, which has been available as an optional feature on Windows 10 since last fall. If you want to learn more about SSH, read the Wikipedia entry as well.

Create the SSH key pair on Windows

Install the OpenSSH client

First, install the OpenSSH client on your Windows machine. Proceed as follows:

  • open the start menu and type ‘apps’
  • select ‘Apps and features’ settings page
  • select ‘Optional features’
  • click on ‘Add a feature’
  • search ‘OpenSSH client’, click on it and ‘Install’

If you are scrolling down the list of installed features, you should find the entry for the OpenSSH client.

Note: If you are not able to install this feature, it may be the right time to update your Windows installation to the latest version.
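Alternatively, you can verify the installation from a command prompt; this should print the installed OpenSSH version:

ssh -V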

Create your SSH key pair

If your user profile does not have a folder called ‘.ssh’, it is time to create it now. Type ‘%USERPROFILE%‘ in the Windows Explorer’s address bar to get to the right folder immediately and create the folder.

Now that we have the folder OpenSSH searches for, we are already able to create our new SSH keypair. Open a command prompt and type this:

ssh-keygen -t rsa -b 4096 -C "newuser@machine.com"

This will initiate the keypair creation. The -C parameter is optional and can be anything. You’ll often find these keys created with <user at address> combinations. After OpenSSH has created your keypair in memory, it will ask for a location to save the file. If you do not enter anything, it’ll save it as id_rsa into the .ssh folder created earlier. If you want to save it under another file name, you can do so:

C:\Users\username\.ssh/id_test

Please note that the file name (without any extension!) is separated by ‘/’, not by ‘\’. If you do not respect this, you will get an error telling you that the file name does not exist. This only happens if you don’t use the default file name. After the files are created, OpenSSH asks you for a passphrase to protect your private key. Nowadays, every single keypair should be password protected (just my 2cts). Once the creation is complete, you’ll see something like this:

Your identification has been saved in C:\Users\msicc\.ssh/id_test.
Your public key has been saved in C:\Users\msicc\.ssh/id_test.pub.
The key fingerprint is:
SHA256:8LK33fWKXMkY5FtcN4uU1v9SyCSUqKNp2T/PurpZCRU newuser@machine.com
The key's randomart image is:
+---[RSA 4096]----+
|          E...   |
|          .o. o  |
|      .  .. o+.oo|
|       oo. oo=.o=|
|      .=S.  o.=.o|
|      =o.. . * o.|
|     .. ..o o * .|
|       . =o+ + o |
|        =o+=* ...|
+----[SHA256]-----+

As you can see, we are able to create SSH keys without any third-party application on Windows. You can now safely close the console window (e.g. by typing ‘exit’).

Deploying the public key to the server

Of course, the whole key pair only makes sense if we use it to secure our client/server communication. On Linux, there is the easy-to-use ssh-copy-id command to deploy the key. Some Windows tutorials show the scp command, but I never got that working with my Azure VM. The only way left was to deploy the key manually (which is not that difficult once you know how).

The manual way

After logging in (using Azure CLI), we are going to add a new user to the gang:

sudo useradd -m newuser

This will create a new user and their home directory. Whether the OS asks you for a password depends on its settings for empty passwords. The next step is to add the user to one or more groups if necessary:

sudo usermod -aG sudo newuser

The -aG parameter of the usermod command appends the specified group(s) to the user’s group list. If you want to add the user to more than one group, separate them with a comma (no whitespace after the comma!).
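For example, assuming a second group named ‘shared’ already exists (we will actually create one in the bonus section below), adding the user to both groups in one go looks like this:

sudo usermod -aG sudo,shared newuser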

To make things a little bit easier for us, we are going to log in as the new user:

sudo su newuser

Note: If you want to proceed without logging in as the new user, you’ll need to change ~ to /home/newuser for the following commands.

To make Linux accept our prior created SSH key only for our new user, we need to create the .ssh folder and the file for allowed keys in the new user’s home directory. Let’s start with the .ssh directory:

sudo mkdir ~/.ssh
sudo chmod 0700 ~/.ssh
sudo chown newuser:newuser ~/.ssh

Let’s break it down. Obviously, the mkdir command creates the .ssh folder. The chmod command with the 0700 parameter gives full access to our new user only. Finally, the chown command makes our new user the owner of the directory.

Linux saves the allowed keys in a file called ‘authorized_keys’, so that’s the next thing we are going to create. After creation, we change the file’s access rights to 0644, which leaves it writable only for our new user and read-only for everyone else. Execute these commands:

sudo touch ~/.ssh/authorized_keys
sudo chmod 0644 ~/.ssh/authorized_keys

We’re getting closer… earlier, OpenSSH created two files in the .ssh folder on Windows. We are now going to copy the contents of the .pub file into the authorized_keys file. You can extract the content of the .pub file with Notepad on Windows. Once you have it in your clipboard, open the authorized_keys file (using your favorite editor):

nano ~/.ssh/authorized_keys
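If you prefer the clipboard route over Notepad, you can also pipe the public key directly into the clipboard on Windows (assuming the id_test file name from earlier):

type %USERPROFILE%\.ssh\id_test.pub | clip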

Paste the content of your .pub file by right-clicking on the Azure CLI window. Save the file and close it. To secure the new user account, we need to apply some additional steps.

The first one is to delete and disable the password of our new user:

sudo passwd -d -l newuser

The -d parameter completely deletes the password. The -l parameter locks the account, preventing the user from setting a new password (without using sudo, that is).
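You can verify the result with passwd -S, which prints the account’s password status – the ‘L’ in the output indicates the locked state:

sudo passwd -S newuser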

Additional security measures

Now that we have our SSH public key on the server, we are safe to disable password-based login in general. To do so, we need to modify the sshd_config file of our server. Open it with the editor of your choice:

sudo nano /etc/ssh/sshd_config

Search for PasswordAuthentication and ChallengeResponseAuthentication. Uncomment these entries if necessary by removing the ‘#’ in front of the line and set them to ‘no’. Some tutorials floating around the web tell you to set UsePAM to no as well, but following this recommendation always disabled the login completely for me on the Azure VM, and I always had to reset the SSH config via the Azure CLI.
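After the edit, the two relevant lines should simply read:

PasswordAuthentication no
ChallengeResponseAuthentication no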

Save and exit the sshd_config file. For these changes to take effect, we need to restart the SSH service on our server:

sudo service ssh restart

Wait a few seconds and then verify that the service restarted by checking its status:

systemctl status ssh.service

That’s it, we have deployed our new SSH key to our server and taken additional security measures. There’s just one thing left: trying to log in via SSH as the new user. This is a pretty easy task. In the Azure CLI (or a local command prompt), enter the following command:

ssh -i %USERPROFILE%\.ssh\id_test newuser@machine.com

After entering your key’s passphrase, you should be logged in just like you did with your admin user.

Bonus: add a shared directory

More often than not, you might want to create scripts or other files that are available to all users. Follow these simple steps:

sudo groupadd shared
sudo usermod -aG shared newuser

sudo mkdir -p /var/helpers
sudo chgrp -R shared /var/helpers
sudo chmod -R 2775 /var/helpers

The first two lines create a new group and assign our new user to the group. Repeat the second command for every user you want to be in that group.

Then we create the shared folder /var/helpers on our server. Utilizing the chgrp command, we give ownership to our shared group. Last but not least, we modify the access rights once again for the /var/helpers folder. The combination 2775 means that every new file inherits the group from the folder, allowing all members of the group to read, write and execute the file, while users outside the shared group can only read and execute it.
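A quick check with ls -ld should now show the setgid bit in the group permissions (the ‘s’ in drwxrwsr-x):

ls -ld /var/helpers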

Conclusion

As you can see, one does not always have to use third-party tools to get things done. Like I said at the beginning of this post, once you know the necessary steps, it is pretty easy to create a new SSH key pair, create a new user and manually deploy the public key to the server. As always, I hope this post is helpful for some of you.

Helpful links

Title Image Credit (Pixabay)

Posted by msicc in Azure, Linux, 0 comments
Run your own Bitcoin Full Node on an Azure Linux VM

One year ago, I began my journey into the crypto and blockchain space. Recently, multiple circumstances got me thinking about my own crypto-related server. Of course, I chose Azure for running this server (for the time being). It has been a while since I last touched Linux, so I had quite a bit to refresh and learn. In this post, I’ll show you the steps needed to set up an independent full node to support the Bitcoin network.

Setting up the Virtual Machine on Azure

First, we need to install the Azure CLI on our computer. We will use it to connect to our virtual machine later on via SSH. Follow the instructions found here.

The second prerequisite is a program to generate SSH keys. You can use the OpenSSH client shipping with recent versions of Windows 10, the Azure CLI, or PuTTY (follow these instructions).

Create the VM

Once you have installed the CLI and your SSH keys are created, log into your Azure account. Go to the marketplace, and search for ‘ubuntu‘. Choose Ubuntu Server 18.04 LTS and hit the ‘Create‘ button in the next window.

Fill in the details of your Azure VM on the first page of the creation module:

Do not forget to set the SSH admin user, as adding one afterward is not as easy as it seems and often fails, too (I learned that the hard way). Also, we need to allow traffic through the default HTTP (80) and SSH (22) ports. Once you have configured everything, go to disks.

I did not select premium disks but went with a Standard HDD instead to save some money. You can change that to your needs. The important part here is to NOT use managed disks, as we will need to resize the OS disk after the creation of the VM. Follow the rest of the steps in the creation wizard and create your virtual machine. Once the machine is created, you should create a DNS label (hit the ‘Configure’ link on the VM’s overview page to get to the IP settings).

Log in via SSH

Let’s check whether we can log in to our Linux VM via the Azure CLI. Open the ‘Microsoft Azure Command Prompt’ on your PC. To be able to connect to our virtual machine, we first need to log in to Azure to obtain an access token for our session:

az login 

This will open a new browser tab, where you need to log in to your account again. After that, we are redirected back to the CLI. Once that has happened, go back to the overview page of your VM and click on ‘Connect’. This opens a pane with the RDP and SSH connection options. Select SSH and copy the text below ‘Login using VM local account’:

Paste it into the Azure Command Prompt window and provide your passphrase (you should never use a passphrase-less SSH key). If you are greeted by the VM’s welcome message, you have successfully logged in.

Now that we have verified that we are able to log in via SSH, type exit to log out again, as we have one step left to perform on the virtual machine.

Resizing the OS disk

A Bitcoin full node needs to download and verify the whole blockchain. The current size of the blockchain is around 250 GB, which would never fit into the default OS disk of our VM. Luckily, it is pretty easy to resize the OS disk by running a few commands in the Azure CLI.

First, stop the virtual machine:

az vm stop --resource-group YourResourceGroupName --name YourVMName 

Resizing the OS disk requires the VM to be deallocated (this may take a few minutes):

az vm deallocate --resource-group YourResourceGroupName --name YourVMName

Once deallocation has finished, we are able to resize the OS disk with this command:

az vm update --resource-group YourResourceGroupName --name YourVMName --set storageProfile.osDisk.diskSizeGB=1024

Once you see the new size in the returned response from Azure, we can start the VM again:

az vm start --resource-group YourResourceGroupName --name YourVMName 

Depending on the distribution you are using, you may have to perform additional steps. Ubuntu, however, mounts the new disk size without any further action. Verifying the new disk size is pretty easy (after logging back in via SSH), as the system information displayed after login should already reflect the change.
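If you want to double-check manually, df reports the size of the root file system:

df -h /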

Preparing the Bitcoin node

After completing all the steps above, we are finally able to move on with the preparations for the Bitcoin node.

Bitcoin service user

As we will run the node as a service, we need an unprivileged service account:

sudo useradd -m -s /dev/null bitcoin

Great, we just created the account, including the home directory for the service user. The -s /dev/null part assigns a non-functional login shell, which prevents anyone from logging in interactively with the service account.

As we are going to restrict access to the service to the bitcoin user (and its default group, which is also bitcoin), we need to add our admin user to that group. Run the following command to do so:

sudo usermod -a -G bitcoin [admin-user]

To make these changes (especially the group addition) persistent, we need to perform a restart before we move on. Besides the Azure portal or CLI, you can also use this command to restart the VM immediately:

sudo shutdown -r now

This will log you out of the current SSH session. After waiting a minute or two, just log back in to continue the preparation of the full node.

Downloading and Verifying Bitcoin binaries

The next step involves downloading the Bitcoin Core package and its signature file (as we don’t trust, but verify). Run these commands (you will need to hit enter a second time after the first download has finished). I also created a temp directory for the download and other stuff:

mkdir ~/temp
cd ~/temp
btc_version="0.18.1"
wget https://bitcoincore.org/bin/bitcoin-core-$btc_version/bitcoin-$btc_version-x86_64-linux-gnu.tar.gz
wget https://bitcoincore.org/bin/bitcoin-core-$btc_version/SHA256SUMS.asc

According to Bitcoin.org, the latest releases are signed with Wladimir J. van der Laan’s release key, whose fingerprint we use below to fetch the key and verify the downloaded binaries. It is recommended to do this for all crypto-related binaries you download, no matter on which OS.

Let’s try to import the key of Mr. van der Laan:

gpg --receive-key 0x01EA5486DE18A882D4C2684590C8019E36C2E964

You may get an error message telling you there was a server failure. In this case, run the following command to import the key:

gpg --keyserver hkp://keyserver.ubuntu.com:80 --receive-key 0x01EA5486DE18A882D4C2684590C8019E36C2E964

If you still get errors, there might be some missing packages or other reasons for the server failure. The second command worked for me more often than the first, but in the end, I had the key in my local key store.

Now let’s verify the downloaded files:

gpg --verify SHA256SUMS.asc
sha256sum --ignore-missing -c SHA256SUMS.asc

If the command tells you that the signature is good and indeed from Mr. van der Laan, everything is fine with the hash file. The second command verifies the archive we downloaded earlier and should result in ‘OK’. If not, you should immediately delete those files, as they might contain malware.

Note: you will likely see gpg warn that the key is not certified with a trusted signature, and sha256sum complain about improperly formatted lines (the PGP armor parts of the .asc file). I have read quite a few comments on Reddit and other sites saying that these particular warnings can be safely ignored, as long as the signature itself checks out.

Installing Bitcoin binaries

Now that we have our bitcoin service user and verified the bitcoin binaries, we are finally at the point to install them:

tar zxf bitcoin-$btc_version-x86_64-linux-gnu.tar.gz
pushd bitcoin-$btc_version/bin; sudo cp bitcoind bitcoin-cli /usr/bin; popd;
bitcoind --version

After getting the install verification via the version string, it is good practice to remove both the archive and the hash file. If you need to reinstall, perform the steps above again to verify the authenticity of the files. Run these commands to remove those files:

rm bitcoin-$btc_version-x86_64-linux-gnu.tar.gz
rm SHA256SUMS.asc

Create a service config file

Now that we have Bitcoin installed, we need to prepare a .conf file for our upcoming service:

vi bitcoin.conf  
or
nano bitcoin.conf

This will open a text editor on Linux. Enter the base config to get the RPC server of the Bitcoin daemon (needed for bitcoin-cli) activated and save the file to the temp directory (press ‘ESC’, then type ‘:wq’ if you used vi; press ‘CTRL’+‘X’ followed by ‘Y’ in nano):

testnet=0
regtest=0
mainnet=1
# Global Options
server=1 #activating rpc
rpcconnect=127.0.0.1 #default
rpcport=8332 #default
rpcallowip=127.0.0.1/32 #default
rpcbind=127.0.0.1  #default
disablewallet=1 #keeping wallet off (atm)
daemon=1
# Options only for mainnet
[main]
# Options only for testnet
[test]
# Options only for regtest
[regtest]

Now we just need to copy that file into the /etc/bitcoin folder. If this folder does not yet exist, create it with the following command:

sudo mkdir -p /etc/bitcoin

Next, copy the bitcoin.conf file to it:

sudo cp bitcoin.conf /etc/bitcoin

Assign ownership to the bitcoin service user and make it readable for all users:

sudo chown bitcoin:bitcoin /etc/bitcoin/bitcoin.conf
sudo chmod 0664 /etc/bitcoin/bitcoin.conf

The last step is to create a directory for the daemon where we will find the PID (Process ID) file after starting the service:

sudo mkdir -p /run/bitcoind/
sudo chmod 0755 /run/bitcoind/
sudo chown bitcoin:bitcoin /run/bitcoind/

Create the Bitcoin service

Now we are able to set up the core service of our node, the Bitcoin daemon service. To get started, just copy the bitcoind.service file found in the Bitcoin Core GitHub repository into a new file on your VM. Once you have that file in your temp folder, copy it over to the system’s services folder:

sudo cp bitcoind.service /lib/systemd/system
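For reference, here is an abridged sketch of what that unit file contains – the canonical bitcoind.service in the Bitcoin Core repository has these entries plus a number of additional hardening options:

[Unit]
Description=Bitcoin daemon
After=network.target

[Service]
ExecStart=/usr/bin/bitcoind -daemon \
                            -pid=/run/bitcoind/bitcoind.pid \
                            -conf=/etc/bitcoin/bitcoin.conf \
                            -datadir=/var/lib/bitcoind
Type=forking
PIDFile=/run/bitcoind/bitcoind.pid
Restart=on-failure
User=bitcoin
Group=bitcoin

[Install]
WantedBy=multi-user.target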

Now we need to enable the service to make it start automatically on reboot:

sudo systemctl enable bitcoind

And finally, we need to start the service (or reboot the machine if you want to test that part):

sudo systemctl start bitcoind

After some time, you should be able to use the bitcoin-cli to get some info of your local blockchain copy:

sudo bitcoin-cli -rpccookiefile=/var/lib/bitcoind/.cookie -datadir=/var/lib/bitcoind -getinfo

Please note that you need to use sudo because the owner of the service and its files is the bitcoin user we created earlier. If you don’t want to pass the authentication cookie path every time, you can create a symbolic link in your current user’s .bitcoin folder:

sudo ln -s /var/lib/bitcoind/.cookie ~/.bitcoin/.cookie
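Note that ln can only create the link if the .bitcoin folder already exists in your home directory – if it does not, create it first:

mkdir -p ~/.bitcoin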

Now we are able to just call sudo bitcoin-cli -getinfo. Another way of checking whether the service and the daemon are running is to read the log that gets generated:

sudo tail /var/lib/bitcoind/debug.log -f 

You should see the log scrolling by as new entries are written. Most entries will look like this:

2019-09-07T14:54:51Z UpdateTip: new best=000000000000004fa323c7ee57b4b22272c7ea757a6a5bdb53dbda73572f559d height=239240 version=0x00000002 log2_work=70.203272 tx=18819570 date='2013-06-02T08:32:35Z' progress=0.041989 cache=675.7MiB(5032783txo)

If you arrived at this point

Congratulations! You are running a Bitcoin full node (without a wallet for the time being, though). It will take some time to download and verify the whole blockchain, but you are now effectively helping and securing the Bitcoin network.

This is just the first post about my journey with Bitcoin and my own node. Make sure to follow for future blog posts. As always, I hope this post will be helpful for some of you.

Helpful links

Note: this post was reposted on my Trybe.one account.

Posted by msicc in Azure, Crypto&Blockchain, Linux, 0 comments

Book review: Learning Windows Azure Mobile Services for Windows 8 and Windows Phone 8 (Geoff Webber-Cross)

During the last months, I used the bits of spare time in which I wasn’t in the mood for programming to read Geoff’s latest book and dive deeper into Azure Mobile Services. Geoff is well known in the community for his Azure experience, and I absolutely recommend following him! I am really glad he asked me to review his book, and I need to apologize that it took so long to get this review up.

The book itself is very well structured, built around a truly working XAML-based game that targets both Windows 8 and Windows Phone 8 and connects them to one single Mobile Service.

Even if you are completely new to Azure, you will quickly get things done, as the whole book is full of step-by-step instructions. Let’s have a quick look at what you will learn while reading this book:

  1. Prepare your Azure account and set up your first Mobile Service
  2. Bring your Mobile Service to life and connect Visual Studio
  3. Secure your users’ data
  4. Create your own API endpoints
  5. Use Git via the console for remote development
  6. Manage push notifications for both Windows and Windows Phone apps
  7. Use the advantages of the Notification Hub
  8. Best practices for app development – some very useful general tips!

I already use a Mobile Service with my Windows Phone app TweeCoMinder. I have also started a Windows 8 version of that app, which basically only needs to be connected to my existing Azure Mobile Service to be finished.


While reading Geoff’s book, I learned how I can achieve this effectively and also improve my code for handling the push notifications on both systems. The book is an absolutely worthy investment if you are looking into Azure and Mobile Services, and it has a lot of sample code that can be reused in your own application.

As this is my first book review ever, feel free to leave your feedback in the comments.

You can buy the book right here.

Happy coding, everyone!

Posted by msicc in Archive, 0 comments

Getting productive with WAMS: How to call Twitter REST API 1.1 from a scheduled script


Like I promised in my first post about Windows Azure Mobile Services, I will show you how to call the Twitter REST API 1.1 from a scheduled script. The documentation of the http request object only uses Twitter API 1.0 (which is no longer available).

First, you will need a Consumer key and a Consumer secret for your app. Just go to dev.twitter.com, register with your Twitter account and then add a new application.

The second thing you will need is the so-called Access token and Access token secret. Both are user-dependent; without them, Twitter will give you an error that your app is not authorized to use this account for anything on Twitter.

There are several ways to obtain these values. As I am registering the user within my phone app, I am uploading these values from the phone and storing them in my Mobile Services database.

To generate the request, we need several additional pieces of data for our call to Twitter:

  • a timestamp for the oAuth Header and the signature string
  • a random number to secure the request (= nonce)
  • an oAuth signature (signed array of the user’s data)
  • an HMAC-SHA1 encoded hash string

This data is used for our request to Twitter.

Let’s start with the “simple” things:

generate Timestamp:

//generating the timestamp for the OAuth Header and signature string
var timestamp  = new Date() / 1000;
timestamp = Math.round(timestamp);

generate nonce

function generateNonce() {
    var code = "";
    for (var i = 0; i < 20; i++) {
        code += Math.floor(Math.random() * 9).toString();
    }
    return code;
}

oAuth signature

//generating the oAuth signatured array for the Twitter request
function generateOAuthSignature(method, url, data) {
    //remove query string parameters
    var index = url.indexOf('?');
    if (index > 0)
        url = url.substring(0, url.indexOf('?'));

    var signingToken = encodeURIComponent(ConsumerSecret) + "&" + encodeURIComponent(twitterAccessTokenSecret);

    var keys = [];
    for (var d in data) {
        if (d != 'oauth_signature') {
            //console.log('data:', d);
            keys.push(d);
        }
    }

    keys.sort();
    //use the passed-in HTTP method instead of hardcoding GET
    var output = method + "&" + encodeURIComponent(url) + "&";
    var params = "";
    keys.forEach(function (k) {
        params += "&" + encodeURIComponent(k) + "=" + encodeURIComponent(data[k]);
    });
    params = encodeURIComponent(params.substring(1));

    return hashString(signingToken, output + params, "base64");
}

generate the HMAC encoded hash string

//generate Hash-string, encoded in HMAC-SHA1 as required by Twitter's API v1.1
function hashString(key, str, encoding) {
    //console.log('basestring:', str);
    var hmac = crypto.createHmac("sha1", key);
    hmac.update(str);
    return hmac.digest(encoding);
}

Now that we have all of these functions prepared, we are ready to call the Twitter API. In this example, we are requesting the user’s profile data:

function requestToTwitter()
{

    //the url declaration has to be in this function to make the request working!
    //declaring it in another function would cause an error 401 from Twitter's API
    url = 'https://api.twitter.com/1.1/users/show.json?user_id=' + twitterId;

    //generate data for sending the request to Twitter
    //this is the data used in the signature string as well as in the Authorization header
    var oAuthData = {oauth_consumer_key: ConsumerKey, oauth_nonce: nonce, oauth_signature: null, oauth_signature_method: "HMAC-SHA1", oauth_timestamp: timestamp, oauth_token: twitterAccessToken, oauth_version: "1.0"};
    var sigData = {};
    for (var k in oAuthData) {
        sigData[k] = oAuthData[k];
    }
    sigData['user_id'] = twitterId;

    var sig = generateOAuthSignature('GET', url, sigData);
    oAuthData.oauth_signature = sig;

    var oAuthHeader = "";
    for (k in oAuthData) {
        oAuthHeader += "," + encodeURIComponent(k) + "=\"" + encodeURIComponent(oAuthData[k]) + "\"";
    }
    oAuthHeader = oAuthHeader.substring(1);
    //very important to not miss the space after OAuth!
    authHeader = 'OAuth '+oAuthHeader;

    var reqOptions = {
            uri: url,
            headers: { 'Accept': 'application/json', 'Authorization': authHeader }
    };

    var httpRequest = require('request');
        httpRequest(reqOptions,callback );

}

var callback = function (err, response, body) {
    //console.log("in requestToTwitter = callback");
    if (err) {
        console.log(err);
    } else if (response.statusCode !== 200) {
        console.log("from twitter callback " + response.statusCode + " response: " + response.body);
    } else {
        var userProfile = JSON.parse(body);
        UserIdFromTwitter = userProfile.id;
        twitterScreenName = userProfile.screen_name;
    }
};

You may have noticed that several variables are not declared within these functions. Just declare them globally in your scheduled script.
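For completeness, here is a sketch of those global declarations – the names are taken from the snippets above, and the placeholder values are obviously yours to fill in (timestamp is already declared globally by the first snippet):

var crypto = require('crypto');

//values from your app registration on dev.twitter.com
var ConsumerKey = '<your consumer key>';
var ConsumerSecret = '<your consumer secret>';

//user-dependent values, loaded from the Mobile Services database
var twitterAccessToken = '<access token>';
var twitterAccessTokenSecret = '<access token secret>';
var twitterId = '<numeric Twitter user id>';

//generated once per script run
var nonce = generateNonce();

//set inside requestToTwitter()
var url;
var authHeader;

//filled by the callback
var UserIdFromTwitter;
var twitterScreenName;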

You can read more about the oAuth authorization process at http://oauth.net/.

There are more services out there that use the oAuth process, so you should be able to adapt this for other requests, like getPocket.com (formerly Read It Later) and others.

As always, I hope this post was helpful for some of you.

Happy coding!

Posted by msicc in Azure, Dev Stories, 0 comments