Azure

all my Azure posts

Create an additional SSH-login enabled user for your Azure Linux VM without third-party tools

As I am moving forward on my current Linux journey, I recently ran into a situation where a second user would have been handy. So I tried a few things to create that new user and to allow it to log in via SSH only.

What the h*** is SSH?

SSH stands for Secure Shell and describes a protocol for establishing encrypted connections to remote machines. The security is provided by a cryptographic key pair, where the server only knows the public key and the client that wants to connect needs the matching private key. The most popular implementation is OpenSSH, which has been available as an optional feature on Windows 10 since last fall. If you want to learn more about SSH, read the Wikipedia entry as well.

Create the SSH key pair on Windows

Install the OpenSSH client

First, install the OpenSSH client on your Windows machine. Proceed as follows:

  • open the start menu and type ‘apps’
  • select ‘Apps and features’ settings page
  • select ‘Optional features’
  • click on ‘Add a feature’
  • search ‘OpenSSH client’, click on it and ‘Install’

If you are scrolling down the list of installed features, you should find the entry for the OpenSSH client.

Note: If you are not able to install this feature, it may be the right time to update your Windows installation to the latest version.
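If you prefer the command line, the same feature can usually be added via PowerShell (run as administrator). This is only a sketch – the exact capability name and version suffix can differ between Windows builds:

Get-WindowsCapability -Online | Where-Object Name -like 'OpenSSH.Client*'
Add-WindowsCapability -Online -Name OpenSSH.Client~~~~0.0.1.0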

Create your SSH key pair

If your user profile does not have a folder called ‘.ssh’, it is time to create it now. Type ‘%USERPROFILE%‘ in the Windows Explorer’s address bar to get to the right folder immediately and create the folder.

Now that we have the folder OpenSSH searches for, we are already able to create our new SSH keypair. Open a command prompt and type this:

ssh-keygen -t rsa -b 4096 -C "newuser@machine.com"

This will initiate the key pair creation. The -C parameter adds a comment to the key; it is optional and can be anything. You'll often find these keys created with <user at address> combinations. After OpenSSH has created your key pair in memory, it will ask for a location to save the files. If you do not enter anything, it'll save them as id_rsa (plus id_rsa.pub for the public key) into the .ssh folder created earlier. If you want to save them under another file name, you can do so:

C:\Users\username\.ssh/id_test

Please note that the file name (without any extension!) is separated by '/', not by '\'. If you do not respect this, you will get an error that the file name does not exist. This only happens if you don't use the default file name. After the files are created, OpenSSH asks you for a passphrase to protect your private key. Nowadays, every single key pair should be password protected (just my 2cts). Once the creation is complete, you'll see something like this:

Your identification has been saved in C:\Users\msicc\.ssh/id_test.
Your public key has been saved in C:\Users\msicc\.ssh/id_test.pub.
The key fingerprint is:
SHA256:8LK33fWKXMkY5FtcN4uU1v9SyCSUqKNp2T/PurpZCRU newuser@machine.com
The key's randomart image is:
+---[RSA 4096]----+
|          E...   |
|          .o. o  |
|      .  .. o+.oo|
|       oo. oo=.o=|
|      .=S.  o.=.o|
|      =o.. . * o.|
|     .. ..o o * .|
|       . =o+ + o |
|        =o+=* ...|
+----[SHA256]-----+

As you can see, we are able to create SSH keys without any third-party application on Windows. You can now safely close the console window (e.g. by typing 'exit').

Deploying the public key to the server

Of course, the whole key pair only makes sense if we use it to secure our client/server communication. On Linux, there is the easy-to-use ssh-copy-id command to deploy the key. Some Windows tutorials show the scp command instead, but I never got that working with my Azure VM. The only way left was to deploy the key manually (which is not that difficult once you know how).
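For reference, on a Linux or macOS client the ssh-copy-id route would be a one-liner (a sketch using the example file and host names from this post):

ssh-copy-id -i ~/.ssh/id_test.pub newuser@machine.com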

The manual way

After logging in (using Azure CLI), we are going to add a new user to the gang:

sudo useradd -m newuser

This will create the new user along with its home directory. Whether the OS asks you for a password (and whether you have to provide one) depends on your distribution's settings for empty passwords. The next step is to add the user to one or more groups if necessary:

sudo usermod -aG sudo newuser

The -aG parameters of the usermod command append the specified group(s) to the user's group list. If you want to add the user to more than one group, separate them with a comma (no whitespace after the comma!), as shown in the example below.
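A hypothetical example that adds newuser to both the sudo and the adm group in one call:

sudo usermod -aG sudo,adm newuser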

To make things a little bit easier for us, we are going to log in as the new user:

sudo su newuser

Note: If you want to proceed without logging in as the new user, you’ll need to change ~ to /home/newuser for the following commands.

To make Linux accept our prior created SSH key only for our new user, we need to create the .ssh folder and the file for allowed keys in the new user’s home directory. Let’s start with the .ssh directory:

sudo mkdir ~/.ssh
sudo chmod 0700 ~/.ssh
sudo chown newuser:newuser ~/.ssh

Let's break it down. Obviously, the mkdir command creates the .ssh folder. The chmod command with the 0700 parameter gives full access to our new user only. Finally, the chown command makes our new user the owner of the folder.

Linux saves the allowed keys in a file called 'authorized_keys', so that's the file we are going to create next. After creation, we change the file's access rights to 0644, which makes it writable only by its owner and read-only for everyone else. Execute these commands:

sudo touch ~/.ssh/authorized_keys
sudo chmod 0644 ~/.ssh/authorized_keys

We’re getting closer… earlier, OpenSSH created two files in the .ssh folder on Windows. We are going to copy the contents of the .pub file into the authorized_keys file now. You can extract the content of the .pub file with Notepad on Windows. Once you have that one in your clipboard, open the authorized_keys file (using your favorite editor):

nano ~/.ssh/authorized_keys

Paste the content of your .pub file by right-clicking on the Azure CLI window, then save the file and close it.
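If you prefer to skip the editor, you can also append the key in a single command. This is only a sketch – the 'ssh-rsa AAAA…' part is a placeholder for your actual public key string:

echo 'ssh-rsa AAAA...your-public-key... newuser@machine.com' | sudo tee -a ~/.ssh/authorized_keys

To secure the new user account, we need to apply some additional steps.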

The first one is to delete and disable the password of our new user:

sudo passwd -d -l newuser

The -d parameter completely deletes the password. The -l parameter locks the account's password, preventing the user from setting a new one (without using sudo, that is).
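To double-check the result, you can query the account's password status; on most distributions, the second field of the output shows 'L' for a locked password:

sudo passwd -S newuser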

Additional security measures

Now that we have our SSH public key on our server, we are safe to disable password-based login in general. To do so, we need to modify the sshd_config file of our server. Open it with the editor of your choice:

sudo nano /etc/ssh/sshd_config

Search for PasswordAuthentication and ChallengeResponseAuthentication. Uncomment these entries if necessary by removing the '#' in front of the line and set them to 'no'. Some tutorials floating around the web tell you to also set UsePAM to 'no', but following this recommendation always disabled the login completely for me on the Azure VM, and I always had to reset the SSH config via the Azure CLI.
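After the edit, the relevant lines should look roughly like this:

PasswordAuthentication no
ChallengeResponseAuthentication no
# UsePAM stays untouched (see the note above)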

Save and exit the sshd_config file. For these changes to take effect, we need to restart the SSH service on our server:

sudo service ssh restart

Wait a few seconds and then verify that the service restarted by checking its status:

systemctl status ssh.service

That's it, we have deployed our new SSH key to our server and taken additional security measures. There's just one thing left: trying to log in via SSH as the new user. This is a pretty easy task. In the Azure CLI (or a local command prompt), enter the following command:

 ssh -i %USERPROFILE%\.ssh\id_test newuser@machine.com

After entering your key's passphrase, you should be logged in just like you did with your admin user.

Bonus: add a shared directory

More often than not, you might want to create scripts or other files that are available to all users. Follow these simple steps:

sudo groupadd shared
sudo usermod -aG shared newuser

sudo mkdir -p /var/helpers
sudo chgrp -R shared /var/helpers
sudo chmod -R 2775 /var/helpers

The first two lines create a new group and assign our new user to the group. Repeat the second command for every user you want to be in that group.

Then we create the shared folder /helpers inside the /var folder of our server. Utilizing the chgrp command, we hand the group ownership to our shared group. Last but not least, we modify the access rights once again for the /var/helpers folder. The leading 2 in 2775 is the setgid bit, which makes every new file inherit the group from the folder; the 775 part allows all members of the group to read, write and execute, while users outside the shared group can only read and execute.
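To convince yourself that the setgid bit works as described, you can create a test file as the new user and check its group – a quick sketch (the exact permission bits depend on your umask):

sudo -u newuser touch /var/helpers/test.sh
ls -l /var/helpers/test.sh
# the file should be owned by newuser, with the group 'shared' inherited from the folder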

Conclusion

As you can see, one does not always have to use third-party tools to get things done. Like I said at the beginning of this post, once you know the steps that are needed, it is pretty easy to create a new SSH key pair, create a new user and manually deploy the public key to the server. As always, I hope this post is helpful for some of you.

Posted by msicc in Azure, Linux, 3 comments
Run your own Bitcoin Full Node on an Azure Linux VM

One year ago, I began my journey in the crypto and blockchain area. Recently, multiple circumstances made me think about running my own crypto-related server. Of course, I chose Azure for running this server (for the time being). It has been a while since I last touched Linux, so I had quite a bit to refresh and learn. In this post, I'll show you the steps that are needed to set up an independent full node to support the Bitcoin network.

Setting up the Virtual Machine on Azure

First, we need to install the Azure CLI on our computer. We will use this one to connect to our virtual machine later on via SSH. Follow the instructions found here.

The second prerequisite is a program to generate SSH keys. You can use either the OpenSSH client shipping with Windows 10 (latest versions), use the Azure CLI or PuTTY (follow these instructions).

Create the VM

Once you have installed the CLI and your SSH keys are created, log into your Azure account. Go to the marketplace, and search for ‘ubuntu‘. Choose Ubuntu Server 18.04 LTS and hit the ‘Create‘ button in the next window.

Fill in the details of your Azure VM on the first page of the creation module:

Do not forget to set the SSH admin user, as adding one afterward is not as easy as it seems and often fails (I had a hard time learning that). Also, we need to allow traffic through the default HTTP (80) and SSH (22) ports. Once you have configured everything, go to disks.

I did not select premium disks but instead went with a Standard HDD to save some money. You can change that to your needs. The important part here is to NOT use managed disks as we will need to resize the OS disk after the creation of the VM. Follow the rest of the steps in the creation wizard and create your virtual machine. Once the machine is created, you should create a DNS label (hit the ‘Configure’ link at the VM’s overview page to get to the IP settings).
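As a side note, the same VM can also be created from the Azure CLI instead of the portal. The following is only a rough, hedged sketch – the parameter set and the accepted values depend on your CLI version, and all names are placeholders:

az vm create --resource-group YourResourceGroupName --name YourVMName --image UbuntuLTS --admin-username youradminuser --ssh-key-value ~/.ssh/id_rsa.pub --size Standard_D2s_v3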

Log in via SSH

Let’s try if we can log in to our Linux VM via the Azure CLI. Open the ‘Microsoft Azure Command Prompt‘ on your PC. To be able to connect to our virtual machine, we need to log in to Azure first to obtain an access token for our session:

az login 

This will open a new browser tab, where you need to log in to your account again. After that, we will be redirected back to the CLI. Once that happened, go back to the overview page of your VM and click on ‘Connect’. This will open a pane where we will see the RDP and SSH connection option. Select SSH and copy the text below ‘Login using VM local account‘:

Paste it into the Azure Command Prompt window and provide your key's passphrase (you should never use a passphrase-less SSH key). If your screen now looks similar to this, you have successfully logged in:

Now that we have verified that we are able to log in via SSH, type exit to log out again as we have one step left to perform on the virtual machine.

Resizing the OS disk

A Bitcoin full node needs to download and verify the whole blockchain. The current size of the blockchain is around 250 GB, which would never fit in the default size of the OS disk of our VM. Luckily, it is pretty easy to resize the OS disk by running some commands in the Azure CLI.

First, stop the virtual machine:

az vm stop --resource-group YourResourceGroupName --name YourVMName 

Resizing the OS disk needs the VM to be deallocated (this may take some minutes):

az vm deallocate --resource-group YourResourceGroupName --name YourVMName

Once deallocation has finished, we are able to resize the OS disk with this command:

az vm update --resource-group YourResourceGroupName --name YourVMName --set storageProfile.osDisk.diskSizeGB=1024

Once you see the new size in the returned response from Azure, we can start the VM again:

az vm start --resource-group YourResourceGroupName --name YourVMName 

Depending on the distribution you are using, you may have to perform additional steps. Ubuntu, however, picks up the new disk size without any additional action. Verifying the new size is pretty easy (after logging back in via SSH), as the System Information displayed after login should already reflect the change (like in the screen above).
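If you want to double-check manually, the standard Linux tools are enough (a quick sketch):

df -h /      # free/used space on the root file system
lsblk        # block devices and partition sizes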

Preparing the Bitcoin node

After completing all the steps above, we are finally able to move on with the preparations for the Bitcoin node.

Bitcoin service user

As we will run the node as a service, we need an unprivileged service account:

sudo useradd -m -s /dev/null bitcoin

Great, we just created the account, including the home directory for the service user and its default shell entry. If you want to see the system's response, just leave the -s /dev/null part out when running the command.

As we are going to restrict the access to the service to the bitcoin user (and its default group, which is also bitcoin), we need to add our admin user to the group. Run the following command to do so:

sudo usermod -a -G bitcoin [admin-user]
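To double-check the assignment, you can query the user database directly – the new group shows up there immediately, even though your current login session only picks it up after a re-login or the reboot below:

id [admin-user]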

In order for these changes (especially the new group membership) to become effective in our session, we need to perform a restart before we move on. You can use the Azure portal or the CLI, or run this command to restart the VM immediately:

sudo shutdown -r now

This will log you out of the current SSH session. After waiting one minute or two, just log back in to continue the preparation of the full node.

Downloading and Verifying Bitcoin binaries

The next step involves downloading the bitcoin core package and its valid signature files (as we don’t trust, but verify). Run these two commands (you will need to hit enter a second time after the first download finished). I also created a temp directory for the download and other stuff.

mkdir ~/temp
cd ~/temp
btc_version="0.18.1"
wget https://bitcoincore.org/bin/bitcoin-core-$btc_version/bitcoin-$btc_version-x86_64-linux-gnu.tar.gz
wget https://bitcoincore.org/bin/bitcoin-core-$btc_version/SHA256SUMS.asc

According to Bitcoin.org, the latest releases are signed with Wladimir J. van der Laan's release key, whose fingerprint we are going to use to verify the downloaded binaries. It is recommended to do this for all crypto-related binaries you're downloading, no matter on which OS.

Let’s try to import the key of Mr. van der Laan:

gpg --receive-key 0x01EA5486DE18A882D4C2684590C8019E36C2E964

You may get an error message telling you there was a server failure. In this case, run the following command to import the key:

gpg --keyserver hkp://keyserver.ubuntu.com:80 --receive-key 0x01EA5486DE18A882D4C2684590C8019E36C2E964

If you still get errors, there might be some missing packages or other reasons for the server failure. I managed to come through with the second command more often than the first, but in the end, I had the key in my local key store.

Now let’s verify the downloaded files:

gpg --verify SHA256SUMS.asc
sha256sum --ignore-missing -c SHA256SUMS.asc

If the command tells you that the signature is good and indeed from Mr. van der Laan, everything is fine with the hash file. The second command verifies the archive we downloaded earlier and should result in ‘OK’. If not, you should immediately delete those files as they might contain malware.

Note: I have read quite a few comments on reddit and other sites that we can safely ignore those warnings …

Installing Bitcoin binaries

Now that we have our bitcoin service user and verified the bitcoin binaries, we are finally at the point to install them:

tar zxf bitcoin-$btc_version-x86_64-linux-gnu.tar.gz
pushd bitcoin-$btc_version/bin; sudo cp bitcoind bitcoin-cli /usr/bin; popd;
bitcoind --version

After getting the install verification via the version string, it is a good practice to remove both the binaries and the hash file. If you need to reinstall, perform the steps above again to verify the authenticity of the files. Run these commands to remove those files:

rm bitcoin-$btc_version-x86_64-linux-gnu.tar.gz
rm SHA256SUMS.asc

Create a service config file

Now that we have Bitcoin installed, we need to prepare a .conf file for our upcoming service:

vi bitcoin.conf  
or
nano bitcoin.conf

This will open a text editor on Linux. Enter the base config to activate the RPC server of the Bitcoin daemon (needed for bitcoin-cli) and save the file to the temp directory (press 'ESC' and type ':wq' if you used vi; press 'CTRL'+'X' followed by 'Y' in nano):

testnet=0
regtest=0
mainnet=1
# Global Options
server=1 #activating rpc
rpcconnect=127.0.0.1 #default
rpcport=8332 #default
rpcallowip=127.0.0.1/32 #default
rpcbind=127.0.0.1  #default
disablewallet=1 #keeping wallet off (atm)
daemon=1
# Options only for mainnet
[main]
# Options only for testnet
[test]
# Options only for regtest
[regtest]

Now we just need to copy that file into the /etc/bitcoin folder. If this folder does not yet exist, create it with the following command:

sudo mkdir -p /etc/bitcoin

Next, copy the bitcoin.conf file to it:

sudo cp bitcoin.conf /etc/bitcoin

Assign ownership to the bitcoin service user and make it readable for all users:

sudo chown bitcoin:bitcoin /etc/bitcoin/bitcoin.conf
sudo chmod 0664 /etc/bitcoin/bitcoin.conf

The last step is to create a directory for the daemon where we will find the PID (Process ID) file after starting the service:

sudo mkdir -p /run/bitcoind/
sudo chmod 0755 /run/bitcoind/
sudo chown bitcoin:bitcoin /run/bitcoind/

Create the Bitcoin service

Now we are able to set up the core service of our node, the Bitcoin daemon service. To get started, just copy and paste the bitcoind.service file found in Bitcoin Core's GitHub repository into a new file on your VM. Once you have that file in your temp folder, copy it over to the system's services folder:

sudo cp bitcoind.service /lib/systemd/system
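For orientation, this is a heavily reduced sketch of what such a unit file contains – please use the official bitcoind.service from the Bitcoin Core repository rather than this outline, as it ships additional (hardening) options:

[Unit]
Description=Bitcoin daemon
After=network.target

[Service]
# paths match the config, data and runtime directories prepared above
ExecStart=/usr/bin/bitcoind -daemon -conf=/etc/bitcoin/bitcoin.conf -datadir=/var/lib/bitcoind -pid=/run/bitcoind/bitcoind.pid
Type=forking
PIDFile=/run/bitcoind/bitcoind.pid
User=bitcoin
Group=bitcoin
Restart=on-failure

[Install]
WantedBy=multi-user.target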

Now we need to enable the service so that it starts up automatically on reboot:

sudo systemctl enable bitcoind

And finally, we need to start the service (or reboot the machine if you want to test that part):

sudo systemctl start bitcoind
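Before querying the daemon itself, you can check that the unit came up properly via systemd:

sudo systemctl status bitcoind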

After some time, you should be able to use the bitcoin-cli to get some info of your local blockchain copy:

sudo bitcoin-cli -rpccookiefile=/var/lib/bitcoind/.cookie -datadir=/var/lib/bitcoind -getinfo

Please note that you need to use sudo because the owner of the service and its files is the bitcoin user we created earlier. If you don't want to pass the authentication cookie path every time, you can create a symbolic link into your current user's .bitcoin folder:

sudo ln -s /var/lib/bitcoind/.cookie ~/.bitcoin/.cookie

Now we are able to just call sudo bitcoin-cli -getinfo. Another way of checking if the service and the daemon are running is to read the log that gets generated:

sudo tail /var/lib/bitcoind/debug.log -f 

You should see the log scrolling by as new entries are written. Most entries will look like this:

2019-09-07T14:54:51Z UpdateTip: new best=000000000000004fa323c7ee57b4b22272c7ea757a6a5bdb53dbda73572f559d height=239240 version=0x00000002 log2_work=70.203272 tx=18819570 date='2013-06-02T08:32:35Z' progress=0.041989 cache=675.7MiB(5032783txo)

If you arrived at this point

Congratulations! You are running a Bitcoin full node (without wallet for the time being, though). It will take some time to download and verify the whole blockchain, but you are now effectively helping and securing the Bitcoin network.

This is just the first post about my journey with Bitcoin and my own node. Make sure to follow for future blog posts. As always, I hope this post will be helpful for some of you.

Note: this post was reposted on my Trybe.one account.

Posted by msicc in Azure, Crypto&Blockchain, Linux, 0 comments
Use NuGets for your common Xamarin (Forms) code (and automate the creation process)

Internal libraries

Writing (or copying and pasting) the same code over and over again is one of those things I try to avoid when writing code. For quite some time now, I have been organizing such code in libraries. Until last year, this required quite some work to manage all the libraries for each Xamarin platform I used. Luckily, the MSBuild SDK Extras extensions showed up and made everything a whole lot easier, especially after James Montemagno gave a detailed explanation of how to get the most out of them for Xamarin plugins/libraries.

Getting started

Even if I repeat some of the steps of James' post, I'll start from scratch on the setup part here. I hope to make the whole process straightforward for everyone – that's why I think it makes sense to show each and every step. Please make sure you are using the new .csproj type. If you need a refresher on that, you can check my post about migrating to it (if needed).

MSBuild.Sdk.Extras

The first step is pulling in MSBuild.Sdk.Extras, which will enable us to target multiple platforms in one single library. For this, we need a global.json file in the solution folder. Right click on the solution name and select ‘Open Folder in File Explorer‘, then just add a new text file and name it appropriately.

The next step is to define the version of the MSBuild.SDK.Extras library we want to use. The current version is 1.6.65, so let’s define it in the file. Just click the ‘Solution and Folders‘ button to find the file in Visual Studio:

Add these lines into the file and save it:

{
  "msbuild-sdks": {
    "MSBuild.Sdk.Extras": "1.6.65"
  }
}

Modifying the project file

Switch back to the Solution view and right click on the .csproj file. Select ‘Edit [ProjectName].csproj‘. Let’s modify and add the project definitions. We’ll start right in the first line. Replace the first line to pull in the MSBuild.Sdk.Extras:

<Project Sdk="MSBuild.Sdk.Extras">

Next, we're separating out the Version tag. This ensures that we'll find it very quickly within the file in the future:

  <!--separated for accessibility-->
  <PropertyGroup>
    <Version>1.0.0.0</Version>
  </PropertyGroup>

Now we are enabling multiple targets, in this case our Xamarin platforms. Please note that there are two separate TargetFrameworks entries – one that includes UWP and one that does not. I thought I would be fine removing the non-UWP one when including UWP, but was presented with some strange build errors that were only resolved by re-adding the deleted line. I do not remember the reason, but I made a comment in my template to not remove it – so let's just keep it that way.

  <!--make it multi-platform library!-->
  <PropertyGroup>
    <UseFullSemVerForNuGet>false</UseFullSemVerForNuGet>
    <!--we are handling compile items ourselves below with a custom naming scheme-->
    <EnableDefaultCompileItems>false</EnableDefaultCompileItems>
    <!--KEEP ALL THREE IF YOU ADD UWP!-->
    <TargetFrameworks></TargetFrameworks>
    <TargetFrameworks Condition=" '$(OS)' == 'Windows_NT' ">netstandard2.0;MonoAndroid81;Xamarin.iOS10;uap10.0.16299;</TargetFrameworks>
    <TargetFrameworks Condition=" '$(OS)' != 'Windows_NT' ">netstandard2.0;MonoAndroid81;Xamarin.iOS10;</TargetFrameworks>
  </PropertyGroup>

Now we will add some default NuGet packages to the project and make sure our files get included only on the correct platform. We follow a simple file naming scheme (Xamarin.Essentials uses the same):

[Class].[platform].cs

This way, we are able to add all platform-specific code together with the shared entry point in a single folder. Let's start with the shared items. These will be available on all platforms listed in the PropertyGroup above:

  <!--shared items-->
  <ItemGroup>
    <!--keeping this one ensures everything goes smooth-->
    <PackageReference Include="MSBuild.Sdk.Extras" Version="1.6.65" PrivateAssets="All" />

    <!--most commonly used (by me)-->
    <PackageReference Include="Xamarin.Forms" Version="3.4.0.1029999" />
    <PackageReference Include="Xamarin.Essentials" Version="1.0.1" />

    <!--include content, exclude obj and bin folders-->
    <None Include="**\*.cs;**\*.xml;**\*.axml;**\*.png;**\*.xaml" Exclude="obj\**\*.*;bin\**\*.*;bin;obj" />
    <Compile Include="**\*.shared.cs" />
  </ItemGroup>

The '**\' part in the Include attribute of the Compile tag ensures MSBuild also includes classes in subfolders. Now let's add some platform-specific rules to the project:

  <ItemGroup Condition=" $(TargetFramework.StartsWith('netstandard')) ">
    <Compile Include="**\*.netstandard.cs" />
  </ItemGroup>

  <ItemGroup Condition=" $(TargetFramework.StartsWith('uap10.0')) ">
    <PackageReference Include="Microsoft.NETCore.UniversalWindowsPlatform" Version="6.1.9" />
    <Compile Include="**\*.uwp.cs" />
  </ItemGroup>

  <ItemGroup Condition=" $(TargetFramework.StartsWith('MonoAndroid')) ">
    <!--need to reference all those libs to get latest minimum Android SDK version (requirement by Google)... #sigh-->
    <PackageReference Include="Xamarin.Android.Support.Annotations" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.Compat" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.Core.Utils" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.CustomTabs" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.v4" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.Design" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.v7.AppCompat" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.v7.CardView" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.v7.Palette" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.v7.MediaRouter" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.Core.UI" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.Fragment" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.Media.Compat" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.v7.RecyclerView" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.Transition" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.Vector.Drawable" Version="28.0.0.1" />
    <PackageReference Include="Xamarin.Android.Support.Vector.Drawable" Version="28.0.0.1" />
    <Compile Include="**\*.android.cs" />
  </ItemGroup>

  <ItemGroup Condition=" $(TargetFramework.StartsWith('Xamarin.iOS')) ">
    <Compile Include="**\*.ios.cs" />
  </ItemGroup>

Two side notes:

  • Do not reference version 6.2.2 of the Microsoft.NETCore.UniversalWindowsPlatform NuGet. There seems to be a bug in there that will lead to rejection of your app from the Microsoft Store. Just keep 6.1.9 (for the moment).
  • You may not need all of the Xamarin.Android packages, but there are a bunch of dependencies between them and others, so I decided to keep them all.

If you have followed along, hit the save button and close the .csproj file. Verifying everything went well is pretty easy – your solution structure should look like this:

Before we’ll have a look at the NuGet creation part of this post, let’s add some sample code. Just insert this into static partial classes with the appropriate naming scheme for every platform and edit the code to match the platform. The .shared version of this should be empty (for this sample).

    public static partial class Hello
    {
        public static string Name { get; set; }

        public static string Platform { get; set; }

        public static void Print()
        {
            if (!string.IsNullOrEmpty(Name) && !string.IsNullOrEmpty(Platform))
                System.Diagnostics.Debug.WriteLine($"Hello {Name} from {Platform}");
            else
                System.Diagnostics.Debug.WriteLine($"Hello unknown person from {Device.Android}");
        }
    }

Normally, this would be a Renderer or other platform specific code. You should get the idea.

Preparing NuGet package creation

We will now prepare our solution to automatically generate NuGet packages for both the DEBUG and RELEASE configurations. Once the packages are created, we will push them to a local (or network) file folder, which serves as our local NuGet server. This will fit most indie developers – who tend not to replicate a full-blown enterprise infrastructure for their DevOps needs. As a side note, I will also mention how you could push the packages to an internal NuGet server (we are using a similar setup at work).

Adding NuGet Push configurations

One thing we want to make sure of is that we are not pushing packages on every compilation of our library. That's why we need separate configurations. To add new configurations, open the Configuration Manager in Visual Studio:

In the Configuration Manager dialog, select the ‘<New…>‘ option from the ‘Active solution configuration‘ ComboBox:

Name the new configuration to fit your needs; I just use DebugNuget, which signals that we are pushing the NuGet package for distribution. I copy the settings from the Debug configuration and let Visual Studio add the configuration to the project files within the solution. Repeat the same for the Release configuration.

The result should look like this:

Modifying the project file (again)

If you head over to your project file, you will see the Configurations tag has new entries:

  <PropertyGroup>
    <Configurations>Debug;Release;DebugNuget;ReleaseNuget</Configurations>
  </PropertyGroup>

Next, add the properties of your assembly and package:

    <!--assembly properties-->
  <PropertyGroup>
    <AssemblyName>XamarinNugets</AssemblyName>
    <RootNamespace>XamarinNugets</RootNamespace>
    <Product>XamarinNugets</Product>
    <AssemblyVersion>$(Version)</AssemblyVersion>
    <AssemblyFileVersion>$(Version)</AssemblyFileVersion>
    <NeutralLanguage>en</NeutralLanguage>
    <LangVersion>7.1</LangVersion>
  </PropertyGroup>

  <!--nuget package properties-->
  <PropertyGroup>
    <PackageId>XamarinNugets</PackageId>
    <PackageLicenseUrl>https://github.com/MSiccDevXamarinNugets</PackageLicenseUrl>
    <PackageProjectUrl>https://github.com/MSiccDevXamarinNugets</PackageProjectUrl>
    <RepositoryUrl>https://github.com/MSiccDevXamarinNugets</RepositoryUrl>

    <PackageReleaseNotes>Xamarin Nugets sample package</PackageReleaseNotes>
    <PackageTags>xamarin, windows, ios, android, xamarin.forms, plugin</PackageTags>

    <Title>Xamarin Nugets</Title>
    <Summary>Xamarin Nugets sample package</Summary>
    <Description>Xamarin Nugets sample package</Description>

    <Owners>MSiccDev Software Development</Owners>
    <Authors>MSiccDev Software Development</Authors>
    <Copyright>MSiccDev Software Development</Copyright>
  </PropertyGroup>

Configuration specific properties

Now we will add some configuration specific PropertyGroups that control if a package will be created.

Debug and DebugNuget

  <PropertyGroup Condition=" '$(Configuration)'=='Debug' ">
    <DefineConstants>DEBUG</DefineConstants>
    <!--making this pre-release-->
    <PackageVersion>$(Version)-pre</PackageVersion>
    <!--needed for debugging!-->
    <DebugType>full</DebugType>
    <DebugSymbols>true</DebugSymbols>
  </PropertyGroup>

  <PropertyGroup Condition=" '$(Configuration)'=='DebugNuget' ">
    <DefineConstants>DEBUG</DefineConstants>
    <!--enable package creation-->
    <GeneratePackageOnBuild>true</GeneratePackageOnBuild>
    <!--making this pre-release-->
    <PackageVersion>$(Version)-pre</PackageVersion>
    <!--needed for debugging!-->
    <DebugType>full</DebugType>
    <DebugSymbols>true</DebugSymbols>
    <GenerateDocumentationFile>false</GenerateDocumentationFile>
    <!--this makes msbuild creating src folder inside the symbols package-->
    <IncludeSource>True</IncludeSource>
    <IncludeSymbols>True</IncludeSymbols>
  </PropertyGroup>

The Debug configuration enables us to step into the Debug code while we are referencing the project directly during development, while the DebugNuget configuration will also generate a NuGet package including source and symbols. This is helpful once you find a bug in the NuGet package, as it allows us to step into this code even when we reference the NuGet package instead of the project. Both configurations will add '-pre' to the version, making these packages appear only if you tick the 'Include prerelease' CheckBox in the NuGet Package Manager.

Release and ReleaseNuget

  <PropertyGroup Condition=" '$(Configuration)'=='Release' ">
    <DefineConstants>RELEASE</DefineConstants>
    <PackageVersion>$(Version)</PackageVersion>
  </PropertyGroup>

  <PropertyGroup Condition=" '$(Configuration)'=='ReleaseNuget' ">
    <DefineConstants>RELEASE</DefineConstants>
    <PackageVersion>$(Version)</PackageVersion>
    <!--enable package creation-->
    <GeneratePackageOnBuild>true</GeneratePackageOnBuild>
    <!--include pdb for analytic services-->
    <DebugType>pdbonly</DebugType>
    <GenerateDocumentationFile>true</GenerateDocumentationFile>
  </PropertyGroup>

The release configuration gets by with fewer settings. We do not generate a separate symbols package here, as the .pdb file without the source will do in most cases.

Adding Build Targets

We are close to finishing our implementation already. Of course, we want to make sure we push only the latest packages. To ensure this, we clean all generated NuGet packages before we build the project/solution:

  <!--cleaning older nugets-->
  <Target Name="CleanOldNupkg" BeforeTargets="Build">
    <ItemGroup>
      <FilesToDelete Include="$(ProjectDir)$(BaseOutputPath)$(Configuration)\$(AssemblyName).*.nupkg"></FilesToDelete>
    </ItemGroup>
    <Delete Files="@(FilesToDelete)" />
    <Message Text="Old nupkg in $(ProjectDir)$(BaseOutputPath)$(Configuration) deleted." Importance="High"></Message>
  </Target>

MSBuild provides a lot of options to configure. We are setting the BeforeTargets property of the target to Build, so once we Clean/Build/Rebuild, all old packages will be deleted by the Delete command. Finally, we are printing a message to confirm the deletion.

Pushing the packages

After completing all these steps above, we are ready to distribute our packages. In our case, we are copying the packages to a local folder with the Copy command.

  <!--pushing to local folder (or network path)-->
  <Target Name="PushDebug" AfterTargets="Pack" Condition="'$(Configuration)'=='DebugNuget'">
    <ItemGroup>
      <PackageToCopy Include="$(ProjectDir)$(BaseOutputPath)$(Configuration)\$(AssemblyName).*.symbols.nupkg"></PackageToCopy>
    </ItemGroup>
    <Copy SourceFiles="@(PackageToCopy)" DestinationFolder="C:\TempLocNuget" />
    <Message Text="Copied '@(PackageToCopy)' to local Nuget folder" Importance="High"></Message>
  </Target>

  <Target Name="PushRelease" AfterTargets="Pack" Condition="'$(Configuration)'=='ReleaseNuget'">
    <ItemGroup>
      <PackageToCopy Include="$(ProjectDir)$(BaseOutputPath)$(Configuration)\$(AssemblyName).*.nupkg"></PackageToCopy>
    </ItemGroup>
    <Copy SourceFiles="@(PackageToCopy)" DestinationFolder="C:\TempLocNuget" />
    <Message Text="Copied '@(PackageToCopy)' to local Nuget folder" Importance="High"></Message>
  </Target>

Please note that the local folder could be replaced by a network path. You have to ensure the availability of that path – which adds in some additional work if you choose this route.

If you’re running a full NuGet server (as often happens in Enterprise environments), you can push the packages with this command (instead of the Copy command):

<Exec Command="NuGet push "$(ProjectDir)$(BaseOutputPath)$(Configuration)\$(AssemblyName).*.symbols.nupkg" [YourPublishKey] -Source [YourNugetServerUrl]" />

The result

If we now select the DebugNuget/ReleaseNuget configuration, Visual Studio will create our NuGet package and push it to our Nuget folder/server:

Let’s have a look into the NuGet package as well. Open your file location defined above and search your package:

As you can see, the Copy command executed successfully. To inspect NuGet packages, you need the NuGet Package Explorer app. Once installed, just double click the package to view its contents. Your result should be similar to this for the DebugNuGet package:

As you can see, we have both the .pdb files as well as the source in our package as intended.

Conclusion

Even as an indie developer, you can take advantage of the DevOps options provided with Visual Studio and MSBuild. The MSBuild.Sdk.Extras package enables us to maintain a multi-targeting package for our Xamarin(.Forms) code. The whole process needs some setup, but once you have performed the steps above, extending your libraries is straightforward.

I had been planning to write this post for quite some time, and I am happy to publish it as my contribution to #XamarinMonth (initiated by Luis Matos). As always, I hope this post is helpful for some of you. Feel free to clone and play with the full sample I uploaded on GitHub.

Until the next post, happy coding, everyone!


P.S. Feel free to download the official app for my blog (that uses a lot of what I am blogging about):
iOS | Android | Windows 10

Posted by msicc in Azure, Dev Stories, iOS, Windows, Xamarin, 3 comments

Getting productive with WAMS: How to handle erroneous push channels

As I wrote already in my former article ‘Getting productive with WAMS- about the mpns object (push data to the user’s Windows Phone)’, I needed to add some more detailed error handling to the mpns object on my Mobile Service.

There are two error codes that appear frequently: 404 (Not Found) and 412 (Precondition Failed).

The 404 error code

happens when the push channel becomes invalid. Reasons for that can be an uninstall or reinstall of the app, or a hard reset of the user's device. In this case, we follow Microsoft's recommendation to stop sending push notifications via this channel.

There might be several ways, but I prefer to work with SQL queries.  This is how I delete those channels within error: function(error):

if (error.statusCode === 404)
{
    var sqlDelInvalidChannel = "DELETE from pushChannel WHERE id = " + channel.id;
    mssql.query(sqlDelInvalidChannel, {
        success: function() {
            console.log("deleted invalid push channel with id: " + channel.id);
        },
        error: function(err) {
            console.log("there was a problem deleting push channel with id: " + channel.id + ", " + err);
        }
    });
}

As you can see, this is a very simple approach to get rid of those invalid channels.

The 412 error code

needs some more advanced handling. Microsoft recommends sending the push notification as normal, but with a delay of 61 minutes before resending. Also, with some research on the web, I found out that the 412 often turns into a 404 after those 61 minutes (at least I found a lot of developers stating this). This is why I went with a different approach: I am going to wait those 61 minutes and not send any push notification to those devices in the meantime.

For that, I use a simple trick. I save the time the error shows up in my push channel table, as well as the time after those 61 minutes. On top of that, I use a Boolean to determine whether the channel is in the delay phase.

Here is the simple code for that, again within error: function(error):

if (error.statusCode === 412)
{
	var t = new Date();
	var tnow = t.getTime();
	var tnow61 = tnow + 3660000;

	var sqlSave412TimeStamp = "UPDATE pushChannel SET Found412Time=" +  tnow + ", EndOf412Hour=" + tnow61 + ", IsPushDelayed = 'true' WHERE id=" + channel.id;

	mssql.query(sqlSave412TimeStamp);
}

The next time my script runs over this push channel, I need to check whether it is still within the delay phase. Here's the code:

if (channel.IsPushDelayed === true)
{
	var t = new Date();
	var tnow = t.getTime();
	var EndofHour = channel.EndOf412Hour;
	var tdelay = EndofHour - tnow;

	if (tdelay > 0)
	{
	 console.log("push delivery on id: " + channel.id + " is delayed for: " + (Math.floor(tdelay/1000/60)) + " minutes");
	}
	else if (tdelay < 0)
	{
	 var sqlDelete412TimeStamp = "UPDATE pushChannel SET Found412Time=0 , EndOf412Hour= 0, IsPushDelayed = 'false' WHERE id=" + channel.id;
	 mssql.query(sqlDelete412TimeStamp);
	}
}

As you can see, I am checking the Boolean that I added before. If it is still true, I write a log entry. If the time has already passed, I reset the time values to 0 and the Boolean to false.

The script will now check whether the push channel is still a 412 or has turned into a 404, and so everything starts over again.

Other error codes

There might be other error codes as well. I did not see any other than those two in my logs, but in case another one shows up, I simply added this code to report it:

else
{
 console.error("error in Toast Push Channel: " + channel.twitterScreenName, channel.id, error)
}

This way, you can easily handle push channel errors in your Mobile Service.

If you have other error codes in your logs, check this list from Microsoft to determine what you should do: http://msdn.microsoft.com/en-us/library/windowsphone/develop/ff941100(v=vs.105).aspx#BKMK_PushNotificationServiceResponseCodes

Note: there might be better ways to handle those errors. If you are using such a way, feel free to leave a comment with your approach.

Otherwise, I hope this post will be helpful for some of you.

Happy coding!

Posted by msicc in Azure, Dev Stories, 0 comments

Getting productive with WAMS: How to call Twitter REST API 1.1 from a scheduled script

Like I promised in my first post about Windows Azure Mobile Services, I will show you how to call the Twitter REST API 1.1 from a scheduled script. The documentation of the HTTP request object only uses Twitter API 1.0 (which is no longer available).

First, you will need a Consumer key and a Consumer secret for your app. Just go to dev.twitter.com, register with your Twitter account and then add a new application.

The second thing you will need is the so-called Access token and Access token secret. Both are user dependent; without them, Twitter will give you an error that your app is not authorized to use this account for anything on Twitter.

There are several ways to obtain these values. As I am registering the user within my phone app, I upload these values from the phone and store them in my Mobile Services database.

To build the request, we need several additional pieces of data for our call to Twitter:

  • a timestamp for the oAuth Header and the signature string
  • a random number to secure the request (= nonce)
  • an oAuth signature (signed array of the user’s data)
  • a HMAC encoded Hash string

This data is used for our request to Twitter.

Let’s start with the “simple” things:

generate Timestamp:

//generating the timestamp for the OAuth Header and signature string
var timestamp  = new Date() / 1000;
timestamp = Math.round(timestamp);

generate nonce

function generateNonce() {
    var code = "";
    for (var i = 0; i < 20; i++) {
        code += Math.floor(Math.random() * 9).toString();
    }
    return code;
}

oAuth signature

//generating the oAuth signatured array for the Twitter request
function generateOAuthSignature(method, url, data) {
    //remove query string parameters
    var index = url.indexOf('?');
    if (index > 0)
        url = url.substring(0, url.indexOf('?'));

    var signingToken = encodeURIComponent(ConsumerSecret) + "&" + encodeURIComponent(twitterAccessTokenSecret);

    var keys = [];
    for (var d in data) {
        if (d != 'oauth_signature') {
            //console.log('data:', d);
            keys.push(d);
        }
    }

    keys.sort();
    var output = "GET&" + encodeURIComponent(url) + "&";
    var params = "";
    keys.forEach(function (k) {
        params += "&" + encodeURIComponent(k) + "=" + encodeURIComponent(data[k]);
    });
    params = encodeURIComponent(params.substring(1));

    return hashString(signingToken, output + params, "base64");
}

generate the HMAC encoded hash string

//generate Hash-string, encoded in HMAC-SHA1 as required by Twitter's API v1.1
function hashString(key, str, encoding) {
    //console.log('basestring:', str);
    var hmac = crypto.createHmac("sha1", key);
    hmac.update(str);
    return hmac.digest(encoding);
}

Now that we have prepared all of these functions, we are ready to call the Twitter API. In this example, we are requesting the user's profile data:

function requestToTwitter()
{

    //the url declaration has to be in this function to make the request working!
    //declaring it in another function would cause an error 401 from Twitter's API
    url = 'https://api.twitter.com/1.1/users/show.json?user_id=' + twitterId;

    //generate data for sending the request to Twitter
    //this is the data used in the signature string as well as in the Authorization header
    var oAuthData = {oauth_consumer_key: ConsumerKey, oauth_nonce: nonce, oauth_signature: null, oauth_signature_method: "HMAC-SHA1", oauth_timestamp: timestamp, oauth_token: twitterAccessToken, oauth_version: "1.0"};
    var sigData = {};
    for (var k in oAuthData) {
        sigData[k] = oAuthData[k];
    }
    sigData['user_id'] = twitterId;

    var sig = generateOAuthSignature('GET', url, sigData);
    oAuthData.oauth_signature = sig;

    var oAuthHeader = "";
    for (k in oAuthData) {
        oAuthHeader += "," + encodeURIComponent(k) + "=\"" + encodeURIComponent(oAuthData[k]) + "\"";
    }
    oAuthHeader = oAuthHeader.substring(1);
    //very important to not miss the space after OAuth!
    authHeader = 'OAuth '+oAuthHeader;

    var reqOptions = {
            uri: url,
            headers: { 'Accept': 'application/json', 'Authorization': authHeader }
    };

    var httpRequest = require('request');
        httpRequest(reqOptions,callback );

}

var callback = function(err, response, body) {
    //console.log("in requestToTwitter = callback");
    if (err) {
        console.log(err);
    } else if (response.statusCode !== 200) {
        console.log("from twitter callback " + response.statusCode + " response: " + response.body);
    } else {
        var userProfile = JSON.parse(body);
        UserIdFromTwitter = userProfile.id;
        twitterScreenName = userProfile.screen_name;
    }
};

You may have noticed that there are several variables that are not declared within these functions. Just declare them globally in your scheduled script.

You can read more about the oAuth authorization process at http://oauth.net/.

There are more services out there that use the oAuth process, so you should be able to convert this for other requests, like getPocket.com (formerly Read It later) and others.

As always, I hope this post was helpful for some of you.

Happy coding!

Posted by msicc in Azure, Dev Stories, 0 comments

Getting productive with WAMS: how to set up a timeout before running code in an interval

I truly love Windows Azure Mobile Services, as it provides an easy way to connect Windows Phone apps to the cloud and also manages Live Tiles and Toast Notifications via push. This way, we don't impact the user's battery life by running background agents.

However, scheduled scripts are only able to run at predefined intervals, with 15 minutes being the shortest one.

It may happen that you want to run your script at shorter intervals, as I am currently doing in mine.

The functions:  setTimeout() and setInterval()

The difference between those two is pretty simple:

  • setTimeout() is running the function included only once after the time in milliseconds has passed
  • setInterval() is running the function included in an interval set up in milliseconds until it gets stopped

Knowing this, it is pretty simple to set up those two functions each for itself:

setTimeout(function() { 
//code to run 
}, time in milliseconds);

setInterval(function() {
//code to run
}, time in milliseconds);

I highly recommend declaring an interval as a variable. If you don't do that, the interval will run forever, as you can never stop it!

But what if we need first a timeout and then an interval?

That is the problem I was trying to solve over the last two days. I was playing around with all kinds of code constellations to achieve this goal. And I found a solution.

We need to set up a few things to achieve that goal:

  • global variables for the number of runs and the interval
  • a function that only proves if the interval is still valid to run or needs to be stopped
  • our function that should be run in an interval

In my example, I want the code to be executed five times, then the interval should be stopped (cleared). I achieve this goal with a pretty simple if/else clause, like you can see:

if (NumberOfRuns < 6)
{
    console.log("interval ran " + NumberOfRuns + " times");
    doSomething();
}
else
{
    //stopping the interval
    clearInterval(Interval);
    console.log("interval stopped!");
}

As this is a sample script, I use the function doSomething() as the code to be executed. It represents your code that should run while the interval is still valid. The only important thing: count up the number of runs every time the function is hit.

    //counting up the number of runs with every call of the function
    NumberOfRuns++;

    //your code runs here

And in our starting point (the function that has the same name as your script), we finally declare the timeout and start counting:

   //setting the NumberofRuns to 1 as after timeout the first run starts
   NumberOfRuns = 1;
   console.log("start")

   //setting the timeout to call the function that proves if the interval is still valid
   setTimeout(function(){
       //interval has to be defined in this scope, otherwise it will not be accepted
       //we also need a variable for the interval to be able to stop the interval
       Interval = setInterval(RunInterval, 5000);
       console.log("timeout is over");
   },15000);

First, we start our counter at 1, as the code will be executed for the first time right after the timeout. I also added some logging calls, so you can verify everything is running fine.

The content of the global variable “Interval” has to be declared within the timeout, otherwise we will not be able to set the interval for our function.

Once you have figured out all the points above, it is pretty easy to implement this into your running script.

If you want to play around with this and see what the log looks like, here is the full WAMS scheduler script:

//declaring global variables
var NumberOfRuns;
var Interval;

//function to execute in interval
function doSomething()
{
    //counting up the number of runs with every call of the function
    NumberOfRuns++;

    //your code runs here
}

//this function proves if the the interval is still valid
function RunInterval()
{
     if (NumberOfRuns < 6)
        {
        console.log("interval ran " + NumberOfRuns + " times");
        doSomething();
        }
        else
        {
        //stopping the interval
        clearInterval(Interval);
        console.log("interval stopped!");
        }
}

//main script function (start)
function TestIntervalScript() 
{
   //setting the NumberofRuns to 1 as after timeout the first run starts
   NumberOfRuns = 1;
   console.log("start")

   //setting the timeout to call the function that proves if the interval is still valid
   setTimeout(function(){
       //interval has to be defined in this scope, otherwise it will not be accepted
       //we also need a variable for the interval to be able to stop the interval
       Interval = setInterval(RunInterval, 5000);
       console.log("timeout is over");
   },15000);
}

And here is a screen shot from the desired log file:

As I am still pretty new to JavaScript, there might also be other ways to achieve this. Feel free to leave a comment if you have anything to add/improve on this code.

And as always, I hope this is helpful for some of you when playing around with Windows Azure Mobile Services.

Happy coding!

Posted by msicc in Azure, Dev Stories, 0 comments

Getting productive with WAMS: respect time zone offset for every single user

In my second post about WAMS I will show you how to respect the time zone of every user.

If you have users from all over the world, they all live in different time zones. Your Mobile Service script always runs at UTC time, and every user gets the same date & time if you send them push notifications or update a Live Tile, for example. Users don't want to calculate the time zone differences, so we need to handle that for them.

In JavaScript, a point in time is represented as the number of milliseconds since 01/01/1970 00:00 UTC (the Unix epoch). Knowing this, it is fairly easy to show users their local date and time.

Let’s have a look at the Windows Phone code.

To get the local time zone in our Windows Phone app, we only need three lines of code:

TimeZoneInfo localZone = TimeZoneInfo.Local;
DateTime localTime = DateTime.Now;
TimeSpan offsetToUTC = localZone.GetUtcOffset(localTime);

As you can see, we get the local time zone first. This is essential, as its offset is calculated relative to UTC. Then we create a TimeSpan from our current DateTime object to get the offset. To make this TimeSpan usable in our Azure Mobile Service, which uses JavaScript, we need to convert it to milliseconds – a value that means the same thing in every programming language.

useritemLookUp.TimezoneOffset = offsetToUTC.TotalMilliseconds;

This is the final line of code, which is used to update our user’s item in our SQL table row (for example).

Let’s have a look at the Azure code.

The code is similar to our Windows Phone part. First, we need to fetch the time zone offset from our SQL table:

var sql = "select * from users";

mssql.query(sql, {
        success: function (results) {
            if (results.length > 0) {
                for (var i = 0; i < results.length; i++) {
                    userResult = {
                                        TimeZoneOffset: results[i].TimezoneOffset,
                                        }

TimeZoneOffset = userResult.TimeZoneOffset;

This way, we can run through our whole table on an Azure Script and calculate the correct time, which is pretty easy to achieve:

var d = new Date();
var locald = new Date(d.getTime() + TimeZoneOffset);

These two lines generate the local time in milliseconds for the specific user entry. You don't have to worry whether a user is ahead of or behind UTC; it will always calculate the correct time.

If you want to use it for example to show the updated time to your users, you can format the time like I described earlier in this post.

That’s all about respecting local time for your users with Windows Azure and Windows Phone.

Happy coding everyone!

Posted by msicc in Azure, Dev Stories, 0 comments

Getting productive with WAMS: how to update data for a specific row in a table

Like I promised, I will share some of the Azure goodness I learned during creating my last app.

This post is all about how to update a specific table entry (like a user's data) in an Azure SQL table from a Windows Phone app.

First, we need to make sure that there is some data from the user we want to update. I used the LookupAsync () method to achieve that.

IMobileServiceTable<userItems> TableToUpdate = App.MobileService.GetTable<userItems>();
IMobileServiceTableQuery<userItems> query = TableToUpdate.Where(useritem => useritem.TwitterId == App.TwitterId);

var useritemFromAzure = await query.ToListAsync();
var useritemLookUp = await TableToUpdate.LookupAsync(useritemFromAzure.FirstOrDefault<userItems>().Id);

If we want a specific entry, we need a search criterion to find our user and fetch the id of the user's table entry. In my case, I used the Twitter Id for the query, as every user has one in my project.

Now that we have the table row id, we can easily update the data of this specific row with the UpdateAsync() method.

We need to declare which columns should be updated and assign the values to them first. After that, we simply call the UpdateAsync() method.

useritemLookUp.TwitterId = App.TwitterId;
useritemLookUp.LastCheckedAt = DateTime.Now;
useritemLookUp.OSVersion = "WP8";
useritemLookUp.AppVersion = App.VersionNumber;

await TableToUpdate.UpdateAsync(useritemLookUp);

Please note that you need an item class/model to create the update data (which you should already have before thinking about updating the data).

You should wrap this code in a try{}/catch{} block to be able to react to the exceptions that can possibly be thrown and display a matching message to the user.

That’s already all about updating a specific row in a WAMS SQL table.

As always I hope this post is helpful for some of you.

Happy coding!

Posted by msicc in Azure, Dev Stories, 1 comment

New Series: Getting productive with Windows Azure Mobile Services (WAMS)

Now that my current app project is near to go live, I will start a new series about how to get productive with Windows Azure Mobile Services (WAMS).

I will cover some interesting topics in this series, which are not really documented in the very well written “Getting started” series from the Azure team itself.

These topics are (list is subject to be updated if needed):

First, if you want to get started, you should check out this link: http://www.windowsazure.com/en-us/develop/mobile/tutorials/get-started-wp8/

The tutorial gets you up and running with WAMS quickly and easily.

What can you expect from this series? As always, I will add some of my personal experiences during my journey of creating my app. There were a lot of small stones in my way, and I will also tell you how to remove them. And of course, I hope that my posts will help some of you get your own WAMS story started.

Happy coding everyone!

Posted by msicc in Azure, Dev Stories, 0 comments

How to format Date and Time on Windows Azure

Phew, my first post about my journey into development on Windows Azure. I started using the Mobile Services from Windows Azure a few weeks ago, and I learned a lot about them already.

This post is about formatting Date and Time strings, because Azure uses a different format than my Windows Phone app.

If we upload a DateTime String to Windows Azure from a Windows Phone app, it looks like this: 2013-05-04T06:45:12.042+00:00

If we translate this, you have "YYYY-MM-DD" for the date. The letter "T" declares that the time part starts there, formatted "HH:MM:ss.msmsms". The part "+00:00" is the time zone offset.

So far, probably nothing new for you.

Now let's get to Azure. By default, Azure uses GMT for date strings (the JavaScript counterpart of DateTime is Date()). I have written a scheduler which fetches data from another web service and puts it into my table. Naturally, I wanted to know when the data was last checked, so I added a column for it.

Then I did what everyone who is new to JavaScript has done and added a variable with a new Date(). And now the trouble begins. The output of new Date() is a totally different string: Sat, 04 May 2013 07:02:51 GMT.

Sure, we could parse and convert it within our app, but that would need (although not many) additional resources. So I decided to let Azure do the conversion to a Windows Phone-readable string.

How do we  manipulate the Date()-string?

I binged a bit and finally found a very helpful page, that explains all about the JavaScript Date() object: http://www.elated.com/articles/working-with-dates/

I then started off with the following code:

var d = new Date();
var formattedDate = d.getFullYear() + "-" + d.getMonth() + "-" + d.getDate();
var formattedTime = d.getHours() + ':' + d.getMinutes() + ':' + d.getSeconds();
var checkedDateTime = formattedDate + "T" + formattedTime;

Those of you that are familiar with JavaScript will immediately see what I did wrong. Let me explain for the newbies:

First thing: Date().getMonth() is zero-based, so we will always get a result that is one month behind. We have to write it this way to get the correct month:

d.getMonth()+1

But that is not all. If you use the code above, your result will look like this: 2013-5-4T7:2:51

JavaScript does not use leading zeros. If you want to insert it into a date formatted column, you will get the following error from Azure:

Error occurred executing query: Error: [Microsoft][SQL Server Native Client 10.0][SQL Server]Conversion failed when converting date and/or time from character string.

So we need to add the leading zeros before inserting it. Luckily, we are able to do that very easily. Here is my implementation:

var d = new Date();
var formattedDate = d.getFullYear() + "-" + ('0' + (d.getMonth()+1)).slice(-2) + "-" + ('0' + d.getDate()).slice(-2);
var formattedTime  = ('0' + d.getHours()).slice(-2) + ':' + ('0' + d.getMinutes()).slice(-2) + ':' + ('0' + d.getSeconds()).slice(-2);
var checkedDateTime = formattedDate + "T" + formattedTime;

What have we done here?

We are adding a leading 0 to each part of the string. The slice(-2) picks only the last two characters. To make it clearer: if we have 9 as the hour, adding the zero in front results in 09; picking the last two characters with .slice(-2) still results in 09. If we have 10 as the hour, adding the leading zero results in 010, but the .slice(-2) operation cuts it back to 10. Easy enough, right?

If we run the code above to get the date and time, the result will look like this: 2013-05-04T07:02:51

The timezone offset is automatically added to the date when we update the table. If we now send the data to our Windows Phone or Windows 8 app, no conversion is needed as we already have a correctly formatted string.

I hope this is helpful for some of you and will save you some time.

Happy coding everyone!

Posted by msicc in Azure, Dev Stories, 0 comments