As you can see above, we are searching for already paired devices with the Proximity API of Windows Phone. If none of our already paired devices is reachable and no exception with the HResult 0x8007048F is thrown, Bluetooth is on. If the exception is raised, Bluetooth is off.
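The check described above can be sketched like this (a minimal sketch; PeerFinder lives in Windows.Networking.Proximity, and the surrounding async method is assumed):

```csharp
//minimal sketch: search for already paired devices via the Proximity API
//to detect whether Bluetooth is switched on
PeerFinder.AlternateIdentities["Bluetooth:Paired"] = "";
try
{
    var pairedDevices = await PeerFinder.FindAllPeersAsync();
    //no exception was thrown: Bluetooth is on
}
catch (Exception ex)
{
    if ((uint)ex.HResult == 0x8007048F)
    {
        //Bluetooth is off
    }
}
```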
In a very similar way, we need to check whether the location setting is on:
This would work technically, but PositionStatus has six enumeration values. Also, as stated here in the Nokia Developer Wiki, this can be a battery-intensive call (depending on the implementation). I leave it to you which one you want to use.
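A minimal sketch of that location check (Geolocator and PositionStatus are in Windows.Devices.Geolocation):

```csharp
//minimal sketch: read the location status without starting a position request
Geolocator geolocator = new Geolocator();
if (geolocator.LocationStatus == PositionStatus.Disabled)
{
    //the location setting is off
}
```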
Back to the title of this post. Catching an exception to determine the status of wireless connections just seems wrong to me. I know this is a working “solution” and we can use it. But it could have been implemented better (for example, like the networking API).
I took my first steps with speech recognition in the last few days, coming to the point where I had to select items from a (Rad)ListPicker control (I tried both the Windows Phone Toolkit one and the Telerik one).
I then realized that setting the SelectedItem or the SelectedIndex does not work. By my app’s design, the ListPicker has the IsExpanded property set to true, and that was the problem: in this state, it accepts only touch input.
Today I found a quick solution for this problem. If you rely on the input of the ListPicker control, just make sure that its IsExpanded state is false. You will then be able to set the SelectedItem or the SelectedIndex via code.
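In code, that boils down to something like this (MyListPicker is an assumed control name):

```csharp
//collapse the picker first - in the expanded state it only accepts touch input
MyListPicker.IsExpanded = false;
//now the selection can be set from code
MyListPicker.SelectedIndex = 0;
//or, alternatively:
//MyListPicker.SelectedItem = someItem;
```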
As always, I hope this will be helpful for some of you.
In one of my recent projects, I added speech recognition to start the app as well as for using the app. This blog post gives you a short overview on what’s possible and how to do it.
How to start your app with certain conditions:
Any app can be started by the simple voice command “open [AppName]” on Windows Phone. But what if we want to start certain functions when calling our app via speech? Then you’ll need a so-called Voice Command Definition (VCD) file. This is an XML file that tells the OS to launch a certain function of your app.
Before we can start, please make sure you have enabled the ID_CAP_SPEECH_RECOGNITION, ID_CAP_MICROPHONE, and ID_CAP_NETWORKING capabilities in your app manifest.
Let’s have a look at a VCD with two different launch arguments:
<?xml version="1.0" encoding="utf-8"?>
<VoiceCommands xmlns="http://schemas.microsoft.com/voicecommands/1.0">
  <CommandSet xml:lang="en-US">
    <CommandPrefix>speech test app</CommandPrefix>
    <Example>speech test app</Example>
    <Command Name="open MainPage">
      <Example>open the MainPage</Example>
      <ListenFor>open the MainPage</ListenFor>
      <ListenFor>bring me to Mainpage</ListenFor>
      <ListenFor>let me see my MainPage</ListenFor>
      <Feedback>loading Mainpage....</Feedback>
      <Navigate />
    </Command>
    <Command Name="open SettingsPage">
      <Example>open the settings page</Example>
      <ListenFor>bring me to settings page</ListenFor>
      <ListenFor>let me see my settings page</ListenFor>
      <Feedback>opening settings ...</Feedback>
      <Navigate />
    </Command>
  </CommandSet>
</VoiceCommands>
As you can see, the syntax is pretty easy to use. Here is a short explanation of the commands:
<CommandPrefix> tells the OS how to launch your app
<Command Name="xxx"> tells the OS which launch parameter should be passed to your app
<ListenFor> defines which spoken phrases the OS accepts to launch your app (a pretty important point)
<Example> text entered here is shown in TellMe’s speech window and on the “What can I say?” page
<Feedback> the answer TellMe gives to the user
The next part is to load the VCD file once the app starts. To do this, you will need to load the VCD file in the App constructor. I created an async Task for that:
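The loading task could look roughly like this (a sketch; the method and file names are assumptions, and VoiceCommandService comes from Windows.Phone.Speech.VoiceCommands):

```csharp
//sketch of a loading task called from the App constructor
//the file name VoiceCommandDefinition.xml is an assumption - use the name of your VCD file
private async Task InitializeVoiceCommands()
{
    await VoiceCommandService.InstallCommandSetsFromFileAsync(
        new Uri("ms-appx:///VoiceCommandDefinition.xml"));
}
```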
It is recommended to set the “Copy to Output Directory” property of the VCD file to “Copy if newer”.
Now that we have implemented and loaded our VCD file, let’s have a look at how we can open our app based on the arguments. Based on that VCD file, our app gets key/value pairs of the launch arguments. The implementation takes only a few lines of code:
string voiceCommandName = NavigationContext.QueryString["voiceCommandName"];
switch (voiceCommandName)
{
    case "open MainPage":
        //let the App launch normally
        //you can also pass other launch arguments via the VCD file that could launch other functions
        break;
    case "open SettingsPage":
        //let the App launch immediately to the settings page
        NavigationService.Navigate(new Uri("/SettingsPage.xaml", UriKind.Relative));
        break;
}
And with these few lines, we have already added individual voice commands for our app to the speech function of the OS.
Using your app with in-app voice commands:
We have now learned how to start our app with certain conditions. However, that does not yet cover using speech recognition within our app.
There are several ways to implement in-app voice commands. I will show you two of them.
First of all, using the Windows button does not work in-app. You will need to add a dedicated button to start the voice recognition.
Let’s start with a List<string> or string array, and launch the SpeechRecognizerUI:
//initialize speech recognition
SpeechRecognizerUI speech = new SpeechRecognizerUI();
List<string> InAppCommandList = new List<string>
{
    "goto home",
    "goto settings",
    "set name",
    "set age"
    //add more to that list as you need
};
//load the List as a Grammar
speech.Recognizer.Grammars.AddGrammarFromList("InAppCommands", InAppCommandList);
//show some examples what the user can say:
speech.Settings.ExampleText = " goto home, goto settings, set name, set age ";
try
{
    //get the result
    SpeechRecognitionUIResult result = await speech.RecognizeWithUIAsync();
    //delegate the results
    switch (result.RecognitionResult.Text)
    {
        case "goto home":
            //your code here
            break;
        case "goto settings":
            //your code here
            break;
        case "set name":
            //your code here
            break;
        case "set age":
            //your code here
            break;
    }
}
catch (Exception)
{
    //do something with the Exception
}
As you can see, there are only a few steps needed:
initialize speech recognition
load the List<string>/string as a Grammar
get the result
delegate the results
You might have noticed that I wrapped the code in a try/catch block. I had to, as I often got a NullReferenceException when the dialog was closed without voice input or when no text was recognized. This way, your app will not crash.
There is also another way to add a Grammar to the speech recognition engine: SRGS (Speech Recognition Grammar Specification), which is again an XML file that defines which input is accepted.
While I was working on this app, I noticed that it is pretty powerful – and complex. This is why I decided to mix both methods to get things done (you know, timelines and such). I will dive deeper into that topic and then write another blog post about it. But for now, I’ll show you a simple example.
In this case, I wanted the SpeechRecognizerUI to accept only numbers as input. This is what my SRGS file looks like:
If you add an SRGS file to your project, the correct structure is already implemented. You only need to set the root to the first rule you want to be executed. In my case, I defined a rule with the id Command in a public scope (this way, you would also be able to call it from another Grammar).
The rule returns items that occur at least once and at most 20 times. Basically, this defines the range of the input length.
To make the SpeechRecognitionUI accept only numbers, I defined another rule that contains a list of numbers. The <one-of> element defines that any of the numbers is accepted. With <ruleref uri="#Numbers"/> I am telling my app to accept any of the numbers, as often as they occur, up to the count of 20.
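Putting the description above together, such a grammar might look roughly like this (a sketch; the rule names follow the text, while the concrete digit items are an assumption):

```xml
<?xml version="1.0" encoding="utf-8"?>
<grammar version="1.0" xml:lang="en-US" root="Command"
         xmlns="http://www.w3.org/2001/06/grammar">
  <!-- root rule: accepts 1 to 20 numbers -->
  <rule id="Command" scope="public">
    <item repeat="1-20">
      <ruleref uri="#Numbers"/>
    </item>
  </rule>
  <!-- helper rule: any single digit -->
  <rule id="Numbers">
    <one-of>
      <item>0</item>
      <item>1</item>
      <item>2</item>
      <item>3</item>
      <item>4</item>
      <item>5</item>
      <item>6</item>
      <item>7</item>
      <item>8</item>
      <item>9</item>
    </one-of>
  </rule>
</grammar>
```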
Now let’s have a look on how to implement this Grammar into our app and connect it to our SpeechRecognizerUI:
SpeechRecognizerUI speech = new SpeechRecognizerUI();
//generate a number-only input scope based on a SRGS file
//the ms-appx:/// prefix will not work here and would throw a FileNotFoundException!
Uri NumberGrammarUriPath = new Uri("file://" + Windows.ApplicationModel.Package.Current.InstalledLocation.Path + @"/NumberOnlyGrammar.xml", UriKind.RelativeOrAbsolute);
//load the SRGS file as a Grammar
speech.Recognizer.Grammars.AddGrammarFromUri("NumberGrammar", NumberGrammarUriPath);
//get the result
SpeechRecognitionUIResult speechRecognitionResult = await speech.RecognizeWithUIAsync();
if (speechRecognitionResult.ResultStatus == SpeechRecognitionUIStatus.Succeeded)
{
    //using Regex to remove all whitespaces
    string NumberInputString = Regex.Replace(speechRecognitionResult.RecognitionResult.Text, @"\s+", "");
    //do something with the string
}
We are again calling the SpeechRecognizerUI, but this time we are loading our XML-formatted SRGS file as a Grammar. Notice that loading the file only works the way shown above. Using the “ms-appx:///” prefix will throw a FileNotFoundException and crash your app.
Instead of AddGrammarFromList we are using AddGrammarFromUri to load the file.
By checking the SpeechRecognitionUIStatus, we perform actions only if the recognition was successful based on our SRGS Grammar. If you want to perform other actions on a non-success status, you can handle the following enumeration values:
As you can see, basic speech recognition is fast to implement. As always, I hope this post is helpful for some of you.
I am currently working on my NFC app, and I want to make it easier for the end user to search for the AppId, which you need for the LaunchApp record. So I thought about a possible solution, and of course the easiest way is to search for the app.
If you only want to search the Windows Phone Store and display some search results, there is the MarketplaceSearchTask, which you can call as a launcher. The only problem is that you cannot get any values back into your app this way. But I found a way to get the results into my app. This post is about how I did it.
The first thing you will need to add to your project is the HtmlAgilityPack. It helps you parse links from an HTML-based document. Huge thanks to @AWSOMEDEVSIGNER (follow him!), who helped me get started with it and understand XPath. XPath is also important for the HAP to work with Windows Phone. You will need to add a reference to System.Xml.XPath.dll, which you will find in
%ProgramFiles(x86)%\Microsoft SDKs\Silverlight\v4.0\Libraries\Client or
%ProgramFiles%\Microsoft SDKs\Silverlight\v4.0\Libraries\Client
OK, once we have this, we can continue creating the search. Add a TextBox, a Button, and a ListBox to your XAML:
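A minimal sketch of that XAML (all control and handler names here are assumptions):

```xml
<!-- sketch: search box, search button, and a list for the parsed results -->
<StackPanel>
    <TextBox x:Name="SearchTextBox" />
    <Button x:Name="SearchButton" Content="search" Click="SearchButton_Click" />
    <ListBox x:Name="ResultListBox" />
</StackPanel>
```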
After creating this, we hook up the Click event of our Button to create our search. We are going to use the search on windowsphone.com to get all the information we want. You can parse any information that you need, like ratings etc., but we will focus on the AppId, the name of the app, and of course the store logo of each app.
First, we need to create the search Uri. The Uri is country-dependent, just like your phone. This is how we create the Uri:
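A sketch of how such a country-dependent Uri could be built (the exact query-string format of windowsphone.com is an assumption here and should be checked against a live search; SearchTextBox is an assumed control name):

```csharp
//use the phone's current culture (e.g. "en-us") as the region part of the Uri
string region = System.Globalization.CultureInfo.CurrentCulture.Name.ToLower();
//SearchTextBox is the assumed TextBox holding the search term
string searchTerm = Uri.EscapeUriString(SearchTextBox.Text);
Uri SearchUri = new Uri("http://www.windowsphone.com/" + region + "/store/search?q=" + searchTerm);
```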
After that, we use a WebClient to get the HTML string of the search. I used the WebClient as I want to make it usable on WP7.x and WP8.
//start WebClient (this way it will work on WP7 & WP8)
WebClient MyMPSearch = new WebClient();
//add this header to ensure that new results will be downloaded even if the search term has not changed
//(otherwise WP caching would keep the old result string)
MyMPSearch.Headers[HttpRequestHeader.IfModifiedSince] = DateTime.Now.ToString();
//download the string and add an EventHandler that is called once the download has completed
MyMPSearch.DownloadStringCompleted += new DownloadStringCompletedEventHandler(MyMPSearch_DownloadStringCompleted);
//SearchUri is the search Uri we created above
MyMPSearch.DownloadStringAsync(SearchUri);
In our DownloadStringCompleted event, we now parse the HTML string. The first thing we need to do is create an HtmlDocument that loads our string:
//HAP needs an HtmlDocument as it is based on Linq/XPath
HtmlDocument doc = new HtmlDocument();
doc.LoadHtml(e.Result);
The next step is a bit tricky if you do not know XPath. You need to go through all the HTML elements to find the one that holds the data you want to parse. In our case, it is the class called “medium” within the result table “appList”.
var nodeList = doc.DocumentNode.SelectNodes("//table[@class='appList']/tbody/tr/td[@class='medium']/a");
Note that you have to use ' instead of " for the class names in XPath. I recommend opening a sample search page in a browser and looking into the page's source view to find the right path.
Now that we have a NodeList, we can parse the data we want:
foreach (var node in nodeList)
{
    //get AppId from Attributes
    cutAppID = node.Attributes["data-ov"].Value.Substring(5, 36);
    //get ImageUri for Image Source
    var ImageMatch = Regex.Match(node.OuterHtml, "src");
    cutAppLogo = node.OuterHtml.Substring(ImageMatch.Index + 5, 92);
    //get AppTitle from node
    //beginning of the AppTitle string
    var AppTitleMatch = Regex.Match(node.InnerHtml, "alt=");
    var StringToCut = Regex.Replace(node.InnerHtml.Substring(AppTitleMatch.Index), "alt=\"", string.Empty);
    //end of the AppTitle string
    var searchForApptitleEnd = Regex.Match(StringToCut, "\">");
    //final app name - cutting away the rest of the HTML string at the index of searchForApptitleEnd
    //(otherwise the name would not display correctly)
    cutAppName = StringToCut.Remove(searchForApptitleEnd.Index);
}
As you can see, we need to perform some string operations, but this is the easiest way I found to get the result I want – within my app. As always, I hope this will be helpful for some of you.
As you may have noticed, I am currently working on an NFC app. Development goes pretty well at the moment, thanks to the absolutely awesome and easy to use NDEF library by Andreas Jakl.
If you want to open apps from your app or from an NFC tag, you will need the AppId of the desired app. For an app installed from the Windows Phone Store, this is pretty easy. Go to the application list on your phone, long-tap the app, and hit “send”. If you then choose mail or SMS, you can obtain the AppId very easily, as it is the last part behind “appId=” in the web address.
With the built-in apps, it is a bit more difficult. Luckily, the app NFC interactor for Windows Phone 8, which is aimed at developers, has a solution. The app, also written by Andreas Jakl, is a huge tool that supports you in developing your own NFC app, and it is worth every cent.
I went through all records for built-in apps and extracted the following list, which might come in handy for some of you: