How to use the .NET CLI clean-up tools on macOS

Earlier this week, the .NET SDK and Runtimes received some updates, and Visual Studio for Mac was updated along with them. Once I had installed all the updates, both Visual Studio and Rider were no longer restoring the required NuGet packages for my .NET MAUI project running on .NET 6.

I eventually fixed that issue by cleaning up all the .NET SDKs, Runtimes, workloads and NuGet caches on my MacBook Pro. Read on to learn about the tools I used.

.NET uninstall tool

I have used the .NET uninstall tool in the past. Unlike on Windows, on macOS you have to download the executable from GitHub as a zip archive.

The releases page shows some Terminal commands to unpack and run the tool, but they never worked for me as stated there. I was able to create the new directory with the mkdir command, but the unpacking always ended in an error. So I opened up Finder and unzipped the archive manually with the Archive Utility app that ships with macOS.

dotnet-core-uninstall unzipping
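If you prefer to stay in the Terminal, macOS’s built-in unzip should do the job as well. This is just a sketch, assuming the archive landed in your Downloads folder under this name (adjust it to the release you grabbed):

mkdir -p ~/dotnet-core-uninstall
unzip ~/Downloads/dotnet-core-uninstall.zip -d ~/dotnet-core-uninstall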

After switching to the folder in Terminal, running the tool is supposed to show its help. Instead, I got an error telling me that I am not allowed to run this app for security reasons; the OS blocks the execution. If the same happens for you, right-click on the extracted executable and select “Open With” followed by “Terminal.app (default)”. This will prompt you with this screen:

app downloaded from the internet message

Once you click on “Open”, a new Terminal window appears. Close this window; it is unusable, as the process has already exited. Instead, open a new Terminal window, change to the installation folder and call the help command:

cd ~/dotnet-core-uninstall
./dotnet-core-uninstall -h
Terminal with dotnet-core-uninstall help
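As an alternative to the right-click dance, removing Gatekeeper’s quarantine attribute from the executable should have the same effect. This is an assumption on my side rather than the documented route, so use it at your own risk:

xattr -d com.apple.quarantine ~/dotnet-core-uninstall/dotnet-core-uninstall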

Now that we are able to run the tool, let’s have a look at what is installed by running the dotnet --list commands. We need to call the command twice, once for the installed SDKs and once for the installed runtimes:

dotnet --list-sdks
dotnet --list-runtimes

You may be as surprised as I was at how many versions you accumulate over time. They never get removed by newer versions (that’s by design, according to Microsoft). To get rid of all versions except the latest, run the following commands with the uninstall tool (again once for --sdk, once for --runtime):

sudo ./dotnet-core-uninstall remove --all-but-latest --sdk
sudo ./dotnet-core-uninstall remove --all-but-latest --runtime
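If you want to double-check what exactly would be removed before pulling the trigger, the tool also offers a dry-run command that accepts the same options as remove (at least in the version I used):

./dotnet-core-uninstall dry-run --all-but-latest --sdk
./dotnet-core-uninstall dry-run --all-but-latest --runtime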

After uninstalling all previous versions, you may have to reinstall the latest .NET 6 SDK. You could also use the --all-but [Versions] option to specify the versions to remove explicitly. No matter which way you go, if you run the dotnet --list commands again, you should see something similar to this:

dotnet --list command
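If you go for the explicit route with --all-but instead, the call could look like this (the version number is just a placeholder; use the ones from your --list output):

sudo ./dotnet-core-uninstall remove --all-but 6.0.301 --sdk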

Download: https://github.com/dotnet/cli-lab/releases

Documentation: https://learn.microsoft.com/en-us/dotnet/core/additional-tools/uninstall-tool?tabs=macos#step-3---uninstall-net-sdks-and-runtimes

dotnet workload command

As I had problems getting the required NuGet packages for my MAUI app, I decided to uninstall all .NET MAUI workloads as well. First, I had a look at what is installed with the list command:

dotnet workload list
dotnet workload list result

Once you have that list, you need to call the uninstall command for every single installed workload:

sudo dotnet workload uninstall macos maui-maccatalyst maui-ios maui-android ios maccatalyst maui tvos android

Once they are uninstalled, I cleared the Terminal and installed them all again using the install command:

sudo dotnet workload install macos maui-maccatalyst maui-ios maui-android ios maccatalyst maui tvos android
dotnet workload install result in Terminal

Now we have the latest .NET MAUI workload installed, as well as the platform-specific workloads.
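A side note from my experiments: if your workload installation is merely broken rather than outdated, the SDK also ships repair and update commands, which might save you the full uninstall/reinstall round trip:

sudo dotnet workload repair
sudo dotnet workload update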

Documentation: https://learn.microsoft.com/en-us/dotnet/core/tools/dotnet-workload

dotnet nuget locals

The final clean-up step involves all NuGet caches on your machine. Yes, you read that right: there are multiple caches. To see them all, run the following command:

dotnet nuget locals all --list

This will get you something like this:

dotnet nuget locals cache results in terminal

Now let’s get rid of all those old NuGet packages:

sudo dotnet nuget locals all --clear

If you’re lucky, you will see this message:

local nuget caches cleared in terminal

My first attempt was not that successful. I needed to open the global packages folder in Finder and delete some remaining packages manually. Only after that was I able to run the clear command successfully.
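In case you hit the same snag: the global packages folder on macOS lives by default at ~/.nuget/packages. You can also inspect and clear just that single cache instead of all of them:

dotnet nuget locals global-packages --list
sudo dotnet nuget locals global-packages --clear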

Conclusion

Neither Visual Studio nor the .NET installer perform clean-up tasks on macOS. Until Microsoft changes their mind here, we will have to clean up old packages manually to keep our systems running smoothly. Luckily, there are at least CLI tools around to help us with that job. As always, I hope this blog post will be helpful for some of you.

Until the next post, happy coding, everyone!

Posted by msicc in Dev Stories, macOS, MAUI, Xamarin, 1 comment
#CASBAN6: Setting up an Azure Functions project for the API

Now that we have our Entity Framework project in place and our DTO mappings ready, it is finally time to create the API for our blogging engine. I am using Azure Functions for this.

Out-of-Process vs. In-Process

Azure Functions can run either in-process or in an isolated worker process. The isolated model decouples the function code from the Azure Functions host process, which enables additional features like custom middleware and Dependency Injection. Besides that, it allows you to run non-LTS .NET versions, which can be helpful sometimes. These were the main reasons for choosing the out-of-process model. If you want to learn more about that topic, I recommend reading the docs.

Create the project

As you may have noticed, I recently became kind of a fan of JetBrains’ Rider IDE. Some steps may be different if done in Visual Studio, but you will be able to follow along.

First, make sure you have the Azure plugin installed. Go to the Settings and select Plugins in the list on the left-hand side. Search for ‘Azure’ and install the Azure Toolkit for Rider. You will need to restart the application.

JetBrains Rider Plugin Settings

Once you have the plugin installed, open your solution and create a new project in it (I made it in a separate folder). Select the Azure Functions template on the left.

JetBrains Rider New Project dialog with Azure Functions selected.

I named the project BlogFunctions. Select the Isolated worker runtime option, and for the Framework, we keep .NET 6 for the time being.

NuGet packages and project references

To enable all the functionalities we are going to use and add, we need some NuGet packages:

  • Microsoft.Azure.Functions.Worker, Version=1.10.0
  • Microsoft.Azure.Functions.Worker.Sdk, Version=1.7.0
  • Microsoft.Azure.Functions.Worker.Extensions.Http, Version=3.0.13
  • Microsoft.Azure.Functions.Worker.Extensions.ServiceBus, Version=5.7.0
  • Microsoft.Extensions.DependencyInjection, Version=6.0.1
  • Microsoft.Azure.Core.NewtonsoftJson, Version=1.0.0

Please note that I use the latest versions that support .NET 6, not the .NET 7 ones. We also need to reference the projects we created earlier, as you can see in this picture:

Project and Package references in the Function app
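If you prefer to edit the project file directly, the references boil down to something like the following snippet. The project names and paths are assumptions based on my solution layout, so adjust them to yours:

<ItemGroup>
  <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.10.0" />
  <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.7.0" />
  <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.Http" Version="3.0.13" />
  <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.ServiceBus" Version="5.7.0" />
  <PackageReference Include="Microsoft.Extensions.DependencyInjection" Version="6.0.1" />
  <PackageReference Include="Microsoft.Azure.Core.NewtonsoftJson" Version="1.0.0" />
</ItemGroup>
<ItemGroup>
  <ProjectReference Include="..\MSiccDev.ServerlessBlog.EntityModel\MSiccDev.ServerlessBlog.EntityModel.csproj" />
  <ProjectReference Include="..\MSiccDev.ServerlessBlog.EFCore\MSiccDev.ServerlessBlog.EFCore.csproj" />
</ItemGroup>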

Program.cs

Now we can finally have a look at our Program.cs file. I am not using top-level statements here, but feel free to do so if you want; the code doesn’t change, just the surroundings.

To get our application running, we need to create a new HostBuilder object. I still prefer Newtonsoft.Json over System.Text.Json, so let’s add that one first:

IHost? host = new HostBuilder().ConfigureFunctionsWorkerDefaults(worker => worker.UseNewtonsoftJson()).Build();

host.Run();

In order to be able to use the Entity Framework project we created earlier, we need to add a connection string and also configure the application to instantiate our DbContext. Update the code above as follows:

string? sqlConnectionString = Environment.GetEnvironmentVariable("SqlConnectionString");

IHost? host = 
	new HostBuilder().
		ConfigureFunctionsWorkerDefaults(worker => worker.UseNewtonsoftJson()).              
		ConfigureServices(services =>
		{
			if (!string.IsNullOrWhiteSpace(sqlConnectionString))
				services.AddDbContext<BlogContext>(options =>
					options.UseSqlServer(sqlConnectionString));	              
		}).
		Build();

host.Run();

The connection string will be read from the local.settings.json file locally and from the Function app’s configuration on Azure. For the moment, just add your local ConnectionString:

{
    "IsEncrypted": false,
    "Values": {
        "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated",
        "AzureWebJobsStorage": "UseDevelopmentStorage=true",
        "SqlConnectionString": "Data Source=localhost;Initial Catalog=localDB;User ID=sa;Password=thisShouldB3Stronger!"
    }
}

Side note: If you have a look into the GitHub repo, you will see that there are some entries for OpenAPI in both files. The OpenAPI integration will get its own blog post later in this blog series, but for this post, they are not important.
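With the DbContext registered in the service collection, our function classes will later be able to request it through plain constructor injection. Just to illustrate the idea (the class, function and route names below are made up for this post and the namespaces are assumed from the earlier posts, so this is not the final implementation of the series):

using System.Collections.Generic;
using System.Net;
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;
using Microsoft.EntityFrameworkCore;
using MSiccDev.ServerlessBlog.EFCore;
using MSiccDev.ServerlessBlog.EntityModel;

public class BlogFunction
{
    private readonly BlogContext _blogContext;

    // the worker resolves BlogContext from the services registered in Program.cs
    public BlogFunction(BlogContext blogContext) =>
        _blogContext = blogContext;

    [Function("GetBlogs")]
    public async Task<HttpResponseData> GetBlogs(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "blogs")] HttpRequestData req)
    {
        List<Blog> blogs = await _blogContext.Blogs.ToListAsync();

        HttpResponseData response = req.CreateResponse(HttpStatusCode.OK);
        await response.WriteAsJsonAsync(blogs);
        return response;
    }
}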

Conclusion

In this post, we had a look at how to set up an Azure Functions app with Newtonsoft.Json and our Entity Framework DbContext. In the next post, we will have a look at some extensions that will be helpful, as well as the base class implementation of most functions within the app. As always, I hope this post was helpful for some of you.

Until the next post, happy coding, everyone!

Posted by msicc in Azure, Dev Stories, 2 comments
My annual review (2022) [Editorial]

At the beginning of 2022, I would never have expected such a turbulent year. After all, we had more or less gotten past the pandemic, and there were at least some positive signs sparking hope. Pretty quickly, though, the year turned into a memorable one, but for very ugly reasons.

Ukraine war

Russia began a war in Ukraine. We all have to battle the consequences of this war. Because of the ongoing war, the cost of living has risen in all areas, be it groceries, energy or entertainment. No day passes by without getting notified about the horrible things Russian troops are doing in Ukraine. I still pray (and you should too, if you’re into that) for the war to finally be over soon.

Twitter take-over

The second impactful event was the take-over of Twitter by Elon Musk. While he made Tesla a profitable company by focusing on the product and made space travel less expensive with SpaceX, he is currently about to destroy Twitter. I was still somewhat neutral back in April when the deal became more real, but I lost hope for my favourite social network the day he fired half of the company’s staff.

At the time of the take-over, I was actively working on my app TwistReader, a reader app for Twitter lists. I already had a beta running on TestFlight when things began to turn bad on Twitter. After UniShare (which was in the process of being ported to Android and iOS when it died), I had to take the tough decision to let this app go as well. I have already cancelled the domain I bought for the app and shut down all Azure resources. If someone wants to continue the project, I am open to talk about it.

TwistReader promotional image

This is now the second time I have had to stop an app for social media. Ultimately, I decided that I will not develop against any of the social networks from now on (even though I have several ideas to improve my social flow).

As we all know, things on Twitter aren’t getting better. My presence on the bird site now serves solely as a pointer to the other social media I am active on. I decided not to delete my two main accounts, but to lock them for new followers, and I stopped using the service. I am mostly active on Mastodon, followed by LinkedIn (although the latter needs some more attention).

NASA is flying to the moon again

Besides all the negative stuff, there was also some good news for all of us space fans. NASA finally sent a spaceship to the Moon again. They played it safe and did an uncrewed launch, letting the capsule orbit the Moon and come back to Earth. They took some really awesome photos along the way, and the mission was a full success.

New blog series #CASBAN6

Besides working on TwistReader, I also started to move my portfolio website away from WordPress to a self-written website in ASP.NET Core with Razor Pages. The site itself is already published, with links to my apps in the stores, but the news section still needs a blog. I evaluated all the options, like existing CMS plugins and other blogging platforms.

In the end, I opted for learning something new while using bits of what I already know, and I started my recent #CASBAN6 blog series about creating a serverless blog engine on Azure. This is now my main side project.

Other dev stuff

While I am focusing on the serverless blog engine, I also have some libraries I made and use internally for Xamarin.Forms that I need to port to .NET MAUI. Some parts can be easily removed and replaced with Essentials and CommunityToolkit. There is still plenty of code worth porting left, though.

At work, I broke up the internally used libraries to be more modular and finished implementing the service templates that use them. I also continued to push source control management within the team. Besides that, I wrote some interfaces for our customers that took advantage of these things, but needed additional items as well. Overall, I was able to use some of my learnings at work and vice versa.

I also decided not to renew my Parallels subscription. I used it around 10 times throughout the year, which is not worth paying more than 100 bucks for the yearly licence.

I will use the freed-up budget to buy a JetBrains Ultimate licence instead, which I recently started to use. The experience of writing code is far ahead of what Microsoft offers with Visual Studio for Mac, so I guess that’s a good investment.

Sports

If you have been following along for some time, you may know that I only became a non-smoker again (after 25 years of chain-smoking) two years ago. In terms of sports, I took part in three challenges this year: the Run4Fun (6.8 km), the 10 km at the Winterthur marathon, and the Kyburglauf 2022 (10.3 km, including 425 stairs right at the end of km 10). If you want to follow my running adventures, you can find me here on Strava.

Me running the 10 km at Winterthur Marathon 2022

Outlook into 2023

Next year, the roller coaster ride continues. I will start a new role in March as a .NET mobile developer at Galliker Switzerland, which is one of the leading companies in logistics. They have a Xamarin.Forms code base and have started the transition to .NET MAUI. There will be projects where I will have to do API and web stuff as well, so this new position will help me move towards my goal of becoming a full stack .NET developer. Another plus is that I am free to choose my preferred IDE, which will most probably be Rider after my recent experiences with it.

Of course, I will continue with my #CASBAN6 project as well. As I stated in my last post in the series, the Azure Functions part is coming up next. I will have some posts on that topic alone, but I will also keep developing it further until the final product is ready to be used in production.

Besides that, I will start to port my Fishing Knots app to .NET MAUI, which will help me to learn the upgrade process and make the app ready for the future.

In terms of sports, I will continue with running, starting with a focus on improving my average pace to get permanently below 5 min/km. On top of that, I want to run a half-marathon at the end of the next season. I will give runningCoach another try – hopefully they will be able to import my Strava results correctly this time.

Conclusion

What was your 2022 like? What are you all looking forward to in 2023? Feel free to get in contact via my social media accounts or the comments section below.

What’s left is to wish all of you a

Merry Christmas and a Happy New Year!

Posted by msicc in Editorials, 0 comments
#CASBAN6: Implementing the data model using EntityFramework Core (separate libraries)

EntityModel library

When I started the project, I began by creating the classes for all the tables that I need for my blog engine. I have put them into their own library to keep things clean.

The classes also use ICollection references for relationships whenever required. Let’s have a look at the Blog class:

public class Blog 
{
    public Guid BlogId { get; set; }

    public string Name { get; set; }

    public string Slogan { get; set; }

    public Uri LogoUrl { get; set; }

    public ICollection<Post> Posts { get; set; }

    public ICollection<Author> Authors { get; set; }

    public ICollection<Tag> Tags { get; set; }

    public ICollection<Medium> Media { get; set; }

}

The other classes are implemented similarly to reflect the data model I showed you in my last post. You can have a look at the other class implementations in the GitHub repo (folder: EntityModel).
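To give you an idea without jumping into the repository right away, the Post class follows the same pattern. Here is a rough sketch based on the data model from the last post; the property names are my shorthand and may differ slightly from the actual implementation:

public class Post
{
    public Guid PostId { get; set; }

    public Guid BlogId { get; set; }
    public Blog Blog { get; set; }

    public string Title { get; set; }

    public string Content { get; set; }

    public string Slug { get; set; }

    public DateTimeOffset Published { get; set; }

    public DateTimeOffset LastModified { get; set; }

    public Author Author { get; set; }

    public ICollection<Tag> Tags { get; set; }

    public ICollection<Medium> Media { get; set; }
}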

EFCore library

The EFCore library has three main components:

  1. BlogContext
  2. Configurations
  3. Seed extension

BlogContext

The BlogContext is straightforward and follows the pattern described here in the documentation for using a factory (spoiler: we will do that later):

public sealed class BlogContext : DbContext
{
    public BlogContext(DbContextOptions<BlogContext> options) : base(options)
    {

    }

    public DbSet<Blog> Blogs { get; set; }
    public DbSet<Post> Posts { get; set; }
    public DbSet<Author> Authors { get; set; }
    public DbSet<MSiccDev.ServerlessBlog.EntityModel.Medium> Media { get; set; }
    public DbSet<MediumType> MediaTypes { get; set; }
    public DbSet<Tag> Tags { get; set; }
}

The class declares a constructor that takes a DbContextOptions<BlogContext> parameter, which allows our factory to configure the context later on for the migrations. Of course, we also need the DbSet properties to be able to access all the tables via the BlogContext instance.

Configurations

In Entity Framework, we can configure our tables with configurations. By implementing the IEntityTypeConfiguration interface for each of our models in a separate file, we continue to keep our code clean and easily maintainable. Here is how the implementation for the Blog table looks:

public class BlogConfiguration : IEntityTypeConfiguration<Blog>
{
    public void Configure(EntityTypeBuilder<Blog> builder)
    {
        builder.Property(nameof(Blog.BlogId)).
            IsRequired();

        builder.Property(nameof(Blog.BlogId)).
            ValueGeneratedOnAdd();

        builder.HasKey(blog => blog.BlogId).
            HasName($"PK_{nameof(Blog.BlogId)}");

        builder.Property(nameof(Blog.Name)).
            HasMaxLength(255).
            IsRequired();

        builder.Property(nameof(Blog.Slogan)).
            HasMaxLength(255).
            IsRequired();

        builder.Property(nameof(Blog.LogoUrl)).
            IsRequired();
    }
}

Most of the fluent implementations above are self-explanatory. The other classes of the project are implemented in a similar way; you can find them here in the GitHub repository.

I implemented my configurations by following the docs, which I absolutely recommend reading.

To apply the configurations, we need to override the OnModelCreating method:

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.ApplyConfiguration(new BlogConfiguration());
    modelBuilder.ApplyConfiguration(new MediumTypeConfiguration());
    modelBuilder.ApplyConfiguration(new MediumConfiguration());
    modelBuilder.ApplyConfiguration(new AuthorConfiguration());
    modelBuilder.ApplyConfiguration(new TagConfiguration());
    modelBuilder.ApplyConfiguration(new PostConfiguration());
}

Seed extension

To verify our configurations are working, we need some test data. This is where the seeding feature of EF Core comes in handy, and it helped me to improve my data model a lot. I am using the extension method approach here to implement a blog with three test posts, including all relations, constraints, and property configurations. You can find the full implementation here in the Github repository.
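To give you an impression of what such an extension method looks like, here is a heavily trimmed-down sketch. The GUID and the values are placeholders; the real seed in the repository is a lot more complete:

using System;
using Microsoft.EntityFrameworkCore;
using MSiccDev.ServerlessBlog.EntityModel;

public static class ModelBuilderExtensions
{
    public static void Seed(this ModelBuilder modelBuilder)
    {
        // a fixed Guid keeps the seed stable across migrations
        Guid blogId = Guid.Parse("b3a1f6c2-8a2a-4d6d-9f25-0d3c2f1a2b3c");

        modelBuilder.Entity<Blog>().HasData(new Blog
        {
            BlogId = blogId,
            Name = "Test Blog",
            Slogan = "Seeded for testing",
            LogoUrl = new Uri("https://localhost/logo.png")
        });

        // authors, posts, tags and media are seeded the same way,
        // referencing blogId as their foreign key
    }
}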

Besides the docs on EF Core data seeding, a couple of other links helped me to understand and write my seed implementation.

With this extension method in place, applying the seed is just one line of code at the end of the OnModelCreating override:

modelBuilder.Seed();

Now that we have everything together, we can finally turn to actually migrating our model to the database.

EFCore.DesignDummy library

Because I am running this whole thing on a Mac, I need to use the CLI tools for all migrations. To keep this step in its own library as well, I created a DesignDummy library which I use for pushing the migrations to my local database.

The library requires a reference to the EFCore library as well as to the EntityModel library. On top of that, we need the Microsoft.EntityFrameworkCore.Design NuGet package.

Now that our dependencies are in place, we just need to create an implementation of the IDesignTimeDbContextFactory interface as described here in the docs:

public class BlogContextFactory : IDesignTimeDbContextFactory<BlogContext>
{
    public BlogContext CreateDbContext(string[] args)
    {
        BlogContext? instance = null;

        var optionsBuilder = new DbContextOptionsBuilder<BlogContext>();

        optionsBuilder.UseSqlServer(dbContextBuilder =>
            dbContextBuilder.MigrationsAssembly("EFCore.DesignDummy")).
            EnableSensitiveDataLogging();

        instance = new BlogContext(optionsBuilder.Options);

        return instance;
    }
}

Now let’s create our first migration and push it to the database (find the docs here). In your terminal window, change to the folder of your dummy project. Once you’re in the correct folder, create a new migration with the add command:

dotnet ef migrations add {MigrationName}

To push the migration you just created to the database, use the update command with the connection parameter:

dotnet ef database update --connection 'Data Source=localhost;Initial Catalog=localDB;User ID=sa;Password=thisShouldB3Stronger!'

If all goes well, you should now be able to view your database with the seeded data:

database seeded
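Two more commands that came in handy while I was iterating on the model: the list command shows the migrations created so far, and the remove command deletes the last migration again, as long as it has not been applied to the database yet:

dotnet ef migrations list
dotnet ef migrations remove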

Conclusion

In this post, I showed you how to create the models for the database and their matching IEntityTypeConfiguration implementations. We learned how to create an IDesignTimeDbContextFactory and how to add migrations and push them to the database. The full code is on GitHub for your further exploration.

As always, I hope this post is helpful for some of you.

Until the next post, happy coding, everyone!

Posted by msicc in Azure, Database, Dev Stories, 3 comments
#CASBAN6: the data model explained

Preface

Initially, this post was supposed to be about the design and direct implementation of the data model for my serverless blog engine. As the data model became a bit more complex, I decided to split it into two posts. The first aims to explain the data model, while the second post covers the implementation with Entity Framework Core.

The data model

A picture is worth a thousand words, they say. So here is a complete picture of the data model:

CASBAN6 data model

For the rest of this post, I will go through the model table by table and tell you a sentence or two about each of them.

The tables

__EFMigrationsHistory

This table just stores all the MigrationIds and is handled by Entity Framework. I recommend not touching this table.

Blogs

In theory, the database could hold more than one blog. By adding a new row to this table, you are creating a new blog in the database. The BlogId is essential for a range of other tables.

Authors

To be able to publish blog posts, our blog needs at least one author. As you may have more than one person writing for your blog, a collection of authors can be saved within this table. One author can only be assigned to one blog (at least for now; this is intentional).

Posts

The content fuelling our blog lives in the Posts table. One post can only belong to one blog. The published date will be set on insert; all subsequent changes will modify the LastModified column. Also, one post can only have one author. The author can be replaced by updating the row in the table.

The slug can be used to create a human-readable URL for the post (instead of the PostId, which is a GUID). The slug must be unique across all blogs.

Tags

In order to group and categorize posts, I decided to go with a tags-only approach (unlike other platforms, which allow both categories and tags). Tags are unique to a blog, but can be used in multiple posts.

Media

Of course, our blog should also support media content like images, videos and other types. I opted for a URL-based approach, which makes it easier to add content from other platforms (like videos hosted on dedicated platforms). A medium can be used in several posts.

MediaTypes

To make it a bit easier to determine a medium’s type, I added the MediaTypes table. It holds information about the MIME type and possibly also the encoding of a file. The uniqueness is based on the MIME type.

Mapping tables

As we already learned above, both tags and media can be used in multiple posts. At the same time, posts can have multiple media and also multiple tags. To cover these many-to-many relationships, I use two mapping tables with a composite primary key each to ensure their uniqueness across the blog.
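Just to illustrate the idea (the actual Entity Framework Core implementation follows in the next post), such a mapping table boils down to the two foreign keys, which together form the composite primary key. The names below are only for illustration:

public class PostTagMapping
{
    // PostId and TagId together form the composite primary key,
    // so the same tag can be attached to a post only once
    public Guid PostId { get; set; }
    public Guid TagId { get; set; }
}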

Conclusion

In this post, I outlined the data model of my serverless blog engine. In the next post, I will show you the implementation of this model with Entity Framework Core.

Until the next post, happy coding, everyone!

Posted by msicc in Azure, Database, Dev Stories, 2 comments
#CASBAN6: Creating A Serverless Blog on Azure with .NET 6 (new series)

Motivation

I had been planning to run my blog without WordPress for quite some time. For one, because WordPress is really bloated as a platform. The second reason is more of a practical nature: this project gives me lots of opportunities to improve my programming skills. I had already started to move my developer website away from WordPress to ASP.NET Core with Razor Pages. Eventually, I arrived at the point where I needed to implement a blog engine for the news section. So, I have two websites (including this one here) that will take advantage of the outcome of this journey.

High Level Architecture

Now that the ‘why’ is clear, let’s have a look at the ‘how’:

There are several layers in my concept. The data layer consists of a serverless MS SQL instance on Azure, which I will work with via Entity Framework Core and Azure Functions for all the CRUD operations of the blog. On top of that, I will use the powers of Azure API Management, which will allow me to provide a secure layer for the clients – of course, an ASP.NET Core website with Razor Pages, flanked by a .NET MAUI admin client (no web administration). Once the former two are done, I will also add a mobile client for this blog. It will be the next major update for my existing blog reader that is already in the app stores.

For comments, I will use Disqus. This way, I have a proven comment system where anyone can use their favorite account to participate in discussions. Disqus also has an API, so there is a good chance that I will be able to integrate it into the desktop and mobile clients.

Last but not least, there are (for now) two open points – performance measuring/logging and notifications. I haven’t decided yet how to implement these – but I guess there will be an Azure based implementation as well (until there are good reasons to use another service).

Open Source

Most of the software I will write and blog about in this series will be publicly available on GitHub. You can already find the repository there, including the code for the next two upcoming blog posts.

Index

I will update this blog post regularly with links to new entries of the series.

Additional note

Please note that I am working on this in my spare time. This may result in delays between the blog posts and the updates committed into the repository on GitHub.

Until the next post – happy coding, everyone!


Title Image by Roman from Pixabay

Posted by msicc in Android, Azure, Dev Stories, iOS, MAUI, Web, 2 comments
Invoke platform code in a MAUI app using the built-in Dependency Injection

In Xamarin.Forms, my internal MVVM libraries helped me to keep my applications cleanly structured and abstracted. I recently started the process of porting them over to .NET MAUI. I quickly reached the point where I needed to invoke platform-specific code, so I read up on the documentation.

The suggested way

The documentation suggests creating a partial class with partial method definitions and a corresponding partial class with the partial method implementations per platform (as described in the docs). I tried to follow the above-mentioned MAUI documentation, copy/pasted the code sample and thought everything was going to be fine. Well, it was not. I wasn’t even able to compile the solution with that code on my Mac in the first place.

Searching for a possible cause, I did not find a solution immediately. In the end, it turned out that I needed to implement the partial class method for all platforms specified in the TargetFrameworks within the .csproj file. It should have been obvious, given that MAUI is a single project with multiple target frameworks, but it wasn’t on that day.

On top of that, Visual Studio made some strange changes to the .csproj file, adding unnecessary None, Compile and Include entries that should not be generated explicitly, which added a lot to my confusion as well. After removing them from the project file and adding an implementation for all platforms, I was able to compile and test the code from the docs.
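For reference, the approach from the docs boils down to a partial class in the shared code plus one implementing part per platform folder. This is a simplified sketch along the lines of the docs’ sample, not my actual code:

// shared code, e.g. DeviceInfoService.cs
namespace MAUIDITest.PartialDemo
{
    public partial class DeviceInfoService
    {
        public partial string GetPlatformName();
    }
}

// Platforms/MacCatalyst/DeviceInfoService.cs - repeat for every target framework!
namespace MAUIDITest.PartialDemo
{
    public partial class DeviceInfoService
    {
        public partial string GetPlatformName() => "MacCatalyst";
    }
}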

But I love my interfaces!

And that’s why I did not stop there. Following the abstraction approach, interfaces allow us to define the common surface of the API without worrying about the implementation details. That’s not the case for the partial classes approach, as the problems I ran into showed.

Luckily for us, .NET MAUI supports multi-targeting. This means an interface can have a platform-specific implementation while being defined in the shared part of the application. If you have used the MSBuildExtras package before, you already know how that works. Best part: .NET MAUI already provides the multi-targeting configuration out of the box.
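You can see that configuration in the TargetFrameworks property of the single project file; in a default .NET 6 MAUI project it looks roughly like this (the Windows target sits behind a condition):

<TargetFrameworks>net6.0-android;net6.0-ios;net6.0-maccatalyst</TargetFrameworks>
<TargetFrameworks Condition="$([MSBuild]::IsOSPlatform('windows'))">$(TargetFrameworks);net6.0-windows10.0.19041.0</TargetFrameworks>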

Show me some code!

First, let’s define a simple interface for this exercise:

namespace MAUIDITest.InterfaceDemo
{
    public interface IPlatformDiTestService
    {
        string SayYourPlatformName();
    }
}

Now we are going to add the platform-specific implementations. Go to the first platform’s folder and add a new class named PlatformDiTestService. Then – and this is really important to make it work – adjust the namespace to be the same as the one of the interface. Last, but not least, implement the interface, for example like this:

namespace MAUIDITest.InterfaceDemo
{
    public class PlatformDiTestService : IPlatformDiTestService
    {
        public string SayYourPlatformName()
        {
            return "I am MacOS!";
        }
    }
}

Repeat this for all platforms, and replace the platform’s name accordingly.

Using Dependency Injection in MAUI

If you have been following along with my past blog posts, you know that I recently switched to the CommunityToolkit MVVM (read more here and here). I had already heard that MAUI would get the same DI container built in, so the choice was obvious. Now let’s have a look at how easily we can inject our interface into our ViewModel. Head over to your MauiProgram.cs file and update the CreateMauiApp method:

	public static MauiApp CreateMauiApp()
	{
		var builder = MauiApp.CreateBuilder();
		builder
			.UseMauiApp<App>()
			.ConfigureFonts(fonts =>
			{
				fonts.AddFont("OpenSans-Regular.ttf", "OpenSansRegular");
				fonts.AddFont("OpenSans-Semibold.ttf", "OpenSansSemibold");
			});

		//lowest dependency
		builder.Services.AddSingleton<IPlatformDiTestService, PlatformDiTestService>();
		//relies on IPlatformDiTestService
		builder.Services.AddTransient<MainPageViewModel>();
		//relies on MainPageViewModel
		builder.Services.AddTransient<MainPage>();

		return builder.Build();
	}

First, add the registration of the interface and its implementation. In the sample above, the MainPageViewModel relies on the interface and gets it automatically injected by the DI container. For testing purposes, I even inject the MainPageViewModel into the MainPage‘s constructor. This is very likely to change in a real-world application.

For completeness, here is the MainPageViewModel class:

using System;
using System.ComponentModel;
using System.Windows.Input;
using MAUIDITest.InterfaceDemo;

namespace MAUIDITest.ViewModel
{
    public class MainPageViewModel : INotifyPropertyChanged
    {
        private readonly IPlatformDiTestService _platformDiTestService;
        private string sayYourPlatformNameValue = "Click the 'Reveal platform' button";
        private Command _sayYourPlatformNameCommand;

        public event PropertyChangedEventHandler PropertyChanged;


        public MainPageViewModel(IPlatformDiTestService platformDiTestService)
        {
            _platformDiTestService = platformDiTestService;
        }

        public void OnPropertyChanged(string propertyName)
        {
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
        }

        public string SayYourPlatformNameValue
        {
            get => sayYourPlatformNameValue;
            set
            {
                sayYourPlatformNameValue = value;
                OnPropertyChanged(nameof(this.SayYourPlatformNameValue));
            }
        }

        public Command SayYourPlatformNameCommand => _sayYourPlatformNameCommand ??=
            new Command(() => { this.SayYourPlatformNameValue = _platformDiTestService.SayYourPlatformName(); });
    }
}

And of course, you want to see the MainPage.xaml.cs file as well:

using MAUIDITest.InterfaceDemo;
using MAUIDITest.ViewModel;

namespace MAUIDITest;

public partial class MainPage : ContentPage
{
    private readonly IPlatformDiTestService _platformDiTestService;
    int count = 0;

	public MainPage(MainPageViewModel mainPageViewModel)
	{
		InitializeComponent();

		this.BindingContext = mainPageViewModel;
    }

	private void OnCounterClicked(object sender, EventArgs e)
	{
		count++;

		if (count == 1)
			CounterBtn.Text = $"Clicked {count} time";
		else
			CounterBtn.Text = $"Clicked {count} times";

		SemanticScreenReader.Announce(CounterBtn.Text);
	}
}

Last, but not least, the updated MainPage.xaml file:

<?xml version="1.0" encoding="utf-8" ?>
<ContentPage xmlns="http://schemas.microsoft.com/dotnet/2021/maui"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             x:Class="MAUIDITest.MainPage">
			 
    <ScrollView>
        <VerticalStackLayout 
            Spacing="25" 
            Padding="30,0" 
            VerticalOptions="Center">

            <Image
                Source="dotnet_bot.png"
                SemanticProperties.Description="Cute dot net bot waving hi to you!"
                HeightRequest="200"
                HorizontalOptions="Center" />
                
            <Label 
                Text="Hello, World!"
                SemanticProperties.HeadingLevel="Level1"
                FontSize="32"
                HorizontalOptions="Center" />
            
            <Label 
                Text="Welcome to .NET Multi-platform App UI"
                SemanticProperties.HeadingLevel="Level2"
                SemanticProperties.Description="Welcome to dot net Multi platform App U I"
                FontSize="18"
                HorizontalOptions="Center" />

            <Button 
                x:Name="CounterBtn"
                Text="Click me"
                SemanticProperties.Hint="Counts the number of times you click"
                Clicked="OnCounterClicked"
                HorizontalOptions="Center" />


            <Label Text="Test of built in DI:" FontSize="Large" HorizontalOptions="Center"></Label>
            <Label x:Name="PlatformNameLbl" FontSize="Large" HorizontalOptions="Center" Text="{Binding SayYourPlatformNameValue}" />

            <Button Text="Reveal platform" HorizontalOptions="Center" Command="{Binding SayYourPlatformNameCommand}"/>

        </VerticalStackLayout>
    </ScrollView>
 
</ContentPage>

I did not change the default code that comes with the template. You can easily recreate this by using the default MAUI template of Visual Studio and copy/paste the code snippets above to play around with it.

Conclusion

I have only just started my journey of updating my internal libraries to .NET MAUI. I stumbled pretty quickly over that platform-invocation code, but luckily, I was able to move on. Platform-specific code can be handled pretty much the same way as before, which I hope I was able to show you in this post. I’ll write more posts about my MAUI migration experiences as they happen.

Until the next post, happy coding, everyone!
Posted by msicc in Dev Stories, MAUI, Xamarin, 4 comments