Mobile First Cloud First

A blog by Geert van der Cruijsen on Apps, Cloud & ALM

Adding Azure Active Directory Authentication to connect an Angular app to Asp.Net Core Web API using MSAL

Integrating your application with Azure Active Directory using OAuth shouldn’t be too hard at first sight. I have done this many times with different development technologies like Asp.Net and Xamarin, but this week I had to do it for an Angular app for the first time. There is quite some information and documentation to be found on this subject, but a lot of it is outdated, and it took me longer than expected, so I decided to write up how I got it to work, step by step.


So here is a description of what we’ll create in this post:

  • Create an Angular app from scratch using the Angular CLI and make it authenticate the user in Azure Active Directory using the MSAL library.
  • Create an Asp.Net Core Web API from scratch and connect it to Azure Active Directory as well.
  • Enable the Angular app to communicate with the Web API in an authenticated way using access tokens.

Setting up Azure Active Directory

In Azure Active Directory we have to register 2 applications. You can add an application in the Azure Portal by going to “Azure Active Directory -> App Registrations -> New Registration”

Front End App Registration

We’ll call the first application “demoapp-frontend” and it will contain the configuration for our frontend application.

Here you can also select which accounts can sign in: only users from your tenant, users from multiple tenants, or Microsoft accounts.

Lastly we fill in the Redirect URI, where we enter “http://localhost:4200” because that is where our Angular application will be running.


After that we press Register and wait for the application to be created. As soon as it is created we can go into the details and write down the Client ID and Tenant ID, because we will need them later.


Go to the Authentication menu item and check the boxes for Access Tokens and ID Tokens and save the configuration.



Next we enable the OAuth implicit flow. To do this, open the manifest and set “oauth2AllowImplicitFlow” to true.



The last step is enabling the app registration to be used by end users when logging in. You can do this by going to the “API Permissions” menu and granting consent for the application.


Now the app registration is ready and we can continue with the app registration for the API.

API App Registration

We create another app registration called “demoapp-api”. We only need to enter the name; we don’t need a redirect URI, since this app will only validate logged-in users and won’t log in users itself.

Write down the client ID again because we’re going to use it later on.

After we’ve created the app registration go to “Expose an API“. In here we’re going to add a scope by pressing “Add a Scope“.


As a scope we’re going to add one called “api://<clientID>/api-access”.

Note you can come up with your own scope name or add more scopes later on.


After adding the scope we’re going to add the front end app registration as “Authorized Client Application”. Press “Add a Client Application” and enter the client id of the Angular app registration we added.



This is all we need to do in Azure AD to enable our API and front end application to make use of Azure Active Directory. Now we can start coding our applications. We’ll make use of the MSAL library to connect the Angular app to our Web API. Let’s create the Asp.Net Core Web API first, which will check for a logged-in user on every request and otherwise return a 401 Unauthorized.


Creating the Asp.Net Core Web API

We’ll be creating a brand new Asp.Net Core 2.2 Web API in this sample by using the CLI: “dotnet new webapi“.

Add an “AzureActiveDirectory” object to your appsettings.json (or add them using secrets ) and fill in your AAD Domain name, Tenant ID and Client ID. (of the API app registration)

 "AzureActiveDirectory": {
    "Instance": "https://login.microsoftonline.com/",
    "Domain": "<yourdomain>",
    "TenantId": "<yourtenantid>",
    "ClientId": "api://<yourclientid>"
 }

Please note that the client ID must be in the form api://<yourclientid>.

After creating these settings we only need to update Startup.cs to add authentication and set up the AAD integration.

There are a few things to add here (see example startup.cs below)

  • ConfigureServices:
    • Add the services.AddAuthentication call to load our settings and point to the correct AAD app registration.
    • Add CORS. In this example we’ve taken the simplest approach by allowing every source. You might want to make this more specific in your own application.
  • Configure:
    • Add app.UseCors.
    • Add app.UseAuthentication.
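As a sketch, a minimal Startup.cs implementing these steps could look like the following. It assumes the Microsoft.AspNetCore.Authentication.AzureAD.UI package, and the “AllowAll” policy name is my own choice, not prescribed:

```csharp
using Microsoft.AspNetCore.Authentication.AzureAD.UI;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public Startup(IConfiguration configuration) => Configuration = configuration;
    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        // Bind the "AzureActiveDirectory" section from appsettings.json
        services.AddAuthentication(AzureADDefaults.BearerAuthenticationScheme)
            .AddAzureADBearer(options => Configuration.Bind("AzureActiveDirectory", options));

        // Simplest possible CORS policy: allow everything (tighten this in real apps)
        services.AddCors(o => o.AddPolicy("AllowAll", builder =>
            builder.AllowAnyOrigin().AllowAnyMethod().AllowAnyHeader()));

        services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_2);
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        app.UseCors("AllowAll");
        app.UseAuthentication();
        app.UseMvc();
    }
}
```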

This is everything we need to do to have a working Asp.Net Core Web API with AAD integration. Whenever you create a new API controller, just add an [Authorize] attribute to make sure your API calls are authenticated.

Creating the Angular App

We’ll also start with a brand new Angular app created by using the Angular CLI: “ng new”.

In the Angular app we will use the MSAL library from Microsoft to connect to Azure Active Directory. MSAL is a new library that should replace the ADAL library Microsoft created earlier. MSAL is built to work with the new v2 endpoints of Azure Active Directory, while ADAL only works with the v1 endpoints.

Microsoft has created an npm package for MSAL to be used in Angular, which makes using MSAL a lot easier. Install this package using “npm i @azure/msal-angular”. After installing this package we only need to enable Azure Active Directory in our app.module.ts. A sample is shown below. What do we need to add:

  • Add the MSAL module with the correct client ID and authority (containing your <tenantid>).


  • Create a protected resources map. This will function as a guard: each time a resource from one of these URLs is called, the right access tokens will be sent along with the request.



  • Fill in the consent scopes: a list of all the scopes you would like to get access tokens for. This could be User.Read to retrieve the user’s login name from AD, plus specific API scopes for your API calls.



  • Add an HTTP interceptor so MSAL will add the right tokens and headers to your requests whenever you use an HttpClient.
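A sketch of an app.module.ts wiring up these four items, assuming the @azure/msal-angular 0.x API (option names differ between MSAL versions) and placeholder IDs you must replace with your own app registration values:

```typescript
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { HttpClientModule, HTTP_INTERCEPTORS } from '@angular/common/http';
import { MsalModule, MsalInterceptor } from '@azure/msal-angular';
import { AppComponent } from './app.component';

// Protected resources map: calls to these URLs get an access token attached.
// Both the API URL and the scope are placeholders.
export const protectedResourceMap: [string, string[]][] = [
  ['https://localhost:5001/api', ['api://<api-client-id>/api-access']]
];

@NgModule({
  declarations: [AppComponent],
  imports: [
    BrowserModule,
    HttpClientModule,
    MsalModule.forRoot({
      clientID: '<frontend-client-id>',
      authority: 'https://login.microsoftonline.com/<tenantid>',
      consentScopes: ['user.read', 'api://<api-client-id>/api-access'],
      protectedResourceMap: protectedResourceMap
    })
  ],
  providers: [
    // The interceptor adds the right tokens to matching HttpClient requests
    { provide: HTTP_INTERCEPTORS, useClass: MsalInterceptor, multi: true }
  ],
  bootstrap: [AppComponent]
})
export class AppModule { }
```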



Now we can run the application, and as soon as we make a network call to a URL listed in the protected resources map we will be prompted to log in with our Azure AD credentials.

In the end, connecting your Angular app to Azure Active Directory isn’t that hard; you just have to know exactly which IDs to use where.

Hopefully this will help others in making the connection work smoothly. It took me a few hours too long, but I managed to get it working with the help and all-seeing eyes of my great colleagues Chris, Niels and Thijs.

Happy Coding!


Observing distributed application health using Azure Application Insights & Azure Log Analytics

Most people who use Azure Application Insights to monitor their applications will not look at it until something is wrong, and only then will they look at what exceptions are thrown to see what is going on. In my opinion, if you want to build highly available systems you also want to be able to see that everything is working as normal when there are no problems.

When you build a monolithic application it’s often quite easy to find certain performance bottlenecks by monitoring CPU and memory usage. When we look at distributed systems and microservice architectures, an application will often span multiple services with many more instances, running on thousands of machines, service buses, APIs, you name it. How do we monitor this by looking at CPU, memory and all the other traditional monitoring measures? You simply can’t.

In these types of scenarios, where you have several or maybe even thousands of instances, we have to look at other things. One thing you could do is come up with a KPI measuring a service that your application provides and seeing how often it completes. To make this a bit simpler to understand, let’s look at an example:

Netflix is famous for their microservice architecture spanning thousands of machines, and they monitor SPS (starts per second). With the millions of subscribers they have, this number should be fairly predictable. That’s why they monitor it: if this number is affected, something must be wrong. (If people press play more often, maybe playback isn’t working so they keep pressing play? If fewer people press play, maybe the UI is broken and the event is not coming down to the server, or something else might be wrong.) By monitoring just 1 number they can tell whether the overall health of the system is OK or not. You can learn more at the Netflix technology blog.

So how do you start with something like this yourself?

Finding the right KPI

There is no single KPI that is best to measure, but there are some things you might consider. First of all, it has to be important for your business. Next to that, it would be nice if the number was somewhat stable or had clear patterns. This all depends on your business and application.

Maybe it’s best to start with another example we used for one of our clients. We’ll take this example from the initial idea to how we actually monitor it using Azure Log Analytics and Application Insights.

The application we worked on had to do calculations every few minutes, and these calculations could take from 10 seconds to about a minute. It was really important that the end results of these calculations were sent to customers and other systems every X minutes. Because of this, the development team added logging to Application Insights that stored the calculation time for each cycle. During the day the calculation time ranged from fast (10 seconds) to slow (1 minute) because of several parameters. I’ve drawn a picture of the graph that took all the App Insights calculation times and plotted them over time.



Initially the dev team only created this view to monitor the health of the calculation times. A big problem here is that it provides no information about what is “normal”. As humans we are quite good at recognizing patterns, and after showing this picture to several people they all noted: wow, somewhere between 9:00 and 12:00 in the morning something must be wrong.



The problem is that this is only the data of 1 day, and it doesn’t even show a pattern. There are several external influences that impact calculation times. One of them is customer orders being created. This is a business-to-business application, and the majority of orders are created during the morning of European working hours. This is why we need more data in our graph, so we can actually see whether there are patterns.

In the next graph I’ve plotted the data of a full work week on the same area to see if we can find patterns.




When we plot this full week of calculation times we can see that there is quite a pattern to be found. Next to that, we can also very easily spot where something is not following the pattern. Is the high curve just before 12:00 still an anomaly? Guess not… But what is happening in the afternoon? Data that at first looked like part of some pattern in our heads suddenly stands out. I think we’ve found the KPI we want to measure.


When developing an application, adding counters and logging information is important to be able to create these kinds of dashboards. If you are not sure what to measure, just start with business functions started/completed and each service start/completed/retried. This gives you a starting point; from there on you can come up with new measures and counters.

An important part of DevOps is that, as developers, we have to start thinking more like Ops: what are good things to measure, monitor, etc.? In the past few years I have often come across Devs telling Ops to become more like developers by adding automation and doing stuff as code, but it’s also important to focus on the other way around: Devs taking ownership of what they are building and making it easy to see if the application is still working like it is supposed to.

As Devs you have far more knowledge of what could cause certain delays, outages etc because you know how the application is working internally. So join forces and work together.

Implement it using Azure Application Insights and Azure Log Analytics

So now we have a pretty good idea of what we want on a dashboard. How do we implement this? Since the title of this post mentions Application Insights and Azure Log Analytics, I’m assuming you already have Azure Application Insights in place; if not, here is a guide. Once we have access to an Application Insights instance we can start doing our custom measurements. In this post we’ll focus on measuring calculation times similar to the example above, but you could do this with any type of measurement.

How to track timing in Application Insights?
To track custom timing of pieces of code we create a DependencyTelemetry object, fill in the name and type properties, call Start, do the calculation, set Success to true if it succeeds, and finally call the Stop method so the timer is always stopped. That is all the code you need.

When you run your app now, go to Application Insights, open the Analytics tab and run a query showing all dependencies with name “CalculationCycle”. Since we haven’t logged anything else we can just query all dependencies, and voilà: there are our timings in the duration field. So our application is logging the calculation times. Now it is time to create a dashboard that shows the “normal” state next to the values from the last 24 hours.
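A sketch of that timing pattern, using the Application Insights SDK (the class names come from Microsoft.ApplicationInsights; the surrounding service class and DoCalculation method are illustrative):

```csharp
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

public class CalculationService
{
    private readonly TelemetryClient _telemetryClient = new TelemetryClient();

    public void RunCalculationCycle()
    {
        var telemetry = new DependencyTelemetry
        {
            Name = "CalculationCycle",   // the name we query on in Analytics
            Type = "Calculation"
        };
        telemetry.Start();               // starts the duration timer
        try
        {
            DoCalculation();             // your actual work goes here
            telemetry.Success = true;
        }
        finally
        {
            telemetry.Stop();            // always stop the timer
            _telemetryClient.TrackDependency(telemetry);
        }
    }

    private void DoCalculation() { /* ... */ }
}
```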

Creating a Kusto query in Log Analytics:

We want to create a graph similar to the one I drew earlier in this post. We could have colored lines for all the different days, but what is even better is that we can take the data for the last month and combine it. The query actually builds 2 series and combines them at the end to display a graph. The first series, called “Today”, shows all the values of the calculation time, summarized per hour. The second series, called “LastMonth”, takes all the values of the last 30 days and groups them by hour of the day as well. We also take only the 90th percentile of the values, so we remove the special cases.
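A sketch of such a Kusto query, assuming the timings were logged as dependencies named “CalculationCycle” as described earlier (table and column names depend on your setup):

```kusto
// Series 1: today's calculation times, averaged per hour of the day
let Today = dependencies
| where name == "CalculationCycle" and timestamp > ago(1d)
| summarize Today = avg(duration) by Hour = hourofday(timestamp);
// Series 2: 90th percentile of the last 30 days, per hour of the day
let LastMonth = dependencies
| where name == "CalculationCycle" and timestamp > ago(30d)
| summarize LastMonth = percentile(duration, 90) by Hour = hourofday(timestamp);
// Combine both series into one graph
Today
| join kind=inner LastMonth on Hour
| order by Hour asc
| render linechart
```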

Run the query to get the graph below. You can pin this graph to a dashboard and now you can see your calculation times compared to average calculation times of the last month on a per hour basis.

For our scenario this worked really well. If you create something similar, make sure that the last 30 days is a good comparison. Should calculations be the same every day of the week, or do your calculations take longer on a Monday compared to a Friday? If that is the case, you might want to tweak the query so you are actually comparing against your “normal” state.


Hopefully this post helped you set up a dashboard showing the “normal” state of your application, which you could have displayed near your team’s working area to see if everything is still working as you expect it to.

Finally, I would like to give a shout-out to my colleagues Rene and Jasper, who created this with me from idea to final result.

Happy Coding (and observing)

Geert van der Cruijsen


Containerized build pipeline in Azure DevOps

Azure DevOps comes with several options for build agents in your Azure Pipelines. Microsoft has hosted agents, where you don’t have to maintain your own hardware, and you can turn any machine you own into an agent by installing the agent script on that machine.

The hosted agents are packed with lots of pre-installed software to support your builds. If you run your own private agents you can customize them as you like. I’m currently at a large enterprise, as a consultant in an IT-for-IT team that hosts a number of private agents for all other development teams to use. Our agents are fully set up through automation and have all the common tools used by teams (based on the hosted Azure DevOps agent images, which are open source). These agents work for the largest group of development teams, but there are always teams who need some special tools. To give teams freedom in their tool selection, we have them run their builds inside a Docker container. This is a new feature released at the end of September 2018.


When you run builds inside a container, all steps in your pipeline are executed inside this container. The work directory of the agent is volume-mapped into the container. The ability to run your pipeline in a custom container gives you all the freedom to create an image that has all the tools required to execute your build. The Docker image has 2 requirements: Bash and Node.js have to be available within the container, and then you’re ready to go.

How to create a containerized build pipeline

An important note is that containerized pipelines are currently only available in YAML-based pipelines. I don’t know if pipelines created in the portal will eventually also support this, but in my opinion YAML-based pipelines are the way forward, because they have a lot of advantages over traditional pipelines. The official documentation on YAML pipelines can be found here.

Let’s take a simple YAML pipeline as our example. I’ve created a simple Asp.Net Core application and have set up a pipeline for it. This is what my azure-pipelines.yml file looks like.
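As a sketch, a minimal azure-pipelines.yml along these lines could look as follows (the vmImage name and build steps are illustrative):

```yaml
trigger:
- master

pool:
  vmImage: 'ubuntu-16.04'   # hosted Ubuntu agent

steps:
- script: dotnet build --configuration Release
  displayName: 'dotnet build'
- script: dotnet test --configuration Release
  displayName: 'dotnet test'
```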

I tried to make the build as simple as possible. It’s just a basic .Net core build that we want to execute. For this example it doesn’t matter what the exact steps are that we are executing. It could be anything, a .Net build, npm, Go, Java Maven, anything goes. We use one of the hosted agent queues to execute the build.

The next step is to make this regular build execute exactly the same steps in a container. We can do this fairly simply by adding some settings to our pipeline. You’ll need to add a container to the resources defined at the start of your YAML file. This can either be a public container from Docker Hub or a container from a private repository. The container resource gets a name, in my example “dotnet-geert”. We can use this name to reference the container in our pipeline so all build steps will be executed in it. You do that by adding a line just below your build pool stating which container should be used: container: dotnet-geert

In this example we run our build on a hosted Ubuntu agent. The downside of running this on a hosted agent is that it won’t cache your Docker image, so it has to download the full image on every run. Because of this, I don’t think this approach is that effective compared to private agents, which cache the Docker image locally so spinning up a container is only a matter of seconds.

Running it on your local agents works exactly the same. There are, however, a few requirements: you either need a Linux machine with Docker support or a Windows machine running Windows Server version 1803 or higher.

Performance improvements

The first thing we want to do is run our build on a private agent so we can re-use our Docker images and only have to download them once. Another property of using containers is that you’ll always receive a fresh instance of your environment. This is nice because you can be sure that every build runs exactly the same and isn’t relying on changes a previous build might have made to your agent. It’s also something to consider when your builds take a long time: because you get a fresh environment for each build, you’ll also have to download all your dependencies for each build. Most applications nowadays use a lot of external dependencies, so let’s have a look at how we can fix this.

Docker has a feature called volume mappings that enables you to map certain directories from your host machine and use them in your container. In my example pipeline we’re building a .Net Core application that uses NuGet packages. We can map a folder on our host machine to function as the global NuGet cache and use it within the container. Each time the container downloads NuGet packages it stores them outside the container, and when we run the same build again it can use the cached packages. The same thing works for npm or Maven packages when you are building applications with other technologies.

We can create the volume mapping by passing an option to our container: -v for volume mapping, followed by <source folder>:<destination folder>. In the case of NuGet packages we also set a global environment variable that points the NuGet cache to this folder. After we do this our builds will be super fast, and we keep all the tool flexibility that containerized builds give you. Below is a full sample pipeline that uses a volume mapping for the NuGet cache.
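A full sample pipeline along these lines might look like this; the image name, host cache path and agent pool name are placeholders for your own values:

```yaml
trigger:
- master

resources:
  containers:
  - container: dotnet-geert                      # name referenced below
    image: <your-registry>/dotnet-build:latest   # image with your build tools
    options: -v /opt/nugetcache:/nugetcache      # volume-map a host folder

pool:
  name: 'MyPrivateAgents'                        # private agents cache the image

container: dotnet-geert

variables:
  NUGET_PACKAGES: /nugetcache                    # point the NuGet cache at the mapped folder

steps:
- script: dotnet build --configuration Release
  displayName: 'dotnet build'
```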

I really like this new feature of Azure DevOps because it gives you a lot of flexibility in your builds without having to customize your own private agents too much.

Happy Coding! (And building!)

Geert van der Cruijsen



Passing in custom user settings and secrets to Maven in Maven VSTS Build Tasks

VSTS is tooling for setting up automated pipelines for all kinds of programming languages. I’ve seen more and more non-Microsoft technologies being used on VSTS, and I came across a couple of questions repeatedly, so I thought it was a good idea to write them up in a blog post.



The problem is the following: if you want to do a Maven build, Maven will expect some user settings to be present somewhere on your build server. While this is often configured once on the build server, it is better to pass the settings in at build time, especially if they contain secrets that you don’t want stored in plain text somewhere. So how do we do this?

In the sample we’ll add a connection to Sonatype Nexus (a package management solution, comparable to VSTS Package Management) so Maven is capable of downloading the packages it needs and of pushing its build artifacts there. Although this example only sets these settings, you can use the same approach for other kinds of settings as well.

So how to implement this?

First we need to add a file to our repo called ci-settings.xml. It will contain our user settings with a username and password to connect to Nexus.

This file has a few variables that we are going to replace, called nexusUser and nexusPassword. The “REPOSITORY ID” needs to match the id used in the pom.xml file.
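A ci-settings.xml along these lines might look as follows; the server id and the two variable names are the ones mentioned above:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<settings>
  <servers>
    <server>
      <!-- must match the repository id used in pom.xml -->
      <id>REPOSITORY ID</id>
      <username>${nexusUser}</username>
      <password>${nexusPassword}</password>
    </server>
  </servers>
</settings>
```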

In the Maven task we then pass this user settings file to the Maven command using the -s option. We can also pass in the values for the parameters in the ci-settings.xml file using -DnexusUser and -DnexusPassword. The full options string would look something like this:


-s $(System.DefaultWorkingDirectory)/ci-settings.xml -DnexusUser=$(Nexus.User) -DnexusPassword=$(Nexus.Password)

The actual values of $(Nexus.User) and $(Nexus.Password) are stored in the VSTS variables section, where you can also make the password a secret so it’s hidden from logs and from people editing or viewing the build definition.




Fix error on Azure: “the subscription doesn’t have permissions to register the resource provider”

Working in an enterprise environment, permissions in Azure might be trimmed down so users do not have access to Azure subscriptions themselves and only have access to specific resource groups. When someone has contributor permissions in a resource group you might think that they can create everything in there that they would like. This is not always the case: each Azure resource type has to be registered through a resource provider at the subscription level. When users only have access to certain resource groups and not to the subscription itself, you can run into errors when you try to create a new resource of a type that is not registered yet.

The error will say:

the subscription [subscription name] doesn’t have permissions to register the resource provider(s): [resource type]

Here is a sample screenshot of what happened when SQL was not registered.

the subscription does not have permissions to register the resource provider


There are a couple of options to fix this.

  • Manually register the resource type in the azure portal
  • Register all resource types in a subscription using the Azure CLI
  • Create a specific role for all users to give them permissions to register resource providers

Manually registering resource types in the Azure portal

Registering a resource type in the Azure portal is the simplest option if you only want to register a specific resource type. If you want to register every available resource type this requires a lot of clicking, so it’s better to choose one of the other 2 options: the CLI or a custom role.

Using the Azure portal to register a resource type is easy though. In the portal, navigate to your subscription, click Resource Providers in the left menu, and then click Register for each of the resources you want to register.



Register all resource types in a subscription using the Azure CLI

You can also use the Azure CLI to register all available resource types in your Azure subscription. This is done with one line of Azure CLI, which first lists all resource providers and then calls the register method for each of them. One caveat to watch out for: if new resource types are added to Azure they are not automatically registered, so you’ll have to run the script again, or choose the 3rd option of creating a specific role so all users can register resource providers themselves.
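One way to write that one-liner is sketched below. It assumes you are logged in with az login and have sufficient permissions on the subscription:

```shell
# List every provider namespace, then register each one
az provider list --query "[].namespace" -o tsv \
  | xargs -n 1 az provider register --namespace
```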

Create a specific role for all users to give them permissions to register resource providers

The final and most future-proof solution is creating a new role, which you can assign to all your users, that has permission to register a new resource provider. The first step is defining a new JSON file describing this role.

This JSON file allows the action for registering resource providers, and the only thing you’ll have to customize is adding your own subscription IDs. When this role definition is finished, we can use the Azure CLI to create the role, and after that we can assign users to it. This does require that you have AD groups containing all the users you want to give access. If you don’t have that, the 2nd option is probably better for you, because it will be a lot of work to assign this role to all your users manually.
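A role definition along these lines might look as follows; the role name and description are my own choices, and you would list every subscription it should be assignable in:

```json
{
  "Name": "Resource Provider Registrator",
  "IsCustom": true,
  "Description": "Can register resource providers on the subscription.",
  "Actions": [ "*/register/action" ],
  "NotActions": [],
  "AssignableScopes": [ "/subscriptions/<your-subscription-id>" ]
}
```

You can then create the role with az role definition create --role-definition @role.json and assign it to your users or AD group.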

That’s it! I’ve given you 3 options to solve this Azure error, so hopefully one of these can help you get going again building cool stuff on Azure.

Geert van der Cruijsen




Programatically creating Azure resource groups and defining permissions

In my job as DevOps consultant I try to help my clients build better software faster. A key part of this is automation of the complete delivery pipeline. Most of the time this focuses on the delivery pipeline from user story to committed code to, eventually, this code running in production. With tools like VSTS this is quite easy to do, but what about the things that happen outside the core of the application?

Creating infrastructure as code is becoming mainstream in public cloud scenarios, so teams can create and deploy their own infrastructure. This allows independent, self-serving teams to build better software faster. But often people stop there. There are still several tasks that are manual steps where someone with the right permissions has to step in: creating a new VSTS team or Git repo, opening ports on the firewall, or creating a resource group in Azure where the team can create their infrastructure. My goal is to automate everything here so teams can do these things in a guided, self-serving manner. I’ll be diving deeper into that subject in a later post, where I explain how we’ve created an Operations chatbot that does these kinds of things. In this post I want to focus on 1 specific area this bot can help with: creating Azure resource groups for teams and assigning permissions.

In many of my projects we host our infrastructure in Azure, and I like DevOps teams to be independent. Looking at Azure, they should have a space where they can create their infrastructure and do their thing. It’s up to the teams what kind of stuff they spin up, since they should be the ones maintaining it and they are responsible for the costs.

What we’ve built is a chatbot that helps create new resource groups for teams by asking the user 3 questions:

  • What is the application name? (my practice is to group infrastructure for a single application together in 1 resource group)
  • What team is the owner of the application? (in my case all teams have an AD group containing all team members)
  • What kind of environment do you need? (Dev, Test, Acceptance, Production; these choices were made by my client, and we have 2 subscriptions: 1 DTA and 1 Prod)

After answering these 3 questions the bot will create a standardized resource group name for the team in the format <appname>-<teamname>-<environment>-rg.

This resource group will be created, and the team’s AD group will be granted contributor permissions to the newly created resource group.


Enough about the chat bot for now, let’s create the code to actually create a new resource group programmatically.

To do this we’ll use 2 NuGet packages from Microsoft:

  • Microsoft.Azure.Management.Fluent
  • Microsoft.Azure.Management.Authorization

These 2 packages contain all the APIs to manage Azure resources; the 2 things we need are managing resource groups and AD permissions. After adding these 2 packages we can start coding our method, called CreateResourceGroup. The only parameters we need are the resource group and the AD group.

First you need to log in to your Azure subscription to be able to retrieve information, using an account that has permissions to create resource groups in Azure. It’s not a best practice to run this code as your user account, so it’s better to create a service principal that can do this. To create a new service principal, take a look at this guide:

After we’ve retrieved the credentials, creating a resource group is super easy: it’s just 1 line of code. Adding the correct AD group to set permissions is quite simple too, if the service principal has the right permissions to query AD. After querying the right group we can create a RoleAssignment to assign the contributor role to the Azure AD group.
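A sketch of those steps with the fluent SDK; the auth file name, region and method shape are assumptions for illustration, not the exact code from the bot:

```csharp
using System;
using Microsoft.Azure.Management.Fluent;
using Microsoft.Azure.Management.Graph.RBAC.Fluent.Models;
using Microsoft.Azure.Management.ResourceManager.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent.Core;

public static class ResourceGroupCreator
{
    public static void CreateResourceGroup(string resourceGroupName, string adGroupObjectId)
    {
        // Log in with a service principal described in an auth file (file name is illustrative)
        var credentials = SdkContext.AzureCredentialsFactory.FromFile("azureauth.properties");

        var azure = Azure.Configure()
            .Authenticate(credentials)
            .WithDefaultSubscription();

        // Creating the resource group really is a single statement
        var resourceGroup = azure.ResourceGroups
            .Define(resourceGroupName)
            .WithRegion(Region.EuropeWest)
            .Create();

        // Grant the team's AD group contributor permissions on the new group
        azure.AccessManagement.RoleAssignments
            .Define(Guid.NewGuid().ToString())
            .ForObjectId(adGroupObjectId)
            .WithBuiltInRole(BuiltInRole.Contributor)
            .WithResourceGroupScope(resourceGroup)
            .Create();
    }
}
```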

More info on the full bot solution later. Hopefully this will help you create your own Azure automation to speed up your development process.

Happy Coding!

Geert van der Cruijsen


Connecting to Azure Blob Storage events using Azure Event Grid

I was looking for a solution to sync an Azure Blob Storage account containing images to a third-party solution that has an API to upload images. My initial thought was to hook up Azure Functions to react to Azure Blob Storage triggers. One thing that is not possible with Blob Storage triggers, however, is to act on deletes; there is only a trigger for adding or changing files.

Luckily, Microsoft announced a new solution called Event Grid a few months back. Event Grid is great for connecting events that come from Azure resources (or custom resources) to things like Azure Functions or Logic Apps.


Event Grid also supports events for Blob Storage, where you get events for adding, changing or deleting items. So how do you get started?

Event Grid is easy to set up and consists of two parts: topics and subscriptions. Topics are the places events are sent to by Azure resources or even custom publishers. Subscriptions are made on topics and receive the events from a given topic.

Creating the Storage Account / Event Grid Topic

Azure Blob Storage has an Event Grid topic built in, so you don’t have to create a separate Event Grid topic yourself. At the time of writing Event Grid is only available in the West Central US and West US 2 regions, so if you create a storage account there it’ll automatically also get an Event Grid topic.

Let’s create the topic using Azure CLI

#create resource group
az group create -n <<ResourceGroupName>> -l westus2

#create storage account
az storage account create \
   --location westus2 \
   --name <<NameOfStorageAccount>> \
   --access-tier Cool \
   --kind BlobStorage \
   --resource-group <<ResourceGroupName>> \
   --sku Standard_LRS

Or using the Azure Portal:

Screen Shot 2017-12-06 at 15.12.33

When we open the storage account in the Azure portal we’ll see an option called Event Grid in the left menu. Here you can see a list of all subscriptions to this Event Grid topic. Currently we don’t have any, so let’s take a look at how to create one.

Creating the Event Grid Subscriptions

Since a topic can have multiple subscribers, let’s add two different ones. First we’ll create a very simple subscription that lets us see what the events actually look like, and after that we’ll add an Azure Function that handles the events.

Simple test subscription using RequestBin

RequestBin is a website where you can request a simple URL that collects all HTTP messages sent to it. It’s a free service that keeps the last 20 messages for a maximum of 48 hours. As soon as we’ve created our RequestBin we’ll receive a URL that looks something like this:

We can add this URL as a webhook subscription to the topic we created earlier. This can be done using either the Azure CLI or the Azure portal.

az eventgrid resource event-subscription create \
   --endpoint "<<RequestBinUrl>>" \
   --name requestbinsubscription \
   --provider-namespace Microsoft.Storage \
   --resource-type storageAccounts \
   --resource-group <<ResourceGroupName>> \
   --resource-name <<NameOfStorageAccount>>

Or in the Azure portal:

 Screen Shot 2017-12-06 at 15.15.18

When we upload a file to the storage account, we can now see the event it triggered by going to the RequestBin inspect page. We’ll see the JSON payload containing the event details, which we’ll use later in our Azure Function.

Screen Shot 2017-12-06 at 15.19.03

Example JSON for adding and deleting files looks like the following:

File Add/Change event

File Deleted event
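The embedded examples did not survive on this page; for reference, a blob-created event delivered to the subscription looks roughly like this (ids and names are illustrative, and the placeholders follow the CLI snippets above):

```json
[{
  "topic": "/subscriptions/<subscription-id>/resourceGroups/<<ResourceGroupName>>/providers/Microsoft.Storage/storageAccounts/<<NameOfStorageAccount>>",
  "subject": "/blobServices/default/containers/images/blobs/photo.png",
  "eventType": "Microsoft.Storage.BlobCreated",
  "eventTime": "2017-12-06T14:19:03.000Z",
  "id": "00000000-0000-0000-0000-000000000000",
  "data": {
    "api": "PutBlob",
    "contentType": "image/png",
    "contentLength": 524288,
    "blobType": "BlockBlob",
    "url": "https://<<NameOfStorageAccount>>.blob.core.windows.net/images/photo.png"
  },
  "dataVersion": "",
  "metadataVersion": "1"
}]
```

A deleted file produces an event with the same shape, with eventType set to Microsoft.Storage.BlobDeleted and data.api set to DeleteBlob.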

So we’ve seen the easiest way to hook up the Storage account events using Event Grid. Let’s add an Azure function that actually does something with the events.

Azure Function

When you create a new Azure Function you’ll have to choose the trigger type. You can choose between several options here, like an HTTP trigger, a Webhook trigger or an Event Grid trigger. When looking at the list you might think the Event Grid trigger is the way to go; I tend to disagree for now. The events sent by Event Grid are just plain POST HTTP messages, so choosing a Webhook or HTTP trigger works just as well, and they are even easier to test locally. What the Event Grid trigger adds is that it maps the JSON payload shown above to a typed object, so you can use it immediately without parsing the JSON yourself. A downside of the HTTP and Webhook triggers is that you’ll have to arrange security yourself, since by default everyone who knows the URL could call the webhook. Let’s look at both options.

If you are using the portal, use the small “Custom Function” link below “Get started on your own” to choose the trigger type.

Screen Shot 2017-12-06 at 20.10.54

Screen Shot 2017-12-06 at 20.12.47 Screen Shot 2017-12-06 at 20.12.59 Screen Shot 2017-12-06 at 20.13.11

Event Grid Trigger

When you open your Event Grid trigger (whether you created it via the portal or uploaded a precompiled function), there is a link on the top right called “Add Event Grid Subscription”. Click it to set up the Event Grid trigger.

Screen Shot 2017-12-06 at 20.14.06



In the “Create Event Subscription” window select the “Storage Account” topic type and select your storage account. After pressing Create your Azure Function will be triggered after each change in the storage account.

Screen Shot 2017-12-06 at 20.14.31


Http / Webhook Trigger

Webhook and HTTP triggers work almost the same way. In the portal there is a link on the top right to get the function URL. When you click this you’ll see a popup with the endpoint that you should copy. After that, adding the subscription works exactly as described above for the RequestBin endpoint, except now you’ll enter the URL you just copied.

Screen Shot 2017-12-06 at 20.15.23

Now that we’ve set up the plumbing we can start writing our function. I’ll show you the code for an HTTP trigger, but an Event Grid trigger function would look almost the same; there you can skip parsing the JSON because you get a typed object as a parameter containing all the information.
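The actual function code embedded in the original post is C#; as a stand-in, here is the payload-inspection step sketched in plain shell, just to show which fields an HTTP-triggered function has to pull out of the raw POST body itself (the payload below is a trimmed, made-up example):

```shell
# A trimmed Event Grid blob event as it would arrive in the POST body
event='[{"eventType":"Microsoft.Storage.BlobDeleted","data":{"url":"https://myaccount.blob.core.windows.net/images/photo.png"}}]'

# With an HTTP/webhook trigger you parse these values out yourself;
# an Event Grid trigger would hand you a typed object instead
eventType=$(printf '%s' "$event" | grep -o '"eventType":"[^"]*"' | cut -d'"' -f4)
blobUrl=$(printf '%s' "$event" | grep -o '"url":"[^"]*"' | cut -d'"' -f4)

echo "$eventType"   # -> Microsoft.Storage.BlobDeleted
echo "$blobUrl"     # -> https://myaccount.blob.core.windows.net/images/photo.png
```

In the real function you would then branch on the event type: sync the blob on BlobCreated, remove it from the third-party system on BlobDeleted.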


Pitfalls & Tips

So if everything is correct you should have a working Azure Function now. But how do you track the events coming in? You could set up Application Insights and track usage yourself, but a nice feature of Event Grid is that it has built-in logging and metrics. The metrics are a bit hidden in the Azure portal, so I’ll explain how to find them.

The subscriptions themselves cannot be found in the resource group of the topic. When navigating to the storage account and clicking Event Grid you get a list of your subscriptions, but no metrics.

For metrics of the subscriptions you have to go to the left menu in the Azure portal and click the > arrow for more resources. Search for “Event Grid Subscriptions” and they will show up.

Screen Shot 2017-12-06 at 20.53.32


When you go there you’ll get an overview of all events that were triggered by a topic and the subscriptions connected to it. You can also see when Event Grid retried and which events completed successfully or failed. It took me a while to find this, so hopefully this helps more people find it.

Screen Shot 2017-12-05 at 17.29.52


Another thing to note is that right after creating the subscription it might take a while for the events to start firing. I don’t know if this is related to the preview state of Event Grid or if it will always be the case. In the end all events fired, but it took a while for the first ones. After it had been running for a while the events were triggered really fast; even when I made a large number of changes, the events fired within seconds.

Hopefully this is useful for developers who want to react to events from Azure Storage.

Happy Coding!

Geert van der Cruijsen


Setting up Continuous delivery for Azure API management with VSTS

While continuous delivery for web applications on Azure is becoming quite popular and common, I couldn’t find anything about setting it up for your API definitions in Azure API Management. Since I had to set this up at my current customer, I thought it would be a good idea to share it in a blog post so everyone can benefit. In this post I’ll explain how you can set up continuous delivery of your API definitions in Azure API Management, including the actual API implementation in Azure Web Apps, using VSTS (Visual Studio Team Services).

azure api management

First let me explain the architecture we use for our API landscape. As mentioned, we use Azure API Management to expose the APIs to the outside world and Azure Web Apps to host the API implementations. These web apps (both .Net Core and full-framework .Net Web APIs) are hosted in an ASE (App Service Environment), so they are not exposed directly to the internet while we can still use all the cool things Azure Web Apps offer. The API web apps then connect to data stores hosted in Azure, or to on-premises environments through an ExpressRoute or VPN connection.

To set up our continuous delivery pipeline we have to arrange the following things:

  • Build your API implementation so we have a releasable package
  • Create infrastructure to host the API implementation
  • Deploy the API implementation package to the newly created infrastructure
  • Add API definition to Azure API management.
  • Repeat the above steps for each environment (DTAP).

Building your App

The first step can differ from my example if you’re not building your APIs with .Net technology. In our landscape we have a mix of APIs made with .Net Core and APIs made with the full .Net Framework, because the latter needed libraries that were not available in .Net Core (yet). I’m not going into detail on how to build your API using VSTS, because I’ll assume you’re already doing this or know how to. If not, here is a link to the official documentation.

One thing to keep in mind is that your API web app does have to expose an API definition so Azure API Management can import it. We use Swashbuckle to automatically generate a Swagger definition. If you’re using .Net Core you’ll have to use Swashbuckle.AspNetCore.

Deploying the API implementation & adding it to Azure API management

For automating the deployments we’re going to use the Release Management feature of VSTS. In our first environment we’ll create steps to do all the things described above.

 Screen Shot 2017-07-21 at 13.20.30

 The steps in our workflow are the following:

  1. Create web application infrastructure by rolling out an ARM template
  2. Set environment specific variables
  3. Deploy the API implementation package
  4. Use a task group to add the API definition to Azure API management.

Creating the web app infrastructure & deploying the API Implementation package

The first and third steps are the basic steps of deploying a web application to Azure Web Apps. This is no different for APIs, so I’ll just link to an existing blog post that explains them if you don’t know what they do.

Setting environment specific variables

The second task is a custom task created by my colleague Pascal Naber. It helps you override environment-specific variables by storing these settings as app settings on your Azure Web App. We use this to set the connection strings to backend systems, for example a Redis cache or a database.

Add API to API Management

If we release with just the first three steps, we would have an API running on its own. But the whole point of this blog post is exposing the API through Azure API Management, so let’s have a look at how we can do that.

Azure API Management has PowerShell cmdlets to interact with it, and we can use these to add API definitions to it too. Below is a sample piece of PowerShell that imports such an API definition from a Swagger file.

The script consists of three parts. First we retrieve the API Management context using the New-AzureRmApiManagementContext cmdlet; once we have a context we can use it to interact with our API Management instance. The second part retrieves the Swagger file from our running web app through wget, a PowerShell alias for Invoke-WebRequest. We download the Swagger file to a temporary location on disk because in our case the web apps run in an ASE and are not accessible from the internet; if your web apps are internet-facing, you can pass the URL directly to the third command instead. The third part imports the Swagger file into Azure API Management with Import-AzureRmApiManagementApi.
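The embedded script is not reproduced on this page; reconstructed from the description above, it would look roughly like this (all names, paths and the Swagger URL are placeholders, not the customer’s actual values):

```powershell
# Hypothetical values – replace with your own
$apimResourceGroup = "my-apim-rg"
$apimServiceName   = "my-apim-instance"
$swaggerUrl        = "https://my-api-webapp.azurewebsites.net/swagger/v1/swagger.json"
$tempFile          = Join-Path $env:TEMP "swagger.json"

# 1. Get a context for the API Management instance
$context = New-AzureRmApiManagementContext -ResourceGroupName $apimResourceGroup -ServiceName $apimServiceName

# 2. Download the Swagger file from the running web app (wget is an alias of Invoke-WebRequest)
wget $swaggerUrl -OutFile $tempFile

# 3. Import the Swagger definition into API Management
Import-AzureRmApiManagementApi -Context $context -SpecificationFormat "Swagger" `
    -SpecificationPath $tempFile -ApiId "my-api" -Path "myapi"
```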

Now that we have a script to import the API, let’s add it to the VSTS release pipeline. We could simply put the PowerShell script in source control and call it with the built-in PowerShell task, but I’d like to make life as easy as possible for the developers in our dev teams, so I tried to abstract all the PowerShell mumbo jumbo away from them so they can focus on their APIs. To do this I’ve created a “Task Group” in VSTS containing this PowerShell task, so developers can just pick the “Add API to API Management” task from the list in VSTS and supply the necessary parameters.

Screen Shot 2017-07-21 at 13.22.00

Screen Shot 2017-07-21 at 13.23.46

When we add this task group to the release, we can run the release again and the API will be added to Azure API Management.

 Screen Shot 2017-07-21 at 13.20.30

Success! Our initial continuous delivery process is done. At my current client we have 4 different API Management instances and we also deploy our APIs 4 times: a Development, Test, Acceptance and Production instance. The workflow we created deploys the API to our development environment. We’ve set this up to be continuous, so every time a build completes on the master branch we create a new release that deploys a new API instance to Azure and updates our development Azure API Management instance.

We can now clone this environment 3 times to create a pipeline that moves from dev and test to acceptance and production. I always set the trigger to start automatically after the previous environment completes. If we run our release again, we’ll have 4 API instances deployed, and the corresponding API will be imported into all 4 Azure API Management instances.

Now the only thing left to add, optionally, is integration tests for the environments you prefer, and you are ready to roll!

Screen Shot 2017-07-21 at 13.24.10


Happy Coding!

Geert van der Cruijsen


Session videos to watch from Build 2017 for Mobile developers

It’s been 2 weeks since we had a great week at Build 2017 in Seattle. In the weeks after Build, all recorded sessions came online on Channel 9. I’ve created a list of everything relevant for (Xamarin) mobile developers, ranging from the new Xamarin announcements that made the headlines to other sessions you might have missed but that can still be relevant for mobile developers.


Let’s start with the basics:

Keynotes of day 1 & 2

Although the first day didn’t have that much mobile content, it did a good job of showing where Microsoft is aiming for the future. AI is a big part of this, and AI can also be big on mobile, so it might still inspire you to build great new innovations. The second day was the day of Windows and also contained all the mobile content.


“With great power comes great responsibility” – Satya Nadella, Build 2017



Xamarin had a bunch of sessions where they announced new things and showed their roadmap.

Xamarin: The future of mobile app development

When I first looked at the Build sessions I was about to skip this one. Sure, James and Miguel are great presenters, so that’s almost worth the watch on its own, but I expected this to be a basic Xamarin introduction. Luckily Miguel mentioned that this wasn’t the case, so I attended. In this session Miguel explains what Microsoft is trying to achieve for mobile developers, and they show all the cool new tools and bits for developers. A must watch! I especially liked the part where they demo the Live Player and the fastlane integration.


Visual Studio for Mac


During the keynote Visual Studio for Mac had already been announced, but in this session Miguel and Joseph go through more details of Visual Studio for Mac and what the future of this IDE will look like. Joseph and Miguel were clearly having fun on stage. Bunch of hackers!



What’s new in Xamarin.Forms

The third and last session held by Xamarin folks was about Xamarin.Forms. Nobody less than Jason Smith explains what is to come in Xamarin.Forms 3.0, such as performance improvements and the FlexLayout, an awesome feature for building apps across multiple device sizes. Other things Jason mentioned were CSS-like styling, one-time binding and improvements to the ListView.


Mobile Center & Visual Studio

Mobile Center is a new product that, together with VSTS, should give most teams a full DevOps solution for mobile development. Here are the key sessions to watch regarding Mobile Center:

Visual Studio Mobile Center: Ship mobile apps faster

The major session by the Mobile Center team is a must watch if you are thinking about using Mobile Center. Thomas Dohmke and Keith Ballinger explain all the new features of Mobile Center, like UWP support, push notifications and store deployments.


Visual Studio Mobile Center and Visual Studio Team Services: Better together for your Mobile DevOps

This session by Simina Pasat explains what Mobile Center does and how it fits together with VSTS.


General C# & .Net

Build is not only about mobile development. There are several other sessions that aren’t specific to mobile developers but can still be really useful, since we still code in C# and .Net, right?


Three Runtimes, one standard… .NET Standard: All in Visual Studio 2017

Scott Hanselman and Scott Hunter gave a great presentation on .Net Standard and .Net Core: important changes coming to .Net in the coming year. If you’re not up to date with what .Net Standard is all about, this session has you covered. Next to that, the two Scotts are just generally funny, so it’s worth a watch even if you’re already an expert on .Net Core and .Net Standard 2.0.


The future of C#

This session is a classic Build session that Mads and Dustin do every year. When I read the session abstract it made me laugh: “We’re back!”, it stated, as this is a Build classic I remember from when I was at Build in 2012. This year they showed all the new features of C# 7, but also the road ahead for C# 7.1, 7.2 and C# 8!


SignalR .NET Core: Realtime cross-platform open web communication

Damian Edwards and David Fowler explain the future of SignalR. SignalR was already a way of building real-time communication between devices, but it was always a bit unreliable and wonky, especially on mobile devices. SignalR Core is a complete rebuild from the ground up and looks really promising. I think it will be used a lot once it goes GA later this year.


Cognitive Services

Cognitive Services and AI were a major topic at Build. Here are some videos (next to the keynote) that might inspire you to use Cognitive Services in your apps.

Computer vision made easy: From pre-trained models to Custom Vision, Microsoft Cognitive Services has you covered

Computer vision is a super cool topic and it’s easy to implement. In this session Anna Roth shows you the possibilities of Cognitive Services related to computer vision.


Using Microsoft Cognitive Services to bring the power of speech recognition to your apps

Next to computer vision, speech recognition is another cognitive service, and it really blows my mind how far the technology has come in the past few years. Watch this session for everything about speech recognition in your apps.

Project Rome

Project Rome is a really interesting project for mobile developers, and most mobile devs I spoke to at or after Build still didn’t know about it. Project Rome focuses on cross-device experiences for apps, which I predict is going to be huge in the future.

bill buxton

Cross-device and cross-platform experiences with Project Rome and Microsoft Graph

This session gives a good overview of what is possible with Project Rome. Vikas and Carmen gave lots of demos and explained the why, the what and the how of Project Rome.


App engagement in Windows Timeline and Cortana with User Activities and Project Rome

Project Rome goes hand in hand with the Microsoft Graph and the addition of User Activities and Devices to the Microsoft Graph. In this session Shawn and Juan describe how you can engage users cross-device using the features of Cortana and the new Windows Timeline.



“Bots are the new apps” is a sentence I’ve heard quite often in the past year, which was unofficially called “the year of the bots”. Is this mobile tech? I’m still not convinced bots will replace native mobile apps, but they’re a great addition to cover certain mobile moments.

bot framework

Bot capabilities, patterns and principles

I visited this session and I have to say I really liked it. Mat Velloso and Ryan Volum give real-life examples of how you could set up a bot and which design patterns you can use to build a good one. Even if you’re not a bot developer, this might inspire you to build some small bots or integrate them into your apps.



UWP was a big topic at Build, as it is every year. Although I haven’t focused on it much this year, since I was at all the sessions above covering Xamarin and other mobile or Azure related topics, there is still quite a long list of videos worth watching if you’re building UWP apps.

The first major announcement I really liked was the Fluent Design System. Although I’m a dev, I really love good design, and these kinds of systems really help me build great stuff. I absolutely loved Metro (back when it was announced), but Microsoft hadn’t upgraded this design language much until now.


fluent design system

Introducing Fluent Design

Build Amazing Apps with Fluent Design


Other sessions on UWP development:

What’s new and coming for Windows UI: XAML and composition

Windows Store: Manage and promote apps your way

App Model evolution

Nextgen UWP app distribution: Building extensible, stream-able, componentized apps

Ten things you didn’t know about Visual Studio 2017 for building .NET UWP apps

XAML custom controls for UWP: Start to finish



These sessions should fill quite a bit of your spare time and get you fully up to date on current mobile development in the Microsoft space. I really like listening to some of them during my commute. Did I miss any important sessions? Please let me know in the comments.


I had a great time at Build in Seattle; hopefully I’ll see you there next time.

Happy Coding!

Geert van der Cruijsen


Created an open source VSTS build & release task for Azure Web App Virtual File System

I’ve created a new VSTS build & release task to help you interact with the Virtual File System (VFS) API (part of the Kudu API of your Azure Web App). Currently this task can only be used to delete specific files or directories from the web app during your build or release workflow. It will be updated in the near future to also support listing files and uploading or downloading files through the VFS API.


The reason I made this task is that I needed it at my current customer. We’re deploying our custom solution to a Sitecore website running on Azure Web Apps using MSDeploy. The deployment consists of 2 parts: an install of the out-of-the-box Sitecore installation and the deployment of our customisations. When deploying new versions we want to keep the Sitecore installation, and MSDeploy will update most of our customisations. Some customisations, however, create artifacts that stay on the server, aren’t controlled by the MSDeploy package, and can cause errors in our web application. This new VSTS build/release task can help you delete these files. In the future this task will be updated with other VFS API functionality, such as listing, uploading or downloading files.
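Under the hood, deleting a file through the Kudu VFS API boils down to an HTTP DELETE with the web app’s deployment credentials. Sketched with curl (site name, credentials and path are placeholders):

```shell
# Delete a single file through the Kudu VFS API;
# the If-Match header skips the ETag concurrency check
curl -X DELETE \
  -u '<<deploymentUser>>:<<deploymentPassword>>' \
  -H 'If-Match: *' \
  "https://<<webappname>>.scm.azurewebsites.net/api/vfs/site/wwwroot/<<fileToDelete>>"
```

Directories are addressed the same way with a trailing slash.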

The task is available in the VSTS Marketplace and is open source on github.

Let’s have a look how to use this task and how it works under the hood.
Continue reading
