Mobile First Cloud First

A blog by Geert van der Cruijsen on Software Development, Cloud, DevOps & Apps

Author: Geert van der Cruijsen

Creating automated releases and other things using Hub CLI in Github Actions

I love the new GitHub Actions and all the options it gives to automate things in my workflow, such as CI/CD.

When I got added to the beta I wanted to see if I could create a full CI/CD pipeline that would first build my code and then release it to Azure. I used Azure DevOps quite a lot in the past, and what I loved there was the separation between build and release steps, where the build creates an immutable artifact that is then deployed in the release stage.


I wanted to recreate something similar so my approach was the following:

  • Create a build workflow that compiles my code
  • Let the build workflow create a release in GitHub containing artifacts that can be downloaded later on.
  • Create a release workflow that downloads the release artifacts and deploys them to my Azure environment.

Creating the workflow that compiles code isn’t that hard. There are plenty of samples out there, and GitHub helps you with a starter workflow by checking what kind of code is in your repo. Then it was time to create a release in my workflow and upload the artifacts to it. The first thing I did was search the marketplace for tasks that could do this for me. After not finding anything I thought to myself: shouldn’t it be possible to do this from the command line? When building these workflows I see myself doing more and more plain command line steps instead of using marketplace tasks that are often not much more than a wrapper on top of some CLI or API.

GitHub has a CLI called hub that can automate a lot of things from the command line, such as creating releases, so my thought was to just call some hub commands and everything would be OK. Well, it wasn’t that simple, because the hub CLI isn’t installed by default on the build agents. To fix this I created my own marketplace action to install this CLI on the build agent; from then on you can use all hub commands from within your Actions workflow.

Creating a release using the hub CLI is easy: just use hub release create. There is a range of other commands you could also use in your own workflow, but in this post we’ll focus on the hub release part. Hub release create does need some more setting up though, because creating releases requires authentication and we’ll need some more parameters. Let’s look at all the details, but first here is a sample full build workflow.
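A minimal sketch of such a workflow (the npm build commands and artifact name are placeholders; adapt them to your own project):

name: build and release

on:
  push:
    branches:
      - master

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Install hub cli
        uses: geertvdc/setup-hub@master

      - name: Build frontend        # placeholder build steps; replace with your own
        run: |
          npm ci
          npm run build
          mkdir -p $HOME/artifacts
          zip -r $HOME/artifacts/frontend.zip dist

      - name: Create release
        env:
          GITHUB_USER: ${{ secrets.GITHUB_USER }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_PAT }}
        run: |
          version=$(hub release -L 1 | awk -F. -v OFS=. 'NF==1{print ++$NF}; NF>1{$NF=sprintf("%0*d", length($NF), ($NF+1)); print}')
          mv $HOME/artifacts/frontend.zip $HOME/artifacts/frontend-$version.zip
          hub release create -m $version -a $HOME/artifacts/frontend-$version.zip $version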

Install Hub CLI

As I wrote above, I’ve created a custom GitHub Action called setup-hub that installs the hub CLI on your build agent so you can use it in your workflow. If you want to use the hub CLI yourself, just add these 2 lines to the workflow steps and hub will be installed and added to the path.

 - name: Install hub cli
   uses: geertvdc/setup-hub@master



Authentication

We’ll be using the hub CLI from within a run step of the workflow. By default hub will prompt you for an interactive login when doing authenticated calls such as creating a release. In an Actions workflow, however, there is already a default secret defined called GITHUB_TOKEN which you can use to authenticate. You will have to pass it into the run step by setting it as an environment variable.

 env:
   GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}


The GITHUB_TOKEN will execute the command as a generic “github-actions” user. This is fine in most cases, but there are some limitations: events created by the GitHub Actions user will never trigger other workflows. So in my case, where I wanted to trigger a deploy workflow after a release is created, it wouldn’t work. To change this you can use a workaround that executes the task as your own personal user. Create 2 secrets: one called GITHUB_USER containing your GitHub username and one called GITHUB_PAT containing a personal access token for your user. Pass this PAT secret into the GITHUB_TOKEN environment variable and the hub CLI command will be executed as if it was executed by you.

 env:
   GITHUB_USER: ${{ secrets.GITHUB_USER }}
   GITHUB_TOKEN: ${{ secrets.GITHUB_PAT }}


Release Versioning

OK, so now we can make authenticated calls with the hub CLI and create a release. To actually create a release we’ll need a version number of some sort. What I do in my builds is simply auto-increment the previous release version by 1. To do this we can query the last release by calling hub release -L 1. Then we add some bash to increment this version:

 version=$(hub release -L 1| awk -F. -v OFS=. 'NF==1{print ++$NF}; NF>1{$NF=sprintf("%0*d", length($NF), ($NF+1)); print}')
 echo $version

This one-liner takes the current version number in the form v#.#.# and increases the last number by 1.

Now we have everything we need to call hub release create in our workflow. Using the -a flag you can attach artifacts, as you can see in the full example listed above.

hub release create -m $version -a $HOME/artifacts/frontend-$version.zip $version

That’s all. Now each time a push is made to the master branch a new release will be created and saved in your GitHub releases. You could hook up another workflow that triggers on release creation using the following YAML:

 on:
   release: 
     types: [created]

Hub CLI can do far more than just releases; it can also help you automate things around pull requests or issues. I’m really curious what others will use it for. Please let me and the other readers know in the comments if you found new cool ways of using this.

If you like my setup-hub GitHub Action, please give it a star on GitHub or give feedback on Twitter.

Happy Coding!

Geert van der Cruijsen

Use VSCode REST Client plugin with OAuth and Azure Active Directory

When I’m building apps that consume APIs (so basically every app I build) I want to test those APIs by hand to see how/if they work as intended and what the exact responses are. To do this I love to use the VSCode plugin called “REST Client“. This plugin makes it super easy to test API calls, and one of its great benefits is that it stores all the information in plain text files so I can keep them together with my code in git.


Quite often the APIs I want to test need some form of authentication, and OAuth 2 is a very common scenario. Lately I was working with APIs from Azure and the Microsoft Graph API, and they all use OAuth 2 to authorize the requests. OAuth requires you to get a bearer token first, which you then pass into the other API calls to make authorized calls. REST Client is able to do this, you just have to know how it’s done, and since I couldn’t find it in the docs I decided to blog about it.

So how to get started?

In this example I’ll use a service principal with a client ID and client secret to get a bearer token. If you don’t have a service principal yet, here is a guide on how to create one.

As I wrote before, I love the REST Client plugin because it stores all my API calls in code. However, I don’t like secrets stored in git, so we’ll first start off by setting some environment variables in VSCode that the REST Client plugin can then use. Open the settings.json file from within VSCode and add a new block containing the information needed to get the bearer token, such as your tenant ID, client ID and client secret in the case of Azure Active Directory.
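A minimal sketch of such a block (the environment name “azure-dev” and the variable names are my own choice):

{
  "rest-client.environmentVariables": {
    "$shared": {},
    "azure-dev": {
      "tenantId": "<your-tenant-id>",
      "clientId": "<your-client-id>",
      "clientSecret": "<your-client-secret>"
    }
  }
}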

Save the settings.json file. Now open the command palette in VS Code and choose “Rest Client: Switch Environment”. The newly created environment should be there, and you can use these environment variables in your API calls.

 


Retrieving a bearer token

Now that we’ve made sure we don’t have to store secrets in our .http files (and therefore they don’t end up in git), we can create the API call to get a bearer token. In this example we get a bearer token to access the MS Graph API, so we log into our Azure AD tenant to get the token. We need to pass the tenant ID in the URL, and as form values we have to pass in the client ID, client secret and a scope (in my case https://graph.microsoft.com/.default).

When you execute this call you should get a bearer token in the response. Hooray! You could of course copy this bearer token into a variable and use it that way, but what is even nicer is that you can use it directly from the response in your next calls. You do this by adding a name “# @name auth” on top of your API call; once you do that, you can reference this request and its response in subsequent calls.
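In an .http file the token request could look roughly like this, assuming the client credentials flow against the v2.0 token endpoint and the environment variables defined above:

# @name auth
POST https://login.microsoftonline.com/{{tenantId}}/oauth2/v2.0/token HTTP/1.1
Content-Type: application/x-www-form-urlencoded

client_id={{clientId}}&client_secret={{clientSecret}}&grant_type=client_credentials&scope=https://graph.microsoft.com/.default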


Using the Bearer Token

Below is an example of how we use the access token we just requested to retrieve users from Azure Active Directory. We use the variable {{auth.response.body.access_token}}, where “auth” is the name of the REST call that retrieved the bearer token and access_token is the field from its response body.
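As a sketch (the Graph users endpoint is just an example):

GET https://graph.microsoft.com/v1.0/users HTTP/1.1
Authorization: Bearer {{auth.response.body.access_token}}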


As you can see, it’s actually quite simple to first get a bearer token and later use it in your REST Client querying. I use this all the time, but it isn’t documented that well, so hopefully it can help you in your API consuming/discovering endeavors.

Happy Coding!

Geert van der Cruijsen

Adding Azure Active Directory Authentication to connect an Angular app to Asp.Net Core Web API using MSAL

Integrating your application with Azure Active Directory using OAuth shouldn’t be too hard at first sight. I have done this many times with different development technologies like ASP.NET, Xamarin etc., but this week I had to do it for an Angular app for the first time. There is quite some information and documentation to be found on this subject, but a lot of it is outdated and it took me longer than expected, so that’s why I decided to write up how I got it to work, step by step.


So here is a description of what we’ll create in this post:

  • Create an Angular app from scratch using the Angular CLI and make it authenticate the user against Azure Active Directory using the MSAL library.
  • Create an ASP.NET Core Web API from scratch and connect it to Azure Active Directory as well.
  • Enable the Angular app to communicate with the Web API in an authenticated way using access tokens.

Setting up Azure Active Directory

In Azure Active Directory we have to register 2 applications. You can add an application in the Azure Portal by going to “Azure Active Directory -> App Registrations -> New Registration”

Front End App Registration

We’ll call the first application “demoapp-frontend” and it will contain the configuration for our frontend application.

Here you can also select which AD should be used: whether it should only allow users from your tenant, or you also want to allow multiple tenants or Microsoft accounts.

Lastly we fill in the Redirect URI, where we enter “http://localhost:4200” because that is where our Angular application will be running.


After that we press Register and wait for the application to be created. As soon as it is created we can go into the details and write down the Client ID and Tenant ID, because we will need them later.


Go to the Authentication menu item and check the boxes for Access Tokens and ID Tokens and save the configuration.

 


The next step in this app registration is enabling the OAuth implicit flow. To do this, open the manifest and set “oauth2AllowImplicitFlow” to true.

 


The last step is enabling the app registration to be used by end users when logging in. You can do this by going to the “API Permissions” menu and granting consent for the application.


Now the app registration is ready and we can continue with the app registration for the API.

API App Registration

We create another app registration called “demoapp-api”. We only need to enter the name and don’t need a redirect url since this app will only check for logged in users and won’t log in the users itself.

Write down the client ID again because we’re going to use it later on.

After we’ve created the app registration go to “Expose an API“. In here we’re going to add a scope by pressing “Add a Scope“.


We’re going to add a scope called “api://<clientID>/api-access”.

Note you can come up with your own scope name or add more scopes later on.


After adding the scope we’re going to add the front end app registration as “Authorized Client Application”. Press “Add a Client Application” and enter the client id of the Angular app registration we added.

 


This is all we need to do in Azure AD to enable our API and front end application to make use of Azure Active Directory. Now we can start coding our applications. We’ll make use of the MSAL library to connect the Angular app to our Web API. Let’s first create the ASP.NET Core Web API, which will check for a logged-in user on every request and otherwise return a 401 Unauthorized.

 

Creating the Asp.Net Core Web API

We’ll be creating a brand new ASP.NET Core 2.2 Web API in this sample by using the CLI: “dotnet new webapi“.

Add an “AzureActiveDirectory” object to your appsettings.json (or add the values using secrets) and fill in your AAD domain name, Tenant ID and Client ID (of the API app registration).

 "AzureActiveDirectory": {
 "Instance": "https://login.microsoftonline.com/",
 "Domain": "<yourdomain.onmicrosoft.com>",
 "TenantId": "<yourtenantid>",
 "ClientId": "api://<yourclientid>"
 },

Please note that the client ID must be in the form api://<yourclientid>.

After creating these settings we only need to update Startup.cs to add authentication and set up the AAD integration.

There are a few things to add here (see example startup.cs below)

  • ConfigureServices:
    • Add the services.AddAuthentication method to load our settings to point to the correct AAD app registration.
    • Add CORS. In this example we’ve taken the simplest approach by allowing every source. You might want to make this more specific in your own application.
  • Configure
    • Add app.UseCors
    • Add app.UseAuthentication.

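A rough sketch of what that Startup.cs could look like, using the Microsoft.AspNetCore.Authentication.AzureAD.UI package (the “AllowAll” policy name is my own):

using Microsoft.AspNetCore.Authentication;
using Microsoft.AspNetCore.Authentication.AzureAD.UI;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        // Validate incoming bearer tokens against the AAD app registration from appsettings.json
        services.AddAuthentication(AzureADDefaults.BearerAuthenticationScheme)
            .AddAzureADBearer(options => Configuration.Bind("AzureActiveDirectory", options));

        // Simplest possible CORS policy: allow everything (tighten this in a real application)
        services.AddCors(options =>
            options.AddPolicy("AllowAll", builder =>
                builder.AllowAnyOrigin().AllowAnyMethod().AllowAnyHeader()));

        services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_2);
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        app.UseCors("AllowAll");
        app.UseAuthentication();
        app.UseMvc();
    }
}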
This is everything we need to do to have a working ASP.NET Core Web API with AAD integration. Whenever you create a new API controller, just add an [Authorize] attribute to make sure your API calls are authenticated.

Creating the Angular App

We’ll also start with a brand new Angular app created using the Angular CLI: create a new app using “ng new”. In the Angular app we will use the MSAL library from Microsoft to connect to Azure Active Directory. MSAL is a new library which should replace the ADAL library Microsoft created earlier; MSAL is built to work with the new v2 endpoints of Azure Active Directory, while ADAL only works with the v1 endpoints. Microsoft has created an npm package for MSAL to be used in Angular, which makes using MSAL a lot easier. Install this package using “npm i @azure/msal-angular”. After installing this package we only need to enable Azure Active Directory in our app.module.ts. A sample is shown below. What do we need to add:

  • Add the MSAL module with the correct client ID and domain (https://login.microsoftonline.com/<tenantid>).
  • Create a protected resources map. This will function as a guard, so each time a resource from one of these URLs is called, the right access tokens will be sent along with it.
  • Fill the consent scopes: a list of all the scopes you would like to get access tokens for. This could be User.Read to retrieve the user’s login name from AD and specific API scopes for your API calls.
  • Add an HTTP interceptor so MSAL will add the right tokens and headers to your requests whenever you use a HttpClient.

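A sketch of app.module.ts along those lines, based on the 0.x API of @azure/msal-angular that was current at the time (option names differ in later versions; all IDs and the API URL are placeholders):

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { HttpClientModule, HTTP_INTERCEPTORS } from '@angular/common/http';
import { MsalModule, MsalInterceptor } from '@azure/msal-angular';

import { AppComponent } from './app.component';

// Protected resources map: calls to these URLs get the listed scopes' access tokens attached
export const protectedResourceMap: [string, string[]][] = [
  ['https://graph.microsoft.com/v1.0/me', ['user.read']],
  ['https://localhost:5001/api', ['api://<api-client-id>/api-access']]
];

@NgModule({
  declarations: [AppComponent],
  imports: [
    BrowserModule,
    HttpClientModule,
    MsalModule.forRoot({
      clientID: '<frontend-client-id>',
      authority: 'https://login.microsoftonline.com/<tenant-id>',
      consentScopes: ['user.read', 'api://<api-client-id>/api-access'],
      protectedResourceMap: protectedResourceMap
    })
  ],
  providers: [
    // The interceptor adds the right bearer token to HttpClient requests for protected resources
    { provide: HTTP_INTERCEPTORS, useClass: MsalInterceptor, multi: true }
  ],
  bootstrap: [AppComponent]
})
export class AppModule { }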
Now we can run the application, and as soon as we make a network call to a URL listed in the protected resources map we will be prompted to log in with our Azure AD credentials.

In the end, connecting your Angular app with Azure Active Directory isn’t that hard; you just have to know exactly which IDs to use where.

Hopefully this will help others in making the connection work smoothly. It took me a few hours too long, but I managed to get it working with the help and all-seeing eyes of my great colleagues Chris, Niels and Thijs.

Happy Coding!

Observing distributed application health using Azure Application Insights & Azure Log Analytics

Most people who use Azure Application Insights to monitor their applications will not look at it until something is wrong, and only then will they look at what exceptions are thrown to see what is going on. In my opinion, if you want to build highly available systems you also want to be able to see that everything is working as normal when there are no problems.

When you build a monolithic application it’s often quite easy to find certain performance bottlenecks by monitoring CPU and memory usage. When we look at distributed systems and microservice architectures, an application will often span multiple services with even more instances, running on thousands of machines, service buses, APIs, you name it. How do we monitor this by looking at CPU, memory and all the other traditional monitoring measures? You simply can’t.

In these types of scenarios, where you have several or maybe even thousands of instances, we have to look for other things. One thing you could do is come up with a KPI measuring a service that your application provides and see how often it is completed. To make this a bit simpler to understand, let’s look at an example:

Netflix is famous for their microservice architecture spanning thousands of machines, and they monitor SPS (starts per second). With the millions of subscribers they have, this number should be fairly predictable. That’s why they monitor it: if this number is affected, something must be wrong (if people start playback more often, maybe playback isn’t working so they keep pressing play; if fewer people press play, maybe the UI is broken and the event is not coming down to the server, or something else might be wrong). By monitoring just one number they can tell whether the overall health of the system is OK or not. You can learn more at the Netflix technology blog.

So how do you start with something like this yourself?

Finding the right KPI

There is no single solution for finding the KPI that is best to measure, but there are some things you might consider. First of all, it has to be important for your business. Next to that, it would be nice if the number was somewhat stable or had clear patterns. This all depends on your business and application.

Maybe it’s best to start with another example we used for one of our clients. We’ll take this example from the initial idea to how we actually monitor it using Azure Log Analytics and Application Insights.

The application we worked on had to do calculations every few minutes, and these calculations could take from 10 seconds to about a minute. It was really important that the end results of these calculations were sent to customers / other systems every X minutes. Because of this, the development team added logging to Application Insights that stored the calculation time for each cycle. During the day the calculation time ranged from fast (10 seconds) to slow (1 minute) because of several parameters. I’ve drawn a picture of what the graph looked like that took all the App Insights calculation times and plotted them over time.


 

The graph looked like this. Initially the dev team only created this view to monitor the health of the calculation times. A big problem here is that it provides no information about what is “normal”. As humans we are quite good at recognizing patterns, and after showing this picture to several people they all noted: wow, somewhere between 9:00 and 12:00 in the morning there must be something wrong.


 

The problem is that this is only the data of one day. It does not even show a pattern. There are several external influences that have an impact on calculation times; one of them is customer orders being created. This application is a business-to-business application, and a majority of orders is created during the morning of European working hours. This is why we need more data in our graph, so we can actually see if there are patterns.

In the next graph I’ve plotted the data of a full work week on the same area to see if we can find patterns.


 

 

When we plot this full week of calculation times we can see that there is quite a pattern to be found. Next to that, we can also very easily spot where something is not following our pattern. Is the high curve just before 12:00 still an anomaly? Guess not… But what is happening in the afternoon? Data that at first looked like part of some pattern in our heads suddenly stands out. I think we’ve found the KPI we want to measure.


When developing an application, adding counters and logging information is important to be able to create these kinds of dashboards. If you are not sure what to measure, just start with business functions started/completed and each service start/completed/retried. This gives you a starting point; from there on you can come up with new measures and counters.

An important part of DevOps is that as developers we have to start thinking more like Ops: what are good things to measure, monitor, etc.? In the past few years I often came across Devs telling Ops to become more like developers by adding automation and doing stuff as code, but it’s also important to focus on the other way around: Devs taking ownership of what they are building and making it easy to see if the application is still working like it is supposed to.

As Devs you have far more knowledge of what could cause certain delays, outages etc because you know how the application is working internally. So join forces and work together.

Implement it using Azure Application Insights and Azure Log Analytics

So now we have a pretty good idea of what we want on a dashboard. How do we implement this? Since the title of this post mentions Application Insights and Azure Log Analytics, I’m assuming you already have Azure Application Insights in place; if not, here is a guide. When we have access to an Application Insights instance we can start doing our custom measurements. In this post we’ll focus on measuring calculation times similar to the example above, but you could do this with any type of measurement.

How to track timing in app insights?
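A minimal sketch of such a measurement with the Application Insights .NET SDK (RunCalculation is a hypothetical stand-in for the real calculation work):

using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

public class CalculationRunner
{
    private readonly TelemetryClient telemetryClient = new TelemetryClient();

    public void RunTimedCalculation()
    {
        var telemetry = new DependencyTelemetry
        {
            Name = "CalculationCycle",   // the name we will query on later
            Type = "Calculation"
        };

        telemetry.Start();
        try
        {
            RunCalculation();            // hypothetical stand-in for the real calculation
            telemetry.Success = true;
        }
        finally
        {
            telemetry.Stop();            // make sure the timer is always stopped
            telemetryClient.TrackDependency(telemetry);
        }
    }

    private void RunCalculation()
    {
        // the actual calculation logic would live here
    }
}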
We can use the code above to track custom timing of pieces of code.  We’ll create a DependencyTelemetry object, Fill in the name and type properties call Start, do your calculation and if it succeeds  set the success to true and then finally call the Stop method so the timer is always stopped. This is all the code you need. When you run your app now and go to Application insights open the Analytics tab and run a query showing all “dependencies with name “CalculationCycle”.  Since we haven’t logged anything else we’ll just query all dependencies and voila there are our timings in the duration field. appinsights So our application is logging the calculation times. Now it is time to create a dashboard that shows the “normal” state and values from the last 24 hours.

Creating a Kusto query in Log Analytics:

We want to create a graph similar to the one I drew earlier in this post. We could have all these colored lines for all the different days, but what is even better is that we can take the data for the last month and combine it. In the query we actually build 2 series and combine them at the end to display a graph. The first series, called “Today”, takes all the calculation time values and summarizes them per hour. The second series, called “LastMonth”, takes all the values of the last 30 days and groups them by hour of the day as well. We also only take the 90th percentile of the values, so we filter out values that are special cases.
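A sketch of such a query, assuming the dependency name “CalculationCycle” used above:

let lastMonth = dependencies
| where name == "CalculationCycle" and timestamp between (ago(30d) .. ago(1d))
| summarize LastMonth = percentile(duration, 90) by hour = hourofday(timestamp);
let today = dependencies
| where name == "CalculationCycle" and timestamp > ago(1d)
| summarize Today = avg(duration) by hour = hourofday(timestamp);
lastMonth
| join kind=fullouter today on hour
| project hour, LastMonth, Today
| order by hour asc
| render columnchart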

Run the query to get the graph below. You can pin this graph to a dashboard, and now you can see your calculation times compared to the average calculation times of the last month on a per-hour basis.

For our scenario this worked really well. If you create something similar, make sure that the last 30 days is a good comparison. Should calculations be the same every day of the week, or do your calculations take longer on a Monday compared to a Friday? If that is the case you might want to tweak your query so you are actually comparing against your “normal” state.

 

Hopefully this post helped you set up a dashboard showing the “normal” state of your application, which you could display near your team’s working area to see if everything is still working as you expect it to.

Finally, I would like to give a shout-out to my colleagues Rene and Jasper who created this with me, from idea to final result.

Happy Coding (and observing)

Geert van der Cruijsen

Containerized build pipeline in Azure DevOps

Azure DevOps comes with several options to use as build agents in your Azure Pipelines. Microsoft has hosted agents where you don’t have to maintain your own hardware, and you can turn any machine you own into an agent by installing the agent script on that machine.

The hosted agents are packed with lots of pre-installed software to support you in your builds. If you run your own private agents you can customize them as you like. I’m currently at a large enterprise as a consultant in an IT-for-IT team that hosts a number of private agents for all other development teams to use. Our agents are fully set up through automation and have all the common tools used by teams (based on the hosted Azure DevOps agent images, which are open source). These agents work for the largest group of development teams, but there are always teams who need some special tools. To give teams freedom in their tool selection, we have them run their builds inside a Docker container. This is a new feature released at the end of September 2018.


When you run builds inside a container, all steps in your pipeline are executed inside this container. The work directory of the agent is volume-mapped inside the container. The ability to run your pipeline in a custom container gives you all the freedom of creating an image that has all the tools required to execute your build. The Docker image has 2 requirements: Bash and Node.js have to be available within the container, and then you’re ready to go.

How to create a containerized build pipeline

An important note is that containerized pipelines are currently only available in YAML based pipelines. I don’t know if pipelines created in the portal will eventually also support this but in my opinion YAML based pipelines are the way forward from now on because they have a lot of advantages over traditional pipelines. The official documentation on YAML pipelines can be found here.

Let’s take a simple YAML pipeline as the example. I’ve created a simple ASP.NET Core application and have set up a pipeline for that. This is what my azure-pipelines.yml file looks like.
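A stripped-down sketch of such a pipeline (the exact steps will differ per project):

pool:
  vmImage: 'ubuntu-16.04'

steps:
- script: dotnet restore
  displayName: dotnet restore
- script: dotnet build --configuration Release
  displayName: dotnet build
- script: dotnet test
  displayName: dotnet test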

I tried to make the build as simple as possible. It’s just a basic .NET Core build that we want to execute. For this example it doesn’t matter what the exact steps are; it could be anything: a .NET build, npm, Go, Java Maven, anything goes. We use one of the hosted agent queues to execute the build.

The next step is to make this regular build execute exactly the same steps in a container. We can do this fairly simply by adding some settings to our pipeline. You’ll need to add a container to the resources defined at the start of your YAML file. This can either be a public container from Docker Hub or a container from a private repository. The container resource receives a name, in my example “dotnet-geert”. We can use this name to reference the container in our pipeline so all build steps will be executed in it. You do that by adding a line just below your build pool saying which container should be used: container: dotnet-geert
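Putting those two pieces together (the image name is a placeholder for whatever image contains your build tools):

resources:
  containers:
  - container: dotnet-geert          # name we reference below
    image: geertvdc/dotnet-build     # placeholder: an image containing the .NET Core SDK

pool:
  vmImage: 'ubuntu-16.04'

container: dotnet-geert              # run all steps of this job inside the container

steps:
- script: dotnet build --configuration Release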

In this example we run our build on a hosted Ubuntu agent. The downside of running this on a hosted agent is that it won’t cache your Docker image, so it has to download the full image on every run. Because of this I don’t think this approach is that effective compared to private agents, which cache the Docker image locally, so spinning up a container is only a matter of seconds.

Running it on your own agents works exactly the same; there are however a few requirements. You either need a Linux machine with Docker support or a Windows machine running Windows Server with a version higher than 1803.

Performance improvements

The first thing we want to do is run our build on a private agent so we can reuse our Docker images and only have to download them once. Another feature of using containers is that you’ll always receive a fresh instance of your environment. This is nice because you can be sure that every build ran exactly the same and didn’t rely on changes a previous build might have made to your agent. It’s also something to consider when your builds take a long time: because you receive a fresh environment each build, you’ll also have to download all your dependencies each build. Most applications nowadays use a lot of external dependencies, so let’s have a look at how we can fix this.

Docker has a feature called volume mapping that enables you to map certain directories from your host machine and use them in your container. In my example pipeline we’re building a .NET Core application that uses NuGet packages. We can map a folder on our host machine to function as the global NuGet cache and use this within our container. Each time the container downloads NuGet packages they are stored outside the container, and when we run the same build again it can use the cached packages. The same thing works for npm or Maven packages when you are building applications with other technologies.

We can create the volume mapping by passing an option to our container. This option is -v for volume mapping, followed by <source folder>:<destination folder>. In the case of NuGet packages we also set a global environment variable that points the NuGet cache to this folder. After we do this our builds will be super fast, and we have all the flexibility of tools that containerized builds give you. Below is a full sample pipeline that uses a volume mapping for the NuGet cache.
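A sketch of the full pipeline with the NuGet cache mapped in (host path, image name and pool name are placeholders):

resources:
  containers:
  - container: dotnet-geert
    image: geertvdc/dotnet-build            # placeholder image with the .NET Core SDK
    options: -v /nuget-cache:/nuget-cache   # map a folder on the private agent into the container
    env:
      NUGET_PACKAGES: /nuget-cache          # point the global NuGet cache at the mapped folder

pool: Default                               # a private agent pool, so image and cache are reused

container: dotnet-geert

steps:
- script: dotnet restore
  displayName: dotnet restore
- script: dotnet build --configuration Release
  displayName: dotnet build
- script: dotnet test
  displayName: dotnet test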

I really like this new feature of Azure DevOps because it gives you a lot of flexibility in your builds without having to customize your own private agents too much.

Happy Coding! (And building!)

Geert van der Cruijsen

 

Passing in custom user settings and secrets to Maven in Maven VSTS Build Tasks

VSTS is tooling for setting up automated pipelines for all kinds of programming languages. I’ve seen more and more non-Microsoft technologies being used on VSTS, and I came across a couple of questions repeatedly, so I thought it was a good idea to write this up in a blog post.


 

The problem is the following: If you want to do a Maven build, Maven will expect some user settings to be present somewhere on your build server. While this is often configured once on the build server it is better to pass it in during build time especially if it contains secrets that you don’t want to have stored in plain text somewhere. So how do we do this?

In the sample we’ll add a connection to Sonatype Nexus (a package management solution, comparable to VSTS Package Management) so Maven can download the packages it needs or push its build artifacts there. Although this example only sets these settings, you can use the same approach for other kinds of settings as well.

So how to implement this?

First we need to add a file to our repo and call it ci-settings.xml. It will contain our user settings with a username and password to connect to Nexus.
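A minimal version of that file could look like this:

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
  <servers>
    <server>
      <!-- must match the <id> of the repository in your pom.xml -->
      <id>REPOSITORY ID</id>
      <username>${nexusUser}</username>
      <password>${nexusPassword}</password>
    </server>
  </servers>
</settings>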

This file has a few variables that we are going to replace, called nexusUser and nexusPassword. The “REPOSITORY ID” needs to match the id used in the pom.xml file.

In the Maven task we then pass this user settings file to the Maven command using the -s option. We can also pass in the values for our parameters in the ci-settings.xml file using -DnexusUser and -DnexusPassword. The full options value would look something like this:


-s $(System.DefaultWorkingDirectory)/ci-settings.xml -DnexusUser=$(Nexus.User) -DnexusPassword=$(Nexus.Password)

The actual values of $(Nexus.User) and $(Nexus.Password) are stored in the VSTS variables section, where you can also make the password a secret so it’s hidden from logs and from people editing or viewing the build definition.


 

Fix error on Azure: “the subscription doesn’t have permissions to register the resource provider”

Working in an enterprise environment, permissions in Azure might be trimmed down so that users do not have access to the Azure subscription itself and only have access to specific resource groups. When someone has contributor permissions in a resource group, you might think that they should be able to create everything in there that they would like. This is not always the case. Each Azure resource type has to be registered through a resource provider at the subscription level. When users only have access to certain resource groups and not to the subscription itself, you can run into errors when you try to create a new resource type that is not registered yet.

The error will say:

the subscription [subscription name] doesn’t have permissions to register the resource provider(s): [resource type]

Here is a sample screenshot of the error that appeared when SQL was not registered.


 

There are a couple of options to fix this.

  • Manually register the resource type in the Azure portal
  • Register all resource types in a subscription using the Azure CLI
  • Create a specific role for all users to give them permissions to register resource providers

Manually registering resource types in the Azure portal

Registering a resource type in the Azure portal is the simplest if you only want to register a specific resource type. If you want to register every resource type available this requires a lot of clicking so it’s better to choose one of the other 2 options using the CLI or a custom role.

Using the Azure portal to register a resource type is easy though. In the portal, navigate to your subscription, click Resource Providers in the left menu, and then click Register for each of the resource providers you want to register.


 

Register all resource types in a subscription using the Azure CLI

You can also use the Azure CLI to register all available resource types in your Azure subscription. This can be done with a single line of Azure CLI.
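Something along these lines, using xargs to loop over all provider namespaces:

# list every resource provider namespace and register each of them
az provider list --query "[].namespace" -o tsv | xargs -n 1 az provider register --namespace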
This will first list all resource providers and then call the register command for each of them. One caveat to watch out for: if new resource types are added to Azure, they are not automatically registered, so you’ll have to run the script again, or choose the third option of creating a specific role that all users get so they can register resource providers themselves.

Create a specific role for all users to give them permissions to register resource providers

The final and most future-proof solution is creating a new role, which you can assign to all your users, that has permission to register resource providers. The first step is defining a new JSON file describing this role.
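For example (the role name is my own; */register/action is the action that allows registering any resource provider):

{
  "Name": "Resource Provider Registrator",
  "Description": "Can register resource providers on the subscription",
  "Actions": [
    "*/register/action"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/<your-subscription-id>"
  ]
}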

This JSON file allows the action for registering resource providers, and the only thing you’ll have to customize is adding your own subscription IDs. When this role definition file is finished, we can use the Azure CLI to create the role, and after that we can assign users to it. This does require that you have groups in AD containing all the users you want to give access. If you don’t have that, the second option is probably better for you, because it will become a lot of work to assign this role to all your users manually.
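Creating and assigning the role could then look roughly like this (the file name, role name and IDs are placeholders):

# create the custom role from the json definition
az role definition create --role-definition @registrator-role.json

# assign the role to the AD group containing your users
az role assignment create \
   --role "Resource Provider Registrator" \
   --assignee <ad-group-object-id> \
   --scope /subscriptions/<your-subscription-id>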

That’s it! I’ve given you 3 options to solve this Azure error, so hopefully one of these can help you get going again in building cool stuff on Azure.

Geert van der Cruijsen

 

 

Programatically creating Azure resource groups and defining permissions

In my job as a DevOps consultant I try to help my clients build better software faster. A key part of this is automation of the complete delivery pipeline. Most of the time this focuses on the pipeline from user story to committed code to, eventually, this code running in production. With tools like VSTS this is quite easy to do, but what about the things that happen outside of the core of the application?

Creating infrastructure as code is becoming mainstream in public cloud scenarios, so teams can create and deploy their own infrastructure. This allows independent, self-serving teams to build better software faster. But often people stop there. There are still several tasks that are manual steps where someone with the right permissions has to step in, for example: creating a new VSTS team or Git repo, opening ports on the firewall, or creating a resource group in Azure where the team can create their infrastructure. My goal is to automate everything here so teams can create these things in a guided, self-serving manner. I’ll be diving deeper into that subject in a later post, where I’ll explain how we’ve created an operations chatbot that does these kinds of things. In this post I want to focus on one specific area where this bot can help: creating Azure resource groups for teams and assigning permissions.

In many of my projects we host our infrastructure in Azure and I like DevOps teams to be independent. Looking at Azure they should have a space where they can create their infrastructure and do their thing. It’s up to the teams what kind of stuff they spin up since they should be the ones maintaining it and they are responsible for the costs.

The thing we’ve built is a chatbot that helps create new resource groups for teams by asking the user 3 questions:

  • What is the application name? (my practice is to group infrastructure for a single application together in 1 resource group)
  • Which team is the owner of the application? (in my case all teams have an AD group containing all team members)
  • What kind of environment do you need? (Dev, Test, Acceptance, Production) These choices are made by my client and we have 2 subscriptions (1 DTA and 1 Prod)

After answering these 3 questions the bot will create a standardised resource group name for the team in the format: <appname>-<teamname>-<environment>-rg
for example:

publicwebsite-mar-dev-rg

This resource group will be created, and the team’s AD group will be granted contributor permissions to this newly created resource group.


Enough about the chat bot for now, let’s create the code to actually create a new resource group programmatically.

To do this we’ll use 2 NuGet packages from Microsoft:

  • Microsoft.Azure.Management.Fluent
  • Microsoft.Azure.Management.Authorization

These 2 packages contain all the APIs to manage Azure resources. The 2 things we need are managing resource groups and AD permissions. After adding these 2 packages we can start coding our method called CreateResourceGroup. The only parameters we need are the resource group name and the AD group.

First you need to log in to your Azure subscription to be able to retrieve information, using an account that has permissions to create resource groups in Azure. It’s not a best practice to run this code as your own user account, so it’s better to create a service principal that can do this. To create a new service principal, take a look at this guide: https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-service-principal-portal

After we’ve retrieved the credentials, creating a resource group is super easy: it’s just 1 line of code. Adding the correct AD group to grant permissions is quite simple too, if the service principal has the right permissions to query AD. After querying the right group we can create a RoleAssignment to assign the contributor role to the Azure AD group.
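A rough sketch with the Fluent SDK (exact method and namespace names may differ slightly between versions; all IDs are placeholders):

using System;
using Microsoft.Azure.Management.Fluent;
using Microsoft.Azure.Management.Graph.RBAC.Fluent.Models;
using Microsoft.Azure.Management.ResourceManager.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent.Core;

public static class ResourceGroupCreator
{
    public static void CreateResourceGroup(string resourceGroupName, string adGroupName)
    {
        // Log in as a service principal that may create resource groups (IDs are placeholders)
        var credentials = SdkContext.AzureCredentialsFactory.FromServicePrincipal(
            "<client-id>", "<client-secret>", "<tenant-id>", AzureEnvironment.AzureGlobalCloud);

        var azure = Azure.Configure()
            .Authenticate(credentials)
            .WithSubscription("<subscription-id>");

        // Creating the resource group itself is a single statement
        var resourceGroup = azure.ResourceGroups
            .Define(resourceGroupName)
            .WithRegion(Region.EuropeWest)
            .Create();

        // Look up the team's AD group and grant it Contributor on the new resource group
        var adGroup = azure.AccessManagement.ActiveDirectoryGroups.GetByName(adGroupName);

        azure.AccessManagement.RoleAssignments
            .Define(Guid.NewGuid().ToString())
            .ForGroup(adGroup)
            .WithBuiltInRole(BuiltInRole.Contributor)
            .WithResourceGroupScope(resourceGroup)
            .Create();
    }
}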

More info on the full bot solution later. Hopefully this will help you create your own Azure automation to speed up your development process.

Happy Coding!

Geert van der Cruijsen

Connecting to Azure Blob Storage events using Azure Event Grid

I was looking for a solution to sync an Azure blob storage account containing images to a third party solution that offered an API to upload images. My initial thought was to hook up Azure Functions to react to Azure Blob Storage triggers. One thing that is not possible with blob storage triggers, however, is to act on deletes; there is only a trigger for adding or changing files.

Luckily Microsoft announced a new solution called Event Grid a few months back. Event Grid is great for connecting events that come from Azure resources (or custom resources) to things like Azure Functions or Logic Apps.


Event Grid also supports events for Blob Storage where you get events for adding, changing or deleting items. So how to get started?

Event Grid is easy to set up and consists of 2 parts: topics and subscriptions. Topics are places where events are sent to by Azure resources or even custom publishers. Subscriptions can be made on topics and will receive the events from a certain topic.

Creating the Storage Account / Event Grid Topic

Azure Blob storage has an Event Grid topic built in, so you don’t have to create a separate Event Grid topic. At the time of writing, Event Grid is only available in the West Central US and West US 2 regions, so if you create a storage account there it’ll automatically also get an Event Grid topic.

Let’s create the topic using Azure CLI

#create resource group
az group create -n <<ResourceGroupName>> -l westus2

#create storage account
az storage account create \
   --location westus2 \
   --name <<NameOfStorageAccount>> \
   --access-tier cool \
   --kind BlobStorage \
   --resource-group <<ResourceGroupName>> \
   --sku Standard_LRS

Or using the Azure Portal:


When we open the storage account in the Azure portal, we’ll see that the left menu has an option called Event Grid. Here you can see a list of all subscriptions to this Event Grid topic. Currently we don’t have any, so let’s take a look at how we can create one.

Creating the Event Grid Subscriptions

Since a topic can have multiple subscribers, let’s add 2 different subscribers. First we’ll create a very simple subscription that allows us to see what the events actually look like, and after that we’ll take a look at adding an Azure Function that handles the events.

Simple test subscription using Requestb.in

Requestb.in is a website where you can request a simple URL that collects all HTTP messages sent to it. This is a free service that keeps the last 20 messages for a maximum of 48 hours. As soon as we’ve created our RequestBin, we’ll receive a URL that looks something like this: https://requestb.in/1ckzahm1

We can add this URL as a webhook subscription to the topic we created earlier. This can be done using either the Azure CLI or the Azure portal.

az eventgrid resource event-subscription create \
   --endpoint "https://requestb.in/1ckzahm1" \
   --name requestbinsubscription \
   --provider-namespace Microsoft.Storage \
   --resource-type storageAccounts \
   --resource-group <<ResourceGroupName>> \
   --resource-name cloudinarysync

Or in the Azure portal:


When we upload a file to the storage account now, we are able to see the event that was triggered by going to the requestb.in inspect web page. We’ll see the JSON payload containing the event details, which we can use later in our Azure Function.


Example JSON for adding and deleting a file looks like the following:

File Add/Change event

File Deleted event
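A BlobCreated event roughly follows the documented shape below (storage account, container and blob names are placeholders); a BlobDeleted event looks the same but with eventType Microsoft.Storage.BlobDeleted and api DeleteBlob:

[{
  "topic": "/subscriptions/<subscription-id>/resourceGroups/<ResourceGroupName>/providers/Microsoft.Storage/storageAccounts/cloudinarysync",
  "subject": "/blobServices/default/containers/images/blobs/photo1.jpg",
  "eventType": "Microsoft.Storage.BlobCreated",
  "eventTime": "2017-12-06T14:19:03.1234567Z",
  "id": "831e1650-001e-001b-66ab-eeb76e069631",
  "data": {
    "api": "PutBlob",
    "clientRequestId": "6d79dbfb-0e37-4fc4-981f-442c9ca65760",
    "requestId": "831e1650-001e-001b-66ab-eeb76e000000",
    "eTag": "0x8D4BCC2E4835CD0",
    "contentType": "image/jpeg",
    "contentLength": 524288,
    "blobType": "BlockBlob",
    "url": "https://cloudinarysync.blob.core.windows.net/images/photo1.jpg",
    "sequencer": "00000000000004420000000000028963",
    "storageDiagnostics": { "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0" }
  },
  "dataVersion": "",
  "metadataVersion": "1"
}]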

So we’ve seen the easiest way to hook up the Storage account events using Event Grid. Let’s add an Azure function that actually does something with the events.

Azure Function

When you create a new Azure Function you’ll have to choose the trigger type. You can choose between several options here, like an HTTP trigger, webhook trigger or Event Grid trigger. When looking at the list you might think the Event Grid trigger is the way to go; I tend to disagree for now. The events sent from Event Grid are just plain POST HTTP messages, so choosing a webhook trigger or HTTP trigger works just as well, and they are even easier to test locally. What the Event Grid trigger adds is that it maps the JSON payload shown above to a typed object, so you can immediately use it without parsing the JSON yourself. A downside of the HTTP trigger and webhook trigger is that you’ll have to arrange security yourself, since by default everyone who knows the URL could call the webhook. Let’s look at both options.

If you are using the portal use the small link below “Get started on your own” called “Custom Function” to choose the trigger type.


Event Grid Trigger

When you open your Event Grid trigger (whether you created it via the portal or uploaded a precompiled function), there is a link on the top right called “Add Event Grid Subscription”. Click this to set up the Event Grid trigger.


 

 

In the “Create Event Subscription” window select the “Storage Account” topic type and select your storage account. After pressing Create your Azure Function will be triggered after each change in the storage account.


 

Http / Webhook Trigger

Webhook and HTTP triggers work almost the same way. In the portal there is a link on the top right to get the function URL. When you click this you’ll see a popup with the endpoint that you should copy. After this, adding the subscription works exactly the same as I described above for the requestb.in, except now you enter the URL you just copied.


Now that we’ve set up the plumbing, we can start writing our function. I’ll show you the code for an HTTP trigger, but an Event Grid trigger function would look almost the same; you can skip parsing the JSON there because you get a typed object as a parameter containing all the information.
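A sketch of such an HTTP-triggered function (Functions v1 style; the third-party sync calls are left as placeholder comments), including the Event Grid subscription validation handshake:

using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Host;
using Newtonsoft.Json.Linq;

public static class BlobEventHandler
{
    [FunctionName("BlobEventHandler")]
    public static async Task<HttpResponseMessage> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestMessage req,
        TraceWriter log)
    {
        // Event Grid always posts an array of events
        var events = JArray.Parse(await req.Content.ReadAsStringAsync());

        foreach (var evt in events)
        {
            var eventType = (string)evt["eventType"];

            // Echo the validation code back when Event Grid validates the webhook endpoint
            if (eventType == "Microsoft.EventGrid.SubscriptionValidationEvent")
            {
                var validationCode = (string)evt["data"]["validationCode"];
                return req.CreateResponse(HttpStatusCode.OK, new { validationResponse = validationCode });
            }

            var blobUrl = (string)evt["data"]["url"];

            if (eventType == "Microsoft.Storage.BlobCreated")
            {
                log.Info($"Blob created: {blobUrl}");
                // placeholder: upload the image to the third party API here
            }
            else if (eventType == "Microsoft.Storage.BlobDeleted")
            {
                log.Info($"Blob deleted: {blobUrl}");
                // placeholder: remove the image from the third party API here
            }
        }

        return req.CreateResponse(HttpStatusCode.OK);
    }
}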

 

Pitfalls & Tips

So if everything is correct you should have a working Azure Function now. But how do you track the events that are coming in? You could set up Application Insights and track usage yourself, but a nice feature of Event Grid is that it has built-in logging and metrics. The metrics themselves are a bit hidden in the Azure portal, so I’ll explain how to find them.

The subscriptions themselves cannot be found in the resource group of the topic. When navigating to the storage account and clicking Event Grid you can get a list of your subscriptions, but no metrics.

For metrics of the subscriptions you have to go to the left menu in the Azure portal and click the > arrow for more resources. Search for Subscriptions, and the Event Grid Subscriptions will show up there.


 

When you go here you’ll get an overview of all events that were triggered by a topic and the subscriptions connected to it. You can also see when Event Grid did retries and which events completed successfully or failed. It took me a while to find this, so hopefully this helps more people find it.


 

Another thing to note is that right after creation of the subscription it might take a while for the events to start firing. I don’t know if this is related to the preview state of Event Grid or if it will always be the case. In the end all events fired, but it took a while for the first events to come through. After it had been running for a while the events were triggered really fast; even when I made a large number of changes the events were fired within seconds.

Hopefully this is useful for you developers who want to react to triggers from Azure Storage.

Happy Coding!

Geert van der Cruijsen

Setting up Continuous delivery for Azure API management with VSTS

While continuous delivery for web applications on Azure is becoming quite popular and common, I couldn’t find anything about setting this up for your API definitions in Azure API Management. Since I had to set this up at my current customer, I thought it was a good idea to share it in a blog post so everyone can enjoy it. In this blog post I’ll explain how you can set up continuous delivery of your API definitions in Azure API Management, including the actual API implementation in Azure Web Apps, using VSTS (Visual Studio Team Services).


First let me explain the architecture we use for our API landscape. As explained, we use Azure API Management for exposing the APIs to the outside world and Azure Web Apps for hosting the API implementations. These web apps (both .NET Core and full framework .NET Web APIs) are hosted in an ASE (App Service Environment) so they are not exposed directly to the internet, while we can still use all the cool things Azure Web Apps offer. These API web apps then connect to data stores hosted in Azure or to the on-premises environments through an ExpressRoute or VPN.

To be able to set up our Continuous Delivery pipeline we have to arrange the following things.

  • Build your API implementation so we have a releasable package
  • Create infrastructure to host the API implementation
  • Deploy the API implementation package to the newly created infrastructure
  • Add API definition to Azure API management.
  • Repeat above steps for each environment. (DTAP)

Building your App

The first step can be different from my example if you’re not building your APIs using .NET technology. In our landscape we have a mix of APIs made with .NET Core and APIs made with the full .NET Framework, because they needed libraries that were not available in .NET Core (yet). I’m not going into detail on how to build your API using VSTS, because I’ll assume you’re already doing this or know how to do it. If not, here is a link to the official documentation.

One thing to keep in mind is that your API web app does have to expose an API definition so Azure API Management can import it. We use Swashbuckle to automatically generate a Swagger definition; if you’re using .NET Core you’ll have to use Swashbuckle.AspNetCore.

Deploying the API implementation & adding it to Azure API management

For automating the deployments we’re going to use the Release Management feature of VSTS. In our first environment we’ll create steps to do all the things described above.


 The steps in our workflow are the following:

  1. Create web application infrastructure by rolling out an ARM template
  2. Set environment specific variables
  3. Deploy the API implementation package
  4. Use a task group to add the API definition to Azure API management.

Creating the web app infrastructure & deploying the API Implementation package

The first and third steps are the basic steps of deploying a web application to Azure Web Apps. This is no different for APIs, so I’ll just link to an existing blog post that explains them if you don’t know what they do.

Setting environment specific variables

The second task is a custom task created by my colleague Pascal Naber. It helps you overwrite specific variables you want to use in your environments by storing these settings as app settings on your Azure web app. We use this to set the connection strings to backend systems, for example a Redis cache or a database.

Add API to API Management

So if we released just the first 3 steps, we would have an API that works on its own. But the main reason for this blog post is that we want to have our API exposed through Azure API Management, so let’s have a look at how we can do that.

Azure API Management has PowerShell commandlets to interact with it, and we can use these to add API definitions to Azure API Management too. Below is a sample piece of PowerShell that can import such an API definition from a Swagger file.
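A condensed sketch (parameter names are placeholders; wget is PowerShell’s alias for Invoke-WebRequest):

param(
    [string]$ResourceGroupName,
    [string]$ServiceName,
    [string]$ApiId,
    [string]$ApiPath,
    [string]$SwaggerUrl   # e.g. the /swagger/v1/swagger.json endpoint of the deployed web app
)

# 1. get a context to the API Management instance
$context = New-AzureRmApiManagementContext -ResourceGroupName $ResourceGroupName -ServiceName $ServiceName

# 2. download the swagger file to a temporary location (our web apps live inside an ASE)
$swaggerFile = Join-Path $env:TEMP "swagger.json"
wget $SwaggerUrl -OutFile $swaggerFile -UseBasicParsing

# 3. import the swagger definition into API Management
Import-AzureRmApiManagementApi -Context $context -SpecificationFormat "Swagger" `
    -SpecificationPath $swaggerFile -ApiId $ApiId -Path $ApiPath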

The script is built up out of 3 parts. First we retrieve the API Management context by using the New-AzureRmApiManagementContext commandlet; once we have a context we can use it to interact with our API Management instance. The second part retrieves the swagger file from our running web app through wget (PowerShell’s alias for Invoke-WebRequest, which performs the GET request). We download the swagger file to a temporary disk location because in our case our web apps are running in an ASE and are therefore not accessible through the internet. If your web apps are reachable from the internet you can also pass the URL directly to the third command, Import-AzureRmApiManagementApi, which imports the Swagger file into Azure API Management.

Now that we have a script we can use to import the API, let’s add it to the VSTS release pipeline. We could just add the PowerShell script to our source control and call it using the built-in PowerShell task. However, I’d like to make life for the developers in our dev teams as easy as possible, so I tried to abstract all the PowerShell mumbo jumbo away from them so they can focus on their APIs. To do this I’ve created a “Task Group” in VSTS containing this PowerShell task, so developers can just pick the “Add API to API Management” task from the list in VSTS and supply the necessary parameters.


When we add this task group to the release we can run our release again and the API should be added to Azure API Management.


Success!! Our initial continuous delivery process is finished. At my current client we have 4 different API Management instances, and we also deploy our APIs 4 times: a Development, Test, Acceptance and Production instance. The workflow we created deploys the API to our development environment. We’ve set this up to be continuous, so every time a build completes on the master branch we create a new release that deploys a new API instance to Azure and updates our development Azure API Management instance.

We can now clone this environment 3 times to create a pipeline that moves from dev and test to acceptance and production. I always set the trigger to start automatically after the previous environment has completed. If we run our release again we’ll have 4 API instances deployed, and in all 4 Azure API Management instances the corresponding API will be imported.

Now the only thing left to add, optionally, is integration tests in the environments you prefer, and you are ready to roll!


 

Happy Coding!

Geert van der Cruijsen
