Mobile First Cloud First

A blog by Geert van der Cruijsen on Software Development, Cloud, DevOps & Apps

Tag: Azure

Adding Azure Active Directory Authentication to connect an Angular app to Asp.Net Core Web API using MSAL

Integrating your application with Azure Active Directory using OAuth shouldn’t be too hard at first sight. I have done this many times with different development technologies like ASP.NET, Xamarin, etc., but this week I had to do it for an Angular app for the first time. There is quite some information and documentation to be found on this subject, but a lot of it is outdated, and it took me longer than expected, so I decided to write up how I got it to work, step by step.

[Diagram: Angular app - Azure Active Directory - .NET Core Web API - OAuth]

So here is a description of what we’ll create in this post:

  • Create an Angular app from scratch using the Angular CLI and have it authenticate the user against Azure Active Directory using the MSAL library.
  • Create an ASP.NET Core Web API from scratch and connect it to Azure Active Directory as well.
  • Enable the Angular app to communicate with the Web API in an authenticated way using access tokens.

Setting up Azure Active Directory

In Azure Active Directory we have to register 2 applications. You can add an application in the Azure Portal by going to “Azure Active Directory -> App Registrations -> New Registration”

Front End App Registration

We’ll call the first application “demoapp-frontend” and it will contain the configuration for our frontend application.

Here you can also select which accounts should be allowed to sign in: only users from your own tenant, users from multiple tenants, or Microsoft accounts as well.

Lastly we fill in the Redirect URI with “http://localhost:4200”, because that is where our Angular application will be running during development.


After that we press Register and wait for the application to be created. As soon as it is created we can go into the details and write down the Client ID and Tenant ID, because we will need them later.


Go to the Authentication menu item and check the boxes for Access Tokens and ID Tokens and save the configuration.

 


The last step in this app registration is enabling the OAuth implicit flow. To do this, open the manifest and set “oauth2AllowImplicitFlow” to true.
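
In the manifest editor this is a single property; the relevant fragment of the manifest JSON looks like this (everything else stays untouched):

{
  "oauth2AllowImplicitFlow": true
}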

 


Finally we make the app registration usable by end users when logging in. You can do this by going to the “API permissions” menu and granting consent for the application.


Now the app registration is ready and we can continue with the app registration for the API.

API App Registration

We create another app registration called “demoapp-api”. We only need to enter the name; we don’t need a redirect URL, since this app will only check for logged-in users and won’t sign in users itself.

Write down the client ID again because we’re going to use it later on.

After we’ve created the app registration go to “Expose an API“. In here we’re going to add a scope by pressing “Add a Scope“.


As a scope we’re going to add a scope called “api://<cliendID>/api-access.

Note you can come up with your own scope name or add more scopes later on.


After adding the scope we’re going to add the front end app registration as an “Authorized client application”. Press “Add a client application” and enter the client ID of the Angular app registration we added earlier.

 


This is all we need to do in Azure AD to enable our API and front end application to make use of Azure Active Directory. Now we can start coding our applications. We’ll make use of the MSAL library to connect the Angular app to our Web API. Let’s create the ASP.NET Core Web API first; it will check for a logged-in user on every request and otherwise return a 401 Unauthorized.

 

Creating the Asp.Net Core Web API

We’ll be creating a brand new Asp.Net Core 2.2  Web API in this sample by using the CLI. “dotnet new webapi“.

Add an “AzureActiveDirectory” object to your appsettings.json (or add the values using user secrets) and fill in your AAD domain name, Tenant ID and Client ID (of the API app registration).

 "AzureActiveDirectory": {
 "Instance": "https://login.microsoftonline.com/",
 "Domain": "<yourdomain.onmicrosoft.com>",
 "TenantId": "<yourtenantid>",
 "ClientId": "api://<yourclientid>"
 },

Please note that the Client ID has to be in the form api://<yourclientid>.

After creating these settings we only need to update Startup.cs to add authentication and set up the AAD integration.

There are a few things to add here (see example startup.cs below)

  • ConfigureServices:
    • Add services.AddAuthentication and load our settings so the API points to the correct AAD app registration.
    • Add CORS. In this example we’ve taken the simplest approach by allowing every origin. You might want to make this more specific in your own application.
  • Configure
    • Add app.UseCors
    • Add app.UseAuthentication.
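
Below is a minimal sketch of what such a Startup.cs can look like. I’m using the plain JWT bearer middleware pointed at the Azure AD v2.0 endpoint and a CORS policy named “AllowAll”; those names and the exact middleware choice are just one way to do it, so treat this as a starting point rather than the definitive implementation.

using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public Startup(IConfiguration configuration) => Configuration = configuration;

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        var aad = Configuration.GetSection("AzureActiveDirectory");

        // Validate incoming access tokens against our AAD app registration (v2.0 endpoint)
        services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
            .AddJwtBearer(options =>
            {
                options.Authority = $"{aad["Instance"]}{aad["TenantId"]}/v2.0";

                // tokens can carry either the App ID URI (api://<client id>) or the bare client id as audience
                options.TokenValidationParameters.ValidAudiences = new[]
                {
                    aad["ClientId"],
                    aad["ClientId"].Replace("api://", string.Empty)
                };
            });

        // Simplest possible CORS setup: allow every origin (tighten this in a real application)
        services.AddCors(options => options.AddPolicy("AllowAll", builder =>
            builder.AllowAnyOrigin().AllowAnyMethod().AllowAnyHeader()));

        services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_2);
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        app.UseCors("AllowAll");
        app.UseAuthentication();
        app.UseMvc();
    }
}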

This is everything we need to do to have a working ASP.NET Core Web API with AAD integration. Whenever you create a new API controller, just add an [Authorize] attribute to make sure calls to it are authenticated.

Creating the Angular App

We’ll also start with a brand new Angular app created by using the Angular CLI: create a new app using “ng new”.

In the Angular app we will use the MSAL library from Microsoft to connect to Azure Active Directory. MSAL is a new library which should replace the ADAL library Microsoft created earlier. MSAL is built to work with the new v2 endpoints of Azure Active Directory, while ADAL only works with the v1 endpoints. Microsoft has created an npm package for MSAL to be used in Angular, which makes using MSAL a lot easier. Install this package using “npm i @azure/msal-angular”.

After installing this package we only need to enable Azure Active Directory in our app.module.ts. A sample is shown below. What do we need to add:

  • Add the MSAL module with the correct client ID and authority (https://login.microsoftonline.com/<tenantid>).

 

  • Create a protected resources map. This will function as a guard so each time a resource from one of these URLs is called the right access tokens will be sent along with it.

 

 

  • Fill in the consent scopes: a list of all the scopes you would like to get access tokens for. This could be User.Read to retrieve the user’s login name from AD, plus specific API scopes for your API calls.

 

 

  • Add an HTTP interceptor so MSAL will add the right tokens and headers to your requests whenever you use a HttpClient.
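
Here is a minimal sketch of such an app.module.ts. It uses the 0.x-style configuration of @azure/msal-angular that was current when this was written; the API URL, client IDs and scope names are placeholders you have to replace with your own values.

import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { HttpClientModule, HTTP_INTERCEPTORS } from '@angular/common/http';
import { MsalModule, MsalInterceptor } from '@azure/msal-angular';

import { AppComponent } from './app.component';

// scope exposed by the API app registration
export const apiScope = 'api://<api-client-id>/api-access';

@NgModule({
  declarations: [AppComponent],
  imports: [
    BrowserModule,
    HttpClientModule,
    MsalModule.forRoot({
      clientID: '<frontend-client-id>',
      authority: 'https://login.microsoftonline.com/<tenant-id>',
      // URLs listed here automatically get an access token attached by the interceptor
      protectedResourceMap: [
        ['https://localhost:5001/api', [apiScope]]
      ],
      // scopes we ask consent for when the user signs in
      consentScopes: ['user.read', apiScope]
    })
  ],
  providers: [
    // adds the right Authorization header to HttpClient calls when needed
    { provide: HTTP_INTERCEPTORS, useClass: MsalInterceptor, multi: true }
  ],
  bootstrap: [AppComponent]
})
export class AppModule {}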

 

 

Now we can run the application, and as soon as we do a network call to a URL listed in the protected resource map we will be prompted to log in with our Azure AD credentials.

In the end connecting your Angular App with Azure Active Directory isn’t that hard, you just have to know exactly what id’s to use where.

Hopefully this will help others in making the connection work smoothly. It took me a few hours too long, but I managed to get it working with the help and all-seeing eyes of my great colleagues Chris, Niels and Thijs.

Happy Coding!

Containerized build pipeline in Azure DevOps

Azure DevOps comes with several options to use as build agents in your Azure Pipelines. Microsoft offers hosted agents, where you don’t have to maintain your own hardware, and you can turn any machine you own into an agent by installing the agent script on that machine.

The hosted agents are packed with lots of pre-installed software to support you in your builds. If you run your own private agents you can customize them as you like. I’m currently at a large enterprise where I’m a consultant in an IT-for-IT team that hosts a number of private agents for all other development teams to use. Our agents are fully set up through automation and have all the common tools used by teams (based on the hosted Azure DevOps agent images, which are open source). These agents work for the largest group of development teams, but there are always teams who need some special tools. What we do to give teams freedom in their tool selection is have them run their builds inside a Docker container. This is a new feature released at the end of September 2018.


When you run builds inside a container, all steps in your pipeline are executed inside this container. The work directory of the agent is volume-mapped into the container. The ability to run your pipeline in a custom container gives you all the freedom of creating an image that has all the tools required to execute your build. The Docker image has only 2 requirements: Bash and Node.js have to be available within the container, and then you’re ready to go.

How to create a containerized build pipeline

An important note is that containerized pipelines are currently only available in YAML based pipelines. I don’t know if pipelines created in the portal will eventually also support this but in my opinion YAML based pipelines are the way forward from now on because they have a lot of advantages over traditional pipelines. The official documentation on YAML pipelines can be found here.

Let’s take a simple YAML pipeline as the example. I’ve created a simple ASP.NET Core application and have set up a pipeline for that. This is what my azure-pipelines.yml file looks like.

I tried to make the build as simple as possible. It’s just a basic .NET Core build that we want to execute. For this example it doesn’t matter what the exact steps are that we are executing. It could be anything: a .NET build, npm, Go, Java Maven, anything goes. We use one of the hosted agent pools to execute the build.
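
A minimal sketch of such a pipeline (the task names and the hosted pool are just one way to set this up):

trigger:
- master

pool:
  vmImage: 'ubuntu-16.04'

steps:
- script: dotnet restore
  displayName: 'dotnet restore'
- script: dotnet build --configuration Release
  displayName: 'dotnet build'
- script: dotnet publish --configuration Release --output $(Build.ArtifactStagingDirectory)
  displayName: 'dotnet publish'
- task: PublishBuildArtifacts@1
  displayName: 'publish build artifacts'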

The next step is to make this regular build execute exactly the same steps in a container. We can do this fairly simply by adding some settings to our pipeline. You’ll need to add a container to the resources defined at the start of your YAML file. This can either be a public container from Docker Hub or a container from a private registry. The container resource gets a name, in my example “dotnet-geert”. We can use this name to reference the container in our pipeline so all build steps will be executed in it. You do that by adding a line just below your build pool saying which container should be used: container: dotnet-geert (see the sketch below).
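
Put together, the pipeline then looks roughly like this; the image name is a placeholder for whatever custom image you’ve built:

resources:
  containers:
  - container: dotnet-geert               # name we reference below
    image: geertvdc/dotnet-build:latest   # placeholder: your own image with the required tools

pool:
  vmImage: 'ubuntu-16.04'

# run every step of this job inside the container defined above
container: dotnet-geert

steps:
- script: dotnet build --configuration Release
  displayName: 'dotnet build (inside container)'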

In this example we run our build on a Hosted Ubuntu agent. The downside of running this on a hosted agent is that the hosted agent won’t cache your Docker image, so it has to download the full image on every run. Because of this I don’t think this approach is that effective compared to private agents, which cache the Docker image locally, so spinning up a container is only a matter of seconds.

Running it on your own agents works exactly the same; there are however a few requirements. You either need a Linux machine with Docker support or a Windows machine running Windows Server version 1803 or higher.

Performance improvements

The first thing we want to do is run our build on a private agent so we can reuse our Docker images and only have to download them once. Another feature of using containers is that you’ll always receive a fresh instance of your environment. This is nice because you can be sure that every build ran exactly the same way and wasn’t relying on changes a previous build might have made to your agent. It’s also something to consider when your builds take a long time: because you receive a fresh environment each build, you’ll also have to download all your dependencies each build. Most applications nowadays use a lot of external dependencies, so let’s have a look at how we can fix this.

Docker has a feature called volume mappings that enables you to map certain directories from your host machine and use them in your container. In my example pipeline we’re building a .NET Core application that uses NuGet packages. We can map a folder on our host machine to function as the global NuGet cache and use this within our container. Each time the container downloads NuGet packages it stores them outside the container, and when we run the same build again it can use the cached packages. The same thing works for npm or Maven packages when you are building applications with other technologies.

We can create the volume mapping by passing an option to our container: -v for volume mapping, followed by <source folder>:<destination folder>. In the case of NuGet packages we also set a global environment variable that points the NuGet cache to this folder. After we do this our builds will be super fast, and we keep all the flexibility in tooling that containerized builds give us. Below is a full sample pipeline that uses a volume mapping for the NuGet cache.
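
A sketch of such a pipeline; the private pool name, image name and cache folder are placeholder values:

resources:
  containers:
  - container: dotnet-geert
    image: geertvdc/dotnet-build:latest             # placeholder: your own image
    options: -v /opt/nuget-cache:/opt/nuget-cache   # map a folder on the agent into the container

# a private agent pool, so the image and the mapped cache survive between runs
pool: 'Private-Linux'

container: dotnet-geert

variables:
  NUGET_PACKAGES: /opt/nuget-cache   # point the global NuGet cache at the mapped folder

steps:
- script: dotnet restore
  displayName: 'dotnet restore (uses cached packages when available)'
- script: dotnet build --configuration Release
  displayName: 'dotnet build'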

I really like this new feature of Azure DevOps because it gives you a lot of flexibility in your builds without having to customize your own private agents too much.

Happy Coding! (And building!)

Geert van der Cruijsen

 

Fix error on Azure: “the subscription doesn’t have permissions to register the resource provider”

Working in an enterprise environment, permissions in Azure might be trimmed down so that users do not have access to the Azure subscription itself and only have access to specific resource groups. When someone has contributor permissions in a resource group you might think they should be able to create everything in there that they would like. This is not always the case: each Azure resource type has to be registered through a resource provider at the subscription level. When users only have access to certain resource groups and not to the subscription itself, you can run into errors when you try to create a new resource of a type that is not registered yet.

The error will say:

the subscription [subscription name] doesn’t have permissions to register the resource provider(s): [resource type]

Here is a sample screenshot of the error that appeared when the SQL resource provider was not registered.


 

There are a couple of options to fix this.

  • Manually register the resource type in the azure portal
  • Register all resource types in a subscription using the Azure CLI
  • Create a specific role for all users to give them permissions to register resource providers

Manually registering resource types in the Azure portal

Registering a resource type in the Azure portal is the simplest option if you only want to register a specific resource type. If you want to register every available resource type this requires a lot of clicking, so it’s better to choose one of the other 2 options using the CLI or a custom role.

Using the Azure portal to register a resource type is easy though. In the portal navigate to your Subscription. In the Left menu click on Resource Providers and after that click Register for each of the resources you want to register.


 

Register all resource types in a subscription using the Azure CLI

You can also use the Azure CLI to register all available resource types in your Azure subscription. This can be done with a single line of Azure CLI, shown below.
It first lists all resource providers and then calls the register method for each of them. One caveat to watch out for is that newly added Azure resource types are not automatically registered, so you’ll have to run the script again or choose the third option: creating a specific role that all users get, so they can register resource providers themselves.
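
A sketch of that one-liner (bash syntax; registering everything can take a while):

az provider list --query "[].namespace" --output tsv | xargs -n 1 az provider register --namespace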

Create a specific role for all users to give them permissions to register resource providers

The final and most future-proof solution is creating a new role, which you can assign to all your users, that has permission to register resource providers. The first step is defining a JSON file describing this role.
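
A role definition along these lines should do it; the role name is up to you and the subscription ID is a placeholder:

{
  "Name": "Resource Provider Registrator",
  "Description": "Allows registering resource providers on the subscription",
  "Actions": [
    "*/register/action"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/<your-subscription-id>"
  ]
}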

This JSON file allows the action for registering resource providers, and the only thing you’ll have to customize is adding your own subscription IDs. When this role definition is finished we can use the Azure CLI to create the role, and after that we can assign users to it. This does require that you have groups in AD containing all the users you want to give access; if you don’t have that, the second option is probably better for you, because it will become a lot of work to assign this role to all your users manually.
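
Creating and assigning the role then looks roughly like this (file name, role name and group object ID are example values):

# create the custom role from the definition above
az role definition create --role-definition @register-resource-providers.json

# assign it to the AD group containing your users
az role assignment create \
  --role "Resource Provider Registrator" \
  --assignee-object-id <ad-group-object-id> \
  --scope "/subscriptions/<your-subscription-id>"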

That’s it! I’ve given you 3 options to solve this Azure error, so hopefully one of these helps you get going again in building cool stuff on Azure.

Geert van der Cruijsen

 

 

Programmatically creating Azure resource groups and defining permissions

In my job as DevOps consultant I try to help my clients build better software faster. A key part of this is automation of the complete delivery pipeline. Most of the time this focuses on the delivery pipeline from user story to committed code to, eventually, this code running in production. With tools like VSTS this is quite easy to do, but what about the things that happen outside of the core of the application?

Creating infrastructure as code is becoming mainstream in public cloud scenarios, so teams can create and deploy their own infrastructure. This allows independent, self-serving teams to build better software faster. But often people stop here. There are still several tasks that tend to be manual steps where someone with the right permissions has to step in, for example: creating a new VSTS team or Git repo, opening ports on the firewall or creating a resource group in Azure where the team can create their infrastructure. My goal is to automate everything here so teams can create these things in a guided, self-serving manner. I’ll be diving deeper into that subject in a later post, where I explain how we’ve created an operations chatbot that does these kinds of things. In this post I want to focus on one specific area where this bot can help: creating Azure resource groups for teams and assigning permissions.

In many of my projects we host our infrastructure in Azure and I like DevOps teams to be independent. Looking at Azure they should have a space where they can create their infrastructure and do their thing. It’s up to the teams what kind of stuff they spin up since they should be the ones maintaining it and they are responsible for the costs.

The thing we’ve built is a chat bot that helps create new resource groups for teams by asking a user for 3 questions:

  • What is the application name? (my practice is to group infrastructure for a single application together in 1 resource group)
  • What team is the owner of the application? (in my case all teams have an AD group containing all team members)
  • What kind of environment do you need? (Dev, Test, Acceptance, Production) These choices are made by my client and we have 2 subscriptions (1 DTA and 1 Prod)

After answering these 3 questions the bot will create a standardised resource group name for the team in the format: <appname>-<teamname>-<environment>-rg
for example:

publicwebsite-mar-dev-rg

this resource group will be created and the team’s AD group will be granted contributor permissions on this newly created resource group.


Enough about the chat bot for now, let’s create the code to actually create a new resource group programmatically.

To do this we’ll use 2 NuGet packages from Microsoft called

  • Microsoft.Azure.Management.Fluent
  • Microsoft.Azure.Management.Authorization

These 2 packages contain all the APIs to manage Azure resources; the 2 things we need are managing resource groups and AD permissions. After adding these 2 packages we can start coding our method called CreateResourceGroup. The only parameters we need are the resource group name and the AD group.

First you need to log in to your Azure subscription to be able to retrieve information, using an account that has permissions to create resource groups in Azure. It’s not a best practice to run this code as your user account, so it’s better to create a service principal that can do this. To create a new service principal, take a look at this guide: https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-service-principal-portal

After we’ve retrieved the credentials, creating a resource group is super easy; it’s just 1 line of code. Adding the correct AD group to set permissions is quite simple too, if the service principal has the right permissions to query AD. After querying the right group we can create a RoleAssignment to assign the contributor role to the Azure AD group.
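
A sketch of what that can look like with the Fluent SDK; the IDs, region and naming are example values and error handling is left out:

using System;
using Microsoft.Azure.Management.Fluent;
using Microsoft.Azure.Management.Graph.RBAC.Fluent.Models;
using Microsoft.Azure.Management.ResourceManager.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent.Core;

public static class ResourceGroupCreator
{
    public static void CreateResourceGroup(string resourceGroupName, string adGroupObjectId)
    {
        // log in with a service principal that is allowed to create resource groups
        var credentials = SdkContext.AzureCredentialsFactory.FromServicePrincipal(
            "<client-id>", "<client-secret>", "<tenant-id>", AzureEnvironment.AzureGlobalCloud);

        var azure = Azure.Configure()
            .Authenticate(credentials)
            .WithSubscription("<subscription-id>");

        // creating the resource group itself is a single fluent call
        var resourceGroup = azure.ResourceGroups
            .Define(resourceGroupName)
            .WithRegion(Region.EuropeWest)
            .Create();

        // grant the team's AD group contributor permissions on the new resource group
        azure.AccessManagement.RoleAssignments
            .Define(Guid.NewGuid().ToString())
            .ForObjectId(adGroupObjectId)
            .WithBuiltInRole(BuiltInRole.Contributor)
            .WithResourceGroupScope(resourceGroup)
            .Create();
    }
}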

More info on the full bot solution later. Hopefully this will help you create your own Azure automation to speed up your development process.

Happy Coding!

Geert van der Cruijsen

Connecting to Azure Blob Storage events using Azure Event Grid

I was looking for a solution to sync an Azure blob storage container with images to a third-party solution that has an API to upload images. My initial thought was to hook up Azure Functions to react to Azure Blob Storage triggers. One thing that is not possible with blob storage triggers is acting on deletes; there is only a trigger for adding or changing files.

Luckily Microsoft announced a new solution called Event Grid a few months back. Event Grid is great for connecting events that come from Azure resources (or custom resources) to things like Azure Functions or Logic Apps.


Event Grid also supports events for Blob Storage where you get events for adding, changing or deleting items. So how to get started?

Event Grid is easy to set up and consists of 2 parts: topics and subscriptions. Topics are places where events are sent to by Azure resources or even custom publishers. Subscriptions can be made on topics and will receive the events from a certain topic.

Creating the Storage Account / Event Grid Topic

Azure Blob storage has an Event Grid topic built in, so you don’t have to create a separate Event Grid topic yourself. At the time of writing Event Grid is only available in the West Central US and West US 2 regions, so if you create a storage account there it’ll automatically also get an Event Grid topic.

Let’s create the topic using Azure CLI

#create resource group
az group create -n <<ResourceGroupName>> -l westus2

#create storage account
az storage account create \
   --location westus2 \
   --name <<NameOfStorageAccount>> \
   --access-tier Cool \
   --kind BlobStorage \
   --resource-group <<ResourceGroupName>> \
   --sku Standard_LRS

Or using the Azure Portal:


When we open the storage account in the Azure portal we’ll see that the left menu has an option called Event Grid. In this menu you can see a list of all subscriptions to this Event Grid topic. Currently we don’t have any, so let’s take a look at how we can create one.

Creating the Event Grid Subscriptions

Since a topic can have multiple subscribers let’s add 2 different subscribers. First we’ll create a very simple subscription that allows us to see what the events actually look like and after that we’ll take a look in adding an Azure Function that handles the events.

Simple test subscription using Requestb.in

Requestb.in is a website where you can request a simple URL that collects all HTTP messages sent to it. This is a free service and will keep the last 20 messages for a maximum of 48 hours. As soon as we’ve created our RequestBin we’ll receive a URL that looks something like this: https://requestb.in/1ckzahm1

We can add this URL as a webhook subscription to the topic we created earlier. This can be done either using the Azure CLI or the Azure portal.

az eventgrid resource event-subscription create \
   --endpoint "https://requestb.in/1ckzahm1" \
   --name requestbinsubscription \
   --provider-namespace Microsoft.Storage \
   --resource-type storageAccounts \
   --resource-group <<ResourceGroupName>> \
   --resource-name cloudinarysync

Or in the Azure portal:


When we now upload a file to the storage account, we are able to see the event that was triggered by going to the requestb.in inspect page. We’ll see the JSON payload containing the event details, so we can use this later in our Azure Function.


Example JSON for adding and deleting files looks like the following:

File Add/Change event
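
A representative (abbreviated) payload looks like this; subscription, account and ID values are made up:

[{
  "topic": "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>",
  "subject": "/blobServices/default/containers/images/blobs/photo.jpg",
  "eventType": "Microsoft.Storage.BlobCreated",
  "eventTime": "2017-12-06T14:20:09.0000000Z",
  "id": "831e1650-001e-001b-66ab-eeb76e000000",
  "data": {
    "api": "PutBlob",
    "contentType": "image/jpeg",
    "contentLength": 524288,
    "blobType": "BlockBlob",
    "url": "https://<account>.blob.core.windows.net/images/photo.jpg"
  },
  "dataVersion": "",
  "metadataVersion": "1"
}]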

File Deleted event
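
Again a representative (abbreviated) example with made-up values:

[{
  "topic": "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>",
  "subject": "/blobServices/default/containers/images/blobs/photo.jpg",
  "eventType": "Microsoft.Storage.BlobDeleted",
  "eventTime": "2017-12-06T14:25:31.0000000Z",
  "id": "9d1f1240-001e-001b-66ab-eeb76e000001",
  "data": {
    "api": "DeleteBlob",
    "blobType": "BlockBlob",
    "url": "https://<account>.blob.core.windows.net/images/photo.jpg"
  },
  "dataVersion": "",
  "metadataVersion": "1"
}]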

So we’ve seen the easiest way to hook up the Storage account events using Event Grid. Let’s add an Azure function that actually does something with the events.

Azure Function

When you create a new Azure Function you’ll have to choose the trigger type. You can choose between several options here, like an HTTP trigger, webhook trigger or Event Grid trigger. When looking at the list you might think the Event Grid trigger is the way to go. I tend to disagree for now. The events sent from Event Grid are just plain POST HTTP messages, so choosing a webhook trigger or HTTP trigger works just as well, and they are even easier to test locally. The thing the Event Grid trigger adds is that it maps the JSON payload shown above to a typed object, so you can use it immediately without parsing the JSON yourself. A downside of the HTTP trigger and webhook trigger is that you’ll have to arrange security yourself, since by default everyone who knows the URL can call the webhook. Let’s look at both options.

If you are using the portal use the small link below “Get started on your own” called “Custom Function” to choose the trigger type.



Event Grid Trigger

When you open your Event Grid trigger (whether you created it via the portal or uploaded it as a precompiled function), there is a link at the top right called “Add Event Grid subscription”. Click this to set up the Event Grid trigger.


 

 

In the “Create Event Subscription” window select the “Storage Account” topic type and select your storage account. After pressing Create your Azure Function will be triggered after each change in the storage account.


 

Http / Webhook Trigger

Webhook and HTTP triggers work almost the same way. In the portal there is a link at the top right to get the function URL. When you click this you’ll see a popup with the endpoint that you should copy. After this, adding the subscription works exactly the same as I described above for requestb.in, except now you’ll enter the URL you just copied.


Now that we’ve set up the plumbing we can start writing our function. I’ll show you the code for an HTTP trigger, but an Event Grid trigger function would look almost the same; you can skip parsing the JSON there because you get a typed object as a parameter containing all the information.
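
A sketch of such an HTTP-triggered function; the function name, logging and sync logic are illustrative. Note the validation handshake: when you create the subscription, Event Grid first sends a validation event whose validationCode has to be echoed back.

using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json.Linq;

public static class BlobEventHandler
{
    [FunctionName("BlobEventHandler")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        ILogger log)
    {
        var body = await new StreamReader(req.Body).ReadToEndAsync();
        var events = JArray.Parse(body);

        foreach (var evt in events)
        {
            var eventType = (string)evt["eventType"];

            // Event Grid validates new webhook subscriptions by sending a validation event;
            // we have to echo the validation code back to complete the handshake.
            if (eventType == "Microsoft.EventGrid.SubscriptionValidationEvent")
            {
                var validationCode = (string)evt["data"]["validationCode"];
                return new OkObjectResult(new { validationResponse = validationCode });
            }

            var url = (string)evt["data"]?["url"];
            log.LogInformation($"Received {eventType} for {url}");

            // react to the event here, e.g. sync the blob to the third-party API
        }

        return new OkResult();
    }
}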

 

Pitfalls & Tips

So if everything is correct you should have a working Azure Function now. But how do you track the events that are coming in? You could set up Application Insights and track usage yourself, but a nice feature of Event Grid is that it has built-in logging and metrics. The metrics are a bit hidden in the Azure portal, so I’ll explain how to find them.

The subscriptions themselves cannot be found in the resource group of the topic. When navigating to the storage account and clicking Event Grid you can get a list of your subscriptions, but no metrics.

For metrics of the subscriptions you have to go to the left menu in the Azure portal and click the > arrow for more services. Search for Event Grid Subscriptions and they will show up there.


 

When you go here you’ll get an overview of all events that were triggered by a topic and the subscriptions connected to the topic. You can also see when Event Grid did retries and which events completed successfully or failed. It took me a while to find this, so hopefully this helps more people find it.


 

Another thing to note is that right after creation of the subscription it might take a while for the events to start firing. I don’t know if this is related to the preview state of Event Grid or if it will always be the case. In the end all events fired, but it took a while for the first ones. After it had been running for a while the events were triggered really fast; even when I made a large number of changes the events were fired within seconds.

Hopefully this is useful for you developers who want to react to triggers from Azure Storage.

Happy Coding!

Geert van der Cruijsen

Setting up Continuous delivery for Azure API management with VSTS

Where continuous delivery for web applications on Azure is becoming quite popular and common, I couldn’t find anything about setting this up for your API definitions in Azure API Management. Since I had to set this up at my current customer, I thought it was a good idea to share it in a blogpost so everyone can enjoy it. In this blogpost I’ll explain how you can set up continuous delivery of your API definitions in Azure API Management, including the actual API implementation in Azure Web Apps, using VSTS (Visual Studio Team Services).


First let me explain the architecture we use for our API landscape. As explained, we use Azure API Management for exposing the APIs to the outside world and we use Azure Web Apps for hosting the API implementations. These web apps (both .NET Core and full framework .NET Web APIs) are hosted in an ASE (App Service Environment) so they are not exposed directly to the internet, while we can still use all the cool things Azure Web Apps offer. These API web apps then connect to datastores hosted in Azure or connect to the on-premise hosted environments through an ExpressRoute or VPN.

To be able to set up our Continuous Delivery pipeline we have to arrange the following things.

  • Build your API implementation so we have a releasable package
  • Create infrastructure to host the API implementation
  • Deploy the API implementation package to the newly created infrastructure
  • Add API definition to Azure API management.
  • Repeat above steps for each environment. (DTAP)

Building your App

The first step can be different from my example if you’re not building your APIs using .NET technology. In our landscape we have a mix of APIs made with .NET Core and APIs made with the full .NET Framework, because they needed libraries that were not available in .NET Core (yet). I’m not going into details on how to build your API using VSTS, because I’ll assume you’re already doing this or you know how to do it. If not, here is a link to the official documentation.

One thing to keep in mind is that your API web app does have to expose an API definition so Azure API Management can import it. We use Swashbuckle for this, to automatically generate a Swagger definition. If you’re using .NET Core you’ll have to use Swashbuckle.AspNetCore.

Deploying the API implementation & adding it to Azure API management

For automating the deployments we’re going to use the Release Management feature of VSTS. In our first environment we’ll create steps to do all the things we described above.


 The steps in our workflow are the following:

  1. Create web application infrastructure by rolling out an ARM template
  2. Set environment specific variables
  3. deploy the API implementation package
  4. Use a task group to add the API definition to Azure API management.

Creating the web app infrastructure & deploying the API Implementation package

The first and third steps are the basic steps of deploying a web application to Azure Web Apps. This is no different for APIs, so I’ll just link to an existing blogpost here that explains these if you don’t know what they do.

Setting environment specific variables

The second task is a custom task created by my colleague Pascal Naber. It can help you overwrite specific variables you want to use in your environments by storing these settings as app settings on your Azure web app. We use this to set the connection strings to backend systems, for example a Redis cache or a database.

Add API to API Management

So if we release the first 3 steps we would have an API that runs on its own. But the main reason for this blogpost was that we want to have our API exposed through Azure API Management, so let’s have a look at how we can do that.

Azure API management has Powershell commands to interact with it and we can use this to add API definitions to Azure API management too. Below is a sample piece of Powershell that can import such an API definition from a Swagger file.

The script is built up out of 3 parts: first we retrieve the API Management context by using the New-AzureRmApiManagementContext cmdlet. Once we have a context we can use it to interact with our API Management instance. The second part is retrieving the Swagger file from our running web app through wget (an alias for doing a GET web request). We download the Swagger file to a temporary disk location because in our case our web apps are running in an ASE and therefore are not accessible through the internet. If your web apps are reachable from the internet you can also directly use the URL in the 3rd command, Import-AzureRmApiManagementApi, to import the Swagger file into Azure API Management.
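
A rough sketch of that script; the resource group, service name, URLs and API IDs are example values, and the AzureRM cmdlets shown were the current ones at the time:

# get a context for the API Management instance we want to import into
$apimContext = New-AzureRmApiManagementContext -ResourceGroupName "my-apim-rg" -ServiceName "my-apim-instance"

# download the swagger definition from the running web app to a temporary file,
# because the web app itself is only reachable from inside the ASE
$swaggerPath = Join-Path $env:TEMP "swagger.json"
wget "https://my-api.internal.example.com/swagger/v1/swagger.json" -OutFile $swaggerPath

# import (or update) the API definition into Azure API Management
Import-AzureRmApiManagementApi -Context $apimContext `
    -SpecificationFormat "Swagger" `
    -SpecificationPath $swaggerPath `
    -ApiId "my-api" `
    -Path "myapi"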

Now that we have a script we can use to import the API, let’s add it to the VSTS release pipeline. We could just add the PowerShell script to our source control and call it using the built-in PowerShell task. I’d like to make the developers’ lives in our dev teams as easy as possible, so I tried to abstract all the PowerShell mumbo jumbo away from them so they can focus on their APIs. To do this I’ve created a “Task Group” in VSTS containing this PowerShell task, so developers can just pick the “Add API to API Management” task from the list in VSTS and supply the necessary parameters.



When we add this task group to the release we can run our release again and the API should be added to Azure API Management.


Success!! Our initial continuous delivery process is in place. At my current client we have 4 different API Management instances and we also deploy our APIs 4 times: a Development, Test, Acceptance and Production instance. The workflow we created deploys the API to our development environment. We’ve set this up to be continuous, so every time a build completes on the master branch we create a new release that deploys a new API instance to Azure and updates our development Azure API Management instance.

We can now clone this environment 3 times so we create a pipeline that moves from dev and test to acceptance and production. I always set the trigger to start automatically after the previous environment has completed. If we run our release again we’ll have 4 API instances deployed, and in all 4 Azure API Management instances the corresponding API will be imported.

Now the only thing you have to add is optionally adding integration tests to the environment you prefer and you are ready to roll!


 

Happy Coding!

Geert van der Cruijsen

Created an open source VSTS build & release task for Azure Web App Virtual File System

I’ve created a new VSTS build & release task to help you interact with the Virtual File System (VFS) API (part of the Kudu API of your Azure Web App). Currently this task can only be used to delete specific files or directories from the web app during your build or release workflow. It will be updated in the near future to also be able to list files or to upload / download files through the VFS API.


The reason I made this task was that I needed it at my current customer. We’re deploying our custom solution to a Sitecore website running on Azure Web Apps using MSDeploy. The deployment consists of 2 parts: an install of the out-of-the-box Sitecore installation and the deployment of our customisations. When deploying new versions we want to keep the Sitecore installation, and MSDeploy will update most of our customisations. Some customisations however create artifacts that stay on the server and aren’t in control of the MSDeploy package, which can cause errors in our web application. This new VSTS build / release task can help you delete these files. In the future this task will be updated with other functionality of the VFS API, such as listing, uploading or downloading files.

The task is available in the VSTS Marketplace and is open source on github.

Let’s have a look how to use this task and how it works under the hood.
Continue reading

Building, testing and deploying precompiled Azure Functions

Azure Functions are great to build small specialized services really fast. When you create an Azure Functions project by using the built-in template from the SDK in Visual Studio you’ll automatically get a function made in a CSX file. This looks like plain old C#, but in fact it is C# Script. When you’re deploying these files to Azure you don’t have to compile them locally or on a build server; you can just upload them to your Azure Storage directly.

In the latest update for Azure Functions the option to build precompiled functions was added. Doing this is actually pretty simple. I’ve created a sample project on GitHub containing a precompiled Azure Function, unit tests for the function and an ARM template to deploy the function. Let’s go over the steps to create a precompiled Azure Function.

Continue reading

Adding an Azure web app to an Application Service Environment running in another subscription

Web apps and API apps in Azure are great, however when using them you have to accept that they are connected to the internet directly, without the possibility of adding a WAF or other kind of additional protection (next to the default Azure line of defense). When you want to add something like that, you have to add an internal App Service Environment to host your apps so you can control the network access to them.


However, adding an App Service Environment is quite costly if you are only running a few apps in it (the minimum requirement to run an App Service Environment (ASE) is 2 P2’s and 2 P1’s).

In our case adding an ASE was fine, except that we have a scenario with quite a lot of subscriptions, most of them quite small, running only a couple of apps. Adding an ASE for each subscription was going to become a bit too costly, so we came up with the idea of creating 1 central subscription called “Shared Services” where we would host things that multiple departments could share, such as WAF functionality, the VNet, the ExpressRoute and also the ASE.

After creating the design we ran into some problems actually implementing it, because we weren’t able to select an ASE in another subscription which was part of the same enterprise agreement when creating an App Service Plan or Web App in Azure. After checking, it seems that this is a limitation of the Azure portal and we had to use ARM templates to create our web app. This didn’t matter because we were planning on using ARM templates anyway, so we started to give it a try.

At first we had some trouble adding the ASE as our hosting environment. We tried adding the “HostingEnvironment” property to point to the name of the ASE in our other subscription, but this did not work and we kept receiving errors like “Cannot find HostingEnvironment with name *HostingEnvironmentName*. (Code: NotFound)”.


 

After that we tried to remove the “HostingEnvironment” property and only set the “HostingEnvironmentID” to directly link to the full resource ID of our ASE. This did get our hopes up because we were able to deploy the web app; however, while it was running on the P1’s that were part of the worker pool of our internal ASE, it still had a public DNS name and was accessible from the internet. I guess we weren’t supposed to create it this way, so I asked the Microsoft product team for help and they pointed me in the right direction.

It all boils down to using a newer API version for the Web App and App Service Plan ARM resources than the one that is generated in Visual Studio when building ARM templates: we had to use apiVersion 2015-08-01.

In this API version we can set the “hostingEnvironmentProfile” to the full resource ID of our ASE, for both the App Service Plan and the Web App. Next to that we also have to set the SKU to the correct worker pool within our ASE.
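
The App Service Plan fragment of the template then looks roughly like this; subscription, resource group, ASE name and worker pool SKU are placeholders, and the Web App resource gets the same hostingEnvironmentProfile:

{
  "apiVersion": "2015-08-01",
  "type": "Microsoft.Web/serverfarms",
  "name": "[parameters('appServicePlanName')]",
  "location": "[resourceGroup().location]",
  "properties": {
    "name": "[parameters('appServicePlanName')]",
    "hostingEnvironmentProfile": {
      "id": "/subscriptions/<shared-services-subscription-id>/resourceGroups/<ase-resource-group>/providers/Microsoft.Web/hostingEnvironments/<ase-name>"
    }
  },
  "sku": {
    "name": "P1",
    "tier": "Premium",
    "size": "P1",
    "family": "P",
    "capacity": 1
  }
}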

Now when we try to deploy our ARM template it will actually create an App Service Plan and Web App in another subscription than where our ASE is running. Nice!

Hopefully this post will help you when you run into the same problems I did when trying to deploy web apps in an ASE using ARM templates.

Happy Coding / Deploying

Geert van der Cruijsen

Upgraded my blog to Project Nami on Azure

My blog has been running on WordPress in combination with Microsoft Azure since the start. I always disliked the fact that I had to get a MySQL database using ClearDB to host it, but it seemed the only way. A few days ago I made some mistakes and kind of broke my current blog (I won’t bother you with the details 😉 ) and had to do some reinstalls. Since quite some time had passed, I thought it might be good to do some research into the current possibilities of using native Azure components for my data storage.

The first thing I found was MySQL in App for Azure web apps. This removes the dependency on ClearDB, but you’ll still have some limitations, such as being limited to a single instance and having no features for accessing your database remotely.


More info on the MySQL in App preview: https://blogs.msdn.microsoft.com/appserviceteam/2016/08/18/announcing-mysql-in-app-preview-for-web-apps/

When I looked a bit further I found another cool project called “Project Nami“. Nami stands for Not Another MySQL Install; it is a fork of WordPress that changed the database layer to talk T-SQL so it can use SQL Azure as the database.


Installing it was pretty easy: you can just get it from the Azure Marketplace by searching for “WordPress” or “Project Nami”, and after that it was installed in a few minutes.


I have to say the performance is great! The initial costs seem to be lower than they were before and my blog seems to be a lot more responsive. I really liked the upgrade to Project Nami, so I thought I’d share it with you; you might consider making the same switch if you are still running WordPress + MySQL on Azure.

One downside of screwing up my previous blog instance is that all comments are kind of lost, so that’s why you might not see any comments on my older posts anymore.

Geert van der Cruijsen