Mobile First Cloud First

A blog by Geert van der Cruijsen on Software Development, Cloud, DevOps & Apps

Category: ALM

Observing distributed application health using Azure Application Insights & Azure Log Analytics

Most people who use Azure Application Insights to monitor their applications don’t look at it until something is wrong, and only then do they check which exceptions are thrown to see what is going on. In my opinion, if you want to build highly available systems you also want to be able to see that everything is working as normal when there are no problems.

When you build a monolithic application it’s often quite easy to find certain performance bottlenecks by monitoring CPU and memory usage. When we look at distributed systems and microservice architectures, an application will often span multiple services, with instances running on up to thousands of machines, service buses, APIs, you name it. How do we monitor this by looking at CPU, memory and all the other traditional monitoring measures? You simply can’t.

In these types of scenarios, where you have several or maybe even thousands of instances, we have to look for other things. One thing you could do is come up with a KPI that measures a service your application provides and track how often it completes. To make this a bit easier to understand, let’s look at an example:

Netflix is famous for its microservice architecture spanning thousands of machines, and they monitor SPS (starts per second). With the millions of subscribers they have, this number is fairly predictable. That’s why they monitor it: if this number is affected, something must be wrong. (If people start playback more often, maybe playback isn’t working so they keep pressing play? If fewer people press play, maybe the UI is broken and the event never reaches the server, or something else might be wrong.) By monitoring just one number they can tell whether the overall health of the system is OK. You can learn more on the Netflix technology blog.

So how do you start with something like this yourself?

Finding the right KPI

There is no single recipe for finding the KPI that is best to measure, but there are some things to consider. First of all, it has to be important for your business. Next to that, it helps if the number is somewhat stable or has clear patterns. This all depends on your business and application.

Maybe it’s best to continue with another example, one we used for one of our clients. We’ll take this example from the initial idea to how we actually monitor it using Azure Log Analytics and Application Insights.

The application we worked on had to run calculations every few minutes, and these calculations could take from 10 seconds to about a minute. It was really important that the end results of these calculations were sent to customers and other systems every X minutes. Because of this, the development team added logging to Application Insights that stored the calculation time for each cycle. During the day the calculation time ranged from fast (10 seconds) to slow (1 minute) because of several parameters. I’ve drawn a picture of what the graph looked like when all the App Insights calculation times were plotted over time.


The graph looked like this. Initially the dev team only created this view to monitor the health of the calculation times. A big problem here is that it provides no information about what is “normal”. As humans we are quite good at recognizing patterns, and after showing this picture to several people they all noted: wow, somewhere between 9:00 and 12:00 in the morning something must be wrong.


The problem is that this is only the data of a single day. It does not even show a pattern yet. There are several external influences that have an impact on calculation times. One of them is customer orders being created. This application is a business-to-business application, and the majority of orders is created during the morning of European working hours. This is why we need more data in our graph, so we can actually see whether there are patterns.

In the next graph I’ve plotted the data of a full work week in the same chart to see if we can find patterns.


When we plot this full week of calculation times we can see that there is quite a pattern to be found. Next to that, we can also very easily spot where something is not following the pattern. Is the high curve just before 12:00 still an anomaly? Guess not… But what is happening in the afternoon? Data that at first looked like part of some pattern in our heads suddenly stands out. I think we’ve found the KPI we want to measure.


When developing an application, adding counters and logging information is important to be able to create these kinds of dashboards. If you are not sure what to measure, just start with business functions started/completed and each service call started/completed/retried. This gives you a starting point; from there on you can come up with new measures and counters.

An important part of DevOps is that we as developers have to start thinking more like Ops: what are good things to measure, monitor, etc.? In the past few years I’ve often come across devs telling Ops to become more like developers by adding automation and doing everything as code, but it’s also important to focus on the other way around: devs taking ownership of what they are building and making it easy to see whether the application is still working as it is supposed to.

As devs you have far more knowledge of what could cause certain delays, outages, etc., because you know how the application works internally. So join forces and work together.

Implement it using Azure Application Insights and Azure Log Analytics

So now we have a pretty good idea of what we want on a dashboard. How do we implement this? Since the title of this post mentions Application Insights and Azure Log Analytics, I’m assuming you already have Azure Application Insights in place. If not, here is a guide. Once we have access to an Application Insights instance we can start doing our custom measurements. In this post we’ll focus on measuring calculation times similar to the example above, but you could do this with any type of measurement.

How to track timing in Application Insights?
We can track custom timing of a piece of code with just a few lines. We create a DependencyTelemetry object, fill in the name and type properties, call Start, do the calculation, set Success to true if it succeeds, and finally call the Stop method so the timer is always stopped. That is all the code you need. When you run your app, go to Application Insights, open the Analytics tab and run a query showing all dependencies with the name “CalculationCycle”. Since we haven’t logged anything else we can just query all dependencies, and voilà: there are our timings in the duration field. So our application is logging the calculation times. Now it is time to create a dashboard that shows the “normal” state and the values of the last 24 hours.
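A minimal sketch of what that could look like (the TelemetryClient setup and the calculation method are assumptions):

using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

// assumes the instrumentation key is configured (e.g. via ApplicationInsights.config)
var telemetryClient = new TelemetryClient();

var telemetry = new DependencyTelemetry
{
    Name = "CalculationCycle",
    Type = "CalculationEngine" // any descriptive type will do
};

telemetry.Start();
try
{
    RunCalculationCycle();     // your own calculation logic
    telemetry.Success = true;
}
finally
{
    telemetry.Stop();          // always stop the timer
    telemetryClient.TrackDependency(telemetry);
}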

Creating a Kusto query in Log Analytics:

We want to create a graph similar to the one I drew earlier in this post. We could draw all these colored lines for all the different days, but what is even better is that we can take the data of the last month and combine it. When we create the query we actually build two series and combine them at the end to display a graph. The first series, called “Today”, takes all the values of the calculation time and summarizes them per hour. The second series, called “LastMonth”, takes all the values of the last 30 days and groups them by hour of the day as well. Here we also take only the 90th percentile of the values, so we filter out the special cases.
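A sketch of such a query could look like this (table and column names follow the Application Insights Analytics schema; the dependency name “CalculationCycle” comes from the example above):

// compare today's calculation times per hour with the 90th percentile of the last 30 days
let Today = dependencies
    | where name == "CalculationCycle" and timestamp > ago(1d)
    | summarize Today = avg(duration) by Hour = hourofday(timestamp);
let LastMonth = dependencies
    | where name == "CalculationCycle" and timestamp > ago(30d)
    | summarize LastMonth = percentile(duration, 90) by Hour = hourofday(timestamp);
Today
| join kind=inner (LastMonth) on Hour
| project Hour, Today, LastMonth
| order by Hour asc
| render linechart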

Run the query to get the graph below. You can pin this graph to a dashboard, and now you can see your calculation times compared to the calculation times of the last month on a per-hour basis.

For our scenario this worked really well. If you create something similar, make sure that the last 30 days is a good comparison. Should calculations be the same every day of the week, or do your calculations take longer on a Monday compared to a Friday? If that is the case you might want to tweak your query so you are actually comparing against your “normal” state.

Hopefully this post helped you set up a dashboard view of the “normal” state of your application, which you could display near your team’s working area to see whether everything is still working as you expect it to.

Finally, I would like to give a shout-out to my colleagues Rene and Jasper, who created this with me from idea to final result.

Happy Coding (and observing)

Geert van der Cruijsen

Containerized build pipeline in Azure DevOps

Azure DevOps comes with several options for the build agents you use in your Azure Pipelines. Microsoft offers hosted agents, where you don’t have to maintain your own hardware, and you can turn any machine you own into an agent by installing the agent script on that machine.

The hosted agents are packed with lots of pre-installed software to support your builds. If you run your own private agents you can customize them as you like. I’m currently at a large enterprise as a consultant in an IT-for-IT team that hosts a number of private agents for all other development teams to use. Our agents are fully set up through automation and have all the common tools used by teams (based on the hosted Azure DevOps agent images, which are open source). These agents work for the largest group of development teams, but there are always teams who need some special tools. What we do to give teams freedom in their tool selection is have them run their builds inside a Docker container. This is a new feature released at the end of September 2018.


When you run builds inside a container, all steps in your pipeline are executed inside this container. The work directory of the agent is volume-mapped into the container. The ability to run your pipeline in a custom container gives you all the freedom to create an image that has all the tools required to execute your build. The Docker image has only two requirements: Bash and Node.js have to be available within the container, and then you’re ready to go.

How to create a containerized build pipeline

An important note is that containerized pipelines are currently only available in YAML-based pipelines. I don’t know whether pipelines created in the portal will eventually also support this, but in my opinion YAML-based pipelines are the way forward from now on because they have a lot of advantages over traditional pipelines. The official documentation on YAML pipelines can be found here.

Let’s take a simple YAML pipeline as an example. I’ve created a simple ASP.NET Core application and set up a pipeline for it. This is what my azure-pipelines.yml file looks like.
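A minimal sketch of such a pipeline (task versions and exact steps are assumptions, not the literal file):

# basic .NET Core build pipeline
pool:
  vmImage: 'ubuntu-16.04'

steps:
- task: DotNetCoreCLI@2
  displayName: dotnet restore
  inputs:
    command: restore
    projects: '**/*.csproj'

- task: DotNetCoreCLI@2
  displayName: dotnet build
  inputs:
    command: build
    projects: '**/*.csproj'
    arguments: '--configuration Release'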

I tried to make the build as simple as possible: it’s just a basic .NET Core build that we want to execute. For this example it doesn’t matter what the exact steps are; it could be anything: a .NET build, npm, Go, Java Maven, anything goes. We use one of the hosted agent queues to execute the build.

The next step is to make this regular build execute exactly the same steps in a container. We can do this fairly simply by adding some settings to our pipeline. You’ll need to add a container to the resources defined at the start of your YAML file. This can either be a public container from Docker Hub or a container from a private registry. The container resource gets a name, in my example “dotnet-geert”. We can use this name to reference the container in our pipeline so all build steps will be executed in it. You do that by adding a line just below your build pool saying which container should be used: container: dotnet-geert
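Together, the container resource and the reference look roughly like this (the image name is an assumption; any image with Bash and Node.js available will do):

# declaring a container resource and running all steps inside it
resources:
  containers:
  - container: dotnet-geert
    image: geertvdc/dotnet-build:latest   # hypothetical image with the required tools

pool:
  vmImage: 'ubuntu-16.04'

container: dotnet-geert                   # all steps below run inside this container

steps:
- script: dotnet build --configuration Release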

In this example we run our build on a hosted Ubuntu agent. The downside of running this on a hosted agent is that it won’t cache your Docker image, so it has to download the full image on every run. Because of this I don’t think this approach is that effective compared to private agents, which cache the Docker image locally so spinning up a container is only a matter of seconds.

Running it on your own private agents works exactly the same. There are, however, a few requirements: you either need a Linux machine with Docker support or a Windows machine running Windows Server version 1803 or higher.

Performance improvements

The first thing we want to do is run our build on a private agent so we can re-use our Docker images and only have to download them once. Another characteristic of using containers is that you’ll always get a fresh instance of your environment. This is nice because you can be sure that every build runs exactly the same and isn’t relying on changes a previous build might have made to your agent. It’s also something to consider when your builds take a long time: because you get a fresh environment for each build, you’ll also have to download all your dependencies for each build. Most applications nowadays use a lot of external dependencies, so let’s have a look at how we can fix this.

Docker has a feature called volume mappings that enables you to map certain directories from your host machine and use them in your container. In my example pipeline we’re building a .NET Core application that uses NuGet packages. We can map a folder on our host machine to function as the global NuGet cache and use this within our container. Each time the container downloads NuGet packages it stores them outside the container, and when we run the same build again it can use the cached packages. The same approach works for npm or Maven packages when you are building applications with other technologies.

We can create the volume mapping by passing an option to our container. This option has to be -v for volume mapping, followed by <source folder>:<destination folder>. In the case of NuGet packages we also set a global environment variable that points the NuGet cache to this folder. After we do this our builds will be super fast, and we keep all the tooling flexibility that containerized builds give you. Below is a full sample pipeline that uses a volume mapping for the NuGet cache.
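A sketch of such a pipeline (the host path, pool name and image are assumptions):

# containerized build on a private agent with a host-mapped NuGet cache
resources:
  containers:
  - container: dotnet-geert
    image: geertvdc/dotnet-build:latest
    options: '-v /opt/nuget-cache:/nuget-cache'   # map a host folder into the container

pool: Default                    # private agent pool that caches the image locally

container: dotnet-geert

variables:
  NUGET_PACKAGES: /nuget-cache   # point the global NuGet cache at the mapped folder

steps:
- script: dotnet build --configuration Release
  displayName: dotnet build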

I really like this new feature of Azure DevOps because it gives you a lot of flexibility in your builds without having to customize your own private agents too much.

Happy Coding! (And building!)

Geert van der Cruijsen

 

Passing in custom user settings and secrets to Maven in Maven VSTS Build Tasks

VSTS is tooling for setting up automated pipelines for all kinds of programming languages. I’ve seen more and more non-Microsoft technologies being used on VSTS, and I came across a couple of questions repeatedly, so I thought it was a good idea to write this up in a blog post.


The problem is the following: if you want to do a Maven build, Maven will expect some user settings to be present somewhere on your build server. While this is often configured once on the build server, it is better to pass it in at build time, especially if it contains secrets that you don’t want stored in plain text somewhere. So how do we do this?

In this sample we’ll add a connection to Sonatype Nexus (a package management solution, comparable to VSTS Package Management) so Maven is capable of downloading the packages it needs and of pushing its build artifacts there. Although this example only sets these settings, you can use it for other kinds of settings as well.

So how to implement this?

First we need to add a file to our repo called ci-settings.xml. It will contain our user settings, with a username and password to connect to Nexus.
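A minimal sketch of such a ci-settings.xml (the repository id is a placeholder):

<?xml version="1.0" encoding="UTF-8"?>
<!-- ci-settings.xml: user settings passed to Maven at build time -->
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
  <servers>
    <server>
      <!-- must match the repository id used in pom.xml -->
      <id>REPOSITORY ID</id>
      <!-- placeholders, filled in via -DnexusUser / -DnexusPassword -->
      <username>${nexusUser}</username>
      <password>${nexusPassword}</password>
    </server>
  </servers>
</settings>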

This file has two variables that we are going to replace, called nexusUser and nexusPassword. The “REPOSITORY ID” needs to match the id used in the pom.xml file.

In the Maven task we then pass this user settings file to the Maven command using the -s option. We can also pass in the values for our parameters in the ci-settings.xml file using -DnexusUser and -DnexusPassword. The full options would look something like this:

-s $(System.DefaultWorkingDirectory)/ci-settings.xml -DnexusUser=$(Nexus.User) -DnexusPassword=$(Nexus.Password)

The actual values of $(Nexus.User) and $(Nexus.Password) are stored in the VSTS variables section, where you can also mark the password as a secret so it is hidden from logs and from people editing or viewing the build definition.


Programatically creating Azure resource groups and defining permissions

In my job as a DevOps consultant I try to help my clients build better software faster. A key part of this is automation of the complete delivery pipeline. Most of the time this focuses on the delivery pipeline from user story to committed code to, eventually, that code running in production. With tools like VSTS this is quite easy to do, but what about the things that happen outside of the core of the application?

Creating infrastructure as code is becoming mainstream in public cloud scenarios, so teams can create and deploy their own infrastructure. This allows independent, self-serving teams to build better software faster. But often people stop there. There are still several tasks that remain manual steps, where someone with the right permissions has to step in. Examples are creating a new VSTS team or Git repo, opening ports on the firewall, or creating a resource group in Azure where the team can create their infrastructure. My goal is to automate everything here so teams can do these things in a guided, self-service manner. I’ll dive deeper into that subject in a later post, where I’ll explain how we created an operations chatbot that handles these kinds of requests. In this post I want to focus on one specific area where this bot can help: creating Azure resource groups for teams and assigning permissions.

In many of my projects we host our infrastructure in Azure, and I like DevOps teams to be independent. Looking at Azure, they should have a space where they can create their infrastructure and do their thing. It’s up to the teams what kind of resources they spin up, since they should be the ones maintaining them and they are responsible for the costs.

The thing we’ve built is a chat bot that helps create new resource groups for teams by asking the user 3 questions:

  • What is the application name? (my practice is to group the infrastructure for a single application together in 1 resource group)
  • What team is the owner of the application? (in my case all teams have an AD group containing all team members)
  • What kind of environment do you need? (Dev, Test, Acceptance, Production) These choices were made by my client, and we have 2 subscriptions (1 DTA and 1 Prod)

After answering these 3 questions the bot will create a standardised resource group name for the team in the format: <appname>-<teamname>-<environment>-rg
for example:

publicwebsite-mar-dev-rg

This resource group will be created, and the team’s AD group will be granted contributor permissions on this newly created resource group.


Enough about the chat bot for now, let’s create the code to actually create a new resource group programmatically.

To do this we’ll use two NuGet packages from Microsoft:

  • Microsoft.Azure.Management.Fluent
  • Microsoft.Azure.Management.Authorization

These two packages contain all the APIs to manage Azure resources; the two things we need are managing resource groups and AD permissions. After adding these two packages we can start coding our method, called CreateResourceGroup. The only parameters we need are the resource group name and the AD group.

First you need to log in to your Azure subscription to be able to retrieve information, using an account that has permissions to create resource groups in Azure. It’s not a best practice to run this code as your own user account, so it’s better to create a service principal that can do this. To create a new service principal take a look at this guide: https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-service-principal-portal

After we’ve retrieved the credentials, creating a resource group is super easy; it’s just one line of code. Adding the correct AD group to assign permissions is quite simple too, as long as the service principal has the right permissions to query AD. After querying the right group we can create a RoleAssignment to assign the contributor role to the Azure AD group.
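A rough sketch of that method using the Fluent SDK (credential and subscription values are placeholders, and exact namespaces or method names may differ slightly per SDK version):

using System;
using Microsoft.Azure.Management.Fluent;
using Microsoft.Azure.Management.Graph.RBAC.Fluent.Models;
using Microsoft.Azure.Management.ResourceManager.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent.Core;

public static class ResourceGroupCreator
{
    public static void CreateResourceGroup(string resourceGroupName, string adGroupName)
    {
        // Authenticate as a service principal (see the guide linked above).
        var credentials = SdkContext.AzureCredentialsFactory.FromServicePrincipal(
            "<client-id>", "<client-secret>", "<tenant-id>", AzureEnvironment.AzureGlobalCloud);

        var azure = Azure.Configure()
            .Authenticate(credentials)
            .WithSubscription("<subscription-id>");

        // Creating the resource group really is a single call.
        var resourceGroup = azure.ResourceGroups
            .Define(resourceGroupName)
            .WithRegion(Region.EuropeWest)
            .Create();

        // Look up the team's AD group (the service principal needs permission to query AD)
        // and grant it the Contributor role on the new resource group.
        var adGroup = azure.AccessManagement.ActiveDirectoryGroups.GetByName(adGroupName);

        azure.AccessManagement.RoleAssignments
            .Define(Guid.NewGuid().ToString())
            .ForGroup(adGroup)
            .WithBuiltInRole(BuiltInRole.Contributor)
            .WithResourceGroupScope(resourceGroup)
            .Create();
    }
}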

More info on the full bot solution later. Hopefully this will help you create your own Azure automation to speed up your development process.

Happy Coding!

Geert van der Cruijsen

Setting up Continuous delivery for Azure API management with VSTS

While continuous delivery for web applications on Azure is becoming quite popular and common, I couldn’t find anything about setting this up for your API definitions in Azure API Management. Since I had to set this up at my current customer, I thought it was a good idea to share it in a blog post so everyone can benefit. In this post I’ll explain how you can set up continuous delivery of your API definitions in Azure API Management, including the actual API implementation in Azure Web Apps, using VSTS (Visual Studio Team Services).


First let me explain the architecture we use for our API landscape. As explained, we use Azure API Management for exposing the APIs to the outside world, and we use Azure Web Apps for hosting the API implementations. These web apps (both .NET Core and full-framework .NET Web APIs) are hosted in an ASE (App Service Environment) so they are not exposed directly to the internet, while we can still use all the cool things Azure Web Apps offer. These API web apps then connect to data stores hosted in Azure or to on-premises environments through an ExpressRoute or VPN.

To be able to set up our Continuous Delivery pipeline we have to arrange the following things.

  • Build your API implementation so we have a releasable package
  • Create infrastructure to host the API implementation
  • Deploy the API implementation package to the newly created infrastructure
  • Add API definition to Azure API management.
  • Repeat the above steps for each environment (DTAP).

Building your App

The first step can be different from my example if you’re not building your APIs using .NET technology. In our landscape we have a mix of APIs made with .NET Core and APIs made with the full .NET Framework, because the latter needed libraries that were not available in .NET Core (yet). I’m not going into detail on how to build your API using VSTS, because I’ll assume you’re already doing this or know how to do it. If not, here is a link to the official documentation.

One thing to keep in mind is that your API web app does have to expose an API definition so Azure API Management can import it. We use Swashbuckle for this to automatically generate a Swagger definition. If you’re using .NET Core you’ll have to use Swashbuckle.AspNetCore.
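For .NET Core that boils down to a few lines in Startup (a sketch; the document title and version are placeholders):

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using Swashbuckle.AspNetCore.Swagger;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
        services.AddSwaggerGen(c =>
        {
            c.SwaggerDoc("v1", new Info { Title = "My API", Version = "v1" });
        });
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseSwagger();   // exposes /swagger/v1/swagger.json for API Management to import
        app.UseMvc();
    }
}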

Deploying the API implementation & adding it to Azure API management

For automating the deployments we’re going to use the Release Management feature of VSTS. In our first environment we’ll create steps to do all the things we described above.

The steps in our workflow are the following:

  1. Create web application infrastructure by rolling out an ARM template
  2. Set environment specific variables
  3. Deploy the API implementation package
  4. Use a task group to add the API definition to Azure API management.

Creating the web app infrastructure & deploying the API Implementation package

The first and third steps are the basic steps of deploying a web application to Azure Web Apps. This is no different for APIs, so I’ll just link to an existing blog post that explains them if you don’t know what they do.

Setting environment specific variables

The second task is a custom task created by my colleague Pascal Naber. It can help you overwrite specific variables you want to use in your environments by storing these settings as app settings on your Azure web app. We use this to set the connection strings to backend systems, for example a Redis cache or a database.

Add API to API Management

So if we released with just the first 3 steps we would have an API running on its own. But the main reason for this blog post is that we want to have our API exposed through Azure API Management, so let’s have a look at how we can do that.

Azure API Management has PowerShell commands to interact with it, and we can use these to add API definitions to Azure API Management too. Below is a sample piece of PowerShell that can import such an API definition from a Swagger file.

The script is built up out of 3 parts. First we retrieve the API Management context by using the New-AzureRmApiManagementContext commandlet; once we have a context we can use it to interact with our API Management instance. The second part retrieves the Swagger file from our running web app through wget (an alias for Invoke-WebRequest, which performs a GET request). We download the Swagger file to a temporary disk location because in our case our web apps are running in an ASE and therefore are not accessible through the internet. If your web apps are reachable from the internet you can also use the URL directly in the third command, Import-AzureRmApiManagementApi, to import the Swagger file into Azure API Management.
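A sketch of such a script (parameter names and values are illustrative):

# Import an API definition from a running web app into Azure API Management
param(
    [string]$ResourceGroupName,
    [string]$ServiceName,   # name of the API Management instance
    [string]$SwaggerUrl,    # e.g. https://<webapp>/swagger/v1/swagger.json
    [string]$ApiId,
    [string]$ApiPath        # the path the API gets in API Management
)

# 1. Get a context for the API Management instance
$context = New-AzureRmApiManagementContext -ResourceGroupName $ResourceGroupName -ServiceName $ServiceName

# 2. Download the Swagger definition from the web app to a temporary file
$swaggerFile = Join-Path $env:TEMP 'swagger.json'
wget $SwaggerUrl -OutFile $swaggerFile -UseBasicParsing

# 3. Import (or update) the API in API Management
Import-AzureRmApiManagementApi -Context $context -SpecificationFormat 'Swagger' `
    -SpecificationPath $swaggerFile -ApiId $ApiId -Path $ApiPath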

So now we have a script that we can use to import the API; let’s add it to the VSTS release pipeline. We could just add the PowerShell script to our source control and call it using the built-in PowerShell task. I’d like to make the developers’ life in our dev teams as easy as possible, so I tried to abstract all the PowerShell mumbo jumbo away from them so they can focus on their APIs. To do this I’ve created a “Task Group” in VSTS containing this PowerShell task, so developers can just pick the “Add API to API Management” task from the list in VSTS and supply the necessary parameters.


When we add this task group to the release we can run our release again and the API should be added to Azure API Management.


Success! Our initial continuous delivery process is in place. At my current client we have 4 different API Management instances, and we also deploy our APIs 4 times: a Development, Test, Acceptance and Production instance. The workflow we created deploys the API to our development environment. We’ve set this up to be continuous, so every time a build completes on the master branch we create a new release that deploys a new API instance to Azure and updates our Development Azure API Management instance.

We can now clone this environment 3 times to create a pipeline that moves from dev and test to acceptance and production. I always set the trigger to start automatically after the previous environment has completed. If we run our release again we’ll have 4 API instances deployed, and in all 4 Azure API Management instances the corresponding API will be imported.

Now the only thing left is to optionally add integration tests to the environments you prefer, and you are ready to roll!


Happy Coding!

Geert van der Cruijsen

Building, testing and deploying precompiled Azure Functions

Azure Functions are great for building small, specialized services really fast. When you create an Azure Functions project using the built-in template from the SDK in Visual Studio, you’ll automatically get a function made in a CSX file. This looks like plain old C#, but it is actually C# Script. When you’re deploying these files to Azure you don’t have to compile them locally or on a build server; you can just upload them to your Azure Storage directly.

In the latest update for Azure Functions, the option to build precompiled functions was added. Doing this is actually pretty simple. I’ve created a sample project on GitHub containing a precompiled Azure Function, unit tests for the function and an ARM template to deploy the function. Let’s go over the steps to create a precompiled Azure Function.

Continue reading

It’s 2017: Test automation is not optional when building mobile apps!

Note: although this post focuses on mobile app development using Xamarin, it also applies to other native mobile apps built in Swift or Java, and even to web apps. It’s 2017! Whatever you are building, get started with test automation!

As a consultant working for Xpirit I get to see a lot of different customers, whom I help with my expertise in building mobile applications to improve their mobile apps. Something I noticed in the previous year is that continuous delivery is a hot topic, and companies and teams focus on deploying apps automatically to their testers through HockeyApp, or even to the stores in beta and/or production.

Almost everyone works in agile scenarios these days (and come on, who isn’t? Every company or project I visit says they are agile or doing Scrum, although some only do dailies and call that Scrum 😉). In the current world it is really important to be able to release often, because you want to be able to adapt to customer needs, which are almost always changing and evolving.

Implementing a Shift left Quality Model

Test automation is a process that does not belong to the developers or testers alone. It’s something that has to be in everyone’s mind, from product owner to developer and tester. Automated tests can help you lower regression test effort, but investing in test automation can really help you shift left, focusing on quality earlier in your application development process.

Continue reading

Created an open source VSTS build & release task for Sitecore.Ship

At my current customer we’re implementing an Azure environment which also contains a Sitecore application. We’re using VSTS for all aspects of the application development lifecycle, from agile planning to source control and automated builds & releases. In the release pipeline we can use the out-of-the-box Azure web deploy build steps to deploy our code to our web app, which is a Sitecore instance. The next step is to also deploy the Sitecore content we created as code using TDS to our Sitecore site.

Sitecore.Ship is an open source project which makes it easy to deploy Sitecore .update files to your Sitecore instance. It is created by Kevin Obee and can be found here on GitHub.

In our build we build our TDS projects using MSBuild to generate .update files, which we can then deploy to Sitecore in our release pipeline. There was one problem, however: there was no build task to do this. Since my role at my current client is supporting all the development teams in improving their continuous delivery process, I decided to create a task to make their life easier.

Continue reading

Improved Hockeyapp publishing from VSTS for Windows 10 UWP apps

In my last blog posts I showed a tutorial on how to do automated builds in VSTS and continuous deployments to HockeyApp. For UWP the support wasn’t that good, and we couldn’t use the HockeyApp build steps from VSTS because HockeyApp could not handle zip files containing the files from your app package, such as the appx or appxbundle file, the PowerShell install script, etc. I made a workaround using PowerShell, but that is not needed anymore because the HockeyApp team made some changes to the HockeyApp build step.

This changed last week, although it wasn’t announced anywhere. I asked on the HockeyApp Slack group when the HockeyApp team would implement this feature, and they told me they had just done it before the Microsoft Build conference started.

So if you’ve used my PowerShell script before to publish your app to HockeyApp, you can now switch back to the HockeyApp build step and leave the rest of the steps the same. The zip file should contain all your files from the AppPackages folder. I’ve also updated the tutorial post so anyone using it in the future follows the correct way right away.


Hockey app UWP deployment

If there is an .appxsym file in the zip file as well, the symbols will also be used within HockeyApp.

I’m really glad that UWP apps are on the same level of maturity again as Android and iOS apps regarding continuous deployment to HockeyApp.

The HockeyApp team also announced in the public HockeyApp Slack group that they are working with the Windows product team to improve the installation of apps, so more is coming in the future. I can’t wait!

Happy Coding!

Geert van der Cruijsen

Continuous deployment of Xamarin.iOS apps to Hockeyapp using VSTS

Last week I created two posts on setting up VSTS and HockeyApp in a continuous deployment scenario for both Xamarin.Android and Windows 10 UWP apps. Today we’ll discuss the third platform: apps built using Xamarin.iOS.

The basics for all 3 platforms are the same, but there are still quite some differences, so let’s look at the steps for Xamarin.iOS apps:


Context: the app we are going to deploy

We’re going to deploy the same app as yesterday in the Xamarin.Android post. I’ve created a simple solution in Visual Studio containing an average Xamarin project: 1 PCL, 1 Android app, 1 iOS app and 1 Windows 10 UWP app.


To make things easier during the automated build I’ve also created a new solution that only contains the projects relevant for the iOS app.


This solution only contains the PCL project and the Xamarin.iOS app project.

Setting up the build in VSTS: Prerequisites

 

There are some difficulties in building iOS apps compared to Android or Windows apps. The first difficulty is that you are going to need a Mac to do the actual build steps. You have two options for this: the first is to use a Mac within your own network as a build agent, the other is to use a service called MacinCloud.com, which has a special plan for VSTS build agents.

Mac Build agent

MacinCloud has a special VSTS build agent plan which only costs $30 a month; in my experience this works really well. If you have a spare Mac somewhere to use as a build agent that would also work fine, but most clients I visit don’t have that 😉

Setting up a VSTS build agent on a Mac isn’t that hard. There is a good guide on this on GitHub: https://github.com/Microsoft/vso-agent/blob/master/docs/vsts.md

Make sure Xamarin Studio is installed on the Mac, because it’s needed for doing the actual builds (this is not covered in the VSTS build agent guide).

App Certificates

To be able to build the iOS app and do ad-hoc distribution, we need to set up certificates from the Apple developer portal on our Mac build agent. Xamarin has a great guide on how to do this, so I won’t copy all the steps into this blog post: https://developer.xamarin.com/guides/ios/deployment,_testing,_and_metrics/app_distribution/ad-hoc-distribution/

After we’ve arranged a mac build agent and set up the app certificates we can actually start building our app.

Setting up the build in VSTS

In VSTS, open your team project and go to the BUILD tab. Here we’re going to create a new build definition by clicking the green + sign.


Choose the Xamarin.iOS build template to set up the build.


In the next step, select the correct repository you want to use for your app and make sure you select the Mac build agent in the dropdown box.

Click Create, and our build definition is created with two out-of-the-box steps. The first step does the actual Xamarin.iOS build; select the iOS-specific solution so only the iOS-related projects are built.

We’ll remove the Xamarin Test Cloud step for now. If you want to know more about this, let me know in a comment so I can create a new blog post about it if people would like that.


So we only have the Xamarin.iOS step left, but this will only build our app. We also need two extra steps which are not part of the Xamarin.iOS template:

  • Nuget package restore
  • Copy and Publish Artifacts.

NuGet package restore:

If you’ve read my previous posts on Android and Windows UWP you would expect we’re going to use the out-of-the-box “Restore NuGet packages” build step. But that is not possible for iOS. Why not? That step is implemented using PowerShell, which doesn’t run on your Mac agent, so we’ll have to do it ourselves by executing a shell script. So we’re going to create a Shell Script task. First we need to create the actual script. The script is quite simple: download the nuget.exe file and execute the nuget restore command.
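A sketch of such a script (the solution file is passed as the first argument; nuget.exe runs under Mono on the Mac agent):

#!/bin/bash
# download nuget.exe and restore the packages for the solution passed as the first argument
curl -O https://dist.nuget.org/win-x86-commandline/latest/nuget.exe
mono nuget.exe restore "$1"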

 

 

Save the .sh file in your repository so we can add it to our build. Click the green + and select “Shell Script”.


Drag the .sh script task to the top of the build so the NuGet restore is executed before the actual build step. Select your .sh file as the script path, and as arguments pass in the path to your iOS solution file so the script knows which packages to restore.


Copy and Publish Artifacts

So we’ve set up the NuGet restore and the build. The last step of our build is to copy and publish the build artifacts so we can send these to HockeyApp later.

Add a “Copy and Publish Build Artifacts” step to your build definition.


In the copy and publish step, set the root to the correct folder and for contents select the .ipa file. We name the artifact “drop”, of type server.


Together these 3 steps should result in a successful build. Go to triggers to schedule your build nightly or to set it to build every time someone checks in code.

Queue a build to see if everything works.


Setting up Hockeyapp

I assume you’ve already created an account at HockeyApp; otherwise just sign up at www.hockeyapp.net (it’s free for 2 apps or less). Once you’ve logged in, create your first app by pressing the New App button.


HockeyApp will ask you to upload a build. We’re not going to do that, since we’re setting up automatic deployments. Choose to add the app manually.


Choose iOS as the platform and fill in the release type and title of the app.


Click Save, and in the overview of the app copy and save the App ID.


To be able to deploy from VSTS we need to set up an API token we can use in VSTS. If you already followed the Android or Windows UWP guide you might have taken these steps before, so you can skip them and move on to deploying the app.

Click on your user icon in the top right and select API Tokens from the menu on the left.

Create a new API token and call it VSTS. Copy this API token; we’ll need it in VSTS.


Move back to VSTS, open the marketplace (top right, next to your name) and click Manage extensions. Browse the marketplace and install the HockeyApp extension.


After installing the extension, go back to your VSTS team project and navigate to the settings window. In the settings menu go to Services and add a new service endpoint of type “HockeyApp”.

Give the endpoint a proper name and copy in the API token you generated earlier. Now save the service endpoint.


Now that all the plumbing with HockeyApp is done, we can actually start deploying our app to HockeyApp.

Deploying your automated build to Hockeyapp

Go back to your team project in VSTS and navigate to the Release tab


Choose an Empty deployment template and press OK.


First go to the Artifacts tab and select the build we created earlier as the supplier of the artifacts we’re going to release to HockeyApp.


Go back to Environments, name your deployment template and add a new task.


This list should now contain a HockeyApp step, since we installed that extension.


Configure the HockeyApp step by selecting the HockeyApp connection from the list (the service connection you created earlier should be listed here). Enter the App ID you copied earlier and select the .ipa file that was generated by the build.


That’s all it takes to set up the release. As the final step we set the triggers in the “Triggers” tab of the release so it deploys continuously after each build.


Press the green + to start a manual release towards HockeyApp. Everything should work, and the release should be created in HockeyApp.

If you have any questions let me know via twitter @geertvdc or by commenting below.

Happy coding!

Geert van der Cruijsen
