
Home App - Public Website

2018/03/11

The first step of a commercial product would be the landing page and user registration.

In addition to a landing page that showcases the features, the site should offer users the ability to register a new account.

Since the user eventually has to download an app for iOS/Android/UWP, I would have to recreate the login process multiple times (web, Android, iOS, UWP) in order to allow a smooth signup experience.

The obvious solution is to design a mobile-capable website and serve that from within the apps as well. This also allows updating the process without modifying the apps.

The website itself is pretty straightforward to build and I could easily use the Azure portal to click together a new web app resource. However - as with all components - I want a fully automated solution with deployments and infrastructure as code to ensure consistency.

ARM templates

The obvious solution in Azure is to use the ARM (Azure Resource Manager) templates.

Each template file can be adjusted via injected parameters and can reference other existing resources. The templates are then pushed to Azure via resource group deployments, which create the declared resources. Since template deployments are idempotent, running the same deployment again will (by default) only ever add new resources to Azure while keeping the existing ones.

This default behaviour is good because it doesn't delete and recreate your existing storage/SQL databases on every deployment; it only adds any new resources and ensures that your environment always contains all the necessary resources.
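As a minimal sketch using the AzureRM PowerShell module (the template file and resource names here are placeholders, not my actual files):

# create the resource group if it doesn't exist yet (idempotent)
New-AzureRmResourceGroup -Name "homeapp-web-dev" -Location "West Europe" -Force

# push the template; "Incremental" is the default mode and only adds/updates
# the resources declared in the template, leaving everything else untouched
New-AzureRmResourceGroupDeployment `
    -ResourceGroupName "homeapp-web-dev" `
    -TemplateFile "webapp.json" `
    -TemplateParameterObject @{ webAppName = "homeapp-web-dev" } `
    -Mode Incremental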

The downside is that as your infrastructure grows you might accidentally remove a part of it from your deployments that is actually needed. Since each deployment only adds new infrastructure and never deletes anything, you might not notice for a while that your deployment script is out of sync and is actually missing required infrastructure. When you eventually want to recreate the entire infrastructure from scratch, the deployment will fail.

Azure supports two modes: "Incremental" (the default) and "Complete". The latter always deploys the resource group exactly as you specified it and deletes any resources not in the template. While one might think that complete mode would solve the issue mentioned above, it also makes it easy to accidentally delete resources with a single typo.

Thus my personal recommendation is to always use incremental for deployments of the real infrastructure and to make full infrastructure deployments part of your testing suite.

Obviously you shouldn't delete and redeploy your existing infrastructure (as it would cause downtime for users). Instead, your build should be set up in such a way that your entire infrastructure can be deployed multiple times in parallel.

In a perfect environment all your resources derive their unique names from a single variable, so changing that one variable should allow you to deploy all resources a second time.

Example

In my case I use concatenation in VSTS build variables to generate a unique resource name for every component:

The public website uses a unique name (both for the resource group and web app name):

homeapp-$(ProductShortName)-$(Release.EnvironmentName)

Where $(ProductShortName) would be unique per resource (e.g. "api", "web", etc.) and the environment names would be "dev", "test", "prod", ...

In the case of the website my build thus initially creates webapps "homeapp-web-dev", "homeapp-web-test", etc.

As part of the full infrastructure deployment tests I then have a different build that creates "inf-homeapp-web-dev" and asserts that it works the same as the original one. (I use "inf" prefixes for infrastructure deployments that can be deleted).

After asserting that the inf* deployment is identical to the real environment the build then deletes the inf deployment.
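A simplified sketch of such a test build (the URL and the smoke test are just placeholders for whatever assertions fit the product):

# deploy a disposable copy of the full infrastructure under the "inf" prefix
.\deploy.ps1 -ResourceGroupName "inf-homeapp-web-dev"

# assert it behaves like the real environment (placeholder smoke test)
Invoke-WebRequest "https://inf-homeapp-web-dev.azurewebsites.net" -UseBasicParsing | Out-Null

# tear the disposable copy down again once the assertions passed
Remove-AzureRmResourceGroup -Name "inf-homeapp-web-dev" -Force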

Thus I get to use incremental mode with my real environment and still have the assertion that in a worst case scenario I can recreate the entire infrastructure from code.

Improving ARM templates

There is no clear guide on how to create ARM templates and I personally find them way too verbose for everyday use.

Visual Studio and Visual Studio Code both have syntax highlighting and validation for templates and the quickstart templates library offers a variety of "getting started" templates.

In addition, the Azure portal allows you (after manually creating resources via the portal) to view the automation script that would have created those specific resources.

However these ARM templates contain lots of unnecessary variables. I have only ever found them useful to look up specific values that are unclear, but I never used them for actual deployments because they just contain way too many lines of code.

As a comparison: the ARM template the Azure portal shows for my homeapp-web resource has ~300 lines of JSON with many variables ("homeapp", "homeapp_web", "homeapp_name") that are all set to the same values, while the webapp sample in the quickstart templates is a lot more concise and only contains ~60 lines of JSON.

Since my private Azure usage is usually limited to single instances of resources (one SQL server + one web app + one storage account) I always found the ARM template system a bit overkill.

Which is why I wrapped most resources in PowerShell scripts for reusability.

For any product that needs to be deployed I then have a "deploy.ps1" script that deploys all the necessary resources by calling the wrapper scripts.

Here's an excerpt of my website "deploy.ps1" script:


Invoke-Expression -Command "Deploy-WebApp.ps1 -WebAppName $ResourceGroupName -AppServicePlanResourceGroupName $AppServicePlanResourceGroupName -AppServicePlan $AppServicePlanName -ResourceGroupName $ResourceGroupName -Environments $Environments"

Invoke-Expression -Command "Deploy-AppInsights.ps1 -AppInsightsName $ResourceGroupName -ResourceGroupName $ResourceGroupName"

This part deploys both the web app and Application Insights resources.

The advantage of the "deploy.ps1" script is that it can be executed locally, and the build server only requires a single "Execute PowerShell" step (which saves me from setting up build steps like "deploy keyvault", "deploy webapp", "deploy storageaccount", ... over and over again for every service).
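To illustrate the wrapping idea: a script like Deploy-WebApp.ps1 can be little more than a resource group plus an ARM deployment of a shared web app template. This is a hypothetical sketch, not my actual script; the template path and template parameter names are assumptions:

param(
    $WebAppName,
    $ResourceGroupName,
    $AppServicePlan,
    $AppServicePlanResourceGroupName,
    $Environments
)
# make sure the target resource group exists, then deploy the shared webapp template into it
New-AzureRmResourceGroup -Name $ResourceGroupName -Location "West Europe" -Force
New-AzureRmResourceGroupDeployment `
    -ResourceGroupName $ResourceGroupName `
    -TemplateFile "$PSScriptRoot\templates\webapp.json" `
    -TemplateParameterObject @{
        webAppName                  = $WebAppName
        appServicePlanName          = $AppServicePlan
        appServicePlanResourceGroup = $AppServicePlanResourceGroupName
        environments                = $Environments
    } `
    -Mode Incremental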

For the homeapp infrastructure I am also trying out a new way of handling parameters: Every parameter can be an environment variable.

If the parameter is not passed to the script, the script looks for an environment variable with the same name.

Here's the startup code of the script:

param(
  $ResourceGroupName = "homeapp-web",
  $AppServicePlanName,
  $AppServicePlanResourceGroupName,
  $ResourceGroupLocation = "West Europe"
)
$ErrorActionPreference = "Stop"
# ensure all params are set (if not, use their equivalent environment variables and set the param)
function UseEnvironmentAsFallback($paramName) {
    $value = (Get-Variable $paramName).Value
    if ($value -eq $null) {
        $value = [Environment]::GetEnvironmentVariable($paramName, "Process")
    }
    return $value
}
Get-Command $PSCommandPath | %{ 
    $_.Parameters.GetEnumerator() | %{ 
        $name = (Get-Variable $_.Key).Name
        $value = UseEnvironmentAsFallback($name)
        if ($value -eq $null) {
            Write-Error "Expected either script or environment variable '$name' to be not null"
        }
        Set-Variable -name $name -value $value
    }
}

This allows two things:

  1. When running it locally I can specify any values I want via commandline.
  2. When running on VSTS I don't have to pass any parameters (as all VSTS variables become environment variables by default).

Previously, these scripts always had build steps like this in VSTS:

deploy.ps1 -ResourceGroupName $(ResourceGroupName) `
           -AppServicePlanName $(AppServicePlanName) `
           -AppServicePlanResourceGroupName $(AppServicePlanResourceGroupName) `
           -ResourceGroupLocation $(ResourceGroupLocation)

Having to specify all parameters with essentially redundant names is just cumbersome, and any change in parameters meant I had to adjust many build steps (since each environment needs the same build steps).

Now by reusing the environment variables I can just call:

deploy.ps1 -ClientSecret $(ClientSecret)

and it will automatically fetch the VSTS variables.

Note that secrets are not decrypted into environment variables, by design.

Since I have a password (for the AD tenant that renews the Let's Encrypt certificates) I have to pass that in manually. Having to pass in only one parameter is a lot better than having to pass in all 13 parameters that my actual script requires.

When deploying locally the script uses the passed parameters (environment variables are only used if a parameter is not set). This prevents accidentally picking up environment variables with similar names and also allows me to skip setting "ResourceGroupLocation" over and over (I have never not used "West Europe" in Azure, so it is always a globally shared parameter).
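In other words (the values here are made up):

# locally: pass whatever differs from the defaults on the command line
.\deploy.ps1 -AppServicePlanName "myplan" -AppServicePlanResourceGroupName "shared-plans"

# on the build server the same values arrive as environment variables
# (VSTS sets them automatically; shown here manually for illustration)
$env:AppServicePlanName = "myplan"
$env:AppServicePlanResourceGroupName = "shared-plans"
.\deploy.ps1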

Automated infrastructure tests

With this setup, all I have to do is change one name (the resource group name) and I can deploy the same infrastructure multiple times in parallel.

If e.g. "homeapp-web" is the real infrastructure, then I can just create "homeapp-web2" and have a second resource group with all the same resources.
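For example (assuming the remaining parameters are provided via environment variables as described above):

# the real environment
.\deploy.ps1 -ResourceGroupName "homeapp-web"

# a parallel copy with identical resources, just under a different name
.\deploy.ps1 -ResourceGroupName "homeapp-web2"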

Getting the website online (with SSL)

I have spent all weekend automating the webapp deployment.

One additional point to note is that I also use Let's Encrypt for SSL certificates.

It is not yet a natively supported feature in Azure (they still recommend purchasing certificates) but the setup wasn't too hard.

I have previously used the "Let's Encrypt" site extension to continuously renew my SSL certificates, but it still required quite a few manual steps each time (not to mention these steps are needed per web app).

So instead I created a generic script to deploy the site extension and add the relevant app settings based on the ARM template of the Let's Encrypt site extension.

That way any webapp deployed will also directly benefit from free SSL certificates.
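Roughly, such a script installs the site extension and writes the app settings the extension reads. Here's a hedged sketch (not my actual script): the extension id, API version and "letsencrypt:*" setting names are taken from the extension's documentation as I remember it, so verify them against the version you deploy; the tenant/client/subscription parameters are placeholders.

param($ResourceGroupName, $WebAppName, $Tenant, $ClientId, $ClientSecret, $SubscriptionId)

# merge the settings the Let's Encrypt extension reads into the existing app settings
# (Set-AzureRmWebApp replaces all app settings, hence the merge)
$webApp = Get-AzureRmWebApp -ResourceGroupName $ResourceGroupName -Name $WebAppName
$settings = @{}
foreach ($s in $webApp.SiteConfig.AppSettings) { $settings[$s.Name] = $s.Value }
$settings["letsencrypt:Tenant"] = $Tenant
$settings["letsencrypt:SubscriptionId"] = $SubscriptionId
$settings["letsencrypt:ClientId"] = $ClientId
$settings["letsencrypt:ClientSecret"] = $ClientSecret
$settings["letsencrypt:ResourceGroupName"] = $ResourceGroupName
Set-AzureRmWebApp -ResourceGroupName $ResourceGroupName -Name $WebAppName -AppSettings $settings

# install the site extension itself via its ARM resource type
New-AzureRmResource -ResourceGroupName $ResourceGroupName `
    -ResourceType "Microsoft.Web/sites/siteextensions" `
    -ResourceName "$WebAppName/letsencrypt" `
    -PropertyObject @{} `
    -ApiVersion "2016-08-01" -Force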

Unfortunately the extension has quite a few issues.

Most of them are circumvented by automatically and consistently deploying the infrastructure via ARM templates. The remaining one is a bit more tricky:

The Let's Encrypt background job looks for its settings only in the main slot, but any slot swap moves those settings between slots.

Using sticky settings circumvents this problem, but sticky settings cause the web app to be restarted on every swap even when the web app in the source slot is already warmed up. This has to do with the way Azure implements app setting overrides.

Thus I temporarily made the settings sticky in the main slot and will look for a workaround in the future.
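For reference, the sticky flag itself can also be set from a script via the standard slotConfigNames config resource (this is not necessarily how my deployment does it; the setting list is a placeholder matching the sketch above):

param($ResourceGroupName, $WebAppName)

# mark the Let's Encrypt settings as sticky ("slot settings") on the production slot
$sticky = @("letsencrypt:Tenant", "letsencrypt:ClientId", "letsencrypt:ClientSecret")
Set-AzureRmResource -ResourceGroupName $ResourceGroupName `
    -ResourceType "Microsoft.Web/sites/config" `
    -ResourceName "$WebAppName/slotConfigNames" `
    -ApiVersion "2016-08-01" `
    -PropertyObject @{ appSettingNames = $sticky } `
    -Force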

The final step involved setting up the VSTS build (with only two steps: 1. run deploy.ps1, 2. publish code to webapp) and now my preliminary website is running at homeappv2.marcstan.net.

(I use v2 in the url because I am already using homeapp.marcstan.net for my previous solution. Once I start porting the server code I will merge the two).

tagged as Azure and Home Automation