
Static websites via Azure Storage and CDN

2019/07/12

A while ago I posted about the static websites preview feature.

Not only are they super scalable, they are also a lot cheaper than Azure Web Apps. Because this blog doesn't rely on any server-side code, I decided to make use of the static website feature.

As of last week, this blog is running on Azure Storage! Since the setup was a bit tricky, I documented all my steps here.

What are static websites?

There's a feature in Azure Storage that's been out for a while now called static website. Essentially it allows you to turn your storage account into a very cheap hosting solution for static websites.

If your site does not require server-side code to operate (e.g. it's a blog like mine, generated by a static site generator) then this is a perfect solution because it gets rid of the server infrastructure (and its associated cost) while increasing the uptime of your site (since blob storage has much higher uptime promises than regular app services).

With Blob Storage you only pay for storage and the actual egress (i.e. visitors on your website), whereas with web apps you pay a fixed hosting price (60$/month for the first "production tier") no matter how many visitors you have.

Benefits

What are the downsides?

Technically the Storage account has a feature to map your custom domain to it easily, but it currently does not support HTTPS, which is a deal breaker in my opinion (it's also limited to a single domain, so no www to root domain redirect is possible either).

Luckily the documentation also mentions the workaround that will be the focus of this post: Using Azure CDN.

As you can see, my website is accessible via both www.marcstan.net and marcstan.net and it's all done through the combination of Storage Account, Azure CDN and DNS!

Read on to get the details.

Blob Storage and the static website feature

While the initial setup took me quite some time, no part of it was overly complicated and I managed to migrate my website from a regular app service to the CDN without any downtime.

The CDN is needed as it's currently the only way to use Blob Storage with a custom domain and HTTPS.

Azure DNS is needed because it's the only way to get a root domain hosted via Azure CDN.

Depending on your use case you will be limited in the options you can choose from:

Known limitations

CDN options

Azure CDN currently offers four options: Verizon, Microsoft and Akamai at the Standard tier, as well as Verizon (Premium).

In my opinion only Verizon Premium is usable:

While the three Standard offerings all cost the same (0.068$/GB), none of them have a rule engine.

This means: no URL redirection (e.g. www.example.com -> example.com).

Most importantly: Without a rule engine you won't be able to redirect HTTP to HTTPS.

This leaves you with either A) your website being available via HTTP (but never redirected to HTTPS) or B) your website not being available via HTTP at all (showing an ugly error message to your users).

While the Verizon (Premium) offering is twice as expensive as the other options, it still comes to less than 2$/month if you serve 10 GB of traffic.

Since it's the only way to upgrade HTTP access to HTTPS I consider it essential and the rest of the tutorial assumes you picked Verizon Premium.

Root domain

Unfortunately, you can't map root domains directly via Azure CDN (as the CDN only allows CNAME validation).

While it's technically possible to set a CNAME on a root domain, I suggest you don't do it:

Setting a CNAME on a root domain will most likely stop you from sending/receiving email.

If a CNAME is set, all other DNS entries for that name are ignored. On the root domain this includes entries such as your MX records (needed to send/receive email).

The workaround is to use Azure DNS which supports A record integration with the CDN (more on that later).

In short: If you want to map a root domain to storage/CDN you will have to let Azure manage the DNS records for you (0.40$/month).

Root domain + https

Yet another gotcha: Azure CDN supports free* certificates for your domain, but only if your domain is not a root domain. For root domains, you must bring your own certificate (since I had to do it, I covered it in this tutorial as well).

* The free certificate will be issued by Digicert on your behalf. You have the option of granting Digicert a "perpetual permission to issue certificates on your behalf forever" or alternatively run through the wizard once per year.

Since I had interest in neither option and needed a certificate for my root domain, I opted to use Let's Encrypt via BYOC.

Getting started

This tutorial will work for you if:

Depending on your use case you will have to do some extra work, but it's all written down. Read on.

Setting up resources

Assuming you have no resources for the static website in Azure yet, create a resource group to host all your resources.

Personally I use the convention of giving all my resources (and their resource group) the same name.

E.g. my personal projects look like this:

(Screenshot: Azure resources)

Since the Azure portal already displays the type (and icon) of each resource, there is no need to use a prefix/suffix per resource type to distinguish them.

(Another advantage is that the config file you will have to write for the Let's Encrypt function will be much shorter since it allows fallbacks for identically named resources).

Once you have a resource group, create a key vault (it will be used to store the Let's Encrypt certificate) and a storage account in it.

For the storage account you can pick whichever replication you want. I used geo-redundant storage (note that we'll be using a CDN to serve the content - if the CDN doesn't have the content cached, it will fetch it from the storage account).

Also make sure that Secure transfer required is turned off. If you turn it on, your website won't be available via HTTP and will instead return an ugly message like this:

(Screenshot: no http requests allowed)

HTTP is also needed when issuing Let's Encrypt certificates (which we'll do later).

We'll also configure the CDN later to automatically redirect HTTP requests to HTTPS so all users are upgraded to HTTPS anyway.
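If you prefer the command line over the portal, the resources above could be created roughly like this with the Azure CLI (just a sketch; the names and location follow my convention and are placeholders):

az group create --name marcstan --location westeurope

az keyvault create --name marcstan --resource-group marcstan --location westeurope

# StorageV2 is required for the static website feature; GRS as described above,
# and secure transfer left off so HTTP requests are still answered
az storage account create --name marcstan --resource-group marcstan --location westeurope --kind StorageV2 --sku Standard_GRS --https-only false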

Enabling the static website

Once the storage account is created navigate to it and click on Static website.

Turn the feature on and set your default documents (e.g. index.html and 404.html for error pages).

Copy the URL (format: https://<storage>.<region>.web.core.windows.net/); we'll need it later.
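The same step can also be done with the Azure CLI; a minimal sketch (the storage account name is a placeholder):

# turn on the static website feature and set the default and error documents
az storage blob service-properties update --account-name marcstan --static-website --index-document index.html --404-document 404.html

# print the web endpoint URL mentioned above
az storage account show --name marcstan --resource-group marcstan --query "primaryEndpoints.web" --output tsv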

You should already be able to access said URL but you'll get a 404 error as no content has been uploaded yet.

I suggest you either upload a few demo files (such as an index file) or, if you have existing content, migrate it to the new storage account. If you look into the blob storage you will see a new container $web. All its content is served via the URL, so go ahead and upload some files to it.

Personally I use AzCopy (v10) as it supports sync (so on each deploy I only need to upload the change delta), but you can also use the Azure CLI or PowerShell.

Check out my deployment pipeline to see how azcopy sync works (unfortunately, the Azure DevOps task doesn't yet support the sync feature).
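For reference, an azcopy v10 sync call might look roughly like this (the local folder and account name are placeholders; authenticate via azcopy login or append a SAS token to the destination URL):

# upload only the change delta and delete blobs that no longer exist locally
azcopy sync ./public 'https://marcstan.blob.core.windows.net/$web' --delete-destination=true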

The website is now technically up and running (you should be able to browse to your files at the URL) but the URL isn't really pretty enough to hand to customers just yet.

Azure DNS for root domains

If you don't intend to use the root domain and only want to use subdomains, you can skip ahead to the Setting up the CDN section.

If you do want to use a root domain, you must use Azure DNS Zone (0.40$/month) as it's the only way (known to me) that allows mapping a root domain to the Azure CDN without breaking email and other services of the domain.

Important: Make a backup of all your DNS entries at your provider. Once you change the nameservers of your domain, some providers will delete all the records you have with them.

First create a new Azure DNS Zone resource (it's best to use the name of the root domain).

Once created, add all the same DNS records as you have at your existing domain provider (Namecheap, GoDaddy, etc.). Note that the default TTL for records is 1 hour in Azure. For now I recommend reducing it to ~5 minutes (so you can quickly correct mistakes). Once everything is running you can revert it to the default.
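A sketch of creating the zone and one sample record with a short TTL via the Azure CLI (domain, mail server and resource group names are placeholders; repeat for every record you have at your provider):

az network dns zone create --resource-group marcstan --name example.com

# create the record set with a 5 minute TTL, then add the actual record to it
az network dns record-set mx create --resource-group marcstan --zone-name example.com --name "@" --ttl 300
az network dns record-set mx add-record --resource-group marcstan --zone-name example.com --record-set-name "@" --exchange mail.example.com --preference 10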

In addition you should also check the TTL set at your domain provider. Many providers still default to a TTL of 1 day, which means that once you make the change it could take up to 24 hours until all users see it.

If possible reduce the TTL at your provider and wait enough time for it to take effect.

The entries you make in the Azure DNS zone won't be valid immediately (after all, your domain is still using your provider's nameservers).

Before switching to Azure DNS, you can verify that Azure DNS is correctly setup by using nslookup:

nslookup example.com ns01-01.azure-dns.com

nslookup will resolve example.com via the Azure DNS nameserver given as the second argument (have a look at your DNS zone to find yours; there are four nameservers listed in the top right corner of the Azure DNS resource panel, and any of them should work).

If you resolve your newly set up domain this way, it should point to the correct target.

Note: Even if the result is immediately visible, I still recommend you give it some time to propagate globally.

Activating Azure DNS

(This section applies only if you need a root domain).

In this step we are only changing the nameservers; your domain will still serve its content from the old server (the DNS records themselves stay the same).

Once all DNS records are set up correctly, it's time to make the nameserver switch.

Go to your domain provider and specify that you would like to use a custom nameserver.

IMPORTANT: When you copy each name server address, make sure you copy the trailing period at the end of the address. The trailing period indicates the end of a fully qualified domain name. Some registrars append the period if the NS name doesn't have it at the end. To be compliant with the DNS RFC, include the trailing period.

Once you have set all four Azure DNS servers at your registrar you can use nslookup again to verify that requests are handled by them:

nslookup -type=SOA example.com

Once the TTL at your registrar has expired, this should resolve via Azure DNS!

At this point your site is still available to your users in its old form but DNS is now resolving via Azure.

Next we'll setup the CDN and then connect the two.

Setting up the CDN

The CDN will be required for all use cases. It will map your storage account to your custom domain and allow the use of HTTPS certificates.

In my own case, I use both www.marcstan.net and marcstan.net and have set up a simple redirection between the two using the rule engine of the Verizon (Premium) CDN.

As mentioned earlier: Only the Premium plan supports a rule engine, so if you want to redirect domains (or create other rules) you have to use the Premium plan, otherwise you can also use any of the other providers.

Create a new Azure CDN (as per my convention I named mine "marcstan" like all my other resources).

I also had it create an endpoint of the same name ("marcstan") right away.

Beware that the quick create option allows you to specify the origin type Storage. This won't work for our case (it's meant for the custom domain feature of Storage, which as I mentioned doesn't support HTTPS).

Instead you must select custom origin and specify the hostname <storageName>.<region>.web.core.windows.net that we saw earlier on the storage account static website settings blade.
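In Azure CLI terms the profile and endpoint could be created roughly like this (a sketch; names and the storage web hostname are placeholders):

az cdn profile create --resource-group marcstan --name marcstan --sku Premium_Verizon

# custom origin: point the endpoint at the static website hostname, not at the storage origin type
az cdn endpoint create --resource-group marcstan --profile-name marcstan --name marcstan --origin <storageName>.<region>.web.core.windows.net --origin-host-header <storageName>.<region>.web.core.windows.net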

Now that the CDN is up and running, navigate to its endpoint that we just created. It will already have a URL like https://<name>.azureedge.net/.

If you browse to it you will see the content of your storage account, but this time served (and cached) via the CDN!

Note: The CDN will cache content globally at many edge sites. Whenever you update content, you want to purge the CDN, otherwise your updated content won't be visible to your customers until the CDN cache expires. (It is up to you whether to purge the entire CDN or only the paths that changed).
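The purge can be triggered from the portal or, for example, at the end of your deployment pipeline via the Azure CLI (a sketch; names are placeholders):

# purge everything; alternatively pass only the paths that changed
az cdn endpoint purge --resource-group marcstan --profile-name marcstan --name marcstan --content-paths "/*"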

Mapping the custom domains to the CDN

Finally, let's map our domains!

If you are okay with 6h+ of downtime (e.g. setting up a domain for the first time) you can directly point your domain CNAME at the CDN URL (<name>.azureedge.net), otherwise you can use the cdnverify feature:

That way you won't incur downtime. I definitely recommend the cdnverify method, because the certificate deployment can take up to 6 hours to finish.

By using cdnverify your domain can already begin the certificate rollout process without any users being directed to it.

Thanks to the cdnverify CNAME you will be able to map www.example.com (and example.com) to the CDN, even though the domains themselves still point to your old webserver.
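If your DNS already lives in Azure DNS, the cdnverify entries and custom domain mappings could be created roughly like this (a sketch; the zone, endpoint and custom domain resource names are placeholders matching my own setup):

# ownership verification without touching live traffic
az network dns record-set cname set-record --resource-group marcstan --zone-name marcstan.net --record-set-name cdnverify --cname cdnverify.marcstan.azureedge.net
az network dns record-set cname set-record --resource-group marcstan --zone-name marcstan.net --record-set-name cdnverify.www --cname cdnverify.marcstan.azureedge.net

# map both hostnames to the CDN endpoint
az cdn custom-domain create --resource-group marcstan --profile-name marcstan --endpoint-name marcstan --name root --hostname marcstan.net
az cdn custom-domain create --resource-group marcstan --profile-name marcstan --endpoint-name marcstan --name www --hostname www.marcstan.net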

Set up as many subdomains as you need (in my case I set up www.marcstan.net and marcstan.net via the CNAMEs cdnverify.www.marcstan.net and cdnverify.marcstan.net) and let's continue with the HTTPS certificate next.

Provisioning Certificates

Once the custom domains are created, you'll notice them stating Custom HTTPS - Disabled. You will be able to visit the domain via HTTP, but HTTPS will fail.

It's now time to turn HTTPS on!

You have two options:

  1. letting the CDN manage it or
  2. bringing your own certificate (BYOC)

CDN managed certificate

Note: This is not supported if you have a root domain mapped. It will only work for subdomains.

The CDN managed certificate is issued for free by Digicert. You can either sign off once that "Digicert may issue certificates for this domain indefinitely" or manually trigger the certificate process each year. Neither option works for root domains.

If you want the CDN to manage certificates, turn on custom domain HTTPS and select CDN managed (or use the CLI sketch below). After saving you will receive an email, and once you complete the task in the email the certificate issuing process will begin and you are done (so skip ahead to the Go live section)!
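For the CLI route, recent Azure CLI versions can trigger the same process (a sketch; names are placeholders):

az cdn custom-domain enable-https --resource-group marcstan --profile-name marcstan --endpoint-name marcstan --name www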

Bring your own certificate (BYOC)

If you have a root domain, you must use BYOC.

Thanks to Let's Encrypt this doesn't cost you anything either, but it does mean additional setup. Let's Encrypt certificates expire after 90 days, so you want some automation to ensure they are renewed regularly (recommended: 30 days before expiry).

Unfortunately the existing solutions (Let's Encrypt Site Extension, web app renewer) don't support Azure CDN.

But who would I be if I didn't have a solution ready that works with Azure CDN?

So I built an Azure function that supports certificate renewal and provisioning even when using Azure CDN.

It's open source on GitHub and because it's an Azure function the total cost per month will be in the < 0.10$ range!

I have detailed setup instructions over at GitHub, so I suggest you follow the steps over there (this post is already long enough).

In essence: the function renews Let's Encrypt certificates, stores them in a key vault and triggers the Azure CDN certificate provisioning process.

Note if you are migrating a live domain: Let's Encrypt will check for an uploaded file at /.well-known/acme-challenge/.

Since the function I built only uploads the challenge files to Azure Storage, you might want to set up a redirect for this subfolder on your old server so Let's Encrypt can find the challenge files while your domain still points to it.

IIS example:

<rule name="Acme challenge" stopProcessing="true">
    <match url="^.well-known/acme-challenge/(.+)" />
    <action type="Redirect" url="https://<storageName>.<region>.web.core.windows.net/.well-known/acme-challenge/{R:1}" redirectType="Temporary" />
</rule>

(If you are not using IIS, .NET Core has rewrite support and so do most other frameworks.)

Once you have set up the function and successfully issued a certificate we can continue here.

Configuring Rule Engine

Verizon Premium allows you to configure rules via its rule engine.

I have set up these four rules:

To set up these rules, click the "Manage" button in the Azure CDN blade. You will be redirected to an old portal where you must select HTTP Large -> Rule engine.

Note that any modifications to the rules may take up to 4 hours to take effect.

Here are some screenshots of the rules I set up:

(Screenshot: cache invalidation)

Essentially each rule regex-matches the request and redirects it accordingly.

HTTP -> HTTPS redirection

(Screenshot: HTTP to HTTPS redirection rule)

www to non-www redirection

(Screenshot: www to non-www redirection rule)

azureedge.net to domain redirection

(Screenshot: azureedge.net to domain redirection rule)

Additionally you might be interested in the redirect index.html to / rule.

Monitoring

With my old web app I simply hooked up App Insights to get telemetry.

With Blob Storage, logs are written to the $logs container. You won't see it in the Azure portal, but if you download Storage Explorer you will see the container and the log files in it.

With the $web container exposed to the web you will see all incoming http requests in the log files (with a few hour delay).
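For example, the log files can be pulled down in bulk with the Azure CLI (a sketch; assumes you are logged in or have the account key configured, and the account name is a placeholder):

# download all logs from the hidden $logs container into ./logs
az storage blob download-batch --account-name marcstan --source '$logs' --destination ./logs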

Go live

You should now have a working CDN, DNS (if a root domain is used) and storage account, and your website content should be deployed as well.

Navigate to the endpoint of your CDN in Azure one more time and make sure that the HTTPS certificate has been deployed successfully (CDN certificate deployment may take up to 6 hours).

If it is, then let's continue.

Now it's time to let live traffic hit the new website.

For the root domain, you must go to the Azure DNS Zone, select the A record and switch Alias record set to yes. Once set, select the Azure CDN resource you created as the target and save it.

For any subdomains it is enough to switch the CNAME from the old target to your CDN endpoint: <name>.azureedge.net.
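In Azure CLI terms the switch could look roughly like this (a sketch; zone, endpoint and resource group names are placeholders):

# root domain: alias A record pointing directly at the CDN endpoint resource
endpointId=$(az cdn endpoint show --resource-group marcstan --profile-name marcstan --name marcstan --query id --output tsv)
az network dns record-set a create --resource-group marcstan --zone-name marcstan.net --name "@" --target-resource "$endpointId"

# subdomains: a plain CNAME to the endpoint
az network dns record-set cname set-record --resource-group marcstan --zone-name marcstan.net --record-set-name www --cname marcstan.azureedge.net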

After the 5 minute TTL you specified previously, all new requests should start to hit your storage account. I suggest you wait a day or two before resetting the TTL of all domains to 1 hour, just to make sure everything is working.

You can now also delete all the cdnverify* CNAME entries we created, as they were only needed to initially verify domain ownership.

Fin

If you made it this far: congratulations!

You now have a website running via Azure Storage and Azure CDN at rock bottom prices with better availability and performance than most other PaaS offerings can provide.

I hope this helped you, as gathering all the individual steps took me quite some time.

For the future I hope Azure will integrate much of it into the storage account workflow to provide a more streamlined experience when using the static website feature.
