How to quickly get some test code on the Internet

14 Apr 2022

Tags: azure, docker, testing

As I work on projects, there are often times I want to quickly spin up something on the Internet for testing, or just for a small personal project, and I find myself frustrated that what was once a simple task is now (necessarily) quite complicated. Normally I have, say, a bit of Go, Python or (recently) server-side Swift, and all I want to do is run said code somewhere such that it can be accessed from the Internet by hostname and port. Unfortunately the Internet has grown somewhat since I first started out, and so hosting providers are geared up for larger projects, or require me to structure my code to work with other services (e.g., Heroku), when all I want is to have this lashed-together proof of concept briefly addressable.

Despite it being an essential part of modern development, I've very little personal energy for doing DevOps: that is to say, the process of taking software designed for the web and deploying it properly. Out of necessity, deploying software safely and at scale is quite a complicated business, and that's why we need people for whom this is their primary task. I can muddle through with a modern AWS setup, but for me it doesn't spark joy, so I usually try to make sure there's someone else on a project for whom it does. But when I'm just doing little tests or side projects, I'm on my own, and I don't really need all that comes with a proper AWS setup.

Usually I end up hosting such things on my own local network and leaving it at that. However, I found an easier way to get some simple bits of code on the web this week, which is a combination of using Docker and Azure. This sounds quite a bit like devops again, but bear with me.


If you're not already familiar with Docker, then this tip isn't for you - you'll be in the same boat I was in, having to learn a load of DevOps just to set up a small test. But I've long since taken to Docker as a way to develop software and services for the web locally, even though I'm not deploying them that way (and only in the last couple of years have my clients taken to deploying with Docker); mostly I use it to make it easier to juggle different client software stacks without having conflicting installs on my own computers.

So, given I'm already used to working with Docker, I often wonder: what is the quickest way I can deploy a tool I already have running in a Docker container? Or indeed, if I have an existing service that is packaged in Docker that I want to test against, how do I get that up somewhere? Lots of hosting services will work with containers, such as the ubiquitous AWS and Azure, but my experience of those was one of having to set up a lot of other bits just to run a container - you needed a load balancer, a security domain, and so on before you could just run your container and have it accept packets. Again, none of this sparks joy.

So I was recently delighted to discover that, with Azure at least, you can very quickly get a container up and running on the Internet using just the standard Docker command line tools. Most of what follows is a variation on Docker's own instructions for its Azure integration, so feel free to skip to those for more details, or stick here for a slightly streamlined version.


For this to work, you will already need an Azure account set up. I thankfully have that, as I host my personal blog as a static site on Azure (and if I ever get time I'll migrate this blog there too). Assuming you have that, and you have a Docker container version of your project, then you're ready to get rolling.

The first thing you need to do is create a “Resource Group” in the Azure portal just so Azure knows how to bill you. Technically you can let the Docker integration create one of these automatically for you, which sounds even easier, but it’ll use a UUID as the group name, and clearly that’s going to get confusing at some point, so I recommend just creating the Resource Group manually with a name that’ll help you remember why you have spent some money with them.

The other advantage of creating the resource group manually is that you can specify where on the planet your resources will be hosted. It will default to one of the US zones, but being in the UK I set it to their UK zone, just to save the packets a little bit of effort.
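I created mine in the portal, but if you happen to have the Azure CLI installed, the same step should be a one-liner; a sketch, assuming the UK South region and the (hypothetical) group name I'll use in the rest of the examples:

$ az group create --name resource_group_I_created_already --location uksouth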

That done, we can now hop over to the command line and do everything else there.


The first thing to do is login to Azure with the Docker tools:

$ docker login azure

This will bounce you to a web browser and back to do the auth. I found this didn't work properly with WSL2 on Windows, due to an issue with Go's network library that I believe is now fixed (so hopefully the Docker tools will update at some point), but it worked fine from PowerShell and from macOS's terminal.

Next up you need to create a Docker context. A context is just a way to tell Docker to use a computer other than the current one when it runs its commands. So we need to create a new Docker context and map it to the resource group you created (or, if you don’t specify one, Docker will create a new resource group for you, but with a random name):

$ docker context create aci my_fun_service --resource-group resource_group_I_created_already

Now you can simply tell Docker to use Azure to run things by setting that context when you run a command: either by using the --context command line option (as I'll do in the rest of the examples here), by setting the DOCKER_CONTEXT=my_fun_service environment variable, or by using the "docker context use" command to change the default context.
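For reference, the three forms look roughly like this (using a harmless ps as the example command):

$ docker --context my_fun_service ps
$ DOCKER_CONTEXT=my_fun_service docker ps
$ docker context use my_fun_service && docker ps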

Now you can just start and stop Docker containers as you normally would. I was actually trying to run a pre-packaged container, consbio/mbtileserver, when I did this, so I could just do:

$ docker --context my_fun_service run --name some_name_I_will_remember \
   --domainname myfunservice -p 8000:8000 consbio/mbtileserver

The only unusual option there is --domainname, which asks Azure to use that name when the service appears on the Internet: it becomes the first part of the hostname; otherwise you'll get one that Azure picks for you automatically.

You can see that if you now do a docker ps:

$ docker --context my_fun_service ps
CONTAINER ID               IMAGE                  COMMAND             STATUS              PORTS
some_name_I_will_remember  consbio/mbtileserver                       Running             myfunservice.uksouth.azurecontainer.io:8000->8000/tcp
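And if I curl the address it has provided, I find the service I was trying to run. Something like the following should work, assuming the hostname from the ps output above (the /services path is mbtileserver's tileset listing, so adjust the path for whatever your own container serves):

$ curl http://myfunservice.uksouth.azurecontainer.io:8000/services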

I now have the thing I wanted up and running on the Internet! To stop it I can just use the usual Docker commands again:

$ docker --context my_fun_service stop some_name_I_will_remember

I can also use the Azure web portal to stop and start it, if I'd like.
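And once it's stopped and I'm completely done with it, removing it is the familiar command too; this should tear down the underlying Azure container group (a sketch, reusing the names from above):

$ docker --context my_fun_service rm some_name_I_will_remember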

Whilst there have been a few steps along the way, I find this solution very convenient given I'm usually already set up with Docker for projects, and after setting up the initial resource group I can just do everything with my existing tools.

The main restriction I hit is that you can't remap ports when you launch a service this way - the off-the-shelf container I was using ran its service on port 8000, and so my public address had to use the same port.


Having got my process running, in this particular example I also needed some data storage to go with it. You might want this to get access to logs or configuration; in my case I was running a service that processed some larger files and served up a subset of them when a client requested it. In theory you could package all these extra files up in your container image, but it'd be nicer if you could just mount a file system volume, as you would when using Docker locally.

Thankfully Azure's Docker integration also makes this somewhat easy, using a two-stage process. First I went to the Azure web portal and created a storage account associated with the resource group I created initially, and then I could create a volume in that storage account using the Docker command line tools:

$ docker --context my_fun_service volume create data-volume --storage-account my_storage_account

I didn't check whether a storage account would be created automatically to save you going into the Azure portal, but I think by now you know I like to make sure things have sane names, so I never tried that option.
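As an aside, if you'd rather stay on the command line than use the portal for this step too, the Azure CLI can create the storage account; a rough sketch with a hypothetical account name (storage account names have to be lowercase letters and numbers only):

$ az storage account create --name myfunstorage --resource-group resource_group_I_created_already --location uksouth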

I could then upload files into the data-volume directly using the Azure web portal (I'm sure there are better ways, but this was the most painless for me given I already had the browser open at this page!), and then I deleted and re-ran my container, this time specifying the volume map:

$ docker --context my_fun_service run --name some_name_I_will_remember \
   --domainname myfunservice -v data-volume/my-data:/tilesets \
   -p 8000:8000 consbio/mbtileserver

So all the tile data I uploaded to Azure through the portal, which I put in a sub-folder called my-data, is now available to my container, just as it would be when mapping regular volumes in Docker locally.
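(For completeness: the portal isn't the only way to get files into the share either. The Azure CLI can upload into it as well - a sketch, assuming the hypothetical storage account above and an example file name; depending on how you're authenticated you may also need to pass credentials such as --account-key:)

$ az storage directory create --account-name myfunstorage --share-name data-volume --name my-data
$ az storage file upload --account-name myfunstorage --share-name data-volume \
   --source ./my-tiles.mbtiles --path my-data/my-tiles.mbtiles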


And that's it. I guess this is still a little devopsy, but it's a lot less hassle than doing it to a production standard, which is what most of these services are geared up for. Given I'm used to setting up containers anyway - in particular Go containers that are just the Go binary I want and nothing else - this approach fits in neatly with how I already work.

One final caveat: this is not the most cost-effective way of running things - I estimate a small, low-traffic container left up continuously for a month will cost around £20. But then I'm not using this for production, just as a way to run quick experiments on the way to production, and for that I don't mind the price when you factor in the time saved standing images up and down like this, particularly if they're only up for a few hours at a time as I work.
