When I began working with SQL Server, I ran a single Windows server with a single instance of SQL Server 2000. I used this instance to test code and configurations separately from my employer’s servers. It helped me verify whether a change would help and by how much. That single instance grew into two when I wanted to start testing SQL Server 2005. Running a few named instances on a single server wasn’t tough.
By the time SQL Server 2008 R2 rolled out, it took too much effort to spin up multiple instances of SQL Server without the installers stepping on each other. That was when I adopted VMware. I upgraded my server to a dual-processor box and started hosting multiple instances, each with different dependencies, from that one server. The problem was that those VMs were each running their own copy of Windows, so I was losing a noticeable amount of CPU and memory to that overhead.
With SQL Server 2017, we received SQL Server for Linux and Docker. With Docker, I can run multiple instances of SQL Server and only lose one bucket of resources to the operating system. Even better, the way containers work lets you run multiple versions of SQL Server side by side without the instances causing dependency issues with each other.
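As a rough sketch of what that looks like, here’s one way to run two versions side by side. The container names, host ports, and SA password below are placeholders, not anything from my setup:

```shell
# Run SQL Server 2019 and 2022 side by side. Each container gets its own
# host port mapped to the container's 1433; pick your own SA password.
docker run -d --name sql2019 \
  -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=YourStr0ng!Pass" \
  -p 14331:1433 \
  mcr.microsoft.com/mssql/server:2019-latest

docker run -d --name sql2022 \
  -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=YourStr0ng!Pass" \
  -p 14332:1433 \
  mcr.microsoft.com/mssql/server:2022-latest
```

Because each version lives in its own container, there are no shared installers or dependencies to conflict; you just connect to a different port for each version.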
Now, in 2023, I’m working with more than just SQL Server. I’m up to 33 containers. I’m running multiple versions of SQL Server, MySQL, and Oracle for database demos. I run Pi-hole, a UniFi network controller, and an instance of SWAG (Secure Web Application Gateway) to help manage my local and lab networks. I’m running a Grafana, Loki, Promtail, and Prometheus stack to monitor my server and containers.
I started running Portainer a little more than a year ago to simplify creating and updating my containers. The problem was that the process of updating my images was still too manual. You’d have to scroll through your list of containers looking for those flagged as running an outdated image, like my Grafana instance here.
Then you’d click the container name, and from the container details page, click Recreate.
Then flip the slider to pull the latest image and click Recreate.
That’s not bad if you’re managing two or three containers that update infrequently. But when you get to 20 or so containers, and those containers are updated weekly, it quickly becomes a hassle.
Watchtower for more automation
Yes, the solution to managing 30+ containers is to create another container. Watchtower automates checking your images for new versions and, optionally, restarting the containers that use those images. I say optionally because you might not want to enable this for production containers. In those cases, you’d want to spin up a new container and then update DNS entries to replace the old container with the new one.
But in my lab, this is perfect. I set up the new container with a few environment variables: WATCHTOWER_CLEANUP removes my unused images after an update, and WATCHTOWER_SCHEDULE lets me run the check every night at 0100.
Once it’s configured and granted access to the Docker socket on the host, you’re good to go. No more manual container updates!
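Putting those pieces together, a minimal sketch of the Watchtower container looks like this. The container name is arbitrary; note that Watchtower reads its schedule as a six-field cron expression (seconds first), so the entry below fires at 01:00 every night:

```shell
# Watchtower checks for new images nightly at 01:00 and removes the old
# images after an update. Mounting the Docker socket is what grants it
# access to manage the other containers on the host.
docker run -d --name watchtower \
  -e WATCHTOWER_CLEANUP=true \
  -e "WATCHTOWER_SCHEDULE=0 0 1 * * *" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower
```
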
As usual, if you have any questions, please let me know!