How Docker Made Coding and Testing New Projects Fun and Accessible

By Puja Abbassi in tech

With Docker’s 2nd birthday coming up soon, I’ve been thinking about what Docker has changed for me personally in that time and what that could mean for a wider group of people. But first, a little background.

In recent years I’ve looked into many development trends and tried Ruby/Rails, Python/Django, and Objective-C/iOS, but each attempt came with a huge hassle of setting up and debugging the environment on my Mac. Sure, there’s Vagrant, but finding the right images was never easy, and the environments are really heavy, as they’re full VMs.

I was also always stumbling upon cool open source projects (e.g. ShareLaTeX, Etherpad, etc.), but the prospect of having to set up environments, servers, and the like just to deploy and test these projects, even locally, usually made me turn my back and work on something else.

Suddenly I’m working in the Docker ecosystem, with all its promises of being easy and fast. And it actually is (at least with the more recent versions of Docker). There are so many tutorials and docs to get you started, from installing Docker and getting it running to deploying complex apps with many containers. And for everything else, there’s always Stack Overflow.

So I thought I’d give it a try. Keep in mind that I’m writing this not as a Docker pro or professional developer, but as a mere user of our service who hasn’t done much active coding in the last 10 years (besides small side projects).

Dockerfile Magic

First, I started by trying out other people’s code and using Docker just for testing things locally. Again, GitHub and Stack Overflow are worth a lot here. I tried out and got ScummVM (Monkey Island or Day of the Tentacle instantly "in the cloud", anyone?) running in a container with remote access. (Maybe this will be my next blog post, as with Kord’s ngrok workaround we’re no longer bound to the single HTTP port on Giant Swarm.) Suddenly, even trying out projects like StartupML (which is already dockerized) or Facebook’s deep learning algorithms is accessible to nearly anyone with some basic technical understanding.
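
Just to give an idea of how little is involved: here’s roughly what trying out an already-dockerized project looks like. This is only a sketch - the image name is a placeholder, and the project’s README will have the real one along with the right port mapping.

    # Pull a ready-made image from the Docker Hub and run it in the background,
    # mapping the app's port (here 80) inside the container to 8080 on the host.
    # "someproject/someimage" is a placeholder; check the project's README.
    docker pull someproject/someimage
    docker run -d --name myapp -p 8080:80 someproject/someimage

    # Follow the logs to see if it came up cleanly.
    docker logs -f myapp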

Basically, it’s always just some Dockerfile magic - in a way it’s like plug & play. Even if there’s often no ready-made Docker container, the projects themselves usually have good documentation on how to set things up on e.g. Ubuntu, which translates easily into a Dockerfile. For a lot of simple apps you don’t even need more than one container. If, however, the app consists of more than just code (DBs, caches, etc.) or is itself split up into different services, then you should think about splitting it up and deploying it to several containers. That’s a bit more tricky, but fig (now docker-compose) makes it quite easy, at least locally.
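
For instance, if a project’s docs say "install these packages on Ubuntu, then run this command", that translates almost line by line into a Dockerfile. A minimal sketch - the packages and file names are placeholders for whatever the project’s docs actually list:

    # Start from the same base the project's docs assume.
    FROM ubuntu:14.04

    # "Install these packages" becomes a RUN instruction.
    RUN apt-get update && apt-get install -y nodejs npm

    # Copy the project's code into the image and install its dependencies.
    COPY . /app
    WORKDIR /app
    RUN npm install

    # "Run this command on port 3000" becomes EXPOSE and CMD.
    EXPOSE 3000
    CMD ["nodejs", "server.js"]

Build it with docker build -t myproject . and you’re back to a single docker run.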

From Dockerfiles to My Own Code

My next step was delving into actually writing some code of my own, and as it happens, we have lots of cool plans for expanding and improving our docs. Although I hadn’t written Java code for about 13 years (our profs made us do it), I decided to start building our first app guide in Java. Standing on the shoulders of "giants", I of course looked to the Giant Swarm team first and based my initial setup on Matthias’ great introduction to Java and Docker. With some further help from Stack Overflow on how to use REST API calls, Redis, and environment variables in Java, I got a basic current weather example running quite quickly (I hope we can publish the guide soon). Testing it locally with docker-compose and then moving it to the swarm with a simple translation of the docker-compose.yml to a swarm.json (yes, we could automate that at some point) was not a big step anymore.
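
The guide isn’t out yet, so this isn’t its actual code, but the pattern looks roughly like this: the app reads its connection details from environment variables (which docker-compose or the swarm injects) and talks to Redis, here via the Jedis client. The variable and key names are made up for illustration.

    // Minimal sketch: configuration via environment variables plus a Redis cache.
    // REDIS_HOST/REDIS_PORT and the cache key are hypothetical names.
    import redis.clients.jedis.Jedis;

    public class CurrentWeather {
        public static void main(String[] args) {
            // In a container these are injected by docker-compose / the swarm.
            String redisHost = System.getenv("REDIS_HOST");
            int redisPort = Integer.parseInt(System.getenv("REDIS_PORT"));

            Jedis jedis = new Jedis(redisHost, redisPort);
            // Cache a (fictional) weather API response for five minutes,
            // so we don't hit the external REST API on every request.
            jedis.setex("weather:current", 300, "{\"temp\": 21}");
            System.out.println(jedis.get("weather:current"));
            jedis.close();
        }
    }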

Conclusion

To sum up, I would encourage everyone to start deploying open source tools and maybe even coding right now. It won’t get much easier (ok, I know it will always get easier, and we’re actively working on that, but still). What Docker (and in this context also GitHub and Stack Overflow) has accomplished already is democratizing the ability to run projects (be they other people’s or your own) with next to no knowledge of Linux and the rest of the stack beneath it. This already works great on your own computer.

The next step is making the same possible online, in the cloud, and at scale - again without having to get deep into server technologies. And that’s what we at Giant Swarm have made our mission. As mentioned above, transitioning from a fig.yml or docker-compose.yml to a swarm.json is a matter of minutes (at most). After that it’s really only a "swarm up", and seconds later your app is actually up and running and available on the Internet. Believe me, the first time you do a "swarm up" and check your domain seconds later, you won’t believe it was that easy.
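
To make that concrete, a docker-compose.yml for the little weather app might look something like the sketch below (service and image names are placeholders). The swarm.json ends up carrying roughly the same information - which images to run, their ports, links, and environment - in Giant Swarm’s JSON format, plus the domain the app should be reachable on.

    # Minimal sketch of a two-container setup: the Java app plus Redis.
    # Image and service names are placeholders.
    weather:
      image: registry.example.com/yourname/weather-java
      ports:
        - "8080:8080"
      links:
        - redis
      environment:
        - REDIS_HOST=redis
        - REDIS_PORT=6379
    redis:
      image: redis

Running docker-compose up brings both containers up together locally; the swarm.json equivalent is what "swarm up" consumes.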

This suddenly enables a whole new group of people to get their code running online. Lots of scientists, for example, already write code, but oftentimes that code only runs on their own computers, which are usually underpowered and not fit to run for longer periods of time. These scientists could now just package their apps in containers and have them up and running on the swarm in no time. Testing, staging, and deployment: there’s nothing to be afraid of anymore. Next up: microservices.

Puja Abbassi
Developer Advocate @ Giant Swarm
