Building an environment to live in!

Let’s Build An Environment Together

So avid 23Squared fans, it’s that time again! Time for another discussion about what we’ve been up to! What have we been up to, we hear you ask! Well, lots of exciting things but mainly we’ve been getting excited about all the chocolate we’re going to eat here at 23Squared headquarters! How much are we going to eat? All. Of. It.

We’ve also been doing some thinking alongside all that dreaming about chocolate. One of the main things we’ve been thinking about (and having problems with) is development environments. The classic phrase that’s been uttered by every developer at least once in their career, “works on my machine”. Yep, we’ve all used it as a defence as to why the integration build is broken, we’ve all hidden behind that excuse at least once. Oh, you haven’t? Well, then you’re a liar!

The main problem with developers is that they are inherently lazy when it comes to testing and they are also convinced that their code is never wrong. Developers never make mistakes. Right.

When they come to test something locally and it works, then surely it’s going to work on every other environment that we have. Of course it is! Well, no, because you might have a sneaky developer configuration tweak that allows the build and tests to pass with flying colours on your machine but, unfortunately, that tweak isn’t present on any other environment!

This is problem one.

Problem two comes from the new starter process. Picture the scene: you’ve landed your new role in that super cool startup you’ve always dreamt of working at! You can finally realise your childhood dream of providing wigs for cats! You take your place among the vibrant dev team on day one, next to the guy whose trousers are just that little bit too short, and ask how to set up your environment; how do you run up the cat-wig-ordering-system? Blank faces stare back at you. Nobody remembers, but you get directed to some out-of-date documentation. Great. How will you ever be sure that your environment is set up correctly, or is even close to being production-like? Don’t be such a square man, we’re a trendy startup, we don’t need your rules…

You do. Lots of time could be saved in both the new starter process and the ‘fix the broken build’ process by unifying the environment in which we work. In fact, by streamlining the onboarding process (problem two), problem one is inherently fixed too, because developers no longer need to perform config hacks to get stuff working.

How do we achieve this? Distribute a VM image of course! Woah, hold up there Grandpa! VMs are great for some things but certainly not this. They definitely don’t suit version control systems like Git because they are huge, and it’s very difficult to version the individual software components that make up the VM. In addition, building a new image is a pain and time-consuming. They are also slow; granted, it very much depends on what you’re doing and the machine you’re doing it on, but still, why waste the resources you have?

So, what do we do? Enter Docker. I’m sure most of the population know what Docker is, developer or not. Just in case you don’t, Docker allows you to package an application into a little box (or ‘container’, see what they did there?) to allow you to deploy it anywhere you want again and again.

There are two ways to use Docker –

  • Docker containers, which are black boxes with an application inside that can be deployed
  • Dockerfiles, which tell Docker how to build and deploy a certain application

So which one should I use?

Well, a Docker container isn’t very flexible but has its place if you know you’re going to build something once and then ship it. Most of the time you’re probably going to want to use a Dockerfile. We’ll show you a quick example of both though, for completeness’ sake:



The Docker Container Method

This method is pretty straightforward but isn’t quite so flexible –

docker run -it ubuntu:latest

This command will run up a Docker container based upon the latest version of Ubuntu. Simple huh? If the Ubuntu image isn’t present then Docker will even download it for you!

The container won’t have much inside it (apart from the OS) so you’ll have to use apt-get to install the stuff you want. If you wanted, you could run up a python server to host files –

apt-get update && apt-get install -y python

python -m SimpleHTTPServer

To shut down the container, use exit. The command docker ps -a will show you a list of all the containers that have been run and are running. It’s also possible to save the container’s state as a new image using the docker commit command, and publish it with docker push.
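To make that concrete, a typical commit flow might look something like the following (the image name here is purely illustrative, and you’d substitute the real container ID from docker ps -a) –

docker ps -a

docker commit <container-id> my-python-server:1.0

docker run -it my-python-server:1.0

The commit captures the container’s filesystem changes (such as the python install above) into a reusable image, so you don’t have to repeat the setup by hand.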

Unfortunately, you won’t be able to access the server from outside the container; to do that, you need to start the container with the following command –

docker run --name python -p 8000:8000 -it ubuntu:latest

The --name option simply specifies the name of the container for easy referencing. The -p switch publishes a port range (-P publishes all exposed ports). This allows us to browse the directories hosted by the python server by going to the container’s IP and the host port. The easiest way to find this is to use the bundled Kitematic software to investigate the container.
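If you’d rather not reach for Kitematic, the docker port command will show you the mapping directly (this assumes the container was started with the name ‘python’ as above) –

docker port python

This prints each container port alongside the host address and port it has been published to, which is exactly what you need to point your browser at.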



The Dockerfile Method

Fortunately for us, there’s a more effective way to do the above… by creating a Dockerfile. The following, added to a file named ‘Dockerfile’, will do the same –

FROM ubuntu:latest
RUN apt-get update
RUN apt-get install -y python
EXPOSE 8000
ENTRYPOINT ["python"]
CMD ["-m", "SimpleHTTPServer"]

Then, we can build our image by using the command –

docker build -t python_server .

And then run our container by –

docker run -d -p 8000:8000 python_server

As you can see, we’re still using the port switch (-p) to publish the port on our host system to the container. We’ve also used EXPOSE in the Dockerfile to tell the container to listen on the specified port.

Then, as if by magic, the server is serving documents from the IP!
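As a next step, you’ll probably want the container to serve your own files rather than an empty directory. A sketch of how that might look (the /srv path is just an illustrative choice) –

FROM ubuntu:latest
RUN apt-get update
RUN apt-get install -y python
COPY . /srv
WORKDIR /srv
EXPOSE 8000
ENTRYPOINT ["python"]
CMD ["-m", "SimpleHTTPServer"]

Anything sitting next to the Dockerfile gets copied into the image at build time and served up by the python server, which is handy for quickly sharing a directory with the team.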

So, how is this helpful for solving the two problems described above?  Well, if you can’t see it then maybe you need to have a little think! Using Docker we’re able to use a Dockerfile to describe how parts of our stack should be setup and deployed! This means that not only can we ensure that entire development team is using the same version of a piece of software with the same configuration that is running on our environments, we are also able to point new guys towards a repo full of Dockerfiles to create their environment! B-E-Autiful! In addition, with the current trend of BYOD it makes clearing down your device post project one hell of a lot easier!
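One small tip if repeatability is the goal: pin your image tags rather than relying on latest, otherwise two people building the same Dockerfile on different days can end up with subtly different environments. For example (the tag shown is just an illustration), prefer –

FROM ubuntu:16.04

over –

FROM ubuntu:latest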

We really think setting up environments in this fashion is a no-brainer! We hope you’ve enjoyed the latest instalment, we’ve been…