Time and time again I see the term “docker” while reviewing job postings. Today I decided to dig in. I went to docker.com, signed up for a free trial, and created a free repository “invoicebuilder” for my project, Bootstrap Invoice Builder. I then connected DigitalOcean, connected to a new Docker Slack channel, and connected my GitHub account. Wow, that was super cool, and I just gave away the keys to the castle.
Next I went back into the repository, connected it directly to the GitHub repo, and tried to build it. The build failed, but that was to be expected; these are not “docker” apps. Not yet. I know nothing about Dockerfiles. Not yet.
Deploying a “website” via Docker will require that the “app” be completely deployable from scratch, including the commands in the Dockerfile to set up the environment to run the application. This is really cool stuff. I have been wanting to work on things like this, and that door is now open. Time to get to work!!
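For a sense of what “deployable from scratch” looks like, the whole environment setup boils down to a Dockerfile along these lines (a minimal sketch; the base image, package names, and paths are illustrative assumptions, not the exact ones this project ends up using):

```dockerfile
# Minimal sketch of a "deployable from scratch" web app image.
# Base image, packages, and paths here are illustrative assumptions.
FROM ubuntu:16.04

# Build the environment the app needs, starting from a bare OS
RUN apt-get update && apt-get install -y nginx php7.0-fpm php7.0-mysql

# Ship the application code into the web root
COPY . /var/www/html/

EXPOSE 80
# (A real image would also need to start php7.0-fpm alongside nginx.)
CMD ["nginx", "-g", "daemon off;"]
```

The point is that nothing lives only on the server anymore: anyone with the repo and Docker can rebuild the whole stack.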
Docker, Docker Docker!!
Using Docker Cloud also saves me from having to run command-line or shell commands on the actual deployment IP address. As a test of eliminating the “server administration” steps of deploying a new application, Docker Cloud is pure gold. Why was this technology not available 20 years ago when I started building websites; I mean “applications”??
At the end of the day I have 2 Docker repos built and running on Docker Cloud. These do not do anything yet, but they are a blank canvas, tied to the GitHub repos with automated rebuilds after every GitHub push. As I finish the development on the old cPanel hosting stack, I will simultaneously deploy via Docker on Ubuntu and nginx. Bootstrap Invoice Builder will be a good demonstration for migrating an existing codebase into Docker containers.
Day 2: 1/19/2017
This morning I went to work immediately to prepare my Bootstrap Invoice Builder repository for dockerizing. I found a sample Dockerfile to get nginx running during application deployment. Since I don’t have the repository prepared to set up and create the MySQL database and users, I just created an index.html file that says “testing”. It took me 6 tries to get the first build completed. During builds 4–6, Docker Cloud got really slow around 10:00 AM CST. I am using the lowest-spec free stuff, so I can’t complain much. My only real complaint now is waiting for builds with a full re-deployment test. Once I have the framework completely working, I will only be doing rapid testing on the DEV stack. A final push to the master repo will update the Docker stack. Then I can test the production automation (Docker → DigitalOcean) just to make sure it’s working at the end of a development cycle.
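A first build like this “testing” page can be as small as two lines (a sketch; the official nginx base image serves static files out of /usr/share/nginx/html by default):

```dockerfile
# Sketch: the simplest possible nginx build, just to prove the pipeline works.
FROM nginx
COPY index.html /usr/share/nginx/html/index.html
```

Everything after this is just layering the real application on top of a build that already works end to end.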
To get my Ubuntu server CLI work done faster, I created a new Ubuntu server at DigitalOcean where I can test my Dockerfile RUN commands more rapidly. I will also go through the changes for getting the repository and codebase dockerized for deployment on Ubuntu. This is also my first chance to play around with PHP 7.0, as this server is Ubuntu 16.
With this new take on setting up an application using a bunch of free code and public repositories, I do not want to put my MySQL login information in the codebase. Normally I would leave this file out of the repo and manually put it in place. That is not going to work for Docker.
This is where I get to do something very kewl. I am going to make sceneserver.com/api/ for all my future applications. During deployment, the automated setup will initiate a call to this API to see if the deployment has been authorized. If the request is allowed, the IP address will be stored for logging, tracking, and DNS automation later. The API will then respond with the credentials necessary to use the sceneserver MySQL db node (DigitalOcean). These credentials will then be stored in the local application during deployment. This also lets me avoid having MySQL running on the Docker-built nodes.
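As a Dockerfile step, that deploy-time authorization call might look something like this (a sketch only: sceneserver.com/api/ is the planned endpoint, but the query parameters, response format, and credentials file path are all assumptions, and the base image would need curl installed):

```dockerfile
# Hypothetical deploy-time authorization step.
# The endpoint path, parameters, and credentials file location are assumptions.
RUN curl -fsS "https://sceneserver.com/api/authorize?app=invoicebuilder" \
    -o /var/www/app/db.credentials
```

The build fails if the API refuses the request, which is exactly the behavior you want for an unauthorized deployment.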
The next phase of smashing the LAMP-stack website mold will be deploying completely without PHP and having the website application 100% API-driven, with the db and programming logic on subdomains and nodes that always exist. This is way cool.
Updating the App to Run on Ubuntu:
I made the repository config file public for the time being. I then added the necessary user and permissions on the MySQL node. I saved the database table creation as a local file, invoice.sql, for later. I then added a RUN statement to the Dockerfile to add the server’s private IP to /etc/hosts on the Docker container. This way the app will know where the db is hosted without using public DNS.
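The RUN statement described here would look roughly like this (the hostname and private IP are placeholders). One caveat worth knowing: Docker regenerates /etc/hosts when a container starts, so an entry written at build time may not survive into the running container; passing `--add-host` at run time is the more reliable route.

```dockerfile
# Sketch: point a hostname at the db node's private IP (placeholder values).
# Note: Docker manages /etc/hosts at container start, so prefer
# `docker run --add-host db-node:10.132.0.5 ...` if this entry disappears.
RUN echo "10.132.0.5  db-node" >> /etc/hosts
```

Either way, the app can then connect to `db-node` without any public DNS record existing.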
I then committed the current repo so I could test the Docker deploy, and did a manual pull. Now I can begin to test the app running on Ubuntu and PHP 7. I will rapid-develop here for a bit and leave the LAMP stack behind now.
While working with the local Ubuntu version of the repo on PHP 7, I had to adjust the Dockerfile to remove the default nginx site, add my invoice.conf site, make a change in the server setup (php7.0-fpm, php7.0-mysql), start php7.0-fpm, and change the required mysql_ statements to mysqli_, even adding the connection variable to some statements. Once this was done I could see my application running the starter template.
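The mysql_ to mysqli_ conversion follows a consistent pattern; the main gotcha, mentioned above, is that the mysqli_ functions take the connection link as an explicit argument (variable names below are illustrative, not from the actual repo):

```php
<?php
// Old PHP 5 style (the mysql_ extension was removed in PHP 7):
//   $link   = mysql_connect($host, $user, $pass);
//   mysql_select_db($dbname);
//   $result = mysql_query($sql);

// mysqli equivalents -- note the connection is now passed explicitly:
$link   = mysqli_connect($host, $user, $pass, $dbname);
$result = mysqli_query($link, $sql);     // link is the FIRST argument
$row    = mysqli_fetch_assoc($result);
```

Most of the porting work is mechanical once you spot every call site that needs the `$link` added.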
Next I need to make some changes to the codebase. This was initially developed in a subfolder as /invoice/ on the LAMP stack. It also used fully typed relative paths in some code. Since this is now standalone, the template paths will need to be defined. I then need to adjust the codebase to handle posting and saving changes.
Once I had the application properly inserting a new invoice, I needed to adjust for nginx routing of the mod_rewrite URLs. The LAMP stack used .htaccess to route /[invoice_number]/ back to index.php?invoice_number=$1. I had some struggles getting nginx to rewrite the folder into a _GET variable. This was supposed to be easy in nginx? I did get the request URI (“/ID/”) to pass through, so I can use this for now after chopping out the slashes.
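The nginx equivalent of that .htaccess rule is essentially a one-line rewrite (a sketch; it assumes the invoice number is purely numeric and that a separate location already handles .php files):

```nginx
# Sketch: replacement for the old .htaccess rule,
# routing /123/ to index.php?invoice_number=123.
location / {
    rewrite "^/([0-9]+)/?$" /index.php?invoice_number=$1 last;
    try_files $uri $uri/ /index.php;
}
```

With `last`, nginx re-runs location matching on the rewritten URI, so the request lands in the PHP handler with invoice_number populated in $_GET.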
I then went through further testing of see, send, save, and settings. I had to set up sendmail and adjust my automation cron a bit, and got messages sending. They did not arrive at Gmail at first, but after some public DNS entries the messages did arrive.
Working back on Docker Cloud, my build has failed. The command to copy in the invoice.conf nginx file and create the symlink failed. I made a few adjustments trying to get the Dockerfile fully ready for tomorrow. There can be NO typos in the Dockerfile. LOL. At Docker commit #10 in GitHub I am online and ready.
Day 3: 1/19/2017
Today was a long day of working with Ubuntu 14, further dockerizing the server setup commands with PHP 5 (NOT PHP 7), and learning how Docker stacks, services, and containers work. By the end of the day, at commit #26, my application is running and showing my index.html “testing” page. I quickly built Docker #27 with an invoice.conf that will ignore the .html file and run my application’s index.php. If everything works right, after the Docker repository builds, I can redeploy and see my application operational. Yay, that works. Now that this is working, it is time to do some rapid debugging on the ubuntu-14 server to get all the PHP working nicely again.
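The invoice.conf change amounts to telling nginx which index file wins (a sketch; the web root and the php5-fpm socket path are typical Ubuntu 14 defaults, not values verified from the repo):

```nginx
server {
    listen 80;
    root   /var/www/html;
    index  index.php;          # skip the leftover index.html "testing" page

    location ~ \.php$ {
        include       fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass  unix:/var/run/php5-fpm.sock;   # php5-fpm on Ubuntu 14
    }
}
```

Swapping the `index` directive is all it takes to flip a stack from the static test page to the real application.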
Now that I have my application working through a full deployment, I can focus on launching the service onto a load balancer and multiple DigitalOcean nodes. When the post-commit automated Docker build works, the service is then created on new DigitalOcean nodes dynamically. I should be able to see the application work across all the nodes. Now my identical website is online with 1 IP address on 2 different servers.
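In Docker's service model, that scaling step is a couple of commands (a sketch: the image and service names are placeholders, and Docker Cloud's UI drives the same underlying mechanics rather than these exact CLI calls):

```shell
# Sketch: run two replicas of the app as one service behind a single endpoint.
# "myrepo/invoicebuilder" is a placeholder image name.
docker service create --name invoicebuilder --replicas 2 -p 80:80 \
    myrepo/invoicebuilder

# Scale out later without redeploying anything by hand:
docker service scale invoicebuilder=3
```

The load balancer just needs to point at the published port; Docker spreads the replicas across the nodes.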
This is the successful test loop necessary to repeat this process for any project’s development life cycle. I cracked Docker in just a few days to create 2 live copies of my application using a dev stack, GitHub, Docker, DigitalOcean, and dynamic deployment, all with notifications back to Slack.
Day 4: 1/24/2017
This morning I did not do much with Docker beyond making sure Docker did its job, and it did. This is where Docker shines. I did some minor repo changes to satisfy the software working on its original dev stack (sceneserver.com/invoice/) and made 2 commits. After dropping my daughter off at school I came back, logged in to Docker, and verified that the new container was updated and online. Amazing.
After lunch I spent some time securing my Docker system by automating the retrieval of the db, user, pass, and host during the Dockerfile setup. This really took a very small amount of time. Now that the system works, I have an API call in my Dockerfile that will retrieve the db.credentials from my host. This also sends a notification email back to me showing the IP address the auth request originated from. I previously thought I was deploying this onto the DigitalOcean node (as one was required to get this far). I noticed that node is not actually used yet. So the next part is getting this off Docker Cloud and deployed onto the DigitalOcean node’s IP address.