Beginning A Docker Experience
I recently had the opportunity to take a class on Docker, something I had long been excited about, largely because of my interest in back-end development. Today I will walk you through what I learned and my experience with Docker.

Well, what is Docker?
If you take it directly from the source, Docker is a platform created for developers and sysadmins to build, run, and share applications with containers. Using these containers to deploy your applications is called containerization. Docker took the long-known concept of containers and created an easy way to deploy your own apps with it.
According to Scott Johnston, CEO of Docker, “Containers have been around for decades, and one of the reasons they did not take off was that they are a very difficult technology to use.”
Johnston goes on to explain that containers were previously used only by larger tech companies like Google, Facebook, and IBM. Docker capitalized on this by making containers much easier to use and understand.
Containers and Images
Essentially, a container is just a running process with encapsulation features that isolate it from the host and from other containers. Containers are very efficient, and you can replace or upgrade one without affecting any others. You can also use separate containers for different staging environments, such as development, test, and production.
A Docker image contains everything you need to run an application, including (but not limited to) the code, runtimes, and dependencies. A container is a running instance of an image, and you can create multiple containers from one image.
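To make the image-versus-container relationship concrete, here is a rough sketch of the commands involved (the image and container names are illustrative placeholders, not from my project):

```shell
# Build an image from the Dockerfile in the current directory
docker build -t myapp .

# Start two independent containers from that same image;
# each gets its own isolated process and filesystem layer
docker run -d --name myapp-one myapp
docker run -d --name myapp-two myapp
```

Replacing or upgrading one of those containers leaves the other untouched, which is what makes them so convenient for separate staging environments.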
The Difference Between Containers and Virtual Machines
Containers and virtual machines are alike in terms of their isolation and resource-allocation benefits. They differ in that containers virtualize the operating system, while virtual machines virtualize the hardware. My instructor, Dani, demonstrated the difference in efficiency by comparing how quickly each approach gets an operating system up and running. Virtual machines proved slower to boot and used more space because each one copies an entire OS.


My Dockerfile
With Docker I was able to deploy my Django project, GamePro, which I am currently working on. I created GamePro to showcase my favorite games and share them with others. These days I’m working on the front-end to get closer to my end goal. Although GamePro still has some issues to work out, Docker could still be used; whatever bugs you have in your code shouldn’t stop you from dockerizing your application.

The Dockerfile for my project uses three of the four commands that Docker’s documentation recommends as best practices for writing Dockerfiles. Those commands are FROM, COPY, RUN, and CMD. FROM is the first layer; it pulls a base image from Docker Hub as a starting point. COPY copies your code into a directory inside the image. RUN executes commands, such as installing dependencies, while the image is built. CMD provides the defaults for your executing container. I haven’t implemented CMD since I have no need for it yet, but I will do more research to see what its full capabilities are.
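As an illustration, a Dockerfile for a Django project along those lines might look something like this (the Python version and file paths here are assumptions, not my exact file):

```Dockerfile
# FROM: pull a base image from Docker Hub as the starting point
FROM python:3

# Create and switch to a working directory for the app
WORKDIR /code

# COPY: copy the project code into the image
COPY . /code/

# RUN: install the project's dependencies while the image builds
RUN pip install -r requirements.txt
```

A CMD instruction could be added at the end to set the container’s default command, but as mentioned above, my project gets by without one.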
My Docker-Compose
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. According to Docker, using Compose is a three-step process:
1. Defining your app’s environment with a Dockerfile so that it can be reproduced.
2. Defining the services that make up your app in docker-compose.yml so that they can be run together in an isolated environment.
3. Running docker-compose up, at which point Compose starts and runs your application.
This is what my docker-compose file looked like

In my compose file I created a web service, since my Django website requires one. The command line overrides the container’s default command so that my project runs whenever I use docker-compose up, and the ports line exposes my ports.
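A minimal docker-compose.yml along these lines might look roughly like the following (the service name, command, and port numbers are assumptions for illustration, not my exact file):

```yaml
version: "3"

services:
  web:
    build: .
    # Override the image's default command so the Django site
    # starts whenever docker-compose up is run
    command: python manage.py runserver 0.0.0.0:8000
    # Map port 8000 in the container to port 8000 on the host
    ports:
      - "8000:8000"
```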
Because I had used docker-compose for my settings, I also used it to build my application with the following command.
docker-compose build
Deploying with CapRover
After I was able to dockerize my project, I wanted to deploy it using CapRover. CapRover is an easy-to-use app deployment and web server manager for many platforms such as NodeJS, Python, MongoDB, and PHP, among far too many others to list here. CapRover is useful because of how fast it is, and according to its documentation it uses Docker, nginx, LetsEncrypt, and Netdata under the hood. It gives you a UI so simple and easy to understand that you wouldn’t believe how complicated things are underneath.
In order to deploy my project, I needed a domain. Luckily, my instructor Dani recommended GitHub’s Student Developer Pack, which gives college students access to many perks, one of them being free domains. I was able to get a .tech domain from https://get.tech/ at a cost of exactly $0.00; the domain I wanted was originally $49.99, so I was very appreciative of this opportunity from both get.tech and GitHub.
To get my server set up I used DigitalOcean, and our class received a $100 credit from Dani to purchase a plan for our droplets. Droplets are Linux-based virtual machines that run on top of virtualized hardware. I purchased a CapRover droplet, and it was very easy to configure.
Afterwards, I had to install CapRover using the command
npm install -g caprover
After that, I went straight into setting up my server using the command
caprover serversetup

In the screenshot above I began setting up my server and then followed the prompts. The setup asks for your server’s root domain, which for me was the .tech domain I had purchased, and then it only asks for a password for your CapRover dashboard and a valid email address.

This is my dashboard, and as you can see the CapRover dashboard is kept as simple as possible and is very easy to use and understand. Here is where you create your app and get instructions on how to deploy it. Several deployment methods are provided; I recommend the GitHub method.
One of the coolest features that comes with this is the NetData monitoring tool, a popular and powerful tool that provides real-time performance and health monitoring. It is fast and efficient, and it runs on all systems without disruption.

As you can see with NetData there is so much you can access which makes CapRover even better than I already thought it was.
Overall, I had such a wonderful experience with Docker and CapRover. I feel like I learned so much in a very short span of time, and if anyone is interested in learning more, please go for it and explore these topics. I am very appreciative of my instructor Dani for all her help during this class, and also of my peers, who did not hesitate to assist me whenever I needed them.