I often see the ‘mine is the bestest* of all’ fights about which programming language is superior to all the others. This is my statement in these fights.
*pun intended
Languages I have written in
The first program I ever wrote was a ‘Who Wants to Be a Millionaire’-like console game written in Pascal. During my studies I programmed in tons of languages, starting with assembly, through C/C++, Java, Lisp and others, but the one I truly liked was C#, so I began my professional career as a C# developer. Still, from time to time I put my hands on new languages, just to taste them and see how they work and feel.
The one I tried most recently was Python, and I am completely astounded by how concise it is. I used to be a bit skeptical of interpreted languages. Until not so long ago, I thought that a language with no rigid structure, such as a defined entry point (a main function) or statically typed variables, would be hard to work with. And then Python happened.
At the beginning, Python was quite unfamiliar to me, of course. It lacked braces and a rigid program structure. Modules in place of namespaces or assemblies – I am still not sure which are closer to Python’s modules – were a bit misleading, not to mention spaces or tabs in the role of code blocks. Yet, after just two weeks of learning it, I wrote in one week a program that in C# had taken me about a month of work. It was an app that simulated a voting process based on political views in 5 categories on a galactic scale, with millions of voters and 5 parties they could vote for. And although it was not as fast as the one written in C#, it finished the calculations in a reasonable time. The next thing I did was a simple implementation of a blockchain-like messaging algorithm, which took me about 40 lines of Python code. The only code I wrote was the actual ‘meat’ of the project. There was no need to define namespaces, access modifiers or types for variables (which in Python are inferred from usage), or to write braces. I sat down and wrote just the code I wanted to try. Now, thanks to learning Python, and at the cost of runtime performance, writing a proof of concept or a prototype app is a matter of hours instead of days or weeks for me.
Learning every new language teaches me that a programming language is just a tool. We should use programming languages just like a carpenter uses his tools, selecting the right one, be it a chisel or a saw, for the task he is about to perform. Also, we do not always have to use the best, fastest or most compact solution we can find. Sometimes it is enough to use the one that is easier to write but slower in execution. For example, if you write a messenger for 20 people, you most probably do not need to serve millions of requests per second. That means you can choose a language in which programs run slower but are easier to write.
Dear reader
If you are about to write a real-time system, then choose a compiled, low-level language such as C or C++. If you are writing an app that has to integrate with different third party APIs, you should probably go for Java or C#. For prototyping or scripting I would recommend Python.
Just stay open-minded and remember that every language was created for a specific set of tasks, with different design decisions behind it. Learn new languages. Compare them. Use them. Be like the carpenter. Know your toolset. Keep expanding it. Select the proper tool for any given task and respect the ones you are not using right now. One day you may find that they fit a specific need of yours better.
And please, do not join the ranks of ‘my programming language is better than yours’-developers.
Image: Part of ‘Battle of Oroi-Jalatu’, public domain
In the last post I described my requirements for the Soundboard app. Now it is time to decide what can be considered the minimum viable product.
Milestones – first try
A milestone is a specific point of high importance or impact in a project. Taking into account the requirements I wrote down in the last post, I would split them into the following categories:
Audio processing
Play a sample
Stop a sample
Stop all samples
Loop a sample
Mix multiple samples
Set volume for a sample
Option to stop a sample if another one is started
Option to lower a sample’s volume if another’s is raised
Sample sources
Play a sample from an mp3 file
Play a sample from an aac file
Play a sample from a flac file
Play a sample from an internet stream
Soundscapes
Group samples
Group scenes
Save soundscape’s metadata to an archive
Save soundscape’s metadata and samples to an archive
Save soundscape’s samples in order
Load a soundscape from an archive
Load soundscape’s samples in order
Scenes
Save samples’ play state
Load samples’ play state
Save samples’ volume
Load samples’ volume
Save samples’ correlations
Load samples’ correlations
User interface
Touch input handling
Soundscapes list
Soundscape view
Scenes list in a soundscape
Text label for a sample
Text label for a soundscape
Text label for a scene
Image label for a sample
Image label for a soundscape
Image label for a scene
Now, the highest-level items above could each be a milestone all by themselves. In my opinion, though, although completing such a milestone would mark a significant step in the application development process, it would not represent a high-impact functionality leap, as the application after reaching such a milestone would not be usable at all.
Milestones – MVP
The first milestone in my project will be met when I am able to use the application at an RPG session. That first testable version of an application is called the minimum viable product. It has all the necessary features implemented, so it may already be used in the target environment. In my project that means that, as far as audio processing is concerned, the app has to be capable of playing, stopping and looping a sample. For the sample source I choose files, as I think they will be easier to implement. Also, for the MVP I will limit myself to one file type; mp3 should do the job for now. Saving samples will only be required later, when I implement soundscapes. For now I do not care about soundscapes and scenes at all; all I need is to be able to play samples, even ungrouped. Last but not least, the user interface I need for now is limited to a samples grid, play/stop buttons for the samples and text labels for the samples.
Summary
That is all for today, the first milestone is defined. Now I will add it to the GitLab issue tracker and I am ready for development.
In the next post I will describe the project set-up and project structure.
I can’t imagine the world without music. I listen to it all day long and everywhere I go.
The need
Recently I started an RPG campaign in which I am playing the role of the game master. Among the myriad of things to prepare, the music was one of the first I thought of. I searched the web for a suitable piece of software and found out that there is none that is touch-friendly. In my opinion it’s crucial to have one, because during a session you have to switch the background music in the blink of an eye. There is no time to waste on searching for the mouse and then moving the cursor to a small play icon. On the other hand, although all the apps I found could be operated through a touch interface, they were not designed for that purpose. So again, too-small icons require a precision that is not achievable during a session on a touch screen. What I need, then, is an app that displays large sample icons that can be easily triggered with a finger.
Requirements
The piece of software I am going to write has to meet some requirements to be functional and usable.
First of all, as an audio processing app it has to be able to play single sounds and to mix them. For shorter samples it has to be able to loop them. Also, some samples, such as birds singing or water droplets falling from a cave vault, must be much quieter than others, for example a river or heavy traffic, so the app is required to set the volume for each sample. At first I thought that the app should have a master volume control, but then I decided that every modern operating system already has a built-in master control, so implementing one in my app would be a waste of time. The next thing to do would be to add correlation options between the sounds. What I mean is that if one sound is played, another should be turned off, or if you raise the volume of one, the volume of another should be lowered.
Regarding the source of the sounds, I decided that my app should be able to play samples from attached data storage and from internet streams. Supported formats would be mp3, aac and flac.
In an RPG game players often change their location; let’s say they start in a forest, then get to a city where they find that the villain they are chasing has a secret base in a closed mine. Each of these sceneries has different background sounds. For example, in a cave you can hear bats squeaking, water dripping and rocks sliding, but in a village there should be some crowd noise, a hammer hitting an anvil and, from time to time, a carriage riding on a paved street. I call these sceneries soundscapes. A soundscape is a group of samples that fit a scene. There should be a way to save and load a soundscape. While saving, there should be an option to save only which samples are used (the metadata) or both the soundscape’s metadata and the samples in one archive. While loading a soundscape, samples should be positioned in the same place they were in during saving. There should also be a way of saving the currently playing sounds in a soundscape as scenes. For example, the soundscape is ‘the city’ and the scenes are ‘harbour’, ‘marketplace’, ‘administration building’ or ‘night’. They all share a set of samples, but with some small differences in volume. Also, not all samples are active in a scene, but most scenes in a soundscape use the same ones.
Last thing to discuss is the UI. As I wrote earlier, I want to use the app on a touch-enabled device, so I decided that the app’s interface will be using tiles for samples and tabs for soundscapes. Both the soundscape and the sample button should have a text name and an optional image. I am not yet sure how to orchestrate the scene interface so this topic will be considered later in development.
What’s next?
The requirements above are a mix of obligatory and ‘nice to have’ functions. In the next post I will create the vision of the minimum viable product – the very basic functionality the app has to offer to be considered done and usable.
Image: Sombre by Philippe Rouzet
I’ve just joined the Get Noticed! competition. This is my welcome post in which I will briefly introduce the contest’s idea and why I decided to join.
(one ‘after’ too many) After Get Noticed! 2016
This year’s edition is already the third one. Last year I did not take part in it, but I went to the closing gala. After that event, after traveling there and back again over 600 km, after meeting that year’s competitors, after seeing all the positive attitude, and finally after the after-party, I decided that if Maciej Aniserowicz (the man behind the idea) announced the next edition of the competition, I would surely join it. And so I did.
Rules of the contest
During the contest all participants are obliged to publish at least two blog posts a week between March 1st and May 31st, for at least 10 weeks. That means that each participant has to produce at least 20 blog posts during the competition.
Another rule is that each participant has to be developing an open-source project of their choice for the duration of the competition. Each week one post has to relate directly to the open-source project, and the other has to at least loosely refer to IT.
Why write a blog
In the next section I will discuss why exactly I have joined Get Noticed!, but first I have to tell you why it is worth writing a blog at all.
First of all, a blog is the greatest knowledge repository and consolidation tool I have ever used.
If you are learning something, write about it. Write a tutorial or write down your notes on its more interesting parts. Thanks to that, whenever you have to recall something, you can go to your blog and find the article you need. It may be hard to believe, but it’s faster than googling it. What’s more, because you wrote it yourself, it’s easier to comprehend than any third-party tutorial. Also, by writing about what you are learning, you continuously consolidate your knowledge through repetition.
Despite the fact that I have written only two blog posts on Docker, I have already used them as entry-level training material for my colleagues. I could focus on my work while they were reading the text, and then we could discuss the details. That’s another value of blogging: you may use your posts as training material.
Last but not least, a blog is a great way to attract potential job offers and to close them. It shows your level of knowledge and your communication skills. As an added value, it shows that you are broadening your skillset outside working hours. As such, you shine among other candidates.
Why did I join the competition?
I treat this competition as motivation to keep blogging regularly and more often. Allegedly, after 90 days of doing something it stops being a chore and becomes a habit. I hope that it’s true, and my experience so far suggests it is. Therefore I am participating to make this chore-to-habit transition.
I’ve also seen how a blog may affect a community. I would love to be a part of that and contribute my own input to our collective knowledge, even though I am not a trendsetter. I am eager to educate myself, so why not educate others in the process? As an old master told his apprentice: “If you want to master something, teach it to others”.
While researching how to use Docker, when I came to the point of running interactive containers, I got interested in how Docker would behave if the -i or -t option was passed on its own.
Below is what I found out.
Let’s start with -i only:
console
badamiak@dockerhost:~$ docker run -i debian
ping 172.17.0.1 -c 5
PING 172.17.0.1 (172.17.0.1): 56 data bytes
64 bytes from 172.17.0.1: icmp_seq=0 ttl=64 time=0.097 ms
64 bytes from 172.17.0.1: icmp_seq=1 ttl=64 time=0.077 ms
64 bytes from 172.17.0.1: icmp_seq=2 ttl=64 time=0.082 ms
64 bytes from 172.17.0.1: icmp_seq=3 ttl=64 time=0.095 ms
64 bytes from 172.17.0.1: icmp_seq=4 ttl=64 time=0.076 ms
--- 172.17.0.1 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.076/0.085/0.097/0.000 ms
tcpdump
badamiak@dockerhost:~$ sudo tcpdump -i docker0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on docker0, link-type EN10MB (Ethernet), capture size 262144 bytes
23:40:03.905470 IP6 fe80::42:acff:fe11:2 > ip6-allrouters: ICMP6, router solicitation, length 16
23:40:15.028432 ARP, Request who-has 172.17.0.1 tell 172.17.0.2, length 28
23:40:15.028448 ARP, Reply 172.17.0.1 is-at 02:42:16:34:b1:f4 (oui Unknown), length 28
23:40:15.028453 IP 172.17.0.2 > 172.17.0.1: ICMP echo request, id 5, seq 0, length 64
23:40:15.028475 IP 172.17.0.1 > 172.17.0.2: ICMP echo reply, id 5, seq 0, length 64
23:40:16.030929 IP 172.17.0.2 > 172.17.0.1: ICMP echo request, id 5, seq 1, length 64
23:40:16.030956 IP 172.17.0.1 > 172.17.0.2: ICMP echo reply, id 5, seq 1, length 64
23:40:17.032375 IP 172.17.0.2 > 172.17.0.1: ICMP echo request, id 5, seq 2, length 64
23:40:17.032405 IP 172.17.0.1 > 172.17.0.2: ICMP echo reply, id 5, seq 2, length 64
23:40:18.034833 IP 172.17.0.2 > 172.17.0.1: ICMP echo request, id 5, seq 3, length 64
23:40:18.034869 IP 172.17.0.1 > 172.17.0.2: ICMP echo reply, id 5, seq 3, length 64
23:40:19.036412 IP 172.17.0.2 > 172.17.0.1: ICMP echo request, id 5, seq 4, length 64
23:40:19.036441 IP 172.17.0.1 > 172.17.0.2: ICMP echo reply, id 5, seq 4, length 64
23:40:20.041641 ARP, Request who-has 172.17.0.2 tell 172.17.0.1, length 28
23:40:20.041679 ARP, Reply 172.17.0.2 is-at 02:42:ac:11:00:02 (oui Unknown), length 28
As we can see, input is redirected to the container and the container’s output is redirected back to the host.
Now for -t only:
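The invocation is analogous to the -i case, just with -t instead of -i; the ping command is then typed into the attached session (a sketch of the commands only, output omitted):
console
badamiak@dockerhost:~$ docker run -t debian
# 'ping 172.17.0.1 -c 5' typed into the attached session
tcpdump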
badamiak@dockerhost:~$ sudo tcpdump -i docker0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on docker0, link-type EN10MB (Ethernet), capture size 262144 bytes
23:46:15.081837 IP6 fe80::42:acff:fe11:2 > ip6-allrouters: ICMP6, router solicitation, length 16
23:46:19.089818 IP6 fe80::42:acff:fe11:2 > ip6-allrouters: ICMP6, router solicitation, length 16
^C
2 packets captured
2 packets received by filter
0 packets dropped by kernel
Although we called a ping command, ‘tcpdump’ shows that nothing was sent. That means that the input was not redirected to the container.
In summary, it is possible to work in interactive mode with -i alone, although you will see the output of the commands you call only after they finish execution.
The -t option activates terminal emulation (a pseudo-TTY), meaning that the output of the executing command is printed while it runs.
Just compare how Docker behaves if you call:
docker run debian ping google.com -c 5
and
docker run -t debian ping google.com -c 5
You may prefer to use -t alone if all you want is to watch the output of the container’s default command in real time, without interacting with it.
In the previous part of the Docker quickstart series, I covered the basic concepts behind Docker and its installation. I also showed how to run a simple test container from the ‘hello-world’ image with the command:
docker run hello-world
In this part of the Docker quickstart, I will start with running containers in interactive mode, with volume mounting and port forwarding. Then I will show you how to attach to a running container, read a container’s output, and briefly explain container configuration info.
Docker run
parameterless
docker run <image>
In the first part of this quickstart, I showed you how to test whether your instance of Docker is running by starting a sample container. The CLI (command-line interface) command I used then was ‘run’.
In the most basic call, without any options, it just starts an image and executes its ENTRYPOINT or CMD. ENTRYPOINT and CMD are Dockerfile directives and will be covered in another part of this quickstart. For now, all you have to know is that ‘run’ executes the default command specified for the image.
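If you are curious what the default command of an image is, you can peek at its metadata with ‘docker inspect’; for example, for the Debian image used later in this post (the format template is just one way to pull out that single field, and the image has to be present locally):
console
badamiak@dockerhost:~$ docker inspect --format '{{.Config.Cmd}}' debian
[/bin/bash]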
with specific command
To start a container with a user-specified command, call:
docker run <image> <command>
Let’s say that all you want to do with your container is to test whether it can connect to the host. For that I will use the most basic tool there is, which is ping. All I have to do to start ping from the container is call:
docker run debian ping 172.17.0.1 -c 5
console
badamiak@dockerhost:~$ docker run debian ping 172.17.0.1 -c 5
PING 172.17.0.1 (172.17.0.1): 56 data bytes
64 bytes from 172.17.0.1: icmp_seq=0 ttl=64 time=0.096 ms
64 bytes from 172.17.0.1: icmp_seq=1 ttl=64 time=0.069 ms
64 bytes from 172.17.0.1: icmp_seq=2 ttl=64 time=0.077 ms
64 bytes from 172.17.0.1: icmp_seq=3 ttl=64 time=0.067 ms
64 bytes from 172.17.0.1: icmp_seq=4 ttl=64 time=0.062 ms
--- 172.17.0.1 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.062/0.074/0.096/0.000 ms
So we’ve specified that Docker should create a container using the Debian image, and that inside the container it should ping the Docker host with 5 packets.
interactive mode
Another way to run a container is to start it in interactive mode. For example, if you start the Debian image without any options, the container will stop immediately. That is because its default command, bash, exits right away when there is no input attached to it.
Now let’s assume that you want to interact with your container. To do so, all you have to do is add the -i and -t switches to Docker’s ‘run’ command. The -i switch runs the container in interactive mode, meaning your input will be redirected to the container. The -t switch is used to emulate a terminal.
The command pattern is:
docker run -it <image> <command>
For example, as bash is the default command of the Debian image, you can skip the <command>. So, to start it in interactive mode, all you have to type is:
docker run -it debian
console
badamiak@dockerhost:~$ docker run -it debian
root@2f41c9ed26cb:/# exit
exit
badamiak@dockerhost:~$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2f41c9ed26cb debian "/bin/bash" 7 seconds ago Exited (0) 4 seconds ago admiring_hoover
badamiak@dockerhost:~$
As you can see, I am connected to the Debian container’s shell, logged in as root.
At this point I got interested in what would happen if -i or -t was passed independently. Details on what happens under such conditions are here: Docker ‘run’ -i/-t independently.
removing container
Up to this moment, every time we started a container it was left on the drive after it finished, ready to be re-executed. Such behavior is not always desired, e.g. when we are not going to reuse existing containers, because over time the number of stored, non-running containers will pile up to a tremendous level and make container management hard. The easiest way to get rid of an exited container is to remove it right after its execution stops. Fortunately, we can specify, as part of the ‘run’ command itself, that the instance of an image we create should be removed when it finishes. The switch that does that is --rm.
docker run --rm debian
Now let’s see an example. First, I list all containers that are stored in the exited state:
console
badamiak@dockerhost:~$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
badamiak@dockerhost:~$
So I start with no stored containers. Then I start a Debian container three times without the --rm switch:
console
badamiak@dockerhost:~$ docker run debian
badamiak@dockerhost:~$ docker run debian
badamiak@dockerhost:~$ docker run debian
badamiak@dockerhost:~$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6252be2356ed debian "/bin/bash" 4 seconds ago Exited (0) 3 seconds ago modest_allen
1e10ce08a831 debian "/bin/bash" 17 seconds ago Exited (0) 15 seconds ago thirsty_lamarr
bca7d5ae1d47 debian "/bin/bash" 22 seconds ago Exited (0) 20 seconds ago berserk_engelbart
badamiak@dockerhost:~$
As you can see in the console listing, I have three stored containers in the exited state, one for each time I called ‘docker run’. Then I start another Debian container, but this time with the --rm switch:
console
badamiak@dockerhost:~$ docker run --rm debian
badamiak@dockerhost:~$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6252be2356ed debian "/bin/bash" 15 seconds ago Exited (0) 14 seconds ago modest_allen
1e10ce08a831 debian "/bin/bash" 28 seconds ago Exited (0) 27 seconds ago thirsty_lamarr
bca7d5ae1d47 debian "/bin/bash" 33 seconds ago Exited (0) 32 seconds ago berserk_engelbart
badamiak@dockerhost:~$
As you can see, despite having started another container, it is not visible in the container list, as it was removed right after its execution finished.
This is just one of the two main methods for removing containers. It is more than enough as long as you don’t use daemon containers and don’t reuse existing ones. I will describe the other method another time, in an article about container management.
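As a quick preview, exited containers can also be removed by hand with ‘docker rm’; a minimal sketch, reusing the container name from the listing above:
console
badamiak@dockerhost:~$ docker rm modest_allen                        # remove a single exited container by name
badamiak@dockerhost:~$ docker rm $(docker ps -aq -f status=exited)   # remove all exited containers at once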
detached/daemon mode
Up to this point, I’ve described how to run a container that does its job and then exits, but sometimes you need to run your container as a background service. This container mode, called ‘daemon’ or ‘detached’ mode, may be required for file hosting services, database servers, monitoring services or other continuously running applications. To run such an application, you add the ‘-d’ switch to the ‘docker run’ command.
What the -d switch does is start the container and then detach the console from its process. For example, if you start an nginx server, you probably want it to keep running even if you disconnect from your Docker host machine. To achieve that, you have to detach your console from the nginx instance. To do that you simply call:
docker run -d nginx
As a result, Docker will print the container id and detach the console.
In the sequence below, first I simply start an nginx container; the console is attached to the process. Then I break the program with Ctrl+C. ‘ps’ returns no results for nginx processes, as these were killed by the break. After that, I start the container in detached mode (the command sequence is sketched after this list). What Docker does then is the following:
starts the nginx container
detaches the console, which means that the container process is running in an independent context
prints out the container id.
Now, when I call ps, it shows that nginx processes are running. I may now exit the console and the nginx processes will still be running in the background.
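Reduced to the bare commands, the sequence looks roughly like this (the exact ps invocation is a guess and all output is omitted):
console
badamiak@dockerhost:~$ docker run nginx        # console stays attached to the nginx process
^C                                             # break with Ctrl+C, nginx gets killed
badamiak@dockerhost:~$ ps aux | grep nginx     # no nginx processes left
badamiak@dockerhost:~$ docker run -d nginx     # Docker prints the container id and detaches
badamiak@dockerhost:~$ ps aux | grep nginx     # nginx processes are running in the background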
Not so long ago I had to research how to run a dotnet application on Docker. Thanks to that, I learned the basics of Docker. The text below is based on the notes I wrote down in the process.
Docker is an application packaging platform and runtime. Each image (Docker’s name for an application package) consists of layers containing the application’s dependencies and the application itself. Layers may contain an operating system, a runtime, libraries, etc. What’s important is that each layer is read-only. Thanks to this packaging, one may start several isolated instances of one image on one physical machine.
How it differs from a virtual machine
Taking the above into account, one could say that Docker is in fact a virtual machine. Well, things get a bit more complicated here.
Docker is not a virtual machine, yet it virtualizes some resources. For example, each container has an isolated file system and network (although this can be changed). On the other hand, each container process has its hook on the host machine. Just take a look at the ‘ps’ output:
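The listing came from a plain ‘ps’ call on the host, something along these lines (the exact flags do not matter):
console
badamiak@dockerhost:~$ ps -ef | grep -E 'docker|dotnet'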
As you can see, when I start a Docker container (an instance of an image), the host starts several new processes. I am talking about PIDs 19495, 19510, 19522 and 19535. The container I started is a dotnet core web host, which is why there are dotnet processes running. As I don’t have dotnet installed on the host itself, this proves that the container’s processes are delegated to Docker’s host.
When you consider the above, the difference between Docker and a virtual machine becomes clearer: VMs virtualize the whole hardware, whilst Docker exposes “bare-metal” resources as they are seen by the host machine.
Another difference is that setting up a VM requires you to store the whole stack used by your application, which means that VMs have to store a lot more data than containers. VMs use virtual storage devices that contain a bootloader, an operating system, drivers, libraries and, last but not least, your hosted application. If you want to scale an application horizontally, you have to clone a complete virtual drive image. Docker’s architecture limits the amount of data that has to be stored for a container. By providing read-only layers it makes it possible to reuse an image, so several containers may run using a single image. On top of all the read-only layers Docker inserts a read/write persistence layer. That persistence layer, together with the container config (network, volumes, environment vars, etc.), is what differentiates containers from each other. See the console output below to notice that running Ubuntu under Docker consumed about 130 MB for the image, while starting a new instance adds under 150 KB of disk usage. The reason is that both containers use one image; the only stored information is the difference between the image and the container. It works a bit like snapshots in VMs, but with Docker you can run several snapshots simultaneously.
console
badamiak@dockerhost:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
badamiak@dockerhost:~$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
badamiak@dockerhost:~$ df /
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 7736784 2429116 4891620 34% /
badamiak@dockerhost:~$ docker pull ubuntu
Using default tag: latest
latest: Pulling from library/ubuntu
f069f1d21059: Pull complete
ecbeec5633cf: Pull complete
ea6f18256d63: Pull complete
54bde7b02897: Pull complete
Digest: sha256:bbfd93a02a8487edb60f20316ebc966ddc7aa123c2e609185450b96971020097
Status: Downloaded newer image for ubuntu:latest
badamiak@dockerhost:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu latest 0f192147631d 8 days ago 132.8 MB
badamiak@dockerhost:~$ df /
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 7736784 2595552 4725184 36% /
badamiak@dockerhost:~$ docker run --name instance1 ubuntu
badamiak@dockerhost:~$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ec6458250c9b ubuntu "/bin/bash" 3 seconds ago Exited (0) Less than a second ago instance1
badamiak@dockerhost:~$ df /
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 7736784 2595672 4725064 36% /
badamiak@dockerhost:~$ docker run --name instance2 ubuntu
badamiak@dockerhost:~$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2a1a5090e8e5 ubuntu "/bin/bash" 1 seconds ago Exited (0) Less than a second ago instance2
ec6458250c9b ubuntu "/bin/bash" 4 seconds ago Exited (0) 1 seconds ago instance1
badamiak@dockerhost:~$ df /
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 7736784 2595812 4724924 36% /
Installing Docker Engine
There are two ways to install Docker.
The easy way, using a shell script provided by Docker Inc.
The hard way, if you don’t trust third-party shell scripts. The hard way is described on the Docker page.
While learning Docker I used the easy way, so that is the one I will describe here. With the easy method you don’t have to care about any dependencies or system setup (except for installing the system itself, of course). As I am no Linux geek, I decided to go with it.
Basically all you have to do to install Docker is to call:
wget -qO- get.docker.com | sh
What it does is:
“wget -qO- get.docker.com” downloads the script from ‘get.docker.com’. The ‘-q’ flag stands for quiet and means that ‘wget’ won’t write progress to standard output (stdout). The ‘-O’ flag redirects the downloaded content to the specified file. By using ‘-O-’ we basically say that ‘wget’ should write the output to stdout, which I use in the next step.
By using the pipe sign I redirect the output of the left command to the right command’s standard input stream (stdin).
‘sh’ reads and executes commands from stdin. Because I’ve redirected wget’s stdout to ‘sh’, the ‘sh’ command executes the downloaded script. (An equivalent two-step form is sketched below.)
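If piping a script straight into a shell feels too opaque, the same thing can be done in two explicit steps; a minimal sketch (the file name is arbitrary):
console
badamiak@dockerhost:~$ wget -q -O install-docker.sh get.docker.com   # save the script to a file instead of stdout
badamiak@dockerhost:~$ sh install-docker.sh                          # review it first if you like, then run it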
After the script finishes, you may call ‘docker -v’ to verify the installation.
badamiak@dockerhost:~$ docker -v
Docker version 1.11.2, build b9f10c9
Test run
After the installation you may want to run your first container. Docker provides an official image for a hello-world application. All you have to do is call ‘docker run hello-world’. If everything is working right, you will see the line ‘Hello from Docker!’ in the output.
console
badamiak@dockerhost:~$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
c04b14da8d14: Pull complete
Digest: sha256:0256e8a36e2070f7bf2d0b0763dbabdd67798512411de4cdcf9431a1feb60fd9
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker Hub account:
https://hub.docker.com
For more examples and ideas, visit:
https://docs.docker.com/engine/userguide/
badamiak@dockerhost:~$
Summary
After reading through this post you should be able to run your own instance of Docker and also a sample application contained in the hello-world image.
I hope that I’ve also explained the basic concepts behind Docker and its architecture.
In the next part of this tutorial I will cover running a parametrized container, setting up virtual networks and attaching host directories as volumes to a container.
If you have any questions or remarks please write a comment and I shall soon address your concerns.