Q2 summary

Long time no see

So, I have not been posting for some time. This post is to update you (and to make a summary for myself) on what I was doing during that time.

April

At the beginning of April I posted my last piece of content to this blog.
Then I stopped blogging, but that does not mean that I stopped coding.

soundboard

First of all, I took all the features I mentioned in Milestones, Minimum Viable Product and wrote them down in the GitLab issue tracker. Then I assigned some of them to the MVP milestone.

fsharp

While still thinking about which language to write the soundboard in (I am considering C#, F# or Python), I took a free F# course available on Udemy (https://www.udemy.com/starting-fsharp/learn/v4/content), which I recommend as it covers the very basics of functional programming and F#.

Finishing the 4.5-hour course took me approximately a week (coding included) and I was awarded a certificate of completion (https://www.udemy.com/certificate/UC-9BZ4L4Y9/). The code I produced during the course is available on my GitHub: https://github.com/badamiak/udemy-fsharp-course.

May

gitlab to github

I started May by moving all my finished projects from my private repository to GitHub (https://github.com/badamiak).

ssl for gitlab

Then I moved on to DevOps work and introduced an SSL certificate to my private GitLab instance. The goal was to make an encrypted SSL channel the default way of transferring all project files instead of plain, unencrypted HTTP.

The certification path in my case has only two levels: first the private, self-signed root CA (Certification Authority), then a subdomain certificate signed by that private CA.

At first I had some problems with my certification chain because my certificates had no CN (Common Name) entry. Then I tried to generate the subdomain certificate with a proper CN, again without success, this time due to the missing CN record in the root certificate. After regenerating the root CA certificate, providing proper CN values for both the root and the subdomain certificates, and adding my CA to my trusted CA list, my SSL connections were reported as secure, at least by the Firefox and Edge browsers. Chrome still throws a 'wrong CN' error for the subdomain certificate, but for now that is fine with me as I do not use that browser. Still, it should be fixed in the near future.
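
For reference, a two-level chain like this can be produced with a handful of openssl calls. The snippet below is only a sketch with placeholder names (rootCA, gitlab.example.com), not the exact commands I used; as a side note, Chrome's 'wrong CN' complaint usually comes from it ignoring the CN field and expecting a subjectAltName entry instead.

openssl genrsa -out rootCA.key 4096                                   # private key of the root CA
openssl req -x509 -new -key rootCA.key -days 3650 \
    -subj "/CN=Example Root CA" -out rootCA.crt                       # self-signed root certificate with a proper CN

openssl genrsa -out gitlab.key 2048                                   # key for the subdomain
openssl req -new -key gitlab.key \
    -subj "/CN=gitlab.example.com" -out gitlab.csr                    # signing request carrying the subdomain CN
openssl x509 -req -in gitlab.csr -CA rootCA.crt -CAkey rootCA.key \
    -CAcreateserial -days 825 -out gitlab.crt                         # subdomain certificate signed by the private CA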

drawing course

Finally, I started a drawing course (available at https://www.udemy.com/the-ultimate-drawing-course-beginner-to-advanced/ as of the date this post was published) and finished the first two sections of it.

The first section is a control drawing that will be used to compare my drawing skill at the beginning of the course with my skill after finishing it; the second one focuses on the most basic element of every drawing: the line.

June

python

A while ago I started learning the Python language. After finishing the 'Learn Python the Hard Way' course (https://www.learnpythonthehardway.org) and another one on Udemy (only the Python basics section, without any frameworks), I decided to improve my Python scripting skills by solving Project Euler (https://projecteuler.net/) problems. So far I have solved the first six of them.

While solving the problems mentioned above I used the 'git flow' tool. I had never used that toolset before, but I had heard a few words about it here and there, so I decided to test it. After using it for a while I find it to be quite a useful and time-saving tool.
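
For those who have not met git flow yet, my workflow looked more or less like this; the branch name below is just an example, not one of my actual branches:

git flow init                       # one-time setup of the master/develop branch layout
git flow feature start euler-006    # open a feature branch for a single problem
# ...write and commit the solution...
git flow feature finish euler-006   # merge it back into develop and remove the feature branch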

first apprentice

A friend of mine, after a few months of training, has started looking for his first full-time job as a PHP developer. I consider him a kind of apprentice of mine, as I helped him solve some issues he had in his training project. I also taught him the basics of git and briefly refreshed his regex knowledge. Over the three months covered in this post I gave him directions on what to read or search for while solving the problems he encountered, and on what he should learn next to start his career in IT.

drawing course

Throughout June I finished the next two sections of the drawing course, which covered drawing shapes, adding shadows and composing drawings out of basic geometric objects.

Q3 plans

July is a summer break for me, so my plans for that month are to chillax a bit.

It seems that some of my friends would like to learn to code, so I am now preparing a short course covering the very basics of software development, such as logic, program flow and basic tooling. In the course I will use Python, as it has the easiest syntax of all the programming languages known to me.

Because of that course I will spend the whole of August preparing the material. Then I will take a short one-week break at the beginning of September and start the course mid-month.

Image: Time by ashley.adcox, licence CC BY-NC-ND 2.0

Programming languages – the battle royale

I often see 'mine is the bestest* of all' fights over which programming language is superior to which. This is my statement in those fights.
*pun intended

Languages I wrote in

The first program I ever wrote was a console 'Who Wants to Be a Millionaire'-like game written in Pascal. During my studies I programmed in tons of languages, starting with assembler, through C/C++, Java, Lisp and so on, but the one I truly liked was C#, so I began my professional career as a C# developer. Still, from time to time I put my hands on new languages, just to taste them and see how they work and feel.

The one I tried most recently was Python, and I am completely astounded by how concise it is. I used to be a bit skeptical of interpreted languages. Up until not so long ago, I thought that a language with no rigid structure, such as a defined entry point (a main function) or statically typed variables, would be hard to work with. And then Python happened.

At the beginning, Python felt quite unfamiliar, of course. It lacked braces and a rigid program structure. Modules in place of namespaces or assemblies (I am still not sure which are closer to Python's modules) were a bit confusing, not to mention indentation playing the role of code blocks. Yet after just two weeks of learning it, I wrote in one week a program that had taken me about a month of work in C#. It was an app that simulated a voting process based on political views in five categories, on a galactic scale, with millions of voters and five parties they could vote for. And although it was not as fast as the C# version, it finished the calculations in reasonable time.

The next thing I did was a simple implementation of a blockchain-like messaging algorithm, which took me about 40 lines of Python code. The only code I wrote was the actual 'meat' of the project. There was no need to define namespaces, access modifiers or types for variables (Python resolves types dynamically at runtime), or to write braces. I sat down and wrote just the code I wanted to try. Now, thanks to learning Python, and at the cost of runtime performance, writing a proof of concept or a prototype app is a matter of hours instead of days or weeks for me.

Every new language I learn teaches me that a programming language is just a tool. We should use programming languages the way a carpenter uses his tools: selecting the right one, be it a chisel or a saw, for the task at hand. Also, we do not always have to use the best, fastest or most compact solution we can find. Sometimes it is enough to use the one that is easier to write but slower to run. For example, if you write a messenger for 20 people, you most probably do not need to serve millions of requests per second. That means you can choose a language in which programs run slower but are easier to write.

Dear reader

If you are about to write a real-time system, then choose a compiled, low-level language such as C or C++. If you are writing an app that has to integrate with different third party APIs, you should probably go for Java or C#. For prototyping or scripting I would recommend Python.

Just stay open-minded and remember that every language was created for a specific set of tasks, with different design decisions behind it. Learn new languages. Compare them. Use them. Be like the carpenter. Know your toolset. Keep expanding it. Select the proper tool for any given task and respect the ones you are not using right now. One day you may find that they fit your specific need better.

And please, do not join the ranks of ‘my programming language is better than yours’-developers.

 

Image: Part of ‘Battle of Oroi-Jalatu’, public domain

Soundboard – Milestones, Minimum Viable Product

In the last post I described my requirements for the Soundboard app. Now it is time to decide what can be considered the minimum viable product.

Milestones – first try

A milestone is a specific point of high importance or impact in a project. Taking into account the requirements I wrote down in the last post, I would split them into the following categories:

  • Audio processing
    • Play a sample
    • Stop a sample
    • Stop all samples
    • Loop a sample
    • Mix multiple samples
    • Set volume for a sample
    • Option to stop a sample if another is started
    • Option to lower the volume of a sample if another's is raised
  • Sample sources
    • Play a sample from an mp3 file
    • Play a sample from an aac file
    • Play a sample from a flac file
    • Play a sample from an internet stream
  • Soundscapes
    • Group samples
    • Group scenes
    • Save soundscape’s metadata to an archive
    • Save soundscape’s metadata and samples to an archive
    • Save soundscape’s samples in order
    • Load a soundscape from an archive
    • Load soundscape’s samples in order
  • Scenes
    • Save samples’ play state
    • Load samples’ play state
    • Save samples’ volume
    • Load samples’ volume
    • Save samples’ correlations
    • Load samples’ correlations
  • User interface
    • Touch event handling
    • Soundscapes list
    • Soundscape view
    • Scenes list in a soundscape
    • Text label for a sample
    • Text label for a soundscape
    • Text label for a scene
    • Image label for a sample
    • Image label for a soundscape
    • Image label for a scene

Each of the top-level items above could be a milestone all by itself. In my opinion though, although completing such a milestone would mark a significant step in the development process, it does not represent a high-impact functionality leap, as the application after reaching it still would not be usable at all.

Milestones – MVP

The first milestone in my project will be reached when I am able to use the application at an RPG session. That first testable version of an application is called the minimum viable product. It has all the necessary features implemented, so it can already be used in the target environment. In my project that means that, as far as audio processing goes, the app can play, stop and loop a sample. For the sample source I choose files, as I think that will be easier to implement. Also, for the MVP I will limit myself to one file type; mp3 should do the job for now. Saving samples will only be required later, when I implement soundscapes. I do not care about soundscapes and scenes at all yet; for now, all I need is to be able to play samples, even ungrouped. Last but not least, the user interface I need for now is limited to a sample grid, play/stop buttons for the samples and text labels for the samples.

Summary

That is all for today; the first milestone is defined. Now I will add it to the GitLab issue tracker and I am ready for development.

In the next post I will describe the project set-up and project structure.

 

Image: Stones by Judy Dean

Soundboard – project introduction

I can't imagine the world without music. I listen to it all day long and everywhere I go.

The need

Recently I started an RPG campaign in which I play the role of the game master. Among the myriad of things to prepare, music was one of the first I thought of. I searched the web for a suitable piece of software and found out that there is none that is touch-friendly. In my opinion it's crucial to have one, because during a session you have to switch the background music in the blink of an eye. There is no time to waste on searching for the mouse and then moving the cursor to a tiny play icon. On the other hand, although all the apps I found could be operated through a touch interface, they were not designed for that purpose, so again, the too-small icons require a precision that is not achievable during a session on a touch screen. What I need, then, is an app that displays large sample icons that can easily be triggered by a finger.

Requirements

The piece of software I write has to meet some requirements to be functional and operable.

First of all, as an audio processing app it has to be able to play single sounds and to mix them. For shorter samples it has to be able to loop them. Also, some samples, such as birds singing or water droplets falling from a cave vault, must be much quieter than others, for example a river or heavy traffic, so the app needs to be able to set the volume for each sample. At first I thought that the app should have a master volume control, but then I decided that every modern operating system already has a built-in master control, so implementing one in my app would be a waste of time. The next thing to add would be correlation options between the sounds. What I mean is that if one sound is played, another should be turned off, or if you raise the volume of one, the volume of another should be lowered.

Regarding the source of the sounds, I decided that my app should be able to play samples from attached data storage and from internet streams. The supported formats would be mp3, aac and flac.

In an RPG, players often change their location; let's say they start in a forest, then get to a city, where they find out that the villain they are chasing has a secret base in a closed mine. Each of these sceneries has different background sounds. For example, in a cave you can hear bats squeaking, water dripping and rocks sliding, while in a village there should be some crowd noise, a hammer hitting an anvil and, from time to time, a carriage riding over a paved street. I call these sceneries soundscapes. A soundscape is a group of samples that fit a scene. There should be a way to save and load a soundscape. While saving, there should be an option to save only which samples are used (metadata) or both the soundscape's metadata and the samples in one archive. While loading a soundscape, samples should be positioned in the same place they were in when saved. There should also be a way of saving the currently playing sounds in a soundscape as scenes. For example, if the soundscape is 'the city', the scenes could be 'harbour', 'marketplace', 'administration building' or 'night'. They all share a set of samples, but with some small differences in volume. Also, not all samples are active in a scene, but they are largely the same across the scenes of a soundscape.

The last thing to discuss is the UI. As I wrote earlier, I want to use the app on a touch-enabled device, so I decided that the app's interface will use tiles for samples and tabs for soundscapes. Both the soundscape tab and the sample tile should have a text name and an optional image. I am not yet sure how to arrange the scene interface, so this topic will be considered later in development.

What’s next?

The requirements above are a mix of obligatory and 'nice to have' features. In the next post I will define the minimum viable product: the very basic functionality the app has to offer to be considered done and usable.
Image: Sombre by Philippe Rouzet

Get Noticed! – introduction

Image: 2012 07 07_Flower by higara57

 

I've just joined the Get Noticed! competition. This is my welcome post, in which I will briefly introduce the idea behind the contest and explain why I decided to join.

(one 'after' too many) After Get Noticed! 2016

This year's edition is already the third one. Last year I did not take part, but I went to the closing gala. After that event, after traveling there and back again over 600 km, after meeting that year's competitors, after seeing all the positive attitude, and finally after the after-party, I decided that if Maciej Aniserowicz (the man behind the idea) announced the next edition of the competition, I would surely join it. And so I did.

Rules of the contest

During the contest all participants are obliged to post at least two blog posts a week, between March 1st and May 31st, for at least 10 weeks. That means that each participant has to produce at least 20 blog posts during the competition.

Another rule is that each participant has to be developing an open-source project of his choice for the duration of the competition. Each week one post has to relate directly to that open-source project, and another has to refer at least loosely to IT.

Why write a blog

In the next section I will discuss why exactly I joined Get Noticed!, but first I have to tell you why it is worth writing a blog.

First of all, a blog is the greatest knowledge repository and consolidation tool I have ever used.
If you are learning something, write about it. Make a tutorial or write down your notes on its more interesting parts. Then, whenever you have to recall something, go to your blog and find the article you are interested in. It may be hard to believe, but it's faster than googling. What's more, because you wrote it yourself, it's easier to comprehend than any third-party tutorial. Also, by writing about what you are learning, you continuously consolidate your knowledge through repetition.

Even though I have written only two blog posts on Docker, I have already used them as entry-level training material for my colleagues. I could focus on my work while they were reading the text, and then we could discuss the details. That's another value of blogging: you can use your posts as training material.

Last but not least, a blog is a great way to attract potential job offers and to land them. It shows your level of knowledge and your communication skills. As an added bonus, it shows that you are broadening your skillset outside working hours. All of that makes you shine among other candidates.

Why did I join the competition?

I treat this competition as motivation to keep blogging regularly and more often. Allegedly, after 90 days of doing something it stops being a chore and becomes a habit. I hope that is true, and my experience so far suggests it is. Therefore I am participating to make this chore-to-habit transition.

I've also seen how a blog can affect a community. I would love to be a part of that and have my own input into our collective knowledge, even though I am not a trendsetter. I am eager to educate myself, so why not educate others in the process? As an old master told his apprentice: 'If you want to master something, teach it to others.'

Docker ‘run’ with -i/-t independently

While researching how to use Docker, when I came to the point of running interactive containers I got interested in how Docker would behave if the -i or -t option was passed on its own.
Below is what I found out.

Let’s start with -i only:


As we can see, input is redirected to the container and the container's output is redirected back to the host.
Now for -t only:



Although we called a ping command, 'tcpdump' shows that nothing was sent. That means that the input was not redirected to the container.
In summary, it is possible to work with -i only for interactive mode, although you will see the output of a command only after it finishes executing.
The -t option activates terminal emulation, meaning that the output of a command is printed while it is still executing.
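
If you want to see the -i part for yourself, a quick experiment along these lines (using bash from the Debian image) shows the difference:

docker run debian /bin/bash       # stdin is not attached, so bash hits end-of-input and exits at once
docker run -i debian /bin/bash    # type 'ls' and press enter - the command runs and its output comes back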
To see what -t changes, just compare how Docker behaves if you call:

docker run debian ping google.com -c 5

and

docker run -t debian ping google.com -c 5

You may prefer to use -t alone if all you want is to watch the container's default command output in real time, without interacting with it.


Docker quickstart – pt 2 – running containers – standard, interactive, daemon modes

Docker quickstart series

  1. Docker quickstart – installation
  2. Docker quickstart – running containers – standard, interactive, daemon modes

Intro

In the previous part of the Docker quickstart series, I covered the basic concepts behind Docker and its installation. I also showed how to run a simple test container named 'hello-world' with the command:

docker run hello-world

In this part of the Docker quickstart I will cover running containers with the 'run' command: in the standard mode, in interactive mode and in detached (daemon) mode, plus how to have a container removed automatically once it exits.

Docker run

parameterless

docker run <image>

In the first part of this quickstart I showed you how to test whether your instance of Docker is running by starting a sample container. The CLI (command line interface) call I used then was 'run'.
In its most basic form, without any options, it just starts an image and executes its ENTRYPOINT or CMD. ENTRYPOINT and CMD are Dockerfile directives and will be covered in another part of this quickstart. For now, all you have to know is that 'run' executes the default command specified for the container.

with specific command

To start a container with a user-specified command, call:

docker run <image> <command>

Let's say that all you want to do with your container is to test whether it can connect to the host. For that I will use the most basic tool there is: ping. All I have to do to start ping from the container is to call:

docker run debian ping 172.17.0.1 -c 5

So we've specified that Docker should create a container using the Debian image and then, inside the container, run ping against the Docker host with 5 packets.

interactive mode

Another way to run a container is to start it in interactive mode. For example, if you start the Debian image on its own, it will stop immediately, because its default command has nothing to do when no input is attached.
Now let's assume that you want to interact with your container. To do so, all you have to do is add the -i and -t switches to Docker's 'run' command. The -i switch runs the container in interactive mode, meaning your input will be redirected to the container. The -t switch allocates a pseudo-terminal for it.
The command pattern is:

docker run -it <image> <command>

For example, since bash is the default command of the Debian image, you can skip <command> entirely. So, to start it in interactive mode all you have to type is:

docker run -it debian

As you can see, I am connected to the Debian container's shell, logged in as root.

At this point I got interested in what would happen if -i or -t was passed on its own. The details are here: Docker 'run' with -i/-t independently.

removing container

Up to this moment, every time we started a container it was left on the drive after it finished, ready to be re-executed. Such behavior is not always desirable; if we are not going to reuse existing containers, the number of stored, non-running containers will over time stack up to a tremendous level and make container management hard. The easiest way to get rid of an exited container is to remove it right after its execution stops. Fortunately, we can specify, as part of the 'run' command itself, that the instance of an image created by 'run' should be removed when it finishes. The switch that does this is --rm.

docker run --rm debian

Now let's see an example. First, I list all the containers that are in the exited status.
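
One way to do that is a status filter on 'docker ps':

docker ps -a --filter status=exited    # show all containers, limited to those that have exited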

So I start with no stored containers. Then I start the Debian container three times without the --rm switch:

As you can see in the attached console listing, I have three stored containers in the exited state, one for each time I called Docker 'run'. Then I start another Debian container, but this time with the --rm switch:

As you can see, despite having started another container, it is not visible on the container list, as it was removed right after its execution finished.

This is just one of the two main methods for removing existing containers. This one is more than enough as long as you don't use daemon containers and don't reuse already existing containers. I will describe the other method another time, in an article about container management.

detached/daemon mode

Up to this point I've described how to run a container that does its job and then exits, but sometimes you need to run your container as a background service. This container mode, called 'daemon' or 'detached' mode, may be required for file hosting services, database servers, monitoring services or other continuously running applications. To run such an application you add the '-d' switch to the 'docker run' command.

What the -d switch does is start the application and then detach the console from the container's process. For example, if you want to start an nginx server, you probably want it to keep running even if you disconnect from your Docker host machine. To achieve that, you have to detach your console from the nginx instance. To do so, you simply call:

docker run -d nginx

As a result, Docker will print the container ID and detach the console.

In the excerpt below, I first start an nginx container the simple way; the console is attached to the process. Then I break the program with 'Ctrl + C'. 'ps' returns no results for nginx processes, as they were killed by the break. After that, I start the container detached. What Docker does then is the following:

  1. starts the nginx container
  2. detaches the console, which means that the container process runs in an independent context
  3. prints out the container ID.

Now, when I call 'ps', it shows that the nginx processes are running. I can exit the console and the nginx processes will still be running in the background.
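
If you want to repeat the check yourself, the relevant commands look roughly like this (the container ID placeholder stands for whatever 'docker run -d' printed):

docker ps                      # the detached nginx container is listed as running
ps aux | grep nginx            # its processes are visible directly on the host
docker logs <container-id>     # read the output the detached container has produced so far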

Summary

In this part I've covered how to start a container in three different modes:

  • standard – start a task and exit the container once the task is finished
  • interactive – start a task that takes input and/or prompts the user, then exit the container once the task is finished
  • daemon – start a task, detach the terminal and leave the container running in the background

In the next post I will cover how to start a container that:

  • shares data with its host
  • is exposed to the internet
  • connects with other containers using a virtual network
  • has its environment variables initialized

Docker quickstart – pt 1 – installation

Not so long ago I had to research how to run a dotnet application on Docker. Thanks to that I learned the basics of Docker. The text below is based on the notes I wrote down in the process.

Docker quickstart series

 

  1. Docker quickstart – installation
  2. Docker quickstart – running containers – standard, interactive, daemon modes

 

What is Docker

Docker is an application packaging platform and runtime. Each image (Docker's name for an application package) consists of layers containing the application's dependencies and the application itself. Layers may contain a system, a runtime, libraries, and so on. What's important is that each layer is read-only. Thanks to this packaging, one may start several isolated instances of one image on one physical machine.
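
You can peek at those layers for any image you have pulled, for example:

docker history debian    # lists the layers the image is built from, together with their sizes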

How it differs from a virtual machine

Taking the above into account, one could say that Docker is in fact a virtual machine. Well, this is where things get a bit more complicated.

Docker is not a virtual machine, yet it virtualizes some resources. For example, each container has an isolated file system and network (although this can be changed). On the other hand, every container process has its hook on the host machine. Just take a look at the 'ps' output:

As you can see, after starting a Docker container (an instance of an image) the host has started several new processes. I am talking about PIDs 19495, 19510, 19522 and 19535. The container I started is a dotnet core web host, which is why there are dotnet processes running. As I don't have dotnet installed on the host, this shows that the container's processes are actually executed by the Docker host.
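
Two quick ways to see this mapping on your own machine (the container ID and process name are placeholders):

docker top <container-id>       # processes inside the container, as reported by Docker
ps aux | grep <process-name>    # the same processes, visible directly in the host's process list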

When you consider the above, the difference between Docker and a virtual machine becomes clearer: VMs virtualize the whole hardware, whilst Docker exposes the 'bare-metal' devices as they are seen by the host machine.

Another difference is that setting up a VM requires you to store the whole stack used by your application; in other words, VMs have to store a lot more data than containers. VMs use virtual storage devices that contain a bootloader, an operating system, drivers, libraries and, last but not least, your hosted application. If you want to scale an application horizontally, you have to clone a complete virtual drive image.

Docker's architecture limits the amount of data that has to be stored for a container. By providing read-only layers it makes it possible to reuse an image, so several containers may run on top of a single image. On top of all the read-only layers Docker inserts a read/write persistence layer. That persistence layer, together with the container config (network, volumes, environment variables, etc.), is what differentiates containers from one another. See the console output below: running ubuntu under Docker consumed about 130 MB for the image, while starting a new instance added under 150 KB of disk usage. The reason is that both containers use the same image; the only stored information is the difference between the image and the container. It works a bit like snapshots in VMs, but with Docker you can run several such 'snapshots' simultaneously.
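
You can check numbers like these on your own machine; the exact values will differ, but the commands are:

docker images ubuntu    # size of the shared, read-only image
docker ps -a -s         # the SIZE column: data written by each container vs. the image's virtual size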

Installing Docker Engine

There are two ways to install Docker: the easy way, using the shell script provided by Docker Inc., and the hard way, for those who don't trust third-party shell scripts. The hard way is described on Docker's documentation page.

While learning Docker I used the easy way, so that is the one I will describe here. With the easy method you don't have to care about any dependencies or system setup (except for installing the system itself, of course). As I am no Linux geek, that suited me fine.

Basically all you have to do to install Docker is to call:

wget -qO- get.docker.com | sh

What it does is:

  1. 'wget -qO- get.docker.com' downloads the script from 'get.docker.com'. The '-q' flag stands for quiet and means that 'wget' won't write progress information to standard output (stdout). The '-O' flag redirects the downloaded content to the specified file; by passing '-' as the file name ('-O-') we tell 'wget' to write the downloaded content to stdout, which is used in the next step.
  2. The pipe sign redirects the output of the left command to the right command's standard input stream (stdin).
  3. 'sh' processes data from stdin. Because the 'wget' output was piped into 'sh', 'sh' executes the downloaded script.
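
If you prefer to inspect the script before running it, the same thing can be done in two explicit steps (the file name is arbitrary):

wget -qO get-docker.sh get.docker.com    # download the installer script into a file
sh get-docker.sh                         # run the script once you have looked it over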

After the script finishes you may call 'docker -v' to verify the installation.

badamiak@dockerhost:~$ docker -v
Docker version 1.11.2, build b9f10c9

Test run

After installation you may want to run your first container. Docker provides an official hello-world image. All you have to do is call 'docker run hello-world'. If everything is working correctly, you will see the line 'Hello from Docker!' in the output.

Summary

After reading through this post you should be able to run your own instance of Docker and also a sample application contained in the hello-world image.
I hope that I’ve also explained the basic concepts behind Docker and its architecture.
In the next part of this tutorial I will cover running a parametrized container, setting up virtual networks and attaching host directories as volumes to a container.
If you have any questions or remarks please write a comment and I shall soon address your concerns.