Docker is an interesting technology that over the past two years has gone from an idea to being used by organizations all over the world to deploy applications. In today's article I am going to cover how to get started with Docker by "Dockerizing" an existing application. The application in question is actually this very blog!
What is Docker?
Before we dive into learning the basics of Docker, let's first understand what Docker is and why it is so popular. Docker is an operating system container management tool that allows you to easily manage and deploy applications by making it easy to package them within operating system containers.
Containers vs. Virtual Machines
Containers may not be as familiar as virtual machines but they are another method to provide Operating System Virtualization. However, they differ quite a bit from standard virtual machines.
Standard virtual machines generally include a full Operating System, OS Packages and eventually an Application or two. This is made possible by a Hypervisor which provides hardware virtualization to the virtual machine. This allows for a single server to run many standalone operating systems as virtual guests.
Containers are similar to virtual machines in that they allow a single server to run multiple operating environments, these environments however, are not full operating systems. Containers generally only include the necessary OS Packages and Applications. They do not generally contain a full operating system or hardware virtualization. This also means that containers have a smaller overhead than traditional virtual machines.
Containers and Virtual Machines are often seen as conflicting technology, however, this is often a misunderstanding. Virtual Machines are a way to take a physical server and provide a fully functional operating environment that shares those physical resources with other virtual machines. A Container is generally used to isolate a running process within a single host to ensure that the isolated processes cannot interact with other processes within that same system. In fact containers are closer to BSD Jails and
chroot'ed processes than full virtual machines.
What Docker provides on top of containers
Docker itself is not a container runtime environment; in fact Docker is container technology agnostic, with efforts planned for Docker to support Solaris Zones and BSD Jails. What Docker provides is a method of managing, packaging, and deploying containers. While these types of functions may exist to some degree for virtual machines, they traditionally have not existed for most container solutions, and the ones that did exist were not as easy to use or as fully featured as Docker.
Now that we know what Docker is, let's start learning how Docker works by first installing Docker and deploying a public pre-built container.
Starting with Installation
As Docker is not installed by default, the first step will be to install the Docker package; since our example system is running Ubuntu 14.04 we will do this using the Apt package manager.
# apt-get install docker.io
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  aufs-tools cgroup-lite git git-man liberror-perl
Suggested packages:
  btrfs-tools debootstrap lxc rinse git-daemon-run git-daemon-sysvinit
  git-doc git-el git-email git-gui gitk gitweb git-arch git-bzr git-cvs
  git-mediawiki git-svn
The following NEW packages will be installed:
  aufs-tools cgroup-lite docker.io git git-man liberror-perl
0 upgraded, 6 newly installed, 0 to remove and 0 not upgraded.
Need to get 7,553 kB of archives.
After this operation, 46.6 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
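Although not strictly required, a quick sanity check after the installation is to ask the Docker client and daemon to report their versions; the command below is a simple way to confirm that both are installed and talking to each other:

# docker version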
To check if any containers are running we can execute the docker command using the ps option.
# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
The ps function of the docker command works much like the Linux ps command: it shows available Docker containers and their current status. Since we have not started any Docker containers yet, the command shows no running containers.
Deploying a pre-built nginx Docker container
One of my favorite features of Docker is the ability to deploy a pre-built container in the same way you would deploy a package with apt-get. To explain this better, let's deploy a pre-built container running the nginx web server. We can do this by executing the docker command again, this time with the run option.
# docker run -d nginx
Unable to find image 'nginx' locally
Pulling repository nginx
5c82215b03d1: Download complete
e2a4fb18da48: Download complete
58016a5acc80: Download complete
657abfa43d82: Download complete
dcb2fe003d16: Download complete
c79a417d7c6f: Download complete
abb90243122c: Download complete
d6137c9e2964: Download complete
85e566ddc7ef: Download complete
69f100eb42b5: Download complete
cd720b803060: Download complete
7cc81e9a118a: Download complete
The run function of the docker command tells Docker to find a specified Docker image and start a container running that image. By default, Docker containers run in the foreground, meaning that when you execute docker run your shell is bound to the container's console and the process running within the container. In order to launch this Docker container in the background I included the -d (detach) flag.
If we execute docker ps again we can see the nginx container running.
# docker ps
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS               NAMES
f6d31ab01fc9        nginx:latest        nginx -g 'daemon off   4 seconds ago       Up 3 seconds        443/tcp, 80/tcp     desperate_lalande
In the above output we can see the running container desperate_lalande and that this container has been built from the nginx image.
Images are one of Docker's key features and are similar to virtual machine images. Like a virtual machine image, a Docker image is a container that has been saved and packaged. Docker, however, doesn't stop with the ability to create images; it also includes the ability to distribute those images via Docker repositories, which are a similar concept to package repositories. This is what gives Docker the ability to deploy an image like you would deploy a package with yum. To get a better understanding of how this works, let's look back at the output of the docker run execution.
# docker run -d nginx
Unable to find image 'nginx' locally
The first message we see is that docker could not find an image named nginx locally. The reason we see this message is that when we executed docker run we told Docker to start a container based on an image named nginx. Since Docker is starting a container based on a specified image, it needs to first find that image. Before checking any remote repository, Docker first checks locally to see if there is a local image with the specified name. Since this system is brand new there is no Docker image with the name nginx, which means Docker will need to download it from a Docker repository.
Pulling repository nginx
5c82215b03d1: Download complete
e2a4fb18da48: Download complete
58016a5acc80: Download complete
657abfa43d82: Download complete
dcb2fe003d16: Download complete
c79a417d7c6f: Download complete
abb90243122c: Download complete
d6137c9e2964: Download complete
85e566ddc7ef: Download complete
69f100eb42b5: Download complete
cd720b803060: Download complete
7cc81e9a118a: Download complete
This is exactly what the second part of the output is showing us. By default, Docker uses the Docker Hub repository, which is a repository service that Docker (the company) runs.
Like GitHub, Docker Hub is free for public repositories but requires a subscription for private repositories. It is possible, however, to deploy your own Docker repository; in fact it is as easy as docker run registry. For this article we will not be deploying a custom registry service.
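For the curious, though, here is a rough sketch of what that could look like; the registry image name, the default port of 5000, and the tag-then-push workflow are standard but should be treated as assumptions rather than something covered by this article:

# docker run -d -p 5000:5000 --name registry registry
# docker tag nginx localhost:5000/nginx
# docker push localhost:5000/nginx

Details such as TLS configuration for non-local clients are deliberately glossed over here.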
Stopping and Removing the Container
Before moving on to building a custom Docker container let's first clean up our Docker environment. We will do this by stopping the container from earlier and removing it.
To start a container we executed docker with the run option; to stop this same container we simply need to execute docker with the kill option, specifying the container name.
# docker kill desperate_lalande
desperate_lalande
If we execute
docker ps again we will see that the container is no longer running.
# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
However, at this point we have only stopped the container; while it may no longer be running, it still exists. By default, docker ps will only show running containers; if we add the -a (all) flag it will show all containers, running or not.
# docker ps -a
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS                           PORTS     NAMES
f6d31ab01fc9        5c82215b03d1        nginx -g 'daemon off   4 weeks ago         Exited (-1) About a minute ago             desperate_lalande
In order to fully remove the container we can use the docker command with the rm option.
# docker rm desperate_lalande
desperate_lalande
While this container has been removed, we still have an nginx image available. If we were to re-run docker run -d nginx, the container would be started without having to fetch the nginx image again, because Docker already has a saved copy on our local system.
To see a full list of local images we can simply run the docker command with the images option.
# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
nginx               latest              9fab4090484a        5 days ago          132.8 MB
Building our own custom image
At this point we have used a few basic Docker commands to start, stop, and remove a container based on a common pre-built image. In order to "Dockerize" this blog, however, we are going to have to build our own Docker image, and that means creating a Dockerfile.
With most virtual machine environments if you wish to create an image of a machine you need to first create a new virtual machine, install the OS, install the application and then finally convert it to a template or image. With Docker however, these steps are automated via a Dockerfile. A Dockerfile is a way of providing build instructions to Docker for the creation of a custom image. In this section we are going to build a custom Dockerfile that can be used to deploy this blog.
Understanding the Application
Before we can jump into creating a Dockerfile we first need to understand what is required to deploy this blog.
The blog itself is actually static HTML pages generated by a custom static site generator that I wrote, named hamerkop. The generator is very simple and is really about getting the job done for this blog specifically. All the code and source files for this blog are available via a public GitHub repository. In order to deploy this blog we simply need to grab the contents of the GitHub repository, install Python along with some Python modules, and execute the hamerkop application. To serve the generated content we will use nginx, which means we will also need nginx to be installed.
So far this should be a pretty simple Dockerfile, but it will show us quite a bit of the Dockerfile syntax. To get started we can clone the GitHub repository and create a Dockerfile with our favorite editor; vi in my case.
# git clone https://github.com/madflojo/blog.git
Cloning into 'blog'...
remote: Counting objects: 622, done.
remote: Total 622 (delta 0), reused 0 (delta 0), pack-reused 622
Receiving objects: 100% (622/622), 14.80 MiB | 1.06 MiB/s, done.
Resolving deltas: 100% (242/242), done.
Checking connectivity... done.
# cd blog/
# vi Dockerfile
FROM - Inheriting a Docker image
The first instruction of a Dockerfile is the FROM instruction. This is used to specify an existing Docker image to use as our base image, which basically provides us with a way to inherit another Docker image. In this case we will be starting with the same nginx image we were using before; if we wanted to start with a blank slate we could use the Ubuntu Docker image by specifying ubuntu instead.
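As a point of comparison, a blank-slate Dockerfile could begin with nothing more than a line like the following (the ubuntu:latest tag is shown here purely as an illustrative assumption):

FROM ubuntu:latest

Our Dockerfile, however, builds on top of the nginx image: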
## Dockerfile that generates an instance of http://bencane.com

FROM nginx:latest
MAINTAINER Benjamin Cane <firstname.lastname@example.org>
In addition to the FROM instruction, I also included a MAINTAINER instruction, which is used to show the author of the Dockerfile.
As Docker supports using
# as a comment marker, I will be using this syntax quite a bit to explain the sections of this Dockerfile.
Running a test build
Since we inherited the nginx Docker image our current Dockerfile also inherited all the instructions within the Dockerfile used to build that nginx image. What this means is even at this point we are able to build a Docker image from this Dockerfile and run a container from that image. The resulting image will essentially be the same as the nginx image but we will run through a build of this Dockerfile now and a few more times as we go to help explain the Docker build process.
In order to start the build from a Dockerfile we can simply execute the docker command with the build option.
# docker build -t blog /root/blog
Sending build context to Docker daemon 23.6 MB
Sending build context to Docker daemon
Step 0 : FROM nginx:latest
 ---> 9fab4090484a
Step 1 : MAINTAINER Benjamin Cane <email@example.com>
 ---> Running in c97f36450343
 ---> 60a44f78d194
Removing intermediate container c97f36450343
Successfully built 60a44f78d194
In the above example I used the -t (tag) flag to "tag" the image as "blog". This essentially allows us to name the image; without specifying a tag, the image would only be callable via the Image ID that Docker assigns. In this case the Image ID is 60a44f78d194, which we can see in the docker command's build success message.
In addition to the
-t flag, I also specified the directory
/root/blog. This directory is the "build directory", which is the directory that contains the Dockerfile and any other files necessary to build this container.
Now that we have run through a successful build, let's start customizing this image.
Using RUN to execute apt-get
The static site generator used to generate the HTML pages is written in Python, and because of this the first custom task we should perform within this Dockerfile is to install Python. To install the Python package we will use the Apt package manager. This means we will need to specify within the Dockerfile that apt-get update and apt-get install python-dev are executed; we can do this with the RUN instruction.
## Dockerfile that generates an instance of http://bencane.com

FROM nginx:latest
MAINTAINER Benjamin Cane <firstname.lastname@example.org>

## Install python and pip
RUN apt-get update
RUN apt-get install -y python-dev python-pip
In the above we are simply using the RUN instruction to tell Docker that when it builds this image it will need to execute the specified apt-get commands. The interesting part of this is that these commands are only executed within the context of this container. What this means is that even though python-dev and python-pip are being installed within the container, they are not being installed on the host itself. To put it more simply: within the container the pip command will execute; outside the container, the pip command does not exist.
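As a purely illustrative example, once the image has been rebuilt with the instructions above, the difference could be seen by running pip through the container versus directly on the host (assuming pip was never installed on the host); the --rm flag simply removes the throwaway container afterwards:

# pip --version
# docker run --rm blog pip --version

The first command fails on a host without pip, while the second executes the pip binary that was installed inside the image.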
It is also important to note that the Docker build process does not accept user input during the build. This means that any commands being executed by the
RUN instruction must complete without user input. This adds a bit of complexity to the build process as many applications require user input during installation. For our example, none of the commands executed by
RUN require user input.
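If a command executed by RUN did require interactive input, a common workaround (not needed for our example, and shown here only as a general pattern with a hypothetical package name) is to tell Apt to run non-interactively:

## Hypothetical example only; not part of this blog's Dockerfile
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y some-package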
Installing Python modules
With Python installed we now need to install some Python modules. To do this outside of Docker, we would generally use the pip command and reference a file within the blog's Git repository named requirements.txt. In an earlier step we used the git command to "clone" the blog's GitHub repository to the /root/blog directory; this directory also happens to be the directory in which we created the Dockerfile. This is important as it means the contents of the Git repository are accessible to Docker during the build process.
When executing a build, Docker will set the context of the build to the specified "build directory". This means that any files within that directory and below can be used during the build process, files outside of that directory (outside of the build context), are inaccessible.
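As a side note, depending on the Docker version in use, a .dockerignore file placed in the build directory can be used to keep unneeded files out of the build context, much like a .gitignore; a minimal, assumed example might be:

# .dockerignore - keep Git history and compiled Python files out of the build context
.git
*.pyc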
In order to install the required Python modules we will need to copy the requirements.txt file from the build directory into the container. We can do this using the COPY instruction within the Dockerfile:
## Dockerfile that generates an instance of http://bencane.com

FROM nginx:latest
MAINTAINER Benjamin Cane <email@example.com>

## Install python and pip
RUN apt-get update
RUN apt-get install -y python-dev python-pip

## Create a directory for required files
RUN mkdir -p /build/

## Add requirements file and run pip
COPY requirements.txt /build/
RUN pip install -r /build/requirements.txt
In the above Dockerfile we added three instructions. The first instruction uses RUN to create a /build/ directory within the container. This directory will be used to copy any application files needed to generate the static HTML pages. The second instruction is the COPY instruction, which copies the requirements.txt file from the "build directory" (/root/blog) into the /build directory within the container. The third uses the RUN instruction to execute the pip command, installing all the modules specified within the requirements.txt file.
COPY is an important instruction to understand when building custom images. Without specifically copying the file within the Dockerfile, this Docker image would not contain the requirements.txt file. With Docker containers everything is isolated; unless something is specifically copied or executed within the Dockerfile, the container is not likely to include required dependencies.
Re-running a build
Now that we have a few customization tasks for Docker to perform, let's run another build of the blog image.
# docker build -t blog /root/blog
Sending build context to Docker daemon 19.52 MB
Sending build context to Docker daemon
Step 0 : FROM nginx:latest
 ---> 9fab4090484a
Step 1 : MAINTAINER Benjamin Cane <firstname.lastname@example.org>
 ---> Using cache
 ---> 8e0f1899d1eb
Step 2 : RUN apt-get update
 ---> Using cache
 ---> 78b36ef1a1a2
Step 3 : RUN apt-get install -y python-dev python-pip
 ---> Using cache
 ---> ef4f9382658a
Step 4 : RUN mkdir -p /build/
 ---> Running in bde05cf1e8fe
 ---> f4b66e09fa61
Removing intermediate container bde05cf1e8fe
Step 5 : COPY requirements.txt /build/
 ---> cef11c3fb97c
Removing intermediate container 9aa8ff43f4b0
Step 6 : RUN pip install -r /build/requirements.txt
 ---> Running in c50b15ddd8b1
Downloading/unpacking jinja2 (from -r /build/requirements.txt (line 1))
Downloading/unpacking PyYaml (from -r /build/requirements.txt (line 2))
<truncated to reduce noise>
Successfully installed jinja2 PyYaml mistune markdown MarkupSafe
Cleaning up...
 ---> abab55c20962
Removing intermediate container c50b15ddd8b1
Successfully built abab55c20962
From the above build output we can see that the build was successful, but we can also see another interesting message: ---> Using cache. This message tells us that Docker was able to use its build cache during the build of this image.
Docker build cache
When Docker is building an image, it doesn't just build a single image; it actually builds multiple images throughout the build process. In fact we can see from the above output that after each "Step" Docker is creating a new image.
Step 5 : COPY requirements.txt /build/
 ---> cef11c3fb97c
The last line from the above snippet is actually Docker informing us of the creation of a new image; it does this by printing the Image ID, cef11c3fb97c. The useful thing about this approach is that Docker is able to use these images as a cache during subsequent builds of the blog image, which allows Docker to speed up the build process for new builds of the same container. If we look at the example above we can see that rather than installing the python-dev and python-pip packages again, Docker was able to use a cached image. However, since Docker was unable to find a cached build that executed the mkdir command, each subsequent step was executed fresh.
The Docker build cache is a bit of a gift and a curse. The reason for this is that the decision to use the cache or to rerun the instruction is made within a very narrow scope. For example, if there was a change to the requirements.txt file, Docker would detect this change during the build and start fresh from that point forward. It does this because it can view the contents of the requirements.txt file. The execution of the apt-get commands, however, is another story. If the Apt repository that provides the Python packages were to contain a newer version of the python-pip package, Docker would not be able to detect the change and would simply use the build cache. This means that an older package may be installed. While this may not be a major issue for the python-pip package, it could be a problem if the installation was caching a package with a known vulnerability.
For this reason it is useful to periodically rebuild the image without using Docker's cache. To do this you can simply specify
--no-cache=True when executing a Docker build.
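For example, to force a completely fresh build of the blog image we could run:

# docker build --no-cache=True -t blog /root/blog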
Deploying the rest of the blog
With the Python packages and modules installed, this leaves us at the point of copying the required application files and running the hamerkop application. To do this we will simply use more COPY and RUN instructions:
## Dockerfile that generates an instance of http://bencane.com

FROM nginx:latest
MAINTAINER Benjamin Cane <email@example.com>

## Install python and pip
RUN apt-get update
RUN apt-get install -y python-dev python-pip

## Create a directory for required files
RUN mkdir -p /build/

## Add requirements file and run pip
COPY requirements.txt /build/
RUN pip install -r /build/requirements.txt

## Add blog code and required files
COPY static /build/static
COPY templates /build/templates
COPY hamerkop /build/
COPY config.yml /build/
COPY articles /build/articles

## Run Generator
RUN /build/hamerkop -c /build/config.yml
Now that we have the rest of the build instructions, let's run through another build and verify that the image builds successfully.
# docker build -t blog /root/blog/
Sending build context to Docker daemon 19.52 MB
Sending build context to Docker daemon
Step 0 : FROM nginx:latest
 ---> 9fab4090484a
Step 1 : MAINTAINER Benjamin Cane <firstname.lastname@example.org>
 ---> Using cache
 ---> 8e0f1899d1eb
Step 2 : RUN apt-get update
 ---> Using cache
 ---> 78b36ef1a1a2
Step 3 : RUN apt-get install -y python-dev python-pip
 ---> Using cache
 ---> ef4f9382658a
Step 4 : RUN mkdir -p /build/
 ---> Using cache
 ---> f4b66e09fa61
Step 5 : COPY requirements.txt /build/
 ---> Using cache
 ---> cef11c3fb97c
Step 6 : RUN pip install -r /build/requirements.txt
 ---> Using cache
 ---> abab55c20962
Step 7 : COPY static /build/static
 ---> 15cb91531038
Removing intermediate container d478b42b7906
Step 8 : COPY templates /build/templates
 ---> ecded5d1a52e
Removing intermediate container ac2390607e9f
Step 9 : COPY hamerkop /build/
 ---> 59efd1ca1771
Removing intermediate container b5fbf7e817b7
Step 10 : COPY config.yml /build/
 ---> bfa3db6c05b7
Removing intermediate container 1aebef300933
Step 11 : COPY articles /build/articles
 ---> 6b61cc9dde27
Removing intermediate container be78d0eb1213
Step 12 : RUN /build/hamerkop -c /build/config.yml
 ---> Running in fbc0b5e574c5
Successfully created file /usr/share/nginx/html//2011/06/25/checking-the-number-of-lwp-threads-in-linux
Successfully created file /usr/share/nginx/html//2011/06/checking-the-number-of-lwp-threads-in-linux
<truncated to reduce noise>
Successfully created file /usr/share/nginx/html//archive.html
Successfully created file /usr/share/nginx/html//sitemap.xml
 ---> 3b25263113e1
Removing intermediate container fbc0b5e574c5
Successfully built 3b25263113e1
Running a custom container
With a successful build we can now start our custom container by running the
docker command with the
run option, similar to how we started the nginx container earlier.
# docker run -d -p 80:80 --name=blog blog
5f6c7a2217dcdc0da8af05225c4d1294e3e6bb28a41ea898a1c63fb821989ba1
Once again the
-d (detach) flag was used to tell Docker to run the container in the background. However, there are also two new flags. The first new flag is
--name, which is used to give the container a user specified name. In the earlier example we did not specify a name and because of that Docker randomly generated one. The second new flag is
-p, which allows users to map a port from the host machine to a port within the container.
The base nginx image we used exposes port 80 for the HTTP service. By default, ports bound within a Docker container are not bound on the host system as a whole. In order for external systems to access ports exposed within a container, the ports must be mapped from a host port to a container port using the -p flag. The command above maps port 80 on the host to port 80 within the container. If we wished to map port 8080 on the host to port 80 within the container, we could do so by specifying the mapping as -p 8080:80.
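Put together, the alternative mapping would look something like the following (shown purely for illustration; the rest of this article keeps the 80:80 mapping):

# docker run -d -p 8080:80 --name=blog blog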
Judging by the container ID printed by our docker run command, it appears that our container started successfully; we can verify this by executing docker ps.
# docker ps
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS                         NAMES
d264c7ef92bd        blog:latest         nginx -g 'daemon off   3 seconds ago       Up 3 seconds        443/tcp, 0.0.0.0:80->80/tcp   blog
At this point we now have a running custom Docker container. While we touched on a few Dockerfile instructions within this article, we have yet to discuss all of them. For a full list of Dockerfile instructions you can check out Docker's reference page, which explains the instructions very well.
Another good resource is their Dockerfile Best Practices page, which contains quite a few best practices for building custom Dockerfiles. Some of these tips are very useful, such as strategically ordering the commands within the Dockerfile. In the above examples our Dockerfile has the COPY instruction for the articles directory as the last COPY instruction. The reason for this is that the articles directory will change quite often. It's best to put instructions that will change often at the lowest point possible within the Dockerfile to optimize the steps that can be cached.
In this article we covered how to start a pre-built container and how to build, then deploy a custom container. While there is quite a bit to learn about Docker this article should give you a good idea on how to get started. Of course, as always if you think there is anything that should be added drop it in the comments below.
Posted by Benjamin Cane
It’s been a while since my last update but regular readers will know I’ve been suffering with serious health problems since September of this year. To be honest I hadn’t written much for over a year before that but it’s great to have a legitimate excuse for slow updates now. There are finally some developments to share and I won’t rehash the things I’ve already written about in earlier posts. If you want the full story then you can read back.
Previously on Dan…
So, when we last left the story I was waiting for some kind of diagnosis and news of any possible further treatment from The Christie in Manchester. I was called for more CT scans and blood tests a couple of weeks ago and then I met the consultant earlier this week.
The big news is that they think they’ve got a proper diagnosis at last. (Drum roll please) I have a condition called Pseudomyxoma peritonei (that’ll get you a good score in Scrabble). I won’t go into much detail here because it’s not pleasant and those who want to read more can easily find information online. Basically it means I have tumours in my abdomen and probably on my appendix. They produce this mucus that builds up and can cause great pressure internally. It would explain many of the symptoms I’ve experienced over many years and the doctor even admitted I might have had it quite a while.
There’s good news though. It is operable and given my age and fitness there’s a 75% chance of a complete cure. It involves major surgery and it won’t be easy, I will also be given chemotherapy via a pump while still unconscious. This should kill any cells they can’t see or cut out. It just ensures everything is taken care of. All in all the operation takes 12 hours and obviously I’ll need months to recover but the big thing to remember here is I could end up fitter than I’ve been in years.
So my basic plan for now is to enjoy Christmas & New Year, get as fit as I can and then undergo surgery in January. Probably the middle to end of Jan. I don’t have a date yet.
It’s a relief to finally know what’s wrong with me and also that it’s treatable because I’ve been in limbo so long waiting. The wheel of fortune was spinning and I may not have won a caravan or a holiday in Tenerife (game show reference look it up younger readers), but I have won a chance to beat this and get fully fit again. That’s what I shall be doing in 2016. I considered calling this post “the wheel of misfortune” but I honestly don’t feel like that. It’s going to be a long road but I’ve never backed out of a fight in my life and I’m ready to hit this head on. The Christie is the absolute best place in the country to do this, they have an amazing record.
So now you’re all up to date. I hope to see many of you over the holiday season and I hope you all have a good time as I will be. Any more details I will let you know.
Ciao for now,
[ A version of this blog post was crossposted on Conservancy's blog. ]
I'm quite delighted with my career choice. As an undergraduate and even in graduate school, I still expected my career to extend my earlier careers in the software industry: a mixture of software developer and sysadmin. I'd probably be a DevOps person now, had I stuck with that career path.
Instead, I picked the charity route: which (not financially, but work-satisfaction-wise) is like winning a lottery. There are very few charities related to software freedom, and frankly, if (like me) you believe in universal software freedom and reject proprietary software entirely, there are two charities for you: the Free Software Foundation, where I used to work, and Software Freedom Conservancy, where I work now.
But software freedom is not merely an ideology for me. I believe the ideology matters because I see the lives of developers and users are better when they have software freedom. I first got a taste of this IRL when I attended the earliest Perl conferences in the late 1990s. My friend James and I stayed in dive motels and even slept in a rental car one night to be able to attend. There was excitement in the Perl community (my first Free Software community). I was exhilarated to meet in person the people I'd seen only as god-like hackers posting on perl5-porters. James was so excited he asked me to take a picture of him jumping as high as he could with his fist in the air in front of the main conference banner. At the time, I complained; I was mortified and felt like a tourist taking that picture. But looking back, I remember that James and I felt that same excitement and just were expressing it differently.
I channeled that thrill into finding a way that my day job would focus on software freedom. As an activist since my teenage years, I concentrated specifically on how I could preserve, protect and promote this valuable culture and ideology in a manner that would assure the rights of developers and users to improve and share the software they write and use.
I've enjoyed the work; I attend more great conferences than I ever
imagined I would, where now people occasionally walk up to me with the same
kind of fanboy reverence that I reserved for Larry Wall,
RMS and the heroes of my
Free Software generation. I like my work. I've been careful, however, to
avoid a sense of entitlement. Since I read it in 1991, I have never
forgotten RMS' point in the GNU Manifesto: “Most of us cannot manage to get any money for standing on the street and making faces. But we are not, as a result, condemned to spend our lives standing on the street making faces, and starving. We do something else.” It is a point he continues in his regular speeches:
“I [could] just … give up those principles and start … writing proprietary software. I looked for another alternative, and there was an obvious one. I could leave the software field and do something else. Now I had no other special noteworthy skills, but I'm sure I could have become a waiter. Not at a fancy restaurant; they wouldn’t hire me; but I could be a waiter somewhere. And many programmers, they say to me, ‘the people who hire programmers demand [that I write proprietary software] and if I don’t do [it], I’ll starve’. It’s literally the word they use. Well, as a waiter, you’re not going to starve.”
RMS' point is not merely to expose the attitude of “I have to program, even if it's proprietary, because that's what companies pay me to do”, but also to expose the sense of entitlement in assuming a fundamental right to do the work you want. This applies not just to software authorship (the work I originally trained for) but also to the political activism and non-profit organizational work that I do now.
I've spent most of my career at charities because I believe deeply that I should take actions that advance the public good, and because I have a strategic vision for the best methods to advance software freedom. My strategic goals to advance software freedom include two basic tenets: (a) provide structure for Free Software projects in a charitable home (so that developers can focus on writing software, not administration, and so that the projects aren't unduly influenced by for-profit corporations) and (b) uphold and defend Free Software licensing, such as copyleft, to ensure software freedom.
I don't, however, arrogantly believe that these two priorities are inherently right. Strategic plans work toward a larger goal, and pursuing success of a larger ideological mission requires open-mindedness regarding strategies. Nevertheless, any strategy, once decided, requires zealous pursuit. It's with this mindset that I teamed up with my colleague, Karen Sandler, to form Software Freedom Conservancy.
Conservancy, like most tiny charities, survives on the determination of its small management staff. Karen Sandler, Conservancy's Executive Director, and I have a unique professional collaboration. She and I share a commitment to promoting and defending moral principles in the context of software freedom, along with an unrelenting work ethic to match. I believe fundamentally that she and I have the skills, ability, and commitment to meet these two key strategic goals for software freedom.
Yet, I don't think we're entitled to do this work. And, herein there's another great feature of a charity. A charity not only serves the public good; the USA IRS also requires that a charity be funded primarily by donations from the public.
I like this feature for various reasons. Particularly, in the context of
the fundraiser that
Conservancy announced this week, I think about it in terms of seeking a mandate from the public. As Conservancy poises to begin its tenth year, Karen and I as its leaders stand at a crossroads. For financial reasons of the organization's budget, we've been thrust to test this question: does the public of Free Software users and developers actually want the work that we do?
While I'm nervous that perhaps the answer is
no, I'm nevertheless
not afraid to ask the question. So, we've asked. We asked all of you to
show us that you want our work to continue. We set two levels, matching
the two strategic goals I mentioned. (The second is harder and more
expensive to do than the first, so we've asked many more of you to support
us if you want it.)
It's become difficult in recent years to launch a non-profit fundraiser (fundraisers have existed for generations) and not think of the relatively recent advent of gofundme, Kickstarter, and the like. These new systems provide a (sadly, usually proprietary software) platform for people to ask the question: “Is my business idea and/or personal goal worth your money?”
While I'm dubious about those sites, I do believe in democracy
enough to build my career on a structure that requires an election (of
sorts). Karen and I don't need you to go to the polls and cast your
ballot, but we do ask that you consider whether what we do for a living at Conservancy is worth US$10 per month to you. If it is, I hope you'll “cast a vote” for Conservancy and become a Conservancy Supporter.
LibreOffice provides a shortcut to fill cells with random numbers in whatever range we need. To fill the cells with random numbers, follow the procedure below.
Open a LibreOffice Calc spreadsheet and select the range of cells in which random numbers have to be filled.
Now click on Edit -> Fill -> Random Number.
It will open a dialog with the following fields.
Cell range: the range of cells that has been selected to be filled with random numbers.
Distribution: if we want a specific distribution of random numbers we can select one here. If we want only integers we can select Uniform Integer.
Seed: all random number generators use a seed value to generate random numbers. If we want a specific seed we can specify it by selecting the check box and entering the seed value; otherwise we can leave the default.
Maximum and minimum: the range between which we want the random numbers to lie.
After entering the values we need, click on OK and the selected range will be filled with random numbers in the given range.
If we want the random numbers as decimals, note that by default the random number generator will generate numbers with 8 decimal places.
If we do not need such high precision, we can truncate the value to one or two decimal places by right clicking on the cells and selecting Format Cells.
Select the Numbers tab.
Select the category Number, choose the number of decimal places required, and click on OK.
John the Ripper (JtR) is a well known security utility used to crack passwords. In its usual use case JtR is used to brute force password hashes, which requires access to the user database to get the username and password hash.
In this post I look at using JtR to recover a partially remembered password. In my case I needed to create some custom password generation rules and, as I had no access to a password hash (the target was an encrypted file), I needed to use John simply to generate candidate passwords so I could use them in a script.
John has 4 modes:
- single - attempts to guess passwords based on the username and other GECOS information stored in the /etc/passwd and /etc/shadow files. JtR expects its target password file to contain the username and hash together, i.e. a concatenation of the passwd and shadow files; a utility called "unshadow" is provided to create such a file for you,
- wordlist - a dictionary of values is provided and JtR iterates over the word list hashing values and, optionally, applying transformation rules to the dictionary words to account for variations like uppercase or lowercase, character substitution and other common approaches to human-based password generation,
- incremental - this is where JtR will generate passwords systematically in an attempt to brute force guess the password. This is extremely slow and, unless the password is short, will probably never finish in an acceptable amount of time,
- external - this option allows the user to define their own password generation strategies using a subset of C
Initially I was hoping the word list mode would meet my needs. I planned on putting the partially remembered portion of the password in a word list file and then adding a custom rule to generate the guesses. Reading and absorbing the documentation around John is a slow process; there are few references and mostly you need to read the configuration file to figure out how the rules work. After several hours I concluded that word list and rules would not work for me.
External mode allows one to write rules for generating password guesses. There are four functions that can be implemented:
- init() - sets up your one-time initialisation
- filter() - determines if a candidate password should be used. Hashing can be expensive, so discarding candidate passwords you don't want to try speeds things up. This is mostly useful for incremental mode
- generate() - this function will generate a candidate password (see the sketch after this list)
- restore() - restores the run once it has been interrupted. You basically need to save enough global state to enable a restart
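To make this a little more concrete, here is a rough sketch of what an external mode section in john.conf can look like. The mode name, the hard-coded "secret" prefix and the two-digit suffix are purely illustrative assumptions, and the convention of setting word[0] to 0 to end generation is based on my reading of the external mode documentation:

[List.External:PartialPassword]
int suffix;

void init()
{
	suffix = 0;
}

void generate()
{
	if (suffix > 99) {
		/* no more candidates: an empty word ends the run */
		word[0] = 0;
	} else {
		/* the remembered part of the password */
		word[0] = 's'; word[1] = 'e'; word[2] = 'c';
		word[3] = 'r'; word[4] = 'e'; word[5] = 't';
		/* append a two-digit suffix: secret00 .. secret99 */
		word[6] = '0' + suffix / 10;
		word[7] = '0' + suffix % 10;
		word[8] = 0;
		suffix++;
	}
}

Candidates generated this way can be printed rather than cracked by combining the mode with the --stdout option, e.g. john --external=PartialPassword --stdout.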
The one that comes to mind is the ability to select the rules you wish to apply to a run. Online documentation claims something like
"john --wordlist=wordlist.txt --rules=MyRule --
In normal usage John expects you to have a database of encrypted passwords to run against.
Jono Bacon, Bryan Lunduke, Stuart Langridge and myself bring you Bad Voltage, in which we are curmudgeonly, we are ethical philosophers, and:
- 00:16:47 Review: the Blue Yeti USB microphone. Almost by coincidence, the whole Bad Voltage team have purchased the Yeti USB mic from Blue Microphones, and so we all review it together
- 00:27:20 The rise of self-driving cars brings up the question of algorithmic morality; how should the car be programmed in the event of an unavoidable accident? Protect the driver at all costs; reduce loss of life overall even if the owner gets the short end of that stick; what? This is a big decision that needs to be made: how do we think this should be handled?
- 00:39:50 The UK government have recently started making more noises about banning encryption from being used by ordinary people, to prevent terrorists from being able to communicate without security services reading it. It’s the Crypto Wars and the Clipper chip, all over again. Meanwhile, Apple have made a big point of how they work hard to protect their customers’ privacy by ensuring that iMessages are end-to-end encrypted and so forth. Clearly, these proposals are in opposition. The question is this: if Apple declared that these government proposals were incompatible with their customers’ privacy and so threatened to pull out of the UK market… who would blink first? And would Apple do this? And is it OK that they might have this level of power?
Listen to 1×54: The Trolley Problem
From the Bad Voltage site: Every two weeks Bad Voltage delivers an amusing take on technology, Open Source, politics, music, and anything else we think is interesting, as well as interviews and reviews. Do note that Bad Voltage is in no way related to LinuxQuestions.org, and unlike LQ it will be decidedly NSFW. That said, head over to the Bad Voltage website, take a listen and let us know what you think.
This is just a really quick follow up to let you know that even though it’s been a month since my last post I don’t really have anything new to report. I’m still waiting to see a doctor at The Christie in Manchester and get some kind of real diagnosis or prognosis. Hopefully then I can find out where all this goes from here and make some plans.
I did finally get called to Manchester this week for CT scans and blood tests. I’m not holding my breath on getting the results any time soon but at least it’s something.
I do feel a little better and I’m glad to say the surgical wound on my side is almost healed but it’s still far from 100%. I have to get it packed and redressed every 2 or 3 days. I’m exhausted most of the time but not really in any pain, so things could be a lot worse.
They say “no news is good news” but in this case it’s just frustrating. I can only hope it doesn’t take too much longer, and as soon as there’s anything significant to share with you I will do.
Bye for now,
OnePlus is a mobile phone manufacturer famous for selling the OnePlus One with pre-installed CyanogenMod, instead of a bloated custom Android as most manufacturers do. Their newest model OnePlus X was released on November 5th 2015 and after a few days of use, it seems to live up to its promises.
Most Chinese brands suffer from a lack of finish and low quality. OnePlus is something completely different. Everything about OnePlus seems different; it is like a completely new generation. The OnePlus website is exciting. Their marketing is based almost entirely on using social media – and in a good way! The device we ordered from China’s Silicon Valley, Shenzhen, arrived in just a few business days. The packaging had a premium feel to it and what was inside matched the expectations set by the hype on their website.
Excellent craftsmanship, with shiny glass-like panels in front and back connected by a metallic bezel. A vivid display and a responsive, fast Android 5.1.1 experience inside. Dual SIM card capability, a great camera and 3 GB of RAM are just some of the high-end technical features. There is no need to repeat those details, as they are already well presented on the original site. All we need to say is that those promises are true and the quality is unexpectedly good. It definitely competes with the other high-end mobile phones in the 500-700 euro range. And at what price is the OnePlus X available? Only 269 € in Europe, including taxes and duties.
The only drawbacks we noted are related to problems in Android itself. Everything OnePlus has added and customized is done with good judgement and is a step forward.
For Finns like us it was also delightful to notice that their website is available in Finnish, and that the language is actually flawless and not an amateur translation. Naturally the device's operating system, Android, also has Finnish as an option.
Have any of you already experienced the OnePlus X? Feel free to share your thoughts in the comments!
We can use the AVERAGE formula to find the average marks scored by the class, as shown below.
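For example, if the marks were in cells B2 through B11, the formula in the result cell would look something like this (the exact cell range is just an assumption for illustration):

=AVERAGE(B2:B11)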
Now if we want to copy the averages from this sheet to another sheet, a simple copy and paste will not work. A simple copy and paste will result in the pasted values looking like the ones below.
This is because when we do a simple copy and paste we are copying the formula, not the value that has been derived from it. To paste the value alone, ignoring the actual formula, in the new sheet click on Edit -> Paste Special.
A menu as shown below will be displayed.
From this menu select whatever needs to be pasted. Uncheck the Paste all option, select Numbers and Text, and remove the check next to the Formulas option. Now click OK and we will see that only the numbers of the average values get pasted, as shown below.
How to fix black screen after login in Ubuntu 14.04?
(Instructions in Finnish at the end.)
A lot of Linux-support customers have contacted us recently asking us to fix their Ubuntu laptops and workstations that suddenly stopped working. The symptom is that after entering the username and password on the login screen, they are unable to get in. Instead they see a flickering screen that then goes all black for a while, and then returns to the login screen.
This problem is caused by an update that didn’t install cleanly and left the graphical desktop environment in a broken state.
The fix is to open a text console by pressing Ctrl+Alt+F1 and then logging in in text mode. Once in, issue these commands to complete the upgrade successfully:
sudo dpkg --configure -a
sudo apt-get update
sudo apt-get upgrade -y
Finish with sudo reboot; after Ubuntu restarts, you can log in again normally.
Fixing the black screen shown after login in Ubuntu 14.04
Several Linux-tuki.fi customers have contacted us in recent days to order support for repairing an Ubuntu laptop or workstation that suddenly stopped working correctly. The symptom is that at login, after entering the username and password, the screen flickers and goes black for a moment, after which the display returns to the login screen.
The problem is caused by an Ubuntu update that has failed and left the graphical desktop environment in a broken state.
You can do the fix yourself by opening a text console with Ctrl+Alt+F1 and logging in in text mode. After that, the upgrade can be completed successfully by running:
sudo dpkg --configure -a
sudo apt-get update
sudo apt-get upgrade -y
Finally, run sudo reboot. After Ubuntu restarts, logging in and using the system should work normally again.