LinuxPlanet Blogs

By Linux Geeks, For Linux Geeks.

Archive for the ‘technology’ Category

How to create good OpenPGP keys

without comments

The OpenPGP standard and the most popular open source program that implements it, GnuPG, have been well tested and widely deployed over the last decades. At least for the time being they are considered to be cryptographically unbroken tools for encrypting and verifying messages and other data.


Due to the lack of easy-to-use tools and integrated user interfaces, large-scale use of OpenPGP, for example in encrypting email, hasn’t happened. There are however some interesting new efforts like Enigmail, Mailpile, Mailvelope and End-to-End that might change the game. There are also promising new tools in the area of key management (establishing trust between parties) like GNOME Keysign.

Despite PGP’s failure to solve email encryption globally, OpenPGP has been very successful in other areas. For example, it is the de facto tool for signing digital data. If you download a software package online and want to verify that the package on your computer is actually the same package the original author released (and not a tampered one), you can use the author’s OpenPGP signature to verify its authenticity. Also, even though it is not easy enough for day-to-day usage, if one person wants to send an encrypted message to another, OpenPGP is still the only solution for doing it. Alternative messaging channels like Hangouts or Telegram are simply not widely used enough, so email prevails – and for email, OpenPGP is the best encryption tool.
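As a sketch of that verification step (the file names are placeholders; a real download would come with a .sig or .asc file published by the author):

```shell
# stand-in for a real downloaded package; in practice you would also
# download package.tar.gz.sig published by the author
echo "example payload" > package.tar.gz

if command -v gpg >/dev/null 2>&1; then
  # exit status 0 means the signature is valid and made by the given key;
  # this demo has no real signature, so it only shows the command shape
  gpg --verify package.tar.gz.sig package.tar.gz 2>/dev/null \
    || echo "no valid signature found (expected in this demo)"
else
  echo "gpg is not installed"
fi
```

For the check to mean anything, you must first have imported the author’s public key and convinced yourself it really belongs to the author.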

How to install GnuPG?

Installing GnuPG is easy. Just use the software manager of your Linux distro to install it, or download the installation package for Mac OS X.

There are two generations of GnuPG, the 2.x series and the 1.4.x series. For compatibility reasons it is still advisable to use the 1.4.x versions.

How to create keys?

Without your own key you can only send encrypted data or verify the signatures of other users. In order to receive encrypted data or to sign data yourself, you need to create a key pair for yourself. The key pair consists of two keys:

  • a secret key you shall protect and which is the only key that can be used to decrypt data sent to you or to make signatures
  • a public key which you publish and which others use to encrypt data for you or use to verify your signatures

Before you generate your keys, you need to edit your gpg configuration file to make sure the strongest algorithms are used instead of the default options in GnuPG. If you are using a very recent version of GnuPG it might already have better defaults.

For brevity, we only provide the command line instructions here. Edit the config file by running for example nano ~/.gnupg/gpg.conf and adding the algorithm settings:

personal-digest-preferences SHA512
cert-digest-algo SHA512
default-preference-list SHA512 SHA384 SHA256 SHA224 AES256 AES192 AES CAST5 ZLIB BZIP2 ZIP Uncompressed

If the file does not exist, just run gpg and press Ctrl-C to cancel. This will create the configuration directory and file automatically.

Once done with that preparation, actually generate the key by running gpg --gen-key

For key type select “(1) RSA and RSA (default)“. RSA is the preferred algorithm nowadays and this option also automatically creates a subkey for encryption, something that might be useful later but which you don’t immediately need to learn about.

As the key size, enter “4096”, as 2048-bit keys are not considered strong enough anymore.

A good value for expiration is 3 years, so enter “3y” when asked how long the key should be valid. Don’t worry – you don’t have to create a new key. You can update your key’s expiry date later, even after it has expired. Having keys that never expire is bad practice; old never-expiring keys might come back to haunt you some day.

For the name and email choose your real name and real email. OpenPGP is not an anonymity tool, but a tool to encrypt to and verify signatures of other users. Other people will be evaluating if a key is really yours, so having a false name would be confusing.

When GnuPG asks for a comment, don’t enter anything. Comments are unnecessary and sometimes simply confusing, so avoid making one.

The last step is to define a passphrase. Follow the guidelines of our password best practices article and choose a complex yet easy to remember password, and make sure you never forget it.

$ gpg --gen-key 
gpg (GnuPG) 1.4.10; Copyright (C) 2008 Free Software Foundation, Inc.

Please select what kind of key you want:
   (1) RSA and RSA (default)
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
Your selection? 1
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 4096
Requested keysize is 4096 bits
Please specify how long the key should be valid.
         0 = key does not expire
      &lt;n&gt;  = key expires in n days
      &lt;n&gt;w = key expires in n weeks
      &lt;n&gt;m = key expires in n months
      &lt;n&gt;y = key expires in n years
Key is valid for? (0) 3y
Key expires at Mon 05 Mar 2018 02:39:23 PM EET
Is this correct? (y/N) y

You need a user ID to identify your key; the software constructs the user ID
from the Real Name, Comment and Email Address in this form:
    "Heinrich Heine (Der Dichter) <>"

Real name: Lisa Simpson
Email address:
You selected this USER-ID:
    "Lisa Simpson <>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
You need a Passphrase to protect your secret key.

We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.

Not enough random bytes available.  Please do some other work to give
the OS a chance to collect more entropy! (Need 284 more bytes)

gpg: key 3E44A531 marked as ultimately trusted
public and secret key created and signed.

gpg: checking the trustdb
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: next trustdb check due at 2018-03-05
pub   4096R/3E44A531 2015-03-06 [expires: 2018-03-05]
      Key fingerprint = 4C63 2BAB 4562 5E09 392F  DAA4 C6E4 158A 3E44 A531
uid                  Lisa Simpson <>
sub   4096R/75BB2DC6 2015-03-06 [expires: 2018-03-05]


At this stage you are done and can start using your new key. For the different usages of OpenPGP you need to consult other documentation or install software that makes it easy. All software that uses OpenPGP will automatically detect the ~/.gnupg directory in your home folder and use the keys from there.

Store securely

Make sure your home directory is encrypted, or perhaps even your whole hard drive. On Linux this is easy with eCryptfs or LUKS/dm-crypt. If your hard drive is stolen or your keys leak in some other way, the thief can decrypt all your data and impersonate you by signing things digitally with your key.

Also, if you don’t make regular backups of your home directory, you really should start now so that you don’t lose your key or any other data.
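A minimal backup sketch, assuming the keyring lives in the default ~/.gnupg (the mkdir is only there so the sketch is safe to run even on a machine without keys yet):

```shell
GNUPG_DIR="${GNUPGHOME:-$HOME/.gnupg}"
mkdir -p "$GNUPG_DIR"       # no-op if the keyring already exists
umask 077                   # keep the archive readable by you only
tar czf gnupg-backup.tar.gz -C "$(dirname "$GNUPG_DIR")" "$(basename "$GNUPG_DIR")"
# list the archive contents to confirm the backup is readable
tar tzf gnupg-backup.tar.gz >/dev/null && echo "backup archive OK"
```

Store the archive somewhere that is itself encrypted – it contains your secret key.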

Additional identities (emails)

If you want to add more email addresses to the key, run gpg --edit-key 12345678 and at the prompt enter the command adduid, which starts the dialog for adding another name and email to your key.

More guides

Encryption, and in particular secure unbreakable encryption is really hard. Good tools can hide away the complexity, but unfortunately modern tools and operating systems don’t have these features fully integrated yet. Users need to learn some of the technical stuff to be able to use different tools themselves.

Because OpenPGP is difficult to use, the net is full of different guides. Unfortunately most of them are outdated or contain errors. Here are a few guides we can recommend for further reading:

Written by Otto Kekäläinen

March 6th, 2015 at 8:38 am

A guide to modern WordPress deployment (part 2)



Recently we published part one in this series on our brand new WordPress deployment platform in which we covered some of the server side technologies that constitute our next-gen WordPress platform.

In part 2 we’ll be briefly covering the toolkit we put together to easily manage the Linux containers that hold individual installations of WordPress.

4. WP-CLI, WordPress on the Command Line

We use the WordPress command line interface to automate everything you would usually have to do in the wp-admin interface. Using WP-CLI removes the inconvenience of logging into a client’s site and clicking around in the WP-admin to perform basic actions like changing option values or adding users.

We’ve been using WP-CLI as part of our install, backup and update processes for quite some time now. Quick, simple administration actions, especially when done in bulk, are where the command line interface for WordPress really reveals its power.

Check out the famous 5-minute install compressed into 3 easy lines with the WP-CLI:

wp core download
wp core config --dbname=wordpress --dbuser=dbuser --dbpass=dbpasswd
wp core install --url= --title="An Orange Website" --admin_user=anttiviljami --admin_password=supersecret --admin_email=admin@example.com
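A bulk action across several sites could be sketched like this (the site paths, option values and user details are all made up; WP-CLI needs to be on the PATH):

```shell
if command -v wp >/dev/null 2>&1; then
  for site in /var/www/site1 /var/www/site2; do
    # change an option and add a user on every site without touching wp-admin
    wp --path="$site" option update blog_public 1
    wp --path="$site" user create editor editor@example.com --role=editor
  done
else
  echo "WP-CLI not installed; commands shown for illustration only"
fi
```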

5. Git, Modern Version Control for Everything

We love Git and use it for pretty much everything we do! For WordPress, we rely on Git for deployment and development in virtually all our own projects (including this one!).

Our system is built for developers who use Git for deployment. We provide a Bedrock-like environment for an easy WordPress deployment experience and even offer the ability to easily set up identical environments for development and staging.

The main difference between Bedrock and our layout is the naming scheme. We found it better to provide a familiar directory structure for the majority of our clients, who may not be familiar with Bedrock. So instead of the /app and /wp naming scheme, we went with /wp-content and /wordpress to provide a non-confusing separation between the WP core and the application.

Bedrock directory structure:

└── web
    ├── app
    │   ├── mu-plugins
    │   ├── plugins
    │   └── themes
    ├── wp-config.php
    ├── index.php
    └── wp

Seravo WordPress layout:

└── htdocs
    ├── wp-content
    │   ├── mu-plugins
    │   ├── plugins
    │   └── themes
    ├── wp-config.php
    ├── index.php
    └── wordpress

Our users can easily jump straight into development regardless of whether they want to use modern deployment techniques with dependency management and Git version control, or the straight up old-fashioned way of copying and editing files (which still seems to be the predominant way to do things with WordPress).

6. Composer, Easy Package Management for PHP

As mentioned earlier, our platform is built for Git and the modern WordPress development stack. This includes the use of dependency management with Composer – the package manager for PHP applications.

We treat the WordPress core, language packs, plugins, themes and their dependencies just like any other component in a modern web application. By utilising Composer as the package manager for WordPress, keeping your dependencies up to date and installed becomes just a matter of having the composer.json file included in your repositories. This way you don’t have to include any code from third party plugins or themes in your own repositories.
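As a sketch of what such a composer.json could contain (the plugin name and version constraint are just examples; WordPress plugins and themes are typically pulled from the wpackagist.org mirror):

```json
{
    "repositories": [
        { "type": "composer", "url": "https://wpackagist.org" }
    ],
    "require": {
        "wpackagist-plugin/akismet": "^3.0"
    }
}
```

Running composer install then fetches the allowed versions, and composer update moves to the newest versions the constraints permit.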

With Composer, you also have the ability to choose whether to always use the most recent version of a given plugin or a theme, or stay with a version that’s known to work with your site. This can be extremely useful with large WordPress installations that depend on lots of different plugins and dependencies that may sometimes have compatibility issues between versions.

7. Extra: PageSpeed for Nginx

Now, PageSpeed really doesn’t have much to do with managing WordPress or Linux containers. Rather, it’s a cutting-edge post-processor and cache developed and used by Google that’s free and open source! Since we hadn’t yet officially deployed it on our platform when we published our last article, we’re including it here as an extra.

The PageSpeed module for Nginx takes care of a large set of essential website optimisations automagically. It optimises entire webpages according to best practices by analysing your application’s output. Really useful things like asset minification, concatenation and optimisation are handled by the PageSpeed module, so visitors get the best possible experience on our users’ websites.

Here are just some of the things PageSpeed will automatically handle for you:

  • Javascript and CSS minification
  • Image optimisation
  • Combining Javascript and CSS
  • Inlining small CSS
  • Lazy loading images
  • Flattening CSS @imports
  • Deferring Javascript
  • Moving stylesheets to the head
  • Trimming URLs
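In the Nginx configuration, enabling this comes down to a few pagespeed directives; a sketch (the cache path and the filter selection are examples, not our exact setup):

```nginx
pagespeed on;
# on-disk cache for the optimised resources
pagespeed FileCachePath /var/cache/ngx_pagespeed;
# enable a couple of filters beyond the conservative defaults
pagespeed EnableFilters lazyload_images,prioritize_critical_css;
```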

We’re really excited about introducing the power of PageSpeed to our client sites and will be posting more about the benefits of using the Nginx PageSpeed module with WordPress in the near future. The results so far have been simply amazing.

More information

More information for Finnish-speaking readers available at

Please feel free to ask us about our WordPress platform via email at or in the comment section below.

Here’s how to patch Ubuntu 8.04 or anything where you have to build bash from source


UPDATED: I have updated the post to include the post from gb3 as well as additional patches and some tests

Just a quick post to help those who might be running older/unsupported distributions of Linux, mainly Ubuntu 8.04, who need to patch their version of bash due to the recent exploit here:

I found this post and can confirm it works:

Here are the steps (make a backup of /bin/bash just in case):

#assume that your sources are in /src
cd /src
#download all patches
for i in $(seq -f "%03g" 1 28); do wget$i; done
#unpack the bash sources (assumes bash-4.3.tar.gz has already been downloaded to /src)
tar zxvf bash-4.3.tar.gz
cd bash-4.3
#apply all patches
for i in $(seq -f "%03g" 1 28);do patch -p0 < ../bash43-$i; done
#build and install
./configure --prefix=/ && make && make install
cd ../../
rm -r src

To test for exploits CVE-2014-6271, CVE-2014-7169, CVE-2014-7186, and CVE-2014-7187 I have found the following information at this link

To check for the CVE-2014-6271 vulnerability

env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

it should NOT echo back the word vulnerable.

To check for the CVE-2014-7169 vulnerability
(warning: if yours fails it will make or overwrite a file called /tmp/echo that you can delete after, and need to delete before testing again )

cd /tmp; env X='() { (a)=>\' bash -c "echo date"; cat echo

it should print the word date and then complain with a message like cat: echo: No such file or directory. If it instead tells you the current datetime, your system is vulnerable.

To check for CVE-2014-7186

bash -c 'true &lt;&lt;EOF &lt;&lt;EOF &lt;&lt;EOF &lt;&lt;EOF &lt;&lt;EOF &lt;&lt;EOF &lt;&lt;EOF &lt;&lt;EOF &lt;&lt;EOF &lt;&lt;EOF &lt;&lt;EOF &lt;&lt;EOF &lt;&lt;EOF &lt;&lt;EOF' || echo "CVE-2014-7186 vulnerable, redir_stack"

it should NOT echo back the text CVE-2014-7186 vulnerable, redir_stack.

To check for CVE-2014-7187

(for x in {1..200} ; do echo "for x$x in ; do :"; done; for x in {1..200} ; do echo done ; done) | bash || echo "CVE-2014-7187 vulnerable, word_lineno"

it should NOT echo back the text CVE-2014-7187 vulnerable, word_lineno.

Written by leftyfb

September 25th, 2014 at 11:03 am

Posted in Linux,technology,Ubuntu

A guide to modern WordPress deployment (part 1)



Seravo & WordPress

As a Linux and open source specialist company, Seravo provides services to many companies that run Linux on their web servers. Not surprisingly, in many of these cases the top-level software running on the server is, of course, the world’s most popular CMS, WordPress. We love it!

In the process of administering and developing a number of WordPress sites for quite some time now, we’ve discovered an arsenal of useful ways to optimise and automate WordPress, some of which we’ve published right here on our blog:

Throughout 2014, we’ve expanded our WordPress expertise and in the process, combined our practices into a full WordPress deployment platform. We’re confident our solution is the next step forward from traditional WordPress hosting services.

In the spirit of openness in the WordPress community, we’re happy to present the details of our deployment platform and which technologies lie under it in this series of blog posts.

1. LXC – A full OS for every WordPress installation

As one of the starting points to our platform, we wanted every individual WordPress installation to have its own full Linux environment. Instead of going the traditional route to virtualisation with VMs seen in most generic hosting solutions, we chose a more recent technology called Linux containers or LXC for short.

Each WordPress instance resides within its own, robust Linux container which provides a lightweight, flexible way to sandbox applications. By using LXC as a means of virtualisation, we’ve greatly reduced the overhead required for hosting websites in a clustered environment, thus increasing overall server performance.

As each WordPress container is also a completely standalone system in itself, it has been extremely easy to clone and transfer instances between hosts and even other WordPress platforms.

2. Nginx, HHVM and MariaDB for amazing performance

Instead of a more traditional LAMP (Linux, Apache, MySQL and PHP) environment, we utilised the newest technologies for running WordPress:

  • Nginx, the fastest and most flexible HTTP server available
  • HHVM, a new and improved PHP engine developed and used by Facebook
  • MariaDB, a faster drop-in-replacement for MySQL server

The combination of these technologies enables us to offer WordPress performance unheard of in LAMP environments. Additionally, all of these components are so configurable that fine-tuning their performance could be a blog post all on its own.
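As a sketch of how the pieces connect (the socket path is an assumption; HHVM can also listen on a TCP port), Nginx hands PHP requests to HHVM over FastCGI:

```nginx
location ~ \.php$ {
    # forward PHP execution to the HHVM FastCGI daemon
    fastcgi_pass unix:/var/run/hhvm/hhvm.sock;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}
```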

3. Secure administration with TLS on SPDY/3.0

The drawbacks of building an HTTPS-secured WordPress site have always been the inconvenience of acquiring an SSL certificate for each domain used and the increased server load from the additional computation required by secure protocols.

We didn’t want our users to throw away security for convenience, so we went in search of a solution.

First, we enabled the use of an open networking protocol called SPDY, which is the basis for the upcoming HTTP/2 protocol. SPDY/3 is already supported by all major browsers and offers a significant increase in server side performance in comparison to standard HTTPS. This allows us to effortlessly serve large amounts of secure HTTPS traffic with almost no performance penalty.
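In Nginx this is essentially a one-word change on the listen directive (the certificate paths are placeholders):

```nginx
server {
    # "spdy" on the listen line enables SPDY/3 alongside normal HTTPS
    listen 443 ssl spdy;
    ssl_certificate     /etc/nginx/ssl/example.crt;
    ssl_certificate_key /etc/nginx/ssl/example.key;
}
```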

To avoid having to acquire separate SSL certificates for all our separate WordPress installations, we developed HTTPS Domain Alias – a WordPress plugin that allows the use of a separate domain name for wp-admin. All our clients now get their own subdomain for WordPress administration at *, which can be securely accessed over HTTPS for a secure WordPress admin panel.

Keep reading

Read part 2 of this series, in which we discuss the management aspects of multiple WordPress installations and useful tools for general WordPress development and security.

More information for Finnish-speaking readers available at

Written by antti

September 22nd, 2014 at 6:00 am

Turn any computer into a wireless access point with Hostapd


Do you want to make a computer function as a WLAN base station, so that other computers can use it as their wifi access point? This can easily be done using the open source software Hostapd and compatible wifi hardware.

This is useful if the computer acts as a firewall or as a server in the local network and you want to avoid adding new appliances that all require their own space and cables in your already crowded server closet. Hostapd gives you full control of your WLAN access point and also enhances security: the system is completely in your control, every line of code can be audited, the source of all software can be verified, and all software can be updated easily. It is quite common that active network devices like wifi access points start out as fairly secure small appliances with Linux inside, but over time their vendors stop providing timely security updates, and local administrators don’t bother installing them via some clumsy firmware-upgrade mechanism. With a proper Linux server, admins can simply SSH in and run upgrades using the familiar and trusted upgrade channels that Linux server distributions provide.

The first step in creating wireless base station with Hostapd is to make sure the WLAN hardware supports running in access point mode. Examples are listed in the hostapd documentation. A good place to shop for WLAN cards with excellent Linux drivers is and in their product descriptions the WLAN card supported operation modes are nicely listed.

The next step is to install Hostapd, written by Jouni Malinen and others. It is very widely used software and most likely available in your Linux distribution by default. Many of the WLAN router appliances on the market are actually small Linux computers running hostapd inside, so running hostapd on a proper Linux computer gives you at least all the features available in wifi routers, including advanced authentication and logging.

Our example commands are for Ubuntu 14.04. You need to be able to install the packages hostapd and dnsmasq. Dnsmasq is a small DNS/DHCP server which we’ll use in this setup. To start, simply run:

sudo apt-get install hostapd dnsmasq

After that you need to create and edit the configuration file:

zcat /usr/share/doc/hostapd/examples/hostapd.conf.gz | sudo tee -a /etc/hostapd/hostapd.conf

The configuration file /etc/hostapd/hostapd.conf is filled with configuration examples and documentation in comments. The relevant parts for a simple WPA2-protected 802.11g network with the SSID ‘Example-WLAN’ and password ‘PASS’ are:
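A minimal sketch of those uncommented lines (the driver and channel values are assumptions; adjust them for your hardware and radio environment):

```
interface=wlan0
# most modern Linux drivers use the nl80211 interface
driver=nl80211
ssid=Example-WLAN
hw_mode=g
# pick a channel that is free in your environment
channel=6
auth_algs=1
# WPA2 only, with a pre-shared key
wpa=2
wpa_passphrase=PASS
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
```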


Next you need to edit the network interfaces configuration to force the WLAN card to only run in the access point mode. Assuming that the access point network will use the address space 192.168.8.* the file /etc/network/interfaces should look something like this:

# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback

auto wlan0
iface wlan0 inet static
    # example static address in the 192.168.8.* space used by this guide
    address 192.168.8.1
    netmask 255.255.255.0
    hostapd /etc/hostapd/hostapd.conf

Then we need to have a DNS relay and DHCP server on our wlan0 interface so the clients actually get a working Internet connection, and this can be accomplished by configuring dnsmasq. Like hostapd it also has a very verbose configuration file /etc/dnsmasq.conf, but the relevant parts look like this:
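A minimal sketch of those lines (the DHCP range is an assumption, chosen to match the 192.168.8.* space above):

```
# serve DNS and DHCP only on the access point interface
interface=wlan0
# hand out addresses in the 192.168.8.* space; the router itself is .1
dhcp-range=192.168.8.10,192.168.8.100,12h
```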


Next we need to make sure that the Linux kernel forwards traffic from our wireless network onto other destination networks. For that you need to edit the file /etc/sysctl.conf and make sure it has lines like this:
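The essential line is the one enabling IPv4 forwarding:

```
# allow the kernel to route packets between interfaces
net.ipv4.ip_forward=1
```

Run sudo sysctl -p afterwards (or reboot) so the setting takes effect.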


We need to activate NAT in the built-in firewall of Linux to make sure the traffic going out uses the external address as its source address and thus can be routed back. It can be done for example by appending the following line to the file /etc/rc.local:

iptables -t nat -A POSTROUTING -s ! -d  -j MASQUERADE

Some WLAN card hardware might have a virtual on/off switch. If you have such hardware you might need to also run rfkill to enable the hardware using a command like rfkill unblock 0.

If the same computer also runs Network Manager (as Ubuntu does by default), you need to edit its settings so that it won’t interfere with the new wifi access point. Make sure the file /etc/NetworkManager/NetworkManager.conf looks like this:
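A sketch of the relevant sections on a stock Ubuntu install (the key point is managed=false, which tells Network Manager to leave ifupdown-configured interfaces alone):

```ini
[main]
plugins=ifupdown,keyfile

[ifupdown]
# let ifupdown, not Network Manager, manage the interfaces
# listed in /etc/network/interfaces
managed=false
```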


Now all configuration should be done. To be sure all changes take effect, finish by rebooting the computer.

If everything is working, a new WLAN network should be detected by other devices.
On the WLAN-server you’ll see similar output from these commands:

$ iw wlan0 info
Interface wlan0
        ifindex 3
        type AP
        wiphy 0

$ iwconfig 
wlan0     IEEE 802.11bgn  Mode:Master  Tx-Power=20 dBm   
          Retry  long limit:7   RTS thr:off   Fragment thr:off
          Power Management:off

$ ifconfig
wlan0     Link encap:Ethernet  HWaddr f4:ec:38:de:c8:d2  
          inet addr:  Bcast:  Mask:
          inet6 addr: fe80::f6ec:38ff:fede:c8d2/64 Scope:Link
          RX packets:5463040 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8166528 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:861148382 (861.1 MB)  TX bytes:9489973056 (9.4 GB)

Written by Otto Kekäläinen

August 27th, 2014 at 9:25 am

Optimal Sailfish SDK workflow with QML auto-reloading


SailfishOS IDE open. Just press Ctrl+S to save and see app reloading!


Sailfish is the Linux based operating system used in Jolla phones. Those who develop apps for Jolla use the Sailfish SDK (software development kit), which is basically a customized version of Qt Creator. Sailfish OS apps are written using the Qt libraries and typically in the C++ programming language. The user interfaces of Sailfish apps are however written in a declarative language called QML. The syntax of QML is a custom markup language and includes a subset of CSS and JavaScript to define style and actions. QML files are not compiled but stay as plain text files when distributed with the app binaries and are interpreted at run-time.

While SailfishOS IDE (Qt Creator) is probably pretty good for C++ programming with the Qt libraries, and the Sailfish flavour comes nicely bundled with complete Sailfish OS instances as virtual machines (one for building the binaries and one for emulating running them), the overall workflow is not optimal from a QML development point of view. Each time developers press the Play button to launch their app, Qt Creator builds the app from scratch, packages it, deploys it on the emulator (or a real device if set up to do so) and only then actually runs it. After making changes to the source code, the developer needs to remember to press Stop and then Play to build, deploy and start the app again. Even on a super fast machine this cycle takes at least 10 seconds.

It would be a much better workflow if relaunching the app after QML source code changes took only one second or less. Using Entr, it is possible.

Enter Entr

Entr is a multi-platform app which uses operating system facilities to watch for file changes and to run a command the instant a watched file is changed. To install Entr on a Sailfish OS emulator or device, ssh to the emulator or device, add the community repository chum (note that another chum repository also exists, but it is empty) and install the package entr with:

ssu ar chum
pkcon refresh
pkcon install entr

After this, change to the directory where your app and its QML files reside and run entr:

cd /usr/share/harbour-seravo-news/qml/
find . -name '*.qml' | entr -r /usr/bin/harbour-seravo-news

The find command makes sure all QML files in the current directory or any subdirectory are watched. Running entr with the -r parameter makes it kill the program before running it again. The name of our app in this example is seravo-news (available in the Jolla store if you are interested).

With this, the app will automatically reload whenever any of the QML files change. To edit those files from your desktop, mount the app directory on the emulator (or device) to your local system using SSH:

mkdir mountpoint
sshfs mountpoint/

Then finally open Qt Creator, point it to the files in the mountpoint directory and start editing. Every time you have edited the QML files and want to see what the result looks like, simply press Ctrl+S to save and watch the magic! It’s even easier than the F5-to-reload cycle web developers are used to, because on the emulator (or device) there is nothing you need to do – just watch while the app auto-restarts.

Remember to copy, or directly git commit, your files from the mountpoint directory when you have finished writing the QML files.

Entr has been packaged for SailfishOS by Seravo staff. Subscribe to our blog to get notified when we post about how to package and build RPM packages using the Open Build System and submit them for inclusion in SailfishOS Chum repository.

Written by Otto Kekäläinen

April 15th, 2014 at 2:37 am

Open source for office workers


Open source software is great, and it’s not only great for developers who can read and use the source directly. Open source is a philosophy. Open source is to technology what democracy is to society: it isn’t magically superior right away, but it enables a process which over time leads to the best results – or at least avoids the worst. A totalitarian regime might be efficient and benevolent, but there is a big risk it will become corrupt and go bad. And when a totalitarian regime goes bad, it can be really, really ugly.

Because of this philosophy, even regular office workers should strive to maximise their use of open source software. To help ordinary non-technical people, Seravo has contributed to the VALO-CD project, which in 2008–2013 created a collection of the best free and open source software for Windows, available in both Finnish and English. The CD (with contents also suitable for a USB stick) and related materials are still available for download.

We have also participated in promoting open standards. Most recently we helped the Free Software Foundation Europe publish a press release in Finland regarding Document Freedom Day. The theme of our latest Seravo-salad was also the OpenDocument Format. Open standards are essential in making sure users can access their own data and open their files in different programs. Open standards are also about programs being able to communicate with each other directly, using publicly defined protocols and interfaces.

Information technology is rather challenging, and understanding abstract principles like open source and open standards does not happen in one go. Seravo is proud to support the oldest open source competence center in Europe, the Finnish Center for Open Systems and Solutions COSS ry which has promoted open technologies in Finland since 2003.

When it comes down to details, training is needed. This and last year we have cooperated with the Visio educational centre in Helsinki to provide courses on how to utilize open source software in non-profit organizations.

Learn more

We have recently published the following presentations in Finnish so people can learn more by themselves:

Written by Otto Kekäläinen

March 31st, 2014 at 4:56 am

How’s (battery) life with Jolla?


Some years ago Nokia conducted a large survey among its customers on what people most liked about Nokia phones. One of the top features turned out to be their unrivaled battery life. Despite the hype around screen resolutions, processor performance and software versions, one of the most important features for a mobile device is simply how long until you have to charge it again.

Jolla phone uptime shows the device has continuously been turned on for 8 days and 13 hours

Back in 2012 we wrote about how to make an Android phone last for a week without recharging. On a mobile phone, the single most significant power hog is the display. With the display turned off, the biggest energy hogs are the wireless radios. The Android phone in our example lasted an entire week after being locked to 2G-only mode, thus using only the most basic GSM network connection with all other forms of connectivity disabled.

The Nokia Maemo series devices and the Meego-based N9 smartphone already sported a feature where, if the device was not in active use, it would automatically downgrade or disable the network connections. When the user then opened an application that required network access, the network was re-enabled automatically without extra user intervention. This feature is also present in Jolla phones, and it is the reason why Jolla users every now and then see the "Connecting" notification: the connections are disabled but are automatically brought back up upon a request for network access.

We tested this feature by having all networks (3G, WLAN, Bluetooth, GPS) enabled in the Jolla phone settings and by having e-mail updates active with instant message presence turned on, but with no further active usage of the device. The results showed that the Jolla phone battery lasted for over 8 days! The screenshot attached is from SailTime by Mikko Ahlroth, which visualises the output of the uptime Linux command.
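The uptime figure itself is easy to reproduce on any Linux device, including the Jolla: the kernel exposes the seconds since boot as the first field of /proc/uptime, which tools like SailTime merely format. A minimal shell sketch:

```shell
# Read seconds since boot from the kernel (first field of /proc/uptime)
read -r up _ < /proc/uptime
secs=${up%.*}                      # drop the fractional part
days=$(( secs / 86400 ))
hours=$(( (secs % 86400) / 3600 ))
echo "up ${days} days, ${hours} hours"
```

Run in FingerTerm on a freshly charged Jolla, the goal is for this to print "up 8 days" or more before the battery gives out.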

Keeps on running…

But wait, that was not all! The unique hardware feature of the Jolla phone is, of course, The Other Half (abbreviated TOH), an exchangeable back side of the device. One of the connectors of the TOH is an I2C connection, which can transfer both data and power. This makes it possible to build TOHs that supplement the main battery of the device. In fact, the main battery is also located on the back side, so it could be replaced entirely with a TOH that connects directly to the connectors the original battery would use.

First examples of this have already emerged. Olli-Pekka Heinsuo has created two battery-supplementing Other Halves: the Power Half, which holds an extra battery for increased capacity, and the Solar Half, which hosts a solar panel that directly charges the device. Olli-Pekka is attending the Seravo-sponsored Jolla and Sailfish Hack Day next Saturday. If you wish to attend, please register for the event swiftly, as the attendance capacity is limited!


Written by Otto Kekäläinen

March 27th, 2014 at 6:20 am

Installing Node.js on SUSE Linux Enterprise

without comments

The officially supported collection of software in SUSE Linux Enterprise 11 Service Pack 3 does not contain all conceivable Linux software, but the Open Build Service hosts tons of software that is built for SLES 11 SP3. Installing these software packages and repositories is of course at your own risk, as they are not part of the officially supported offering. In practice, however, it works quite well to have an officially supported SLES base and, on top of that, a handful of additions that you maintain yourself.

Node.js on SUSE

The server-side JavaScript engine Node.js is an example of a program that you might need to complete your mission, but which isn't in SUSE by default. You could of course download and compile the sources yourself, but that isn't a very good option in terms of maintenance, security and automatic updates. A better way is to browse the public SUSE instance of the Open Build Service at software.opensuse.org. By entering the search term "nodejs", clicking "Show other versions" in the results, and opening the sections "SUSE SLE-11 SP 3" and "Show unstable versions", you can see all the repositories that contain SLES 11 SP3 compatible packages named "nodejs".

The first colon-separated part of a repository name indicates the type of the repository. Repos whose names start with "home:" belong to individual users (similar to a PPA repository on Launchpad, for those familiar with Ubuntu). Other names are project names, which are more likely to have a group of maintainers and are thus preferred over individual users' repositories. In this case the best repository is likely the official devel tools subproject for Node.js, "devel:languages:nodejs".

Importing the openSUSE repository public key

Once the correct line is identified, simply click on the "1 Click Install" link and a .ymp file will be downloaded and opened with the SUSE package manager. The .ymp file contains both the package name and the repository information. If the installation is executed, the repository will be permanently added to the system and the package in question installed, and in the future also automatically updated. Just like PPAs in Ubuntu, this repository is single-purpose and only contains Node.js packages, so no other package on the system will be affected or overridden by updates from this repository, which makes it fairly safe to use. One Click Install also has a command line tool, so alternatively you could run:
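The exact command depends on the .ymp link shown for the repository you picked; a sketch, with an assumed URL following the usual software.opensuse.org link pattern:

```shell
# One Click Install from the command line. The .ymp URL is an assumed example;
# copy the real link from the search results on software.opensuse.org.
sudo /sbin/OneClickInstallCLI http://software.opensuse.org/ymp/devel:languages:nodejs/SLE_11_SP3/nodejs.ymp

# Roughly equivalent manual steps with zypper (the repository alias is our own choice):
sudo zypper addrepo http://download.opensuse.org/repositories/devel:/languages:/nodejs/SLE_11_SP3/ nodejs-repo
sudo zypper refresh
sudo zypper install nodejs
```

The zypper lines show roughly what One Click Install does behind the scenes: add the repository, refresh its metadata and install the package.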


Git and SUSE

When deploying your Node.js apps you will most likely also need the Git version control software. Following the same principles as above, you can install it simply by running:
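Again a sketch with an assumed .ymp URL ("devel:tools:scm" is the usual Open Build Service project for Git, but verify the actual link in the search results first):

```shell
# Install Git via One Click Install; copy the real .ymp link from
# the "git" search results for SLE-11 SP3 on software.opensuse.org.
sudo /sbin/OneClickInstallCLI http://software.opensuse.org/ymp/devel:tools:scm/SLE_11_SP3/git.ymp
```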


Using these same principles you can install any software from the openSUSE instance of the Open Build Service. Just browse to software.opensuse.org (often abbreviated s.o.o) and start searching!

Written by Otto Kekäläinen

March 3rd, 2014 at 5:13 am

The Jolla phone – first impressions

without comments

The first device running SailfishOS, the successor of Meego, has finally been released. It's elegant and beautiful both on the outside and the inside, and it has multiple unique features that make it unlike any mobile device we've seen so far.

We have been waiting for Jolla to release their phone for more than a year, and finally it has happened. It is certainly not an easy task to make the world's greatest mobile device and fulfill all the expectations people have for Jolla, but they have indeed succeeded in doing something amazing. The last Nokia Meego device, the N9, was very good and praised for its gesture-based interface, and now, two years later, we find that the newest Google apps, Windows Metro and the Ubuntu phone, among others, are built around swiping. In Jolla's SailfishOS the gesture-based interface is refined and feels almost magical to use.

The hardware

The device rocks a clean and elegant Nordic design. It is simply beautiful. There are no front-facing buttons, but on the side there are a power key and volume buttons; the volume buttons double as camera buttons when the camera is open. The back-facing camera has an LED flash, auto-focus and an 8 megapixel sensor, which is more than enough. In fact, the default wide-screen camera mode takes 6.1 megapixel pictures, which many agree is the optimal file size-to-quality ratio. There is also a smaller front-facing camera for video calls. The internal storage is 16 GB, and there is a slot for an external microSD card where users can attach e.g. a 64 GB card. The screen resolution is 960×540 pixels, which looks very sharp on the beautiful 4.5 inch screen; no pixels can be distinguished with the bare eye. All the usual sensors are included (compass, gyroscope, acceleration, ambient light). The GPS chip also supports GLONASS, so it can figure out the location quickly, even inside vehicles or indoors. The battery is user-replaceable and has a capacity of 2100 mAh.

Jolla and Other Halves

All of the above can be found in other top-of-the-line smartphones as well. However, there is one hardware feature that is unique to Jolla: The Other Half, a user-changeable smart back cover. The basic covers included in the package feature an embedded NFC chip that makes the Sailfish UI change colour and theme (called Ambience). The Other Half can, however, also connect to the main device using an I2C connector. I2C is a bus standard common in all sorts of electronic devices, and it can transmit both power and data. Using this bus, anybody could make all kinds of imaginative Other Halves. Hopefully a keyboard will be one of the first Other Halves to come to the market. The battery connectors also face the back cover, so it should be easy for any manufacturer to make an Other Half filled with a giant 10,000 mAh battery. And of course an Other Half could also use Bluetooth or other generic means to communicate with the main device. The official specs will soon be released, so even home 3D-printing enthusiasts may produce their own Other Halves. It will be very exciting to see what kinds of Other Halves start to appear.

The software: SailfishOS

The software is indeed something altogether unique. Most readers of this blog are familiar with the story of Maemo-Meego-Eflop. Now the big question is: does SailfishOS offer something its competition does not? Yes! The swipe-based UI feels a bit weird for the first 5 minutes, but once your muscle memory catches up, you'll notice your fingers swiping every device you touch and your brain wondering why those other devices require unnecessary amounts of thought to use. Android is easy to use, but even after using SailfishOS for just one day, going back to the Android world of multiple desktops, widgets, app menus and such starts to look rather complex. SailfishOS is just so natural you need to experience it yourself.

Visualise this: you take the device out of your pocket and double tap the screen with your thumb to activate it. Without moving your thumb, you swipe down a little and feel the device vibrate three times as the selection passes over the pull-down menu options; even without looking at the device you know when the third option (camera) is selected. You simply lift your thumb and the camera opens. You point at something, move your thumb slightly down and touch to take the picture. Want to look at the picture you just took? Just swipe left and it is shown in full screen; pinch to zoom. When you are done and want to check e.g. the clock, swipe left starting from the edge of the screen and you get the main view with the time, battery status, an overview of open apps and so on. Maybe in the middle of that swipe you decide you still have time to capture some more photos: instead of swiping all the way, you reverse direction mid-gesture and return to the camera. All this with very smooth animations and a responsive feel.

On a day-to-day basis a very important part of the user experience is the on-screen keyboard. SailfishOS uses the same Maliit keyboard familiar to many from the N9, and it is excellent. It is strange that this open source keyboard component hasn't been picked up by other mobile Linux "distributions".

Jolla themselves talk a lot about the UI theming system called Ambience. Basically you pick a background picture, and Ambience automatically applies the picture and matching colours all over the user interface. Most of the time this is stylish, but sometimes the system decides bright red is a good colour for UI elements and you wish you could set the colours manually; but you can't.

Multitasking is a feature that sounds technical, but when you have the app overview open and see the real-time miniature versions of the apps (think Gnome Shell), the feature and its advantages are easy to understand. It was great to have Osmand download a 150 MB offline map file uninterrupted while browsing photos at the same time, or to load up a music video on YouTube and listen to it with the screen turned off entirely, or while reading an e-mail in another app.

The settings center is fantastic. In fact, app settings, contacts, accounts and everything in general seem to be integrated very well; but then again, that was already true of the Nokia N900. This is an area where Maemo-Meego was a pioneer and the competition hasn't caught up yet. Linux geeks will love that the settings include an option to enable developer mode. With developer mode enabled, the Terminal app becomes visible. This app is in fact the famous FingerTerm app, with a special four-row keyboard holding all the special keys needed in terminal use. The keyboard sits translucent on top of the content, so the maximum screen area is available for terminal output, and the app works well in both landscape and portrait mode. Linux geeks and developers will also respect the fact that SailfishOS is a true GNU/Linux system running Linux kernel 3.4 with a fully functional shell, that software is managed as RPM packages with Zypper, and that the SailfishOS project is very open to new contributors. If you want to read about how this software is shipped, check out the SailfishOS site and the upstream projects Mer and Nemo.
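For a taste of what package management looks like in FingerTerm, here is a sketch, assuming zypper is on the device as described above (the package name is just an illustrative example, and devel-su asks for the password defined in the developer mode settings):

```shell
devel-su               # become root; the password is set under developer mode
zypper refresh         # refresh the RPM package repositories
zypper search terminal # search the repositories for a package
zypper install nano    # install a package, e.g. the nano editor
```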

The SailfishOS label says 'beta', but the system itself seems mature and stable, at least in our use so far. All the functionality that belongs in a modern mobile OS is there, but when it comes to apps, much is still missing.

The software: HTML5, Jolla and Android apps

Sailfish architecture

Jolla has its own app store where you can browse and install native Qt/QML-based apps. At the moment there are very few of them, but then again all the basic apps you need, such as an alarm clock, e-mail, maps and a calendar, are included.

At Seravo we believe that in the long term, browser-based HTML5 apps will be more important than native apps. To us, therefore, the mobile browser matters more than all the native apps combined. It should be easy to search the web, enter URLs, open new tabs, save bookmarks and so on, and all of this can be done with the current browser in SailfishOS. The browser works fast and flawlessly. We heard at the launch event that the rendering engine is Gecko (the same as in Firefox), so it is likely to have good support for HTML5 features. In terms of usability, however, Chrome for Android is still the best mobile browser we've used so far; in particular, the SailfishOS browser does not seem to support landscape mode. Hopefully, as SailfishOS matures, the browser will grow more polished as well.

Android apps can be run using Alien Dalvik (probably some sort of virtual machine layer). You can get both free and paid apps from the bundled Yandex store, or you can get individual .apk files from other sources and install them manually. If you prefer open source only, one option is to use F-Droid, which can be installed by simply opening f-droid.org in the browser, downloading the .apk and activating it.

Interesting times ahead

The SailfishOS website states: "We believe this will act as a refreshing sea-wind that will help push the industry forward." Indeed it is refreshing, and it leads the industry forward by leaps and bounds. At the same time there are interesting developments going on with FirefoxOS, Ubuntu and Tizen, yet whenever these are compared, Jolla seems to get the best reviews. However, SailfishOS, and particularly the ecosystem around it, has only just started to grow, so nothing is certain yet. The only thing we can be sure of is that we live in very interesting times. In the coming years billions of people are going to buy new mobile phones, and for many of them the mobile phone will be their primary device for getting online and taking part in the information society. If you wish to become part of this, you can get involved in the SailfishOS community. SailfishOS can, in fact, be installed on devices other than the Jolla phone, but we nevertheless recommend, in particular to Linux fans, visiting the Jolla website and signing up for availability notifications so you can eventually get a Jolla phone for yourself.




Written by Otto Kekäläinen

November 28th, 2013 at 7:38 am