LinuxPlanet Blogs

By Linux Geeks, For Linux Geeks.

Archive for the ‘technology’ Category

Continuous integration testing for WordPress plugins on Github using Travis CI

without comments



We have open sourced and published some of our plugins. We do the development in Github and only publish the finished releases. Our goal is to keep the plugins simple but effective. Quite a few people use them actively, and some have contributed back by creating additional features or fixing bugs and docs. It's super nice to have contributions from someone else, but it's hard to see whether those changes break your existing features. We all make mistakes from time to time, and it's easier to recover if you have good test coverage. Automated integration tests can help you out in these situations.

Choosing Travis CI

As we use Github for hosting our code, we wanted a tool which integrates really well with it. Travis works seamlessly with Github and is free to use for open source projects. Travis gives you the ability to run your tests in coordinated environments which you can modify to your preferences.


You need to have a Github account in order to set up Travis for your projects.

How to use

1. Sign up for a free Travis account

Just click the link on the page and enter your Github credentials

2. Activate testing in Travis. Go to your accounts page from the top right corner.


Then go to your Organisation page (or choose a project of your own) and activate the projects you want to be tested in Travis.


3. Add a .travis.yml file to the root of your project repository. You can use the samples from the next section.


After you have pushed to Github, just wait a couple of seconds and your tests should start automatically.

Configuring your tests

I think the hardest part of Travis testing is just getting started. That's why I created a testing template for WordPress projects; you can find it in our Github repository. Next I'm going to show a few different ways to use Travis. We are going to split the tests into unit tests with PHPUnit and integration tests with RSpec, Poltergeist and PhantomJS.

#1 Example .travis.yml: use RSpec integration tests to make sure your plugin won't break anything else

This is the easiest way to use Travis with your WordPress plugin. It installs the latest WP and activates your plugin, then checks that your front page works and that you can log into the admin panel. Just drop this .travis.yml into your project and start testing :)!

sudo: false
language: php

notifications:
  email:
    on_success: never
    on_failure: change

php:
  - nightly # PHP 7.0
  - 5.6
  - 5.5
  - 5.4

env:
  - WP_PROJECT_TYPE=plugin WP_VERSION=latest WP_MULTISITE=0 WP_TEST_URL=http://localhost:12000 WP_TEST_USER=test WP_TEST_USER_PASS=test

matrix:
  allow_failures:
    - php: nightly

before_script:
  - git clone wp-tests
  - bash wp-tests/bin/ test root '' localhost $WP_VERSION

script:
  - cd wp-tests/spec && bundle exec rspec test.rb

#2 Example .travis.yml, which uses PHPUnit and RSpec integration tests

  1. Copy phpunit.xml and the tests folder from: into your project

  2. Edit the tests/bootstrap.php line containing PLUGIN_NAME according to your plugin

  3. Add the .travis.yml file

sudo: false
language: php

notifications:
  email:
    on_success: never
    on_failure: change

php:
  - nightly # PHP 7.0
  - 5.6
  - 5.5
  - 5.4

env:
  - WP_PROJECT_TYPE=plugin WP_VERSION=latest WP_MULTISITE=0 WP_TEST_URL=http://localhost:12000 WP_TEST_USER=test WP_TEST_USER_PASS=test

matrix:
  allow_failures:
    - php: nightly

before_script:
  # Install composer packages before trying to activate themes or plugins
  # - composer install
  - git clone wp-tests
  - bash wp-tests/bin/ test root '' localhost $WP_VERSION

script:
  - phpunit
  - cd wp-tests/spec && bundle exec rspec test.rb

For this to be useful you need to write the tests to match your plugin.

To get started, see how I did it for our plugin WP-Dashboard-Log-Monitor.

A few useful links:

If you want to contribute to better WordPress testing, open an issue or a pull request in our WordPress testing template.

Seravo can help you use PHPUnit, RSpec and Travis in your projects.
Please feel free to ask us about our WordPress testing via email or in the comment section below.


Written by onni

July 2nd, 2015 at 6:04 am

Posted in technology

Reviewing the Ubuntu Phone: is it just for geeks?

without comments



A few weeks ago I found a pretty black box waiting on my desk at the office. There it was: the BQ Aquaris E4.5, Ubuntu edition. Now available for sale all over Europe, the world's first Ubuntu phone had arrived into the eager hands of Seravo. (Working in an open office with a bunch of other companies dealing more or less with IT, one can now easily get attention not just by talking on the phone but about it, too.)

The Ubuntu phone has been in development for a while, and now it has found its first users and can really be reviewed in practice. Can Ubuntu handle the expectations and demands of a modern mobile user? My personal answer, after getting to know my phone and seeing what it can and cannot do, is yes – but not yet.

But let’s get back to the pretty black box. For a visual (and not-that-technical) person such as myself, the amount of thought put into the design of the Ubuntu phone is very pleasing. The mere layout of the packaging of the box is very nice, both to the eye and from the point of view of usability. The same goes (at least partly) for the phone and its operating system itself: the developers themselves claim that “Ubuntu Phone has been designed with obsessive attention to detail” and that “form follows function throughout”. So it is not only the box that is pretty.



Swiping through the scopes

When getting familiar with the Ubuntu phone, one can simply follow clear instructions to get the most relevant settings in place. A nice surprise was that the system has been translated into Finnish – and into a whole bunch of other languages ranging from Catalan to Uyghur.

The Ubuntu phone tries to minimise the effort of browsing through several apps by introducing scopes. "Ubuntu's scopes are like individual home screens for different kinds of content, giving you access to everything from movies and music to local services and social media, without having to go through individual apps." This is a fine idea, and it works to a certain point. I would, however, have appreciated an easier way to adjust and modify my scopes so that they would indeed serve my everyday needs. It is, for instance, not possible to change the location for the Today section, so my phone still thinks that I'm interested in the weather and events near Helsinki (which is not the case, as my hometown Tampere is light years – or at least 160 kilometers – away from the capital).

Overall, swiping is the thing with the Ubuntu phone. One can swipe from the left, swipe from the right, swipe from the top and the bottom and all through the night, never finding a button to push in order to get to the home screen. There are no such things as home buttons or home screens. This requires practice until one gets familiar with it.



Designed for the enthusiasts

A friend of mine once said that in order to really succeed, ecological products must be able to compete with non-ecological ones in usability – or at times even beat them in that area. A green product that does not work can never achieve popularity. The same thought can be applied to open source products as well: as the standard is high, the philosophy itself is not enough if the end product fails to do what it should.

With this thought in mind, I was happy to notice that the Ubuntu phone is not only new and exciting, but also pretty usable in everyday work. There are, though, bugs, missing features and some pretty relevant apps absent from the selection. For services like Facebook, Twitter, Google+ or Google Maps, the Ubuntu phone uses web apps. If one is addicted to Instagram or WhatsApp, one should still wait before purchasing an Ubuntu phone. Telegram, a nice alternative for instant messaging, is available though, and so is the possibility to view one's Instagram feed. It also remains a mystery to me what benefits signing in to Ubuntu One can bring to the user – except for updates, which are indeed longed for.

To conclude, I would say that at this point the Ubuntu phone is designed for enthusiasts and developers, and should keep on evolving to become popular with the masses. The underlying idea of open source should of course be supported, and it will be interesting to see the Ubuntu phone develop in the near future. Hopefully the upcoming updates will fix the most relevant bugs, and the app selection will grow to fill the needs of an average mobile phone user.


Read more about the Ubuntu Phone.

Written by Sanna Saarikangas

June 4th, 2015 at 12:05 am

Posted in phone,technology,Ubuntu

Why and how to publish a plugin at

without comments

The first ever WordCamp in Finland was held on May 8th and 9th in Tampere. Many of our staff participated in the event, and Seravo was also one of the sponsors.

On Friday Otto Kekäläinen gave a talk titled "Contributing to – Why you (and your company) should publish plugins at". On Saturday he held workshops titled "How to publish a plugin at", and Onni Hakala held a workshop on how to develop with WordPress using Git, Composer, Vagrant and other great tools.


Below are the slides from these presentations and workshops:


WordCamp Workshop on modern dev tools by Onni Hakala (in Finnish)


See also our recap on WordCamp Finland 2015 in Finnish:




(Photos by Jaana Björklund)


Written by Otto Kekäläinen

May 13th, 2015 at 6:27 am

OpenFOAM – Open Computational Fluid Dynamics

without comments

OpenFOAM (Open source Field Operation And Manipulation) is a numerical CFD (Computational Fluid Dynamics) solver and a pre/postprocessing software suite.

Special care has been taken to enable automatic parallelization of applications written using OpenFOAM high-level syntax. Parallelization can be further extended by using a clustering software such as OpenMPI that distributes simulation workload to multiple worker nodes.

Pre/post-processing tools like ParaView enable graphical examination of the simulation set-up and results.

The project code is free software, licensed under the GNU General Public License and maintained by the OpenFOAM Foundation.

A parallel version called OpenFOAM-extend is a fork maintained by Wikki Ltd that provides a large collection of community-generated code contributions that can be used with the official OpenFOAM version.

What does it actually do?

OpenFOAM is aimed at solving continuum mechanical problems. Continuum mechanics deals with the analysis of kinematics and the mechanical behavior of materials modeled as a continuous mass rather than as discrete particles.

OpenFOAM has an extensive range of features to solve complex gas/fluid flows involving chemical reactions, turbulence, heat transfer, solid dynamics, electromagnetics and much more!

The software suite is used widely in the engineering and scientific fields concerning simulations of fluid flows in pipes, engines, combustion chambers, pumps and other diverse use cases.


How is it used?

In general, the workflow adheres to the following steps:

  • pre-process
    • physical modeling
    • input mesh generation
    • visualizing the input geometry
    • setting simulation parameters
  • solving
    • running the simulation
  •  post-process
    • examining output data
    • visualizing the output data
    • refining the simulation parameters
    • rerunning the simulation to achieve desired results

Later we will see an example of a 2D water flow simulation following these steps.


What can Seravo do to help a customer running OpenFOAM?

Seravo can help your organization by building and maintaining a platform for running OpenFOAM and related software.

Our services include:

  • installing the host platform OS
  • host platform security updates and maintenance
  • compiling, installing and updating the OpenFOAM and OpenFOAM-extend suites
  • cluster set-up and maintenance
  • remote use of visualization software

Seravo has provided the above-mentioned services to its customers, for example by building a multinode OpenFOAM cluster.


OpenFOAM example: a simplified laminar flow 2D simulation of a breaking water dam hitting an obstacle in an open container

N.B. Some steps are omitted for brevity!

The input files for a simulation are ASCII text files in a defined open format.

Inside the working directory of a simulation case, we have many files defining the simulation environment and parameters, for example (click filename for sample view):

  • constant/polyMesh/blockMeshDict
    • defines the physical geometries; walls, water, air
  • system/controlDict
    • simulation parameters that define the time range and granularity of the run
  • constant/transportProperties
    • defines material properties of air and water used in simulation
  • numerous other control files define properties such as gravitational acceleration, physical properties of the container materials and so on

In this example, the simulated timeframe will be one second, with an output snapshot every 0.01 seconds.

OpenFOAM simulation input geometry


After input files have been massaged to desired consistency, commands are executed to check and process the input files for actual simulation run:

  1. process input mesh (blockMesh)
  2. initialize input conditions (setFields)
  3. optional: visually inspect start conditions (paraFoam/paraview)

The solver application in this case will be the OpenFOAM-provided interFoam, a solver for two incompressible fluids that tracks the material interfaces and mesh motion.

After setup, the simulation is executed by running the interFoam command (sample output).
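In terms of commands, the pre-processing and solving steps above boil down to something like this sketch (run inside the case directory; the exact tools and dictionaries depend on your OpenFOAM installation and version):

```shell
blockMesh   # 1. generate the mesh from constant/polyMesh/blockMeshDict
setFields   # 2. initialise the fields, e.g. the water column
paraFoam    # 3. optionally inspect the start conditions in ParaView
interFoam   # run the two-phase solver; writes one time directory per snapshot
```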

OpenFOAM cluster running the simulation at full steam on 40 CPU cores.

After about 40 seconds, the simulation is complete and results can be visualized and inspected with ParaView:

Simulation output at 0 seconds.

Simulation output at 0.2 seconds.


And here is a fancy gif animation of the whole simulation output covering one second of time:



Written by Tero Auvinen

April 10th, 2015 at 4:27 am

How to create good OpenPGP keys

without comments

The OpenPGP standard and the most popular open source program that implements it, GnuPG, have been well tested and widely deployed over the last decades. At least for the time being they are considered to be cryptographically unbroken tools for encrypting and verifying messages and other data.

photo: keys

Due to the lack of easy-to-use tools and integrated user interfaces, large scale use of OpenPGP, for example in encrypting emails, hasn't happened. There are however some interesting new efforts like Enigmail, Mailpile, Mailvelope and End-to-end that might change the game. There are also new promising tools in the area of key management (establishing trust between parties) like Gnome Keysign and

Despite PGP's failure to solve email encryption globally, OpenPGP has been very successful in other areas. For example, it is the de-facto tool for signing digital data. If you download a software package online and want to verify that the package on your computer is actually the same package as released by the original author (and not a tampered one), you can use the author's OpenPGP signature to verify its authenticity. Also, even though it is not easy enough for day-to-day usage, if a person wants to send another person an encrypted message, OpenPGP is still the only solution for doing it. Alternative messaging channels like Hangouts or Telegram are just not widely used enough, so email prevails – and for email, OpenPGP is the best encryption tool.

How to install GnuPG?

Installing GnuPG is easy. Just use the software manager of your Linux distro to install it, or download the installation package for Mac OS X via

There are two generations of GnuPG, the 2.x series and the 1.4.x series. For compatibility reasons it is still advisable to use the 1.4.x versions.

How to create keys?

Without your own key you can only send encrypted data or verify the signatures of other users. In order to be able to receive encrypted data or to sign data yourself, you need to create a key pair. The key pair consists of two keys:

  • a secret key you shall protect and which is the only key that can be used to decrypt data sent to you or to make signatures
  • a public key which you publish and which others use to encrypt data for you or use to verify your signatures

Before you generate your keys, you need to edit your gpg configuration file to make sure the strongest algorithms are used instead of the default options in GnuPG. If you are using a very recent version of GnuPG it might already have better defaults.

For brevity, we only provide the command line instructions here. Edit the config file by running for example nano ~/.gnupg/gpg.conf and adding the algorithm settings:

personal-digest-preferences SHA512
cert-digest-algo SHA512
default-preference-list SHA512 SHA384 SHA256 SHA224 AES256 AES192 AES CAST5 ZLIB BZIP2 ZIP Uncompressed

If the file does not exist, just run gpg and press Ctrl-C to cancel. This will create the configuration directory and file automatically.

Once done with that preparation, actually generate the key by running gpg --gen-key

For key type select “(1) RSA and RSA (default)“. RSA is the preferred algorithm nowadays and this option also automatically creates a subkey for encryption, something that might be useful later but which you don’t immediately need to learn about.

As the key size, enter "4096", as 2048-bit keys are not considered strong enough anymore.

A good value for expiration is 3 years, so enter "3y" when asked how long the key should be valid. Don't worry – you won't have to create a whole new key later: you can update your key's expiry date some day, even after it has expired. Having keys that never expire is bad practice, and old never-expiring keys might come back to haunt you some day.

For the name and email, use your real name and real email address. OpenPGP is not an anonymity tool, but a tool to encrypt data to other users and to verify their signatures. Other people will be evaluating whether a key is really yours, so a false name would be confusing.

When GnuPG asks for a comment, don’t enter anything. Comments are unnecessary and sometimes simply confusing, so avoid making one.

The last step is to define a passphrase. Follow the guidelines of our password best practices article and choose a complex yet easy to remember password, and make sure you never forget it.

$ gpg --gen-key 
gpg (GnuPG) 1.4.10; Copyright (C) 2008 Free Software Foundation, Inc.

Please select what kind of key you want:
   (1) RSA and RSA (default)
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
Your selection? 1
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 4096
Requested keysize is 4096 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 3y
Key expires at Mon 05 Mar 2018 02:39:23 PM EET
Is this correct? (y/N) y

You need a user ID to identify your key; the software constructs the user ID
from the Real Name, Comment and Email Address in this form:
    "Heinrich Heine (Der Dichter) <>"

Real name: Lisa Simpson
Email address:
You selected this USER-ID:
    "Lisa Simpson <>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
You need a Passphrase to protect your secret key.

We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.

Not enough random bytes available.  Please do some other work to give
the OS a chance to collect more entropy! (Need 284 more bytes)

gpg: key 3E44A531 marked as ultimately trusted
public and secret key created and signed.

gpg: checking the trustdb
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: next trustdb check due at 2018-03-05
pub   4096R/3E44A531 2015-03-06 [expires: 2018-03-05]
      Key fingerprint = 4C63 2BAB 4562 5E09 392F  DAA4 C6E4 158A 3E44 A531
uid                  Lisa Simpson <>
sub   4096R/75BB2DC6 2015-03-06 [expires: 2018-03-05]


At this stage you are done and can start using your new key. For different usages of OpenPGP you need to consult other documentation or install software that makes them easy. All software that uses OpenPGP will automatically detect the ~/.gnupg directory in your home folder and use the keys from there.

Store securely

Make sure your home directory is encrypted, or maybe even your whole hard drive. On Linux this is easy with eCryptfs or LUKS/dm-crypt. If your hard drive is stolen or your keys leak in some other way, the thief can decrypt all your data and impersonate you by signing things digitally with your key.

Also if you don’t make regular backups of your home directory, you really should start doing it now so that you don’t lose your key or any other data either.

Additional identities (emails)

If you want to add more email addresses to the key, run gpg --edit-key 12345678 and at the prompt enter the command adduid, which starts the dialog for adding another name and email to your key.
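An edit-key session for this looks roughly like the following sketch (prompts vary a little between GnuPG versions; 12345678 stands for your own key ID):

```shell
gpg --edit-key 12345678   # open the key in the interactive editor
gpg> adduid               # prompts for the new name and email address
gpg> save                 # sign the new user ID, write changes and exit
```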

More guides

Encryption, and in particular secure, unbreakable encryption, is really hard. Good tools can hide away the complexity, but unfortunately modern tools and operating systems don't have these features fully integrated yet. Users need to learn some of the technical details to be able to use the different tools themselves.

Because OpenPGP is difficult to use, the net is full of different guides. Unfortunately most of them are outdated or contain errors. Here are a few guides we can recommend for further reading:

Written by Otto Kekäläinen

March 6th, 2015 at 8:38 am

A guide to modern WordPress deployment (part 2)

without comments


Recently we published part one in this series on our brand new WordPress deployment platform in which we covered some of the server side technologies that constitute our next-gen WordPress platform.

In part 2 we’ll be briefly covering the toolkit we put together to easily manage the Linux containers that hold individual installations of WordPress.

4. WP-CLI, WordPress on the Command Line

We use the WordPress command line interface to automate everything you would usually have to do in the wp-admin interface. Using WP-CLI removes the inconvenience of logging into a client’s site and clicking around in the WP-admin to perform basic actions like changing option values or adding users.

We’ve been using WP-CLI as part of our install-, backup- and update processes for quite some time now. Quick, simple administration actions, especially when done in bulk is where the command line interface for WordPress really reveals its powers.

Check out the famous 5-minute install compressed into 3 easy lines with the WP-CLI:

wp core download
wp core config --dbname=wordpress --dbuser=dbuser --dbpass=dbpasswd
wp core install --url= --title="An Orange Website" --admin_user=anttiviljami --admin_password=supersecret --admin_email=admin@example.com

5. Git, Modern Version Control for Everything

We love Git and use it for pretty much everything we do! For WordPress, we rely on Git for deployment and development in virtually all our own projects (including this one!).

Our system is built for developers who use Git for deployment. We provide a Bedrock-like environment for an easy WordPress deployment experience and even offer the ability to easily set up identical environments for development and staging.

The main difference between Bedrock and our layout is the naming scheme. We found that it's better to provide a familiar directory structure for the majority of our clients, who may not be familiar with Bedrock, so we didn't go with the /app and /wp directory naming scheme and instead went with /wp-content and /wordpress to provide a clear, non-confusing separation between the WP core and the application.

Bedrock directory structure:

└── web
    ├── app
    │   ├── mu-plugins
    │   ├── plugins
    │   └── themes
    ├── wp-config.php
    ├── index.php
    └── wp

Seravo WordPress layout:

└── htdocs
    ├── wp-content
    │   ├── mu-plugins
    │   ├── plugins
    │   └── themes
    ├── wp-config.php
    ├── index.php
    └── wordpress

Our users can easily jump straight into development regardless of whether they want to use modern deployment techniques with dependency management and Git version control, or the straight up old-fashioned way of copying and editing files (which still seems to be the predominant way to do things with WordPress).

6. Composer, Easy Package Management for PHP

As mentioned earlier, our platform is built for Git and the modern WordPress development stack. This includes the use of dependency management with Composer – the package manager for PHP applications.

We treat the WordPress core, language packs, plugins, themes and their dependencies just like any other component in a modern web application. By utilising Composer as the package manager for WordPress, keeping your dependencies up to date and installed becomes just a matter of having the composer.json file included in your repositories. This way you don’t have to include any code from third party plugins or themes in your own repositories.

With Composer, you also have the ability to choose whether to always use the most recent version of a given plugin or a theme, or stay with a version that’s known to work with your site. This can be extremely useful with large WordPress installations that depend on lots of different plugins and dependencies that may sometimes have compatibility issues between versions.
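As an illustration, a minimal composer.json for a Composer-managed WordPress site might look like this. The package names below (the johnpbloch/wordpress core package and the wpackagist.org plugin mirror) are common community conventions, not necessarily the exact ones our platform uses:

```json
{
  "repositories": [
    { "type": "composer", "url": "https://wpackagist.org" }
  ],
  "require": {
    "johnpbloch/wordpress": "^4.0",
    "wpackagist-plugin/akismet": "^3.0"
  }
}
```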

7. Extra: PageSpeed for Nginx

Now, Pagespeed really doesn’t have much to do with managing WordPress or Linux containers. Rather it’s a cutting edge post-processor and cache developed and used by Google that’s free and open source! Since we hadn’t yet officially deployed it on our platform when we published our last article, we’re going to include it here as an extra.

The PageSpeed module for Nginx takes care of a large set of essential website optimisations automat(g)ically. It implements optimisations to entire webpages according to best practices by looking at your application’s output and analysing it. Really useful things like asset minification, concatenation and optimisation are handled by the PageSpeed module, so our users get the best possible experience using our websites.

Here are just some of the things PageSpeed will automatically handle for you:

  • Javascript and CSS minification
  • Image optimisation
  • Combining Javascript and CSS
  • Inlining small CSS
  • Lazy loading images
  • Flattening CSS @imports
  • Deferring Javascript
  • Moving stylesheets to the head
  • Trimming URLs
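For the curious, enabling the module in an Nginx configuration looks roughly like this (a sketch, assuming Nginx has been built with the ngx_pagespeed module; the cache path and filter choices are illustrative):

```nginx
pagespeed on;
pagespeed FileCachePath /var/ngx_pagespeed_cache;   # must be writable by nginx

# enable selected filters on top of the core set
pagespeed EnableFilters combine_css,combine_javascript;
pagespeed EnableFilters lazyload_images,inline_css;
```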

We’re really excited about introducing the power of PageSpeed to our client sites and will be posting more about the benefits of using the Nginx PageSpeed module with WordPress in the near future. The results so far have been simply amazing.

More information

More information for Finnish-speaking readers available at

Please feel free to ask us about our WordPress platform via email or in the comment section below.

Here’s how to patch Ubuntu 8.04 or anything where you have to build bash from source

without comments

UPDATED: I have updated the post to include the post from gb3 as well as additional patches and some tests

Just a quick post to help those who might be running older/unsupported Linux distributions, mainly Ubuntu 8.04, who need to patch their version of bash due to the recent exploit here:

I found this post and can confirm it works:

Here are the steps (make a backup of /bin/bash just in case):

#assume that your sources are in /src
cd /src
#download all patches
for i in $(seq -f "%03g" 1 28); do wget$i; done
tar zxvf bash-4.3.tar.gz
cd bash-4.3
#apply all patches
for i in $(seq -f "%03g" 1 28);do patch -p0 < ../bash43-$i; done
#build and install
./configure --prefix=/ && make && make install
cd ../../
rm -r src

To test for exploits CVE-2014-6271, CVE-2014-7169, CVE-2014-7186, and CVE-2014-7187 I have found the following information at this link

To check for the CVE-2014-6271 vulnerability

env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

it should NOT echo back the word vulnerable.

To check for the CVE-2014-7169 vulnerability
(warning: if yours fails, it will create or overwrite a file called /tmp/echo that you can delete afterwards, and that you need to delete before testing again)

cd /tmp; env X='() { (a)=>\' bash -c "echo date"; cat echo

it should say the word date and then complain with a message like cat: echo: No such file or directory. If instead it tells you what the current datetime is, then your system is vulnerable.

To check for CVE-2014-7186

bash -c 'true <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF' || echo "CVE-2014-7186 vulnerable, redir_stack"

it should NOT echo back the text CVE-2014-7186 vulnerable, redir_stack.

To check for CVE-2014-7187

(for x in {1..200} ; do echo "for x$x in ; do :"; done; for x in {1..200} ; do echo done ; done) | bash || echo "CVE-2014-7187 vulnerable, word_lineno"

it should NOT echo back the text CVE-2014-7187 vulnerable, word_lineno.

Written by leftyfb

September 25th, 2014 at 11:03 am

Posted in Linux,technology,Ubuntu

A guide to modern WordPress deployment (part 1)

without comments


Seravo & WordPress

As a Linux and open source specialist company, Seravo provides services to many companies that run Linux on their web servers. Not surprisingly, in many of these cases the top-level software running on the server is, of course, the world's most popular CMS: WordPress. We love it!

In the process of administering and developing a number of WordPress sites for quite some time now, we’ve discovered an arsenal of useful ways to optimise and automate WordPress, some of which we’ve published right here on our blog:

Throughout 2014, we’ve expanded our WordPress expertise and in the process, combined our practices into a full WordPress deployment platform. We’re confident our solution is the next step forward from traditional WordPress hosting services.

In the spirit of openness in the WordPress community, we’re happy to present the details of our deployment platform and which technologies lie under it in this series of blog posts.

1. LXC – A full OS for every WordPress installation

As one of the starting points to our platform, we wanted every individual WordPress installation to have its own full Linux environment. Instead of going the traditional route to virtualisation with VMs seen in most generic hosting solutions, we chose a more recent technology called Linux containers or LXC for short.

Each WordPress instance resides within its own, robust Linux container which provides a lightweight, flexible way to sandbox applications. By using LXC as a means of virtualisation, we’ve greatly reduced the overhead required for hosting websites in a clustered environment, thus increasing overall server performance.

As each WordPress container is also a completely standalone system in itself, it has been extremely easy to clone and transfer instances between hosts and even other WordPress platforms.

2. Nginx, HHVM and MariaDB for amazing performance

Instead of a more traditional LAMP (Linux, Apache, MySQL and PHP) environment, we utilised the newest technologies for running WordPress:

  • Nginx, the fastest and most flexible HTTP server available
  • HHVM, a new and improved PHP engine developed and used by Facebook
  • MariaDB, a faster drop-in-replacement for MySQL server

The combination of these technologies enables us to offer WordPress performance unheard of in LAMP environments. Additionally, all of these components are extremely configurable; fine-tuning their performance could be a blog post all on its own.

3. Secure administration with TLS on SPDY/3.0

The drawbacks of building an HTTPS-secured WordPress site have always been the inconvenience of acquiring an SSL certificate for each domain used and the increased server load from the additional computation required by secure protocols.

We didn’t want our users to trade security for convenience, so we went in search of a solution.

First, we enabled the use of an open networking protocol called SPDY, which is the basis for the upcoming HTTP/2 protocol. SPDY/3 is already supported by all major browsers and offers a significant increase in server side performance in comparison to standard HTTPS. This allows us to effortlessly serve large amounts of secure HTTPS traffic with almost no performance penalty.
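On the nginx side, enabling SPDY is a small configuration change; a minimal sketch (the server name and certificate paths are placeholders, and the spdy parameter assumes an nginx build that includes the SPDY module):

```nginx
server {
    # serve TLS with SPDY/3 on the standard HTTPS port
    listen 443 ssl spdy;
    server_name example.com;

    # placeholder certificate paths
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;
}
```

Browsers that do not speak SPDY simply negotiate plain HTTPS on the same listener, so the change is backwards compatible.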

To avoid having to acquire separate SSL certificates for all our separate WordPress installations, we developed HTTPS Domain Alias – a WordPress plugin that allows the use of a separate domain name for wp-admin. All our clients now get their own subdomain for WordPress administration at *, which can be securely accessed over HTTPS for a secure WordPress admin panel.

Keep reading

Read part 2 of this series, in which we discuss the management aspects of multiple WordPress installations and useful tools for general WordPress development and security.

More information for Finnish-speaking readers is available at

Written by antti

September 22nd, 2014 at 6:00 am

Turn any computer into a wireless access point with Hostapd

without comments

Do you want to make a computer function as a WLAN base station, so that other computers can use it as their wifi access point? This can easily be done using the open source software Hostapd and compatible wifi hardware.

This is useful if the computer is acting as a firewall or as a server in the local network and you want to avoid adding new appliances that all require their own space and cables in your already crowded server closet. Hostapd gives you full control of your WLAN access point and also enhances security: the system is completely in your control, every line of code can be audited, the source of all software can be verified and all software can be updated easily. It is quite common that active network devices like wifi access points start out as fairly secure small appliances with Linux inside, but over time their vendors don’t provide timely security updates and local administrators don’t care to install them via some clumsy firmware upgrade mechanism. With a proper Linux server, admins can simply SSH in and run upgrades using the familiar and trusted upgrade channels that Linux server distributions provide.

The first step in creating a wireless base station with Hostapd is to make sure the WLAN hardware supports running in access point mode. Examples are listed in the hostapd documentation. A good place to shop for WLAN cards with excellent Linux drivers is , whose product descriptions nicely list each card’s supported operation modes.

The next step is to install Hostapd, software written by Jouni Malinen and others. It is very widely used and most likely available in your Linux distribution by default. Many of the WLAN router appliances on the market are actually small Linux computers running hostapd inside, so running hostapd on a proper Linux computer gives you at least all the features available in those wifi routers, including advanced authentication and logging.

Our example commands are for Ubuntu 14.04. You need to have access to install the packages hostapd and dnsmasq. Dnsmasq is a small DNS/DHCP server which we’ll use in this setup. To start, simply run:

sudo apt-get install hostapd dnsmasq

After that you need to create and edit the configuration file:

zcat /usr/share/doc/hostapd/examples/hostapd.conf.gz | sudo tee -a /etc/hostapd/hostapd.conf

The configuration file /etc/hostapd/hostapd.conf is filled with configuration examples and documentation in comments. The relevant parts for a simple WPA2-protected 802.11g network with the SSID ‘Example-WLAN‘ and password ‘PASS‘ are:
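A minimal sketch of such a configuration (the driver, channel and cipher choices are assumptions; adjust them to your hardware):

```
interface=wlan0
driver=nl80211
hw_mode=g
channel=6
ssid=Example-WLAN
auth_algs=1
wpa=2
wpa_passphrase=PASS
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
```

All other lines in the example file can stay commented out.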


Next you need to edit the network interfaces configuration to force the WLAN card to only run in the access point mode. Assuming that the access point network will use the address space 192.168.8.* the file /etc/network/interfaces should look something like this:

# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback

auto wlan0
iface wlan0 inet static
# assumed static address for the access point within 192.168.8.*
address 192.168.8.1
netmask 255.255.255.0
hostapd /etc/hostapd/hostapd.conf

Then we need to have a DNS relay and DHCP server on our wlan0 interface so the clients actually get a working Internet connection, and this can be accomplished by configuring dnsmasq. Like hostapd it also has a very verbose configuration file /etc/dnsmasq.conf, but the relevant parts look like this:
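A minimal sketch (the exact DHCP range inside 192.168.8.* is an assumption):

```
# hand out DHCP leases and relay DNS only on the access point interface
interface=wlan0
dhcp-range=192.168.8.10,192.168.8.100,12h
```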


Next we need to make sure that the Linux kernel forwards traffic from our wireless network onto other destination networks. For that you need to edit the file /etc/sysctl.conf and make sure it has lines like this:
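The key line enables IPv4 forwarding (uncomment it if it is already present):

```
net.ipv4.ip_forward=1
```

Run sudo sysctl -p to apply the change without rebooting.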


We need to activate NAT in the built-in firewall of Linux to make sure the traffic going out uses the external address as its source address and thus can be routed back. It can be done for example by appending the following line to the file /etc/rc.local:

iptables -t nat -A POSTROUTING -s 192.168.8.0/24 ! -d 192.168.8.0/24 -j MASQUERADE

Some WLAN card hardware might have a virtual on/off switch. If you have such hardware you might need to also run rfkill to enable the hardware using a command like rfkill unblock 0.

If the same computer also runs Network Manager (as for example Ubuntu does by default), you need to edit its settings so that it won’t interfere with the new wifi access point. Make sure the file /etc/NetworkManager/NetworkManager.conf looks like this:
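A sketch of a configuration that stops Network Manager from touching interfaces declared in /etc/network/interfaces (your plugins line may differ):

```
[main]
plugins=ifupdown,keyfile

[ifupdown]
managed=false
```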


Now all configuration should be done. To be sure all changes take effect, finish by rebooting the computer.

If everything is working, a new WLAN network should be detected by other devices.
On the WLAN-server you’ll see similar output from these commands:

$ iw wlan0 info
Interface wlan0
        ifindex 3
        type AP
        wiphy 0

$ iwconfig 
wlan0     IEEE 802.11bgn  Mode:Master  Tx-Power=20 dBm   
          Retry  long limit:7   RTS thr:off   Fragment thr:off
          Power Management:off

$ ifconfig
wlan0     Link encap:Ethernet  HWaddr f4:ec:38:de:c8:d2  
          inet addr:  Bcast:  Mask:
          inet6 addr: fe80::f6ec:38ff:fede:c8d2/64 Scope:Link
          RX packets:5463040 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8166528 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:861148382 (861.1 MB)  TX bytes:9489973056 (9.4 GB)

Written by Otto Kekäläinen

August 27th, 2014 at 9:25 am

Optimal Sailfish SDK workflow with QML auto-reloading

without comments

SailfishOS IDE open. Just press Ctrl+S to save and see app reloading!

Sailfish is the Linux-based operating system used in Jolla phones. Those who develop apps for Jolla use the Sailfish SDK (software development kit), which is basically a customized version of Qt Creator. Sailfish OS apps are written using the Qt libraries, typically in the C++ programming language. The user interfaces of Sailfish apps, however, are written in a declarative language called QML, a custom markup language that includes a subset of CSS and JavaScript to define styles and actions. QML files are not compiled but remain plain text files when distributed with the app binaries and are interpreted at run-time.

While SailfishOS IDE (Qt Creator) is probably pretty good for C++ programming with the Qt libraries, and the Sailfish flavour comes nicely bundled with complete Sailfish OS instances as virtual machines (one for building binaries and one for emulating running them), the overall workflow is not optimal from a QML development point of view. Each time a developer presses the Play button to launch their app, Qt Creator builds the app from scratch, packages it, deploys it on the emulator (or a real device if set up to do so) and only then actually runs the app. After making changes to the source code, the developer needs to remember to press Stop and then Play to build, deploy and start the app again. Even on a super fast machine this cycle takes at least 10 seconds.

It would be a much better workflow if relaunching the app after a QML source code change took only a second or less. Using Entr, it is possible.

Enter Entr

Entr is a multi-platform app which uses operating system facilities to watch for file changes and runs a command the instant a watched file changes. To install Entr on a Sailfish OS emulator or device, ssh into the emulator or device, add the community repository chum and install the package entr with (note the chum for also exists, but the repo is empty):

ssu ar chum
pkcon refresh
pkcon install entr

After this, change to the directory where your app’s QML files reside and run entr:

cd /usr/share/harbour-seravo-news/qml/
find . -name '*.qml' | entr -r /usr/bin/harbour-seravo-news

The find command makes sure that all QML files in the current directory and any subdirectory are watched (note the quoted glob, which prevents the shell from expanding it prematurely). Running entr with the parameter -r makes sure it kills the program before running it again. The name of our app in this example is seravo-news (available in the Jolla store if you are interested).

With this, the app will automatically restart if any of its QML files change. To edit the files from your workstation, mount the app directory on the emulator (or device) to your local system using SSH:

mkdir mountpoint
sshfs mountpoint/

Then finally open Qt Creator, point it to the files in the mountpoint directory and start editing. Every time you’ve edited QML files and want to see what the result looks like, simply press Ctrl+S to save and watch the magic! It’s even easier than the F5-to-reload cycle web developers are used to, because on the emulator (or device) there is nothing you need to do: just look at it while the app auto-restarts.

Remember to copy, or directly git commit, your files from the mountpoint directory when you’ve finished writing the QML files.

Entr has been packaged for SailfishOS by Seravo staff. Subscribe to our blog to get notified when we post about how to package and build RPM packages using the Open Build Service and submit them for inclusion in the SailfishOS Chum repository.

Written by Otto Kekäläinen

April 15th, 2014 at 2:37 am