LinuxPlanet Blogs

By Linux Geeks, For Linux Geeks.

Archive for the ‘technology’ Category

Ubuntu Phone and Unity vs Jolla and SailfishOS


With billions of devices produced, Android is by far the most common Linux-based mobile operating system to date. Of the lesser-known competitors, Ubuntu phone and Jolla are the most interesting. Both are relatively new, and neither yet has all the features Android provides, but they do have some areas of innovation where they clearly lead Android.

Jolla phone and Ubuntu phone (Bq Aquaris 4.5 model)

Jolla is the name of the company behind SailfishOS. Their first device entered stores in the fall of 2013, and since then SailfishOS has received many updates; SailfishOS 2.0 is supposed to be released soon together with the new Jolla device. A review of the Jolla phone can be read in the Seravo blog article from 2013. Most of the Jolla staff are former Nokia employees with lots of experience from Maemo and MeeGo, from which SailfishOS inherits a lot.

Ubuntu phone is the name of the mobile operating system by Canonical, famous for the desktop and server operating system Ubuntu. The first Ubuntu phones entered stores in the winter of 2015. Even though Ubuntu, and also Ubuntu phone, have been developed for many years, they can still be considered runners-up compared to Jolla, because they have much less production-use experience, with the bug fixes and incremental improvements it brings. A small review of the Ubuntu phone can also be read in the Seravo blog.

In comparison to Android, both of these have the following architectural benefits:

  • are based on full-stack Linux environments, which are much more generic and universal than Android's somewhat limited flavour of Linux
  • utilize Qt and QML technologies to deliver a modern user experience with smooth and fast graphics, instead of a Java virtual machine environment like Android does
  • are more open in their development model and provide better opportunities for third parties to customize and contribute
  • are not tied to the Google ecosystem, which for some user groups is a vital security and policy benefit

The last point about not being tightly knit to an ecosystem can also be a huge drawback. Users have learned to expect that their computing is an integrated experience. The million-dollar question here is: will either one grow big enough to form its own ecosystem? Even though there are billions of people in the world who want to use a mobile phone, there probably isn't enough mindshare to support big ecosystems around both of these mobile operating systems, so it all boils down to which of the two is better: which one is more likely to please a bigger share of users?

To find an answer to that we did some basic comparisons.

Ease of use

Both of these fulfill the basic requirements of a consumer-grade product. They are localized into multiple languages, well packaged, include interactive tutorials to help users learn the new system, and ship with all the basic apps built in, including phone, messages, contacts, email, camera, maps, alarm clock etc.

The Ubuntu phone UI is somewhat familiar to anyone who has used Ubuntu on the desktop, as it uses the Unity user interface by Canonical. On phones the Unity version is 8, while the latest Ubuntu 15.04 for desktops still ships the Unity 7 series. In Unity there is a vertical bar of favourite apps that appears on the left of the screen. Instead of a traditional home screen there is the Dash, with search-based views and also notification-type views. To save screen real estate, most menus and bars only appear on a swipe across one of the edges. Swiping is also used to switch between apps and to return to the Dash.

The UI in the Jolla phone is mostly unlike anything most people have ever seen. The general look is cool and futuristic, with ambient themes. The UI interaction is completely built around swiping, much like it was in the Nokia N9 (MeeGo variant). Once you have used the device a little and become familiar with the gestures, it becomes incredibly effortless and fast to use.

The Ubuntu phone UI looks crisp and clean, but it requires quite a lot of effort to do basic things. After using both devices for a few months, Jolla and SailfishOS simply feel better to use. Most of the criticism of Ubuntu's Unity on the desktop also applies to Unity on the Ubuntu phone:

  • In Ubuntu the app bar only fits a few favourite apps nicely. If you want to browse the list of all apps, you need to tap and swipe many times until you arrive at the app listing. In Gnome 3 on the desktop, and likewise in Jolla phones, the list of installed applications is just one action away and very fast to access.
  • Switching between open apps in Ubuntu is slow. The deck of apps looks nice, but it only fits four apps at a time, while in Gnome 3 opening the shell overview immediately shows all open windows, and in Jolla the main view also shows all open apps. Jolla additionally has so-called cover actions, so you can control some features of running apps directly from the overview without even opening them.
  • Search as the primary interaction model in a generic device does not work. Ubuntu on the desktop has shown that it is too much to ask of users to always know what they want by name. On the Ubuntu phone, search is a little less dominant, but searches and scopes are still quite central. The Unity approach is suboptimal, as users need to remember all kinds of names by heart. Nokia's Z Launcher is a much better implementation of a search-based UI, as it can anticipate what the user might want to search for in the first place, and the full list of apps is just one touch gesture away.

Besides having a fundamentally better UI, the Jolla phone seems to get most of the details right too. For example, if the user does not touch the screen for a while, it dims a bit before shutting off, and if the user quickly does some action, the screen wakes up again to where it was. In Ubuntu, the screen simply shuts off after some time of inactivity, and the user has to open the lock screen, possibly entering a PIN code, even if the screen was off for only a second. Another example: in Jolla, if the user rotates the device but does not want the screen orientation to change, they only need to touch the screen while turning it. In Ubuntu the user needs to go to the settings and lock the rotation, and only then return to the app they were using and turn the device without an undesired change in rotation. A third example: in Jolla you can go back in most views by swiping back, which is easy to do with either thumb. In fact the whole of SailfishOS can be used with just one hand, be it the right or the left. In Ubuntu, navigating backwards requires pressing an arrow icon in the upper left corner, which is impossible to do with your thumb if you hold the device in your right hand, so you often end up needing two hands to interact with the Ubuntu phone UI.

To be fair, Ubuntu phone is quite new, and its developers might not have discovered these kinds of shortcomings yet, as they have not received that much real end-user feedback. On the other hand, Unity on Ubuntu desktops has not improved much over time despite all the criticism it has received. Jolla and SailfishOS got most things right from the start, which perhaps means it was simply designed by more competent UI designers.

Screenshots: app switching, the apps list, and the settings view, with a Jolla ambient theme image visible in the background.

Browser experience

Despite all the cool native apps and the things they can do, our experience says that the single most used app on any smart device is still the Internet browser. Therefore it is essential that the browser in a mobile device is nothing less than perfect.

Both Ubuntu and Jolla ship their own browser implementations instead of using something like Google Chrome as such. As the screenshot below shows, the two browsers have quite a similar look and feel, and both support multiple tabs.

Screenshots: the built-in browser and browser tabs.

Performance and battery life

As both the Ubuntu phone with Unity and the Jolla phone with SailfishOS are built using Qt and QML, it is no surprise that both have very fast and responsive UIs that render smoothly. This is a really big improvement over average Android devices, which often suffer from laggy rendering.

Ubuntu phone has, however, one big drawback. Many of the apps use HTML5 inside a Qt view, and those HTML5 apps load lots of external assets without prefetching or caching them properly, as well-made HTML5 apps with offline manifests should. In practice this means, for example, that browsing the Ubuntu app store itself is very fast, but the app icons and screenshots in the active view take longer to load than anyone will wait: tens of seconds or more. This phenomenon is visible in the Ubuntu app store screenshot below.

The Jolla battery life has been measured and documented in our blog previously. When we started using the Ubuntu phone, the battery life was terrible: it ran out in a day even with the screen off all the time. Later upgrades, however, seem to have fixed some of the drain, as the battery life is now much better. We have not yet measured and documented it properly, though.

App ecosystem, SDK and developer friendliness

Both Ubuntu and SailfishOS have their own SDKs and QML-based native apps. The Jolla phone, however, includes its own implementation of a Java virtual machine, so it also supports Android apps (though not always all the features in them). Ubuntu has chosen not to run Android apps at all. On the other hand, the focus of Ubuntu seems to be on HTML5 apps. At least the maps app in Ubuntu is a plain HTML version of Google Maps, and the Ubuntu store is filled with mostly HTML5 apps, while real native apps are hard to find. In the Jolla store, real native apps and Android apps are easy to tell apart, as Android apps have a green icon next to their entry.

Both platforms include features to let advanced users get a root shell. In Jolla one can go to the settings and enable developer mode, which includes activating remote SSH access so that developers can easily reach their device's command line. In Ubuntu it is simply a matter of opening the terminal app and entering the screen lock PIN code as the password.

SailfishOS package management uses Zypper and RPM packages. Ubuntu phone uses Snappy and Deb packages.
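As a sketch, basic package operations on the two systems look roughly like this (the package names are placeholders; on SailfishOS this assumes developer mode is enabled):

```
# SailfishOS (Zypper/RPM):
$ devel-su zypper install some-package   # install as root
$ rpm -q some-package                    # query an installed package

# Ubuntu phone (Deb packages; note that the system image is
# read-only by default, so installing with apt is of limited use):
$ sudo apt-get install some-package
```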

The interesting thing about Ubuntu is its potential to integrate with the Ubuntu desktop experience. So far in our testing we did not notice any particular integration. In fact, we even failed to get the Ubuntu phone connected to any of our Ubuntu laptops and desktops, while attaching a Jolla to a Linux desktop machine immediately registers it as a USB device with the mount point name “Jolla”. To our knowledge this is, however, an area under heavy development at Ubuntu, and they should soon reveal some big news regarding the convergence of Ubuntu desktop and mobile.

For a company like Seravo, the openness of the technology is important. SailfishOS is at a disadvantage here, because it includes closed-source parts. Much of SailfishOS is, however, upstreamed into the fully open source projects Mer and Nemo. Ubuntu promises that Ubuntu phone is open source and developed in public, with opportunities for external contributions.

Conclusion

Both of these Linux-based mobile operating systems are interesting, and they share many pieces of their technology stack, most notably Qt. There really should be more competition for Android. Based on our experiences, Jolla and SailfishOS is the superior alternative both technically and in usability, but then again Ubuntu may be able to leverage its position as the most popular Linux distribution on desktops and servers. The competition is tight, which can have both negative and positive effects. We hope it will fuel innovation on all fronts.

Written by Otto Kekäläinen

August 14th, 2015 at 1:28 am

Continuous integration testing for WordPress plugins on Github using Travis CI



Intro

We have open sourced and published some plugins on wordpress.org. We only publish them to wordpress.org and do the development on GitHub. Our goal is to keep them simple but effective. Quite a few people use them actively, and some have contributed back by adding features or fixing bugs and docs. It is super nice to have contributions from someone else, but it is hard to see whether those changes break your existing features. We all make mistakes from time to time, and it is easier to recover if you have good test coverage. Automated integration tests can help you out in these situations.

Choosing Travis CI

As we use GitHub.com for hosting our code, we wanted a tool that integrates really well with GitHub. Travis works seamlessly with GitHub and is free to use for open source projects. Travis gives you the ability to run your tests in predefined environments which you can modify to your preferences.

Requirements

You need a GitHub account in order to set up Travis for your projects.

How to use

1. Sign up for a free Travis account

Just click the link on the page and enter your GitHub credentials.

2. Activate testing in Travis. Go to your accounts page from the top right corner.


Then go to your organisation page (or choose a project of your own) and activate the projects you want Travis to test.


3. Add a .travis.yml file to the root of your project repository. You can use the samples from the next section.


After you have pushed to GitHub, just wait a couple of seconds and your tests should start automatically.

Configuring your tests

I think the hardest part of Travis testing is just getting started. That is why I created a testing template for WordPress projects; you can find it in our GitHub repository. Next I am going to show a few different ways to use Travis. We are going to split the tests into unit tests with PHPUnit and integration tests with RSpec, Poltergeist and PhantomJS.

#1 Example .travis.yml: use RSpec integration tests to make sure your plugin won't break anything else

This is the easiest way to use Travis with your WordPress plugin. It installs the latest WordPress and activates your plugin, then checks that your front page works and that you can log into the admin panel. Just drop this .travis.yml into your project and start testing :)!


sudo: false
language: php

notifications:
  on_success: never
  on_failure: change

php:
  - nightly # PHP 7.0
  - 5.6
  - 5.5
  - 5.4

env:
  - WP_PROJECT_TYPE=plugin WP_VERSION=latest WP_MULTISITE=0 WP_TEST_URL=http://localhost:12000 WP_TEST_USER=test WP_TEST_USER_PASS=test

matrix:
  allow_failures:
    - php: nightly

before_script:
  - git clone https://github.com/Seravo/wordpress-test-template wp-tests
  - bash wp-tests/bin/install-wp-tests.sh test root '' localhost $WP_VERSION

script:
  - cd wp-tests/spec && bundle exec rspec test.rb

#2 Example .travis.yml, which uses PHPUnit and RSpec integration tests

  1. Copy phpunit.xml and the tests folder from https://github.com/Seravo/wordpress-test-template into your project

  2. Edit the tests/bootstrap.php line containing PLUGIN_NAME according to your plugin:

define('PLUGIN_NAME','your-plugin-name-here.php');

  3. Add the following .travis.yml file:

sudo: false
language: php

notifications:
  on_success: never
  on_failure: change

php:
  - nightly # PHP 7.0
  - 5.6
  - 5.5
  - 5.4

env:
  - WP_PROJECT_TYPE=plugin WP_VERSION=latest WP_MULTISITE=0 WP_TEST_URL=http://localhost:12000 WP_TEST_USER=test WP_TEST_USER_PASS=test

matrix:
  allow_failures:
    - php: nightly

before_script:
  # Install composer packages before trying to activate themes or plugins
  # - composer install
  - git clone https://github.com/Seravo/wordpress-test-template wp-tests
  - bash wp-tests/bin/install-wp-tests.sh test root '' localhost $WP_VERSION

script:
  - phpunit
  - cd wp-tests/spec && bundle exec rspec test.rb

For this to be useful, you need to add tests specific to your plugin.

To get you started see how I did it for our plugin WP-Dashboard-Log-Monitor.

A few useful links:

If you want to contribute to better WordPress testing, open an issue or a pull request in our WordPress testing template.

Seravo can help you use PHPUnit, RSpec and Travis in your projects. Please feel free to ask us about our WordPress testing via email at wordpress@seravo.fi or in the comment section below.

 

Written by onni

July 2nd, 2015 at 6:04 am

Posted in technology

Reviewing the Ubuntu Phone: is it just for geeks?



 

A few weeks ago I found a pretty black box waiting on my desk at the office. There it was: the BQ Aquaris E4.5, Ubuntu edition. Now available for sale all over Europe, the world's first Ubuntu phone had arrived in the eager hands of Seravo. (Working in an open office with a bunch of other companies dealing more or less with IT, one can now easily get attention not just by talking on the phone but about it, too.)

The Ubuntu phone has been in development for a while, and now it has found its first users and can really be reviewed in practice. Can Ubuntu meet the expectations and demands of a modern mobile user? My personal answer, after getting to know my phone and seeing what it can and cannot do, is: yes, but not yet.

But let's get back to the pretty black box. For a visual (and not-that-technical) person such as myself, the amount of thought put into the design of the Ubuntu phone is very pleasing. The layout of the packaging is very nice, both to the eye and from the point of view of usability. The same goes (at least partly) for the phone and its operating system: the developers themselves claim that “Ubuntu Phone has been designed with obsessive attention to detail” and that “form follows function throughout”. So it is not only the box that is pretty.

 


Swiping through the scopes

When getting familiar with the Ubuntu phone, one can simply follow clear instructions to get the most relevant settings in place. A nice surprise was that the system has been translated into Finnish, and into a whole bunch of other languages ranging from Catalan to Uyghur.

The Ubuntu phone tries to minimize the effort of browsing through several apps by introducing scopes. “Ubuntu's scopes are like individual home screens for different kinds of content, giving you access to everything from movies and music to local services and social media, without having to go through individual apps.” This is a fine idea and works up to a certain point. I myself would, though, have appreciated an easier way to adjust and modify my scopes so that they would indeed serve my everyday needs. It is, for instance, not possible to change the location for the Today section, so my phone still thinks that I am interested in the weather and events near Helsinki (which is not the case, as my hometown Tampere is light years, or at least 160 kilometers, away from the capital).

Overall, swiping is the thing with the Ubuntu phone. One can swipe from the left, swipe from the right, swipe from the top and the bottom and all through the night, never finding a button to push to get to a home screen. There are no such things as home buttons or home screens. This requires practice until one gets familiar with it.

 


Designed for the enthusiasts

A friend of mine once said that in order to really succeed, ecological products must be able to compete with non-ecological ones in usability, or at times even beat them in that area. A green product that does not work can never achieve popularity. The same thought can be applied to open source products as well: as the standard is high, the philosophy itself is not enough if the end product fails to do what it should.

With this thought in mind, I was happy to notice that the Ubuntu phone is not only new and exciting but also pretty usable in everyday work. There are, though, bugs, missing features and some pretty relevant apps absent from the selection. For services like Facebook, Twitter, Google+ or Google Maps, the Ubuntu phone uses web apps. If one is addicted to Instagram or WhatsApp, one should still wait before purchasing an Ubuntu phone. Telegram, a nice alternative for instant messaging, is available though, and so is the possibility to view one's Instagram feed. It also remains a mystery to me what benefits signing in to Ubuntu One brings to the user, except for updates, which are indeed longed for.

To conclude, I would say that at this point the Ubuntu phone is designed for enthusiasts and developers, and should keep on evolving to become popular with the masses. The underlying idea of open source should of course be supported, and it will be interesting to see how the Ubuntu phone develops in the near future. Hopefully the upcoming updates will fix the most relevant bugs and the app selection will grow to fill the needs of an average mobile phone user.


Read more about the Ubuntu Phone.

Written by Sanna Saarikangas

June 4th, 2015 at 12:05 am

Posted in phone,technology,Ubuntu

Why and how to publish a plugin at WordPress.org


The first ever WordCamp Finland was held on May 8th and 9th in Tampere. Many from our staff participated in the event, and Seravo was also one of the sponsors.

On Friday Otto Kekäläinen gave a talk titled “Contributing to WordPress.org – Why you (and your company) should publish plugins at WordPress.org”. On Saturday he held a workshop titled “How to publish a plugin at WordPress.org”, and Onni Hakala held a workshop on developing with WordPress using Git, Composer, Vagrant and other great tools.


Below are the slides from these presentations and workshops:




WordCamp Workshop on modern dev tools by Onni Hakala (in Finnish)

 

See also our recap on WordCamp Finland 2015 in Finnish: WP-palvelu.fi/blogi

 


 

(Photos by Jaana Björklund)

 

Written by Otto Kekäläinen

May 13th, 2015 at 6:27 am


OpenFOAM – Open Computational Fluid Dynamics


OpenFOAM (Open source Field Operation And Manipulation) is a numerical CFD (Computational Fluid Dynamics) solver and a pre/post-processing software suite.

Special care has been taken to enable automatic parallelization of applications written using OpenFOAM high-level syntax. Parallelization can be further extended by using clustering software such as OpenMPI, which distributes the simulation workload to multiple worker nodes.
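As a sketch, a typical MPI-parallel run of an OpenFOAM case (assuming the OpenFOAM environment is sourced and the case contains a system/decomposeParDict) looks like this:

```
$ decomposePar                       # split the mesh into per-processor pieces
$ mpirun -np 4 interFoam -parallel   # run the solver on 4 MPI ranks
$ reconstructPar                     # merge the per-processor results back
```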

Pre/post-processing tools like ParaView enable graphical examination of the simulation set-up and results.

The project code is free software, licensed under the GNU General Public License, and maintained by the OpenFOAM Foundation.

A parallel version called OpenFOAM-extend is a fork maintained by Wikki Ltd that provides a large collection of community-generated code contributions that can be used with the official OpenFOAM version.

What does it actually do?

OpenFOAM is aimed at solving continuum mechanics problems. Continuum mechanics deals with the analysis of kinematics and the mechanical behavior of materials modeled as a continuous mass rather than as discrete particles.

OpenFOAM has an extensive range of features to solve complex gas/fluid flows involving chemical reactions, turbulence, heat transfer, solid dynamics, electromagnetics and much more!

The software suite is used widely in the engineering and scientific fields concerning simulations of fluid flows in pipes, engines, combustion chambers, pumps and other diverse use cases.

 

How is it used?

In general, the workflow adheres to the following steps:

  • pre-process
    • physical modeling
    • input mesh generation
    • visualizing the input geometry
    • setting simulation parameters
  • solving
    • running the simulation
  • post-process
    • examining output data
    • visualizing the output data
    • refining the simulation parameters
    • rerunning the simulation to achieve desired results

Later we will see an example of a 2D water flow simulation following these steps.

 

What can Seravo do to help a customer running OpenFOAM?

Seravo can help your organization by building and maintaining a platform for running OpenFOAM and related software.

Our services include:

  • installing the host platform OS
  • host platform security updates and maintenance
  • compiling, installing and updating the OpenFOAM and OpenFOAM-extend suites
  • cluster set-up and maintenance
  • remote use of visualization software

Seravo has provided the above-mentioned services to its customers, for example by building a multinode OpenFOAM cluster.

 

OpenFOAM example: a simplified laminar-flow 2D simulation of a breaking water dam hitting an obstacle in an open container

N.B. Some steps are omitted for brevity!

Input files for the simulation are ASCII text files in a defined open format.

Inside the working directory of a simulation case there are many files defining the simulation environment and parameters, for example:

  • constant/polyMesh/blockMeshDict
    • defines the physical geometries; walls, water, air
  • system/controlDict
    • simulation parameters that define the time range and granularity of the run
  • constant/transportProperties
    • defines material properties of air and water used in simulation
  • numerous other control files define properties such as gravitational acceleration, physical properties of the container materials and so on

In this example, the simulated timeframe will be one second, with an output snapshot every 0.01 seconds.

OpenFOAM simulation input geometry

 

After the input files have been massaged into the desired consistency, commands are executed to check and process them for the actual simulation run:

  1. process input mesh (blockMesh)
  2. initialize input conditions (setFields)
  3. optional: visually inspect start conditions (paraFoam/paraview)

The solver application in this case is the OpenFOAM-provided interFoam, a solver for two incompressible fluids that tracks the material interfaces and mesh motion.

After setup, the simulation is executed by running the interFoam command.
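Put together, the whole single-node run of this case is just a handful of commands (a sketch, executed in the case directory with the OpenFOAM environment sourced):

```
$ blockMesh    # 1. process the input mesh
$ setFields    # 2. initialize the input conditions
$ paraFoam     # 3. (optional) visually inspect the start conditions
$ interFoam    # run the actual simulation
```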

OpenFOAM cluster running the simulation at full steam on 40 CPU cores.

After about 40 seconds, the simulation is complete and results can be visualized and inspected with ParaView:

Simulation output at 0 seconds.

Simulation output at 0.2 seconds.

 

And here is a fancy GIF animation of the whole simulation output, covering one second of time:


 

Written by Tero Auvinen

April 10th, 2015 at 4:27 am

How to create good OpenPGP keys


The OpenPGP standard and the most popular open source program that implements it, GnuPG, have been well tested and widely deployed over the last decades. At least for the time being, they are considered cryptographically unbroken tools for encrypting and verifying messages and other data.

photo: keys

Due to the lack of easy-to-use tools and integrated user interfaces, large-scale use of OpenPGP, for example in encrypting email, has not happened. There are, however, some interesting new efforts like Enigmail, Mailpile, Mailvelope and End-to-End that might change the game. There are also promising new tools in the area of key management (establishing trust between parties), like GNOME Keysign and Keybase.io.

Despite PGP's failure to solve email encryption globally, OpenPGP has been very successful in other areas. For example, it is the de facto tool for signing digital data. If you download a software package online and want to verify that the package on your computer is actually the same package as released by the original author (and not a tampered one), you can use the author's OpenPGP signature to verify its authenticity. Also, even though it is not easy enough for day-to-day use, if one person wants to send another an encrypted message, OpenPGP is still the only solution. Alternative messaging channels like Hangouts or Telegram are simply not widely enough used, so email prevails, and for email OpenPGP is the best encryption tool.
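For example, verifying a downloaded release against its detached signature is a one-liner (a sketch; the file names are hypothetical, and the author's public key must already be imported into your keyring):

```
$ gpg --verify foo-1.0.tar.gz.sig foo-1.0.tar.gz
```

If the output does not report a good signature from the expected author, the package should not be trusted.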

How to install GnuPG?

Installing GnuPG is easy. Just use the software manager of your Linux distro to install it, or download the installation package for Mac OS X via gnupg.org.

There are two generations of GnuPG, the 2.x series and the 1.4.x series. For compatibility reasons it is still advisable to use the 1.4.x versions.

How to create keys?

Without your own key you can only send encrypted data or verify the signatures of other users. In order to be able to receive encrypted data or to sign data yourself, you need to create a key pair for yourself. The key pair consists of two keys:

  • a secret key, which you must protect and which is the only key that can decrypt data sent to you or make your signatures
  • a public key, which you publish and which others use to encrypt data for you or to verify your signatures

Before you generate your keys, edit your gpg configuration file to make sure the strongest algorithms are used instead of GnuPG's default options. If you are using a very recent version of GnuPG, it might already have better defaults.

For brevity, we only provide the command line instructions here. Edit the config file by running for example nano ~/.gnupg/gpg.conf and adding the algorithm settings:

personal-digest-preferences SHA512
cert-digest-algo SHA512
default-preference-list SHA512 SHA384 SHA256 SHA224 AES256 AES192 AES CAST5 ZLIB BZIP2 ZIP Uncompressed

If the file does not exist, just run gpg and press Ctrl-C to cancel; this creates the configuration directory and file automatically.
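The same preparation can also be scripted. A minimal sketch using only standard shell tools, which is idempotent (running it twice does not duplicate the settings):

```shell
# Create the GnuPG config directory with safe permissions and append the
# stronger algorithm preferences shown above, unless they are already present.
mkdir -p ~/.gnupg
chmod 700 ~/.gnupg
if ! grep -qs '^cert-digest-algo SHA512' ~/.gnupg/gpg.conf; then
  cat >> ~/.gnupg/gpg.conf <<'EOF'
personal-digest-preferences SHA512
cert-digest-algo SHA512
default-preference-list SHA512 SHA384 SHA256 SHA224 AES256 AES192 AES CAST5 ZLIB BZIP2 ZIP Uncompressed
EOF
fi
```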

Once done with that preparation, actually generate the key by running gpg --gen-key

For the key type, select “(1) RSA and RSA (default)”. RSA is the preferred algorithm nowadays, and this option also automatically creates a subkey for encryption, something that might be useful later but which you don't immediately need to learn about.

For the key size, enter “4096”, as 2048-bit keys are not considered strong enough anymore.

A good value for expiration is 3 years, so enter “3y” when asked how long the key should be valid. Don't worry, you won't have to create a new key again: you can update your key's expiry date some day, even after it has expired. Having keys that never expire is bad practice; old never-expiring keys might come back to haunt you some day.
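Extending the expiry date later is a short interactive session (a sketch; KEYID stands for your own key's ID, and the updated public key should be re-published afterwards, for example with gpg --send-keys KEYID):

```
$ gpg --edit-key KEYID
gpg> expire
(answer the validity prompt, e.g. 3y)
gpg> save
```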

For the name and email choose your real name and real email. OpenPGP is not an anonymity tool, but a tool to encrypt to and verify signatures of other users. Other people will be evaluating if a key is really yours, so having a false name would be confusing.

When GnuPG asks for a comment, don’t enter anything. Comments are unnecessary and sometimes simply confusing, so avoid making one.

The last step is to define a passphrase. Follow the guidelines of our password best practices article and choose a complex yet easy to remember password, and make sure you never forget it.

$ gpg --gen-key 
gpg (GnuPG) 1.4.10; Copyright (C) 2008 Free Software Foundation, Inc.

Please select what kind of key you want:
   (1) RSA and RSA (default)
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
Your selection? 1
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 4096
Requested keysize is 4096 bits
Please specify how long the key should be valid.
         0 = key does not expire
        = key expires in n days
      w = key expires in n weeks
      m = key expires in n months
      y = key expires in n years
Key is valid for? (0) 3y
Key expires at Mon 05 Mar 2018 02:39:23 PM EET
Is this correct? (y/N) y

You need a user ID to identify your key; the software constructs the user ID
from the Real Name, Comment and Email Address in this form:
    "Heinrich Heine (Der Dichter) <heinrichh@duesseldorf.de>"

Real name: Lisa Simpson
Email address: lisa.simpson@example.com
Comment: 
You selected this USER-ID:
    "Lisa Simpson <lisa.simpson@example.com>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
You need a Passphrase to protect your secret key.

We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.

Not enough random bytes available.  Please do some other work to give
the OS a chance to collect more entropy! (Need 284 more bytes)
.....................................+++++

gpg: key 3E44A531 marked as ultimately trusted
public and secret key created and signed.

gpg: checking the trustdb
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: next trustdb check due at 2018-03-05
pub   4096R/3E44A531 2015-03-06 [expires: 2018-03-05]
      Key fingerprint = 4C63 2BAB 4562 5E09 392F  DAA4 C6E4 158A 3E44 A531
uid                  Lisa Simpson <lisa.simpson@example.com>
sub   4096R/75BB2DC6 2015-03-06 [expires: 2018-03-05]

$

At this stage you are done and can start using your new key. For different usages of OpenPGP you need to consult other documentation or install software that makes it easy. All software that use OpenPGP will automatically detect your ~/.gnupg directory in your home folder and use the keys from there.

Store securely

Make sure you home directory is encrypted, or maybe even your whole hard drive. On Linux it is easy with eCryptfs or LUKS/dm-crypt. If your hard drive is stolen or your keys leak in some other way, the thief can decrypt all your data and impersonate you by signing things digitally with your key.

Also if you don’t make regular backups of your home directory, you really should start doing it now so that you don’t lose your key or any other data either.

Additional identities (emails)

If you want to add more email addresses in the key gpg --edit-key 12345678 and in the prompt enter command adduid, which will start the dialog for adding another name and email on your key.

More guides

Encryption, and in particular secure unbreakable encryption is really hard. Good tools can hide away the complexity, but unfortunately modern tools and operating systems don’t have these features fully integrated yet. Users need to learn some of the technical stuff to be able to use different tools themselves.

Because OpenPGP is difficult to use, the net is full of lots of different guides. Unfortunately most of them are outdated or have errors. Here are a few guides we can recommend for futher reading:

Written by Otto Kekäläinen

March 6th, 2015 at 8:38 am

How to create good OpenPGP keys

without comments

The OpenPGP standard and the most popular open source program that implements it, GnuPG, have been well tested and widely deployed over the last decades. At least for the time being they are considered to be cryptographically unbroken tools for encrypting and verifying messages and other data.

photo: keys

Due to the lack of easy-to-use tools and integrated user interfaces, large-scale use of OpenPGP, for example in encrypting emails, hasn’t happened. There are however some interesting new efforts like Enigmail, Mailpile, Mailvelope and End-To-End that might change the game. There are also promising new tools in the area of key management (establishing trust between parties) like GNOME Keysign and Keybase.io.

Despite PGP’s failure to solve email encryption globally, OpenPGP has been very successful in other areas. For example, it is the de facto tool for signing digital data. If you download a software package online and want to verify that the package on your computer is actually the same package the original author released (and not a tampered one), you can use the author’s OpenPGP signature to verify its authenticity. Also, even though it is not easy enough for day-to-day usage, if one person wants to send another an encrypted message, OpenPGP is still the only solution for doing it. Alternative messaging channels like Hangouts or Telegram are just not widely used enough, so email prevails – and for email, OpenPGP is the best encryption tool.

How to install GnuPG?

Installing GnuPG is easy. Just use the software manager of your Linux distro to install it, or download the installation package for Mac OS X via gnupg.org.

There are two generations of GnuPG, the 2.x series and the 1.4.x series. For compatibility reasons it is still advisable to use the 1.4.x versions.

How to create keys?

Without your own key you can only send encrypted data or verify the signatures of other users. In order to be able to receive encrypted data or to sign some data yourself, you need to create a key pair for yourself. The key pair consists of two keys:

  • a secret key you shall protect and which is the only key that can be used to decrypt data sent to you or to make signatures
  • a public key which you publish and which others use to encrypt data for you or use to verify your signatures

Before you generate your keys, you need to edit your gpg configuration file to make sure the strongest algorithms are used instead of the default options in GnuPG. If you are using a very recent version of GnuPG it might already have better defaults.

For brevity, we only provide the command line instructions here. Edit the config file by running for example nano ~/.gnupg/gpg.conf and adding the algorithm settings:

personal-digest-preferences SHA512
cert-digest-algo SHA512
default-preference-list SHA512 SHA384 SHA256 SHA224 AES256 AES192 AES CAST5 ZLIB BZIP2 ZIP Uncompressed

If the file does not exist, just run gpg and press Ctrl-C to cancel. This will create the configuration directory and file automatically.
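If you prefer not to edit the file in an editor, the same settings can be appended from the shell. This is just a sketch, assuming the default ~/.gnupg location, and it should be run only once to avoid duplicate entries:

```shell
# Create the GnuPG configuration directory if it does not exist yet
mkdir -p ~/.gnupg
chmod 700 ~/.gnupg

# Append the stronger algorithm preferences to gpg.conf
cat >> ~/.gnupg/gpg.conf <<'EOF'
personal-digest-preferences SHA512
cert-digest-algo SHA512
default-preference-list SHA512 SHA384 SHA256 SHA224 AES256 AES192 AES CAST5 ZLIB BZIP2 ZIP Uncompressed
EOF
```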

Once done with that preparation, actually generate the key by running gpg --gen-key

For the key type select “(1) RSA and RSA (default)”. RSA is the preferred algorithm nowadays, and this option also automatically creates a subkey for encryption – something that might be useful later but which you don’t need to learn about immediately.

For the key size, enter “4096”, as 2048-bit keys are not considered strong enough anymore.

A good value for expiration is 3 years, so enter “3y” when asked how long the key should be valid. Don’t worry – you won’t have to create a new key again. You can update your key’s expiry date some day, even after it has expired. Having keys that never expire is bad practice; old never-expiring keys might come back to haunt you some day.

For the name and email, use your real name and real email address. OpenPGP is not an anonymity tool, but a tool for encrypting data to other users and verifying their signatures. Other people will evaluate whether a key is really yours, so a false name would be confusing.

When GnuPG asks for a comment, don’t enter anything. Comments are unnecessary and sometimes simply confusing, so avoid making one.

The last step is to define a passphrase. Follow the guidelines of our password best practices article and choose a complex yet easy to remember password, and make sure you never forget it.

$ gpg --gen-key 
gpg (GnuPG) 1.4.10; Copyright (C) 2008 Free Software Foundation, Inc.

Please select what kind of key you want:
   (1) RSA and RSA (default)
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
Your selection? 1
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 4096
Requested keysize is 4096 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 3y
Key expires at Mon 05 Mar 2018 02:39:23 PM EET
Is this correct? (y/N) y

You need a user ID to identify your key; the software constructs the user ID
from the Real Name, Comment and Email Address in this form:
    "Heinrich Heine (Der Dichter) <heinrichh@duesseldorf.de>"

Real name: Lisa Simpson
Email address: lisa.simpson@example.com
Comment: 
You selected this USER-ID:
    "Lisa Simpson <lisa.simpson@example.com>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
You need a Passphrase to protect your secret key.

We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.

Not enough random bytes available.  Please do some other work to give
the OS a chance to collect more entropy! (Need 284 more bytes)
.....................................+++++

gpg: key 3E44A531 marked as ultimately trusted
public and secret key created and signed.

gpg: checking the trustdb
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: next trustdb check due at 2018-03-05
pub   4096R/3E44A531 2015-03-06 [expires: 2018-03-05]
      Key fingerprint = 4C63 2BAB 4562 5E09 392F  DAA4 C6E4 158A 3E44 A531
uid                  Lisa Simpson <lisa.simpson@example.com>
sub   4096R/75BB2DC6 2015-03-06 [expires: 2018-03-05]

$
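The interactive dialog above can also be scripted with GnuPG’s unattended batch mode, which is handy for provisioning. A minimal sketch using the values recommended above (the name, email and file path are examples; passphrase handling in batch mode varies by GnuPG version, so the actual generation command is left commented out):

```shell
# Write the key parameters recommended above into a parameter file
cat > /tmp/gpg-keyparams <<'EOF'
Key-Type: RSA
Key-Length: 4096
Subkey-Type: RSA
Subkey-Length: 4096
Name-Real: Lisa Simpson
Name-Email: lisa.simpson@example.com
Expire-Date: 3y
EOF

# Then generate the key pair non-interactively:
# gpg --batch --gen-key /tmp/gpg-keyparams
```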

At this stage you are done and can start using your new key. For the different uses of OpenPGP you will need to consult other documentation or install software that makes them easy. All software that uses OpenPGP will automatically detect your ~/.gnupg directory and use the keys from there.

Store securely

Make sure your home directory is encrypted, or maybe even your whole hard drive. On Linux this is easy with eCryptfs or LUKS/dm-crypt. Otherwise, if your hard drive is stolen or your keys leak in some other way, the thief can decrypt all data sent to you and impersonate you by signing things digitally with your key.

Also, if you don’t make regular backups of your home directory, you really should start doing so now, so that you don’t lose your key or any other data.
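As a minimal sketch of such a backup, the whole GnuPG directory can be archived in one command (the archive path is an example; store the result on encrypted media only, or pipe it through gpg --symmetric):

```shell
# Ensure the GnuPG directory exists, then archive it
mkdir -p ~/.gnupg && chmod 700 ~/.gnupg
tar czf /tmp/gnupg-backup.tar.gz -C ~ .gnupg
```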

Additional identities (emails)

If you want to add more email addresses to the key, run gpg --edit-key 12345678 and at the prompt enter the command adduid, which starts the dialog for adding another name and email to your key.

More guides

Encryption, and in particular secure, unbreakable encryption, is really hard. Good tools can hide away the complexity, but unfortunately modern tools and operating systems don’t have these features fully integrated yet. Users need to learn some of the technical details to be able to use the different tools themselves.

Because OpenPGP is difficult to use, the net is full of lots of different guides. Unfortunately most of them are outdated or have errors. Here are a few guides we can recommend for further reading:

Written by Otto Kekäläinen

March 6th, 2015 at 8:38 am

A guide to modern WordPress deployment (part 2)

without comments


Recently we published part one in this series on our brand new WordPress deployment platform in which we covered some of the server side technologies that constitute our next-gen WordPress platform.

In part 2 we’ll be briefly covering the toolkit we put together to easily manage the Linux containers that hold individual installations of WordPress.

4. WP-CLI, WordPress on the Command Line

We use the WordPress command line interface to automate everything you would usually have to do in the wp-admin interface. Using WP-CLI removes the inconvenience of logging into a client’s site and clicking around in the WP-admin to perform basic actions like changing option values or adding users.

We’ve been using WP-CLI as part of our install, backup and update processes for quite some time now. Quick, simple administration actions, especially when done in bulk, are where the command line interface for WordPress really reveals its powers.

Check out the famous 5-minute install compressed into 3 easy lines with the WP-CLI:

wp core download
wp core config --dbname=wordpress --dbuser=dbuser --dbpass=dbpasswd
wp core install --url=https://orange.seravo.fi --title="An Orange Website" --admin_user=anttiviljami --admin_password=supersecret --admin_email=antti@seravo.fi

5. Git, Modern Version Control for Everything

We love Git and use it for pretty much everything we do! For WordPress, we rely on Git for deployment and development in virtually all our own projects (including this one!).

Our system is built for developers who use Git for deployment. We provide a Bedrock-like environment for an easy WordPress deployment experience and even offer the ability to easily set up identical environments for development and staging.

The main difference between Bedrock and our layout is the naming scheme. We found it better to provide a familiar directory structure for the majority of our clients, who may not be familiar with Bedrock, so instead of the /app and /wp directory naming scheme we went with /wp-content and /wordpress to provide a non-confusing separation between the WP core and the application.

Bedrock directory structure:

└── web
    ├── app
    │   ├── mu-plugins
    │   ├── plugins
    │   └── themes
    ├── wp-config.php
    ├── index.php
    └── wp

Seravo WordPress layout:

└── htdocs
    ├── wp-content
    │   ├── mu-plugins
    │   ├── plugins
    │   └── themes
    ├── wp-config.php
    ├── index.php
    └── wordpress

Our users can easily jump straight into development regardless of whether they want to use modern deployment techniques with dependency management and Git version control, or the straight up old-fashioned way of copying and editing files (which still seems to be the predominant way to do things with WordPress).

6. Composer, Easy Package Management for PHP

As mentioned earlier, our platform is built for Git and the modern WordPress development stack. This includes the use of dependency management with Composer – the package manager for PHP applications.

We treat the WordPress core, language packs, plugins, themes and their dependencies just like any other component in a modern web application. By utilising Composer as the package manager for WordPress, keeping your dependencies up to date and installed becomes just a matter of having the composer.json file included in your repositories. This way you don’t have to include any code from third party plugins or themes in your own repositories.

With Composer, you also have the ability to choose whether to always use the most recent version of a given plugin or a theme, or stay with a version that’s known to work with your site. This can be extremely useful with large WordPress installations that depend on lots of different plugins and dependencies that may sometimes have compatibility issues between versions.
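As an illustration, a minimal composer.json for such a setup might look like the sketch below. The WordPress core package and the plugin are examples (the plugin comes from the wpackagist.org mirror of the official plugin directory), and the project path is hypothetical:

```shell
# Hypothetical minimal composer.json for a Composer-managed WordPress project
mkdir -p /tmp/wp-demo
cat > /tmp/wp-demo/composer.json <<'EOF'
{
    "repositories": [
        { "type": "composer", "url": "https://wpackagist.org" }
    ],
    "require": {
        "johnpbloch/wordpress": "*",
        "wpackagist-plugin/akismet": "*"
    }
}
EOF
# Running `composer install` in that directory would then fetch
# WordPress core and the plugin into place.
```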

7. Extra: PageSpeed for Nginx

Now, PageSpeed really doesn’t have much to do with managing WordPress or Linux containers. Rather, it’s a cutting-edge post-processor and cache developed and used by Google that’s free and open source! Since we hadn’t yet officially deployed it on our platform when we published our last article, we’re going to include it here as an extra.

The PageSpeed module for Nginx takes care of a large set of essential website optimisations automagically. It applies optimisations to entire webpages according to best practices by looking at your application’s output and analysing it. Really useful things like asset minification, concatenation and optimisation are handled by the PageSpeed module, so our users get the best possible experience using our websites.

Here are just some of the things PageSpeed will automatically handle for you:

  • Javascript and CSS minification
  • Image optimisation
  • Combining Javascript and CSS
  • Inlining small CSS
  • Lazy loading images
  • Flattening CSS @imports
  • Deferring Javascript
  • Moving stylesheets to the head
  • Trimming URLs

We’re really excited about introducing the power of PageSpeed to our client sites and will be posting more about the benefits of using the Nginx PageSpeed module with WordPress in the near future. The results so far have been simply amazing.
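As a rough sketch, enabling a few of the filters listed above in an Nginx server block looks like this (the cache path and filter selection are examples, and the ngx_pagespeed module must of course be compiled into Nginx first):

```shell
# Example ngx_pagespeed snippet, written to a file you would
# include from your Nginx server block (path is hypothetical)
cat > /tmp/pagespeed.conf <<'EOF'
pagespeed on;
pagespeed FileCachePath /var/cache/ngx_pagespeed;
pagespeed EnableFilters combine_css,combine_javascript;
pagespeed EnableFilters lazyload_images,inline_css;
EOF
```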

More information

More information for Finnish-speaking readers available at wordpress-palvelu.fi.

Please feel free to ask us about our WordPress platform via email at wordpress@seravo.fi or in the comment section below.

Here’s how to patch Ubuntu 8.04 or anything where you have to build bash from source

without comments

UPDATED: I have updated the post to include the post from gb3 as well as additional patches and some tests

Just a quick post to help those who might be running older/unsupported distributions of Linux, mainly Ubuntu 8.04, who need to patch their version of bash due to the recent exploit described here:

http://thehackernews.com/2014/09/bash-shell-vulnerability-shellshock.html

I found this post and can confirm it works:

https://news.ycombinator.com/item?id=8364385

Here are the steps (make a backup of /bin/bash first, just in case):

#assume that your sources are in /src
#back up the current bash first, just in case
cp /bin/bash /bin/bash.old
cd /src
wget http://ftp.gnu.org/gnu/bash/bash-4.3.tar.gz
#download all patches
for i in $(seq -f "%03g" 1 28); do wget http://ftp.gnu.org/gnu/bash/bash-4.3-patches/bash43-$i; done
tar zxvf bash-4.3.tar.gz
cd bash-4.3
#apply all patches
for i in $(seq -f "%03g" 1 28); do patch -p0 < ../bash43-$i; done
#build and install
./configure --prefix=/ && make && make install
#clean up the sources
cd /
rm -r /src

To test for exploits CVE-2014-6271, CVE-2014-7169, CVE-2014-7186, and CVE-2014-7187 I have found the following information at this link

To check for the CVE-2014-6271 vulnerability

env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

it should NOT echo back the word vulnerable.


To check for the CVE-2014-7169 vulnerability
(warning: if yours fails it will make or overwrite a file called /tmp/echo that you can delete after, and need to delete before testing again )

cd /tmp; env X='() { (a)=>\' bash -c "echo date"; cat echo

it should say the word date then complain with a message like cat: echo: No such file or directory. If instead it tells you what the current datetime is then your system is vulnerable.


To check for CVE-2014-7186

bash -c 'true <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF' || echo "CVE-2014-7186 vulnerable, redir_stack"

it should NOT echo back the text CVE-2014-7186 vulnerable, redir_stack.


To check for CVE-2014-7187

(for x in {1..200} ; do echo "for x$x in ; do :"; done; for x in {1..200} ; do echo done ; done) | bash || echo "CVE-2014-7187 vulnerable, word_lineno"

it should NOT echo back the text CVE-2014-7187 vulnerable, word_lineno.

Written by leftyfb

September 25th, 2014 at 11:03 am

Posted in Linux,technology,Ubuntu