LinuxPlanet Blogs

By Linux Geeks, For Linux Geeks.

Archive for the ‘technology’ Category

Why and how to publish a plugin at WordPress.org

without comments

The first ever WordCamp in Finland was held on May 8th and 9th in Tampere. Many of our staff participated in the event, and Seravo was also one of the sponsors.

On Friday Otto Kekäläinen gave a talk titled “Contributing to WordPress.org – Why you (and your company) should publish plugins at WordPress.org”. On Saturday he held a workshop titled “How to publish a plugin at WordPress.org”, and Onni Hakala held a workshop on developing WordPress sites using Git, Composer, Vagrant and other great tools.


Below are the slides from these presentations and workshops:




WordCamp Workshop on modern dev tools by Onni Hakala (in Finnish)

 

See also our recap on WordCamp Finland 2015 in Finnish: WP-palvelu.fi/blogi

 


 

(Photos by Jaana Björklund)

 

Written by Otto Kekäläinen

May 13th, 2015 at 6:27 am

OpenFOAM – Open Computational Fluid Dynamics

without comments

OpenFOAM (Open source Field Operation And Manipulation) is a numerical CFD (Computational Fluid Dynamics) solver and a pre/postprocessing software suite.

Special care has been taken to enable automatic parallelization of applications written using OpenFOAM high-level syntax. Parallelization can be further extended by using a clustering software such as OpenMPI that distributes simulation workload to multiple worker nodes.
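As an illustrative sketch of how such a parallel run is set up (the dictionary entries follow standard OpenFOAM conventions, but the values here are made up for demonstration), a case is typically decomposed with a decomposeParDict and launched through MPI:

```
// system/decomposeParDict (illustrative): split the mesh into 4 subdomains
numberOfSubdomains 4;

method          simple;

simpleCoeffs
{
    n           (2 2 1);   // 2 x 2 x 1 geometric decomposition
    delta       0.001;
}
```

The case is then split with decomposePar and run with something like mpirun -np 4 <solver> -parallel, after which reconstructPar merges the per-processor results back into a single case.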

Pre/post-processing tools like ParaView enable graphical examination of the simulation set-up and results.

The project code is free software, licensed under the GNU General Public License and maintained by the OpenFOAM Foundation.

A parallel version called OpenFOAM-extend is a fork maintained by Wikki Ltd that provides a large collection of community-generated code contributions which can be used with the official OpenFOAM version.

What does it actually do?

OpenFOAM is aimed at solving continuum mechanical problems. Continuum mechanics deals with the analysis of kinematics and the mechanical behavior of materials modeled as a continuous mass rather than as discrete particles.

OpenFOAM has an extensive range of features to solve complex gas/fluid flows involving chemical reactions, turbulence, heat transfer, solid dynamics, electromagnetics and much more!

The software suite is used widely in the engineering and scientific fields concerning simulations of fluid flows in pipes, engines, combustion chambers, pumps and other diverse use cases.

 

How is it used?

In general, the workflow adheres to the following steps:

  • pre-process
    • physical modeling
    • input mesh generation
    • visualizing the input geometry
    • setting simulation parameters
  • solving
    • running the simulation
  •  post-process
    • examining output data
    • visualizing the output data
    • refining the simulation parameters
    • rerunning the simulation to achieve desired results

Later we will see an example of a 2D water flow simulation following these steps.

 

What can Seravo do to help a customer running OpenFOAM?

Seravo can help your organization by building and maintaining a platform for running OpenFOAM and related software.

Our services include:

  • installing the host platform OS
  • host platform security updates and maintenance
  • compiling, installing and updating the OpenFOAM and OpenFOAM-extend suites
  • cluster set-up and maintenance
  • remote use of visualization software

Seravo has provided the above-mentioned services to its customers, including building a multi-node OpenFOAM cluster.

 

OpenFOAM example: a simplified laminar-flow 2D simulation of a breaking water dam hitting an obstacle in an open container

N.B. Some steps are omitted for brevity!

Input files for a simulation are ASCII text files in a defined open format.

Inside the working directory of a simulation case, there are many files defining the simulation environment and parameters, for example:

  • constant/polyMesh/blockMeshDict
    • defines the physical geometries: walls, water, air
  • system/controlDict
    • simulation parameters that define the time range and granularity of the run
  • constant/transportProperties
    • defines material properties of air and water used in simulation
  • numerous other control files define properties such as gravitational acceleration, physical properties of the container materials and so on

In this example, the simulated timeframe will be one second, with an output snapshot every 0.01 seconds.
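For illustration, the corresponding entries in system/controlDict might look roughly like this (a hedged sketch following the usual OpenFOAM dictionary conventions, not the complete file):

```
application     interFoam;

startTime       0;
endTime         1;           // simulate one second of physical time

deltaT          0.001;

writeControl    adjustableRunTime;
writeInterval   0.01;        // write an output snapshot every 0.01 s
```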

OpenFOAM simulation input geometry

 

After the input files have been massaged into the desired consistency, commands are executed to check and process them for the actual simulation run:

  1. process input mesh (blockMesh)
  2. initialize input conditions (setFields)
  3. optional: visually inspect start conditions (paraFoam/paraview)

The solver application in this case is the OpenFOAM-provided “interFoam”, a solver for two incompressible fluids which tracks the material interfaces and mesh motion.

After setup, the simulation is executed by running the interFoam command.

OpenFOAM cluster running a simulation at full steam on 40 CPU cores.

After about 40 seconds, the simulation is complete and results can be visualized and inspected with ParaView:

Simulation output at 0 seconds.

Simulation output at 0.2 seconds.

 

And here is a fancy gif animation of the whole simulation output covering one second of time:

(animation: dam break simulation)

 

Written by Tero Auvinen

April 10th, 2015 at 4:27 am

How to create good OpenPGP keys

without comments

The OpenPGP standard and the most popular open source program that implements it, GnuPG, have been well tested and widely deployed over the last decades. At least for the time being they are considered to be cryptographically unbroken tools for encrypting and verifying messages and other data.

photo: keys

Due to the lack of easy-to-use tools and integrated user interfaces, large scale use of OpenPGP, for example in encrypting emails, hasn’t happened. There are however some interesting new efforts like Enigmail, Mailpile, Mailvelope and End-to-end that might change the game. There are also promising new tools in the area of key management (establishing trust between parties) like Gnome Keysign and Keybase.io.

Despite PGP’s failure to solve email encryption globally, OpenPGP has been very successful in other areas. For example, it is the de facto tool for signing digital data. If you download a software package online and want to verify that the package on your computer is actually the same package released by the original author (and not a tampered one), you can use the author’s OpenPGP signature to verify its authenticity. Also, even though it is not easy enough for day-to-day usage, if two people want to exchange encrypted messages, OpenPGP is still the only solution for doing it. Alternative messaging channels like Hangouts or Telegram are simply not widely used enough, so email prevails – and for email, OpenPGP is the best encryption tool.

How to install GnuPG?

Installing GnuPG is easy. Just use the software manager of your Linux distro to install it, or download the installation package for Mac OS X via gnupg.org.

There are two generations of GnuPG, the 2.x series and the 1.4.x series. For compatibility reasons it is still advisable to use the 1.4.x versions.

How to create keys?

Without your own key you can only send encrypted data or verify the signatures of other users. In order to receive encrypted data or to sign data yourself, you need to create a key pair for yourself. The key pair consists of two keys:

  • a secret key, which you must protect and which is the only key that can decrypt data sent to you or make signatures
  • a public key, which you publish and which others use to encrypt data for you or to verify your signatures

Before you generate your keys, you need to edit your gpg configuration file to make sure the strongest algorithms are used instead of the default options in GnuPG. If you are using a very recent version of GnuPG it might already have better defaults.

For brevity, we only provide the command line instructions here. Edit the config file by running for example nano ~/.gnupg/gpg.conf and adding the algorithm settings:

personal-digest-preferences SHA512
cert-digest-algo SHA512
default-preference-list SHA512 SHA384 SHA256 SHA224 AES256 AES192 AES CAST5 ZLIB BZIP2 ZIP Uncompressed

If the file does not exist, just run gpg and press Ctrl-C to cancel. This will create the configuration directory and file automatically.

Once done with that preparation, actually generate the key by running gpg --gen-key

For key type select “(1) RSA and RSA (default)“. RSA is the preferred algorithm nowadays and this option also automatically creates a subkey for encryption, something that might be useful later but which you don’t immediately need to learn about.

For the key size, enter “4096”, as 2048-bit keys are no longer considered strong enough.

A good value for expiration is 3 years, so enter “3y” when asked how long the key should be valid. Don’t worry – you won’t have to create a new key from scratch: you can update your key’s expiry date later, even after it has expired. Having keys that never expire is bad practice, and old never-expiring keys might come back to haunt you some day.

For the name and email, use your real name and real email address. OpenPGP is not an anonymity tool, but a tool for encrypting data to other users and verifying their signatures. Other people will be evaluating whether a key is really yours, so a false name would be confusing.

When GnuPG asks for a comment, don’t enter anything. Comments are unnecessary and sometimes simply confusing, so avoid making one.

The last step is to define a passphrase. Follow the guidelines of our password best practices article and choose a complex yet easy to remember password, and make sure you never forget it.

$ gpg --gen-key 
gpg (GnuPG) 1.4.10; Copyright (C) 2008 Free Software Foundation, Inc.

Please select what kind of key you want:
   (1) RSA and RSA (default)
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
Your selection? 1
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 4096
Requested keysize is 4096 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 3y
Key expires at Mon 05 Mar 2018 02:39:23 PM EET
Is this correct? (y/N) y

You need a user ID to identify your key; the software constructs the user ID
from the Real Name, Comment and Email Address in this form:
    "Heinrich Heine (Der Dichter) <heinrichh@duesseldorf.de>"

Real name: Lisa Simpson
Email address: lisa.simpson@example.com
Comment: 
You selected this USER-ID:
    "Lisa Simpson <lisa.simpson@example.com>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
You need a Passphrase to protect your secret key.

We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.

Not enough random bytes available.  Please do some other work to give
the OS a chance to collect more entropy! (Need 284 more bytes)
.....................................+++++

gpg: key 3E44A531 marked as ultimately trusted
public and secret key created and signed.

gpg: checking the trustdb
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: next trustdb check due at 2018-03-05
pub   4096R/3E44A531 2015-03-06 [expires: 2018-03-05]
      Key fingerprint = 4C63 2BAB 4562 5E09 392F  DAA4 C6E4 158A 3E44 A531
uid                  Lisa Simpson <lisa.simpson@example.com>
sub   4096R/75BB2DC6 2015-03-06 [expires: 2018-03-05]

$

At this stage you are done and can start using your new key. For the different uses of OpenPGP you need to consult other documentation or install software that makes them easy. All software that uses OpenPGP will automatically detect the ~/.gnupg directory in your home folder and use the keys from there.

Store securely

Make sure your home directory is encrypted, or maybe even your whole hard drive. On Linux this is easy with eCryptfs or LUKS/dm-crypt. If your hard drive is stolen or your keys leak some other way, the thief can decrypt all your data and impersonate you by digitally signing things with your key.

Also, if you don’t make regular backups of your home directory, you really should start doing so now, so that you don’t lose your key or any other data.

Additional identities (emails)

If you want to add more email addresses to the key, run gpg --edit-key 12345678 and at the prompt enter the command adduid, which starts the dialog for adding another name and email to your key.

More guides

Encryption, and in particular secure, unbreakable encryption, is really hard. Good tools can hide away the complexity, but unfortunately modern tools and operating systems don’t have these features fully integrated yet. Users need to learn some of the technical details to be able to use the different tools themselves.

Because OpenPGP is difficult to use, the net is full of different guides. Unfortunately most of them are outdated or contain errors. Here are a few guides we can recommend for further reading:

Written by Otto Kekäläinen

March 6th, 2015 at 8:38 am

A guide to modern WordPress deployment (part 2)

without comments

banner-front

Recently we published part one in this series on our brand new WordPress deployment platform in which we covered some of the server side technologies that constitute our next-gen WordPress platform.

In part 2 we’ll be briefly covering the toolkit we put together to easily manage the Linux containers that hold individual installations of WordPress.

4. WP-CLI, WordPress on the Command Line

We use the WordPress command line interface to automate everything you would usually have to do in the wp-admin interface. Using WP-CLI removes the inconvenience of logging into a client’s site and clicking around in the WP-admin to perform basic actions like changing option values or adding users.

We’ve been using WP-CLI as part of our install, backup and update processes for quite some time now. Quick, simple administration actions, especially when done in bulk, are where the command line interface for WordPress really reveals its powers.

Check out the famous 5-minute install compressed into 3 easy lines with the WP-CLI:

wp core download
wp core config --dbname=wordpress --dbuser=dbuser --dbpass=dbpasswd
wp core install --url=https://orange.seravo.fi --title="An Orange Website" --admin_user=anttiviljami --admin_password=supersecret --admin_email=antti@seravo.fi

5. Git, Modern Version Control for Everything

We love Git and use it for pretty much everything we do! For WordPress, we rely on Git for deployment and development in virtually all our own projects (including this one!).

Our system is built for developers who use Git for deployment. We provide a Bedrock-like environment for an easy WordPress deployment experience and even offer the ability to easily set up identical environments for development and staging.

The main difference between Bedrock and our layout is the naming scheme. We found it better to provide a familiar directory structure for the majority of our clients, who may not be familiar with Bedrock, so instead of the /app and /wp directory naming scheme we went with /wp-content and /wordpress to provide a non-confusing separation between the WP core and the application.

Bedrock directory structure:

└── web
    ├── app
    │   ├── mu-plugins
    │   ├── plugins
    │   └── themes
    ├── wp-config.php
    ├── index.php
    └── wp

Seravo WordPress layout:

└── htdocs
    ├── wp-content
    │   ├── mu-plugins
    │   ├── plugins
    │   └── themes
    ├── wp-config.php
    ├── index.php
    └── wordpress

Our users can easily jump straight into development regardless of whether they want to use modern deployment techniques with dependency management and Git version control, or the straight up old-fashioned way of copying and editing files (which still seems to be the predominant way to do things with WordPress).

6. Composer, Easy Package Management for PHP

As mentioned earlier, our platform is built for Git and the modern WordPress development stack. This includes the use of dependency management with Composer – the package manager for PHP applications.

We treat the WordPress core, language packs, plugins, themes and their dependencies just like any other components of a modern web application. By utilising Composer as the package manager for WordPress, keeping your dependencies installed and up to date becomes just a matter of having the composer.json file included in your repositories. This way you don’t have to include any code from third party plugins or themes in your own repositories.

With Composer, you also have the ability to choose whether to always use the most recent version of a given plugin or a theme, or stay with a version that’s known to work with your site. This can be extremely useful with large WordPress installations that depend on lots of different plugins and dependencies that may sometimes have compatibility issues between versions.
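As a hedged sketch of what this looks like in practice (the package names and version constraints are examples; we assume the public wpackagist.org mirror for plugins and the johnpbloch/wordpress core package), a minimal composer.json could be along these lines:

```json
{
  "repositories": [
    { "type": "composer", "url": "https://wpackagist.org" }
  ],
  "require": {
    "johnpbloch/wordpress": "4.1.*",
    "wpackagist-plugin/akismet": "^3.0"
  },
  "extra": {
    "wordpress-install-dir": "htdocs/wordpress"
  }
}
```

With a file like this in the repository, running composer install fetches WordPress core and the plugin, so no third-party code needs to be committed.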

7. Extra: PageSpeed for Nginx

Now, PageSpeed really doesn’t have much to do with managing WordPress or Linux containers. Rather, it’s a cutting-edge post-processor and cache developed and used by Google that’s free and open source! Since we hadn’t yet officially deployed it on our platform when we published our last article, we’re including it here as an extra.

The PageSpeed module for Nginx takes care of a large set of essential website optimisations automagically. It implements optimisations to entire webpages according to best practices by analysing your application’s output. Really useful things like asset minification, concatenation and optimisation are handled by the PageSpeed module, so our users get the best possible experience using our websites.

Here are just some of the things PageSpeed will automatically handle for you:

  • Javascript and CSS minification
  • Image optimisation
  • Combining Javascript and CSS
  • Inlining small CSS
  • Lazy loading images
  • Flattening CSS @imports
  • Deferring Javascript
  • Moving stylesheets to the head
  • Trimming URLs
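For illustration, enabling the module and a few of the filters above in the Nginx configuration looks roughly like this (directive names per the ngx_pagespeed documentation; the cache path is an example):

```nginx
pagespeed on;
pagespeed FileCachePath /var/cache/ngx_pagespeed;

# enable selected optimisation filters
pagespeed EnableFilters combine_css,combine_javascript;
pagespeed EnableFilters lazyload_images,inline_css;
```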

We’re really excited about introducing the power of PageSpeed to our client sites and will be posting more about the benefits of using the Nginx PageSpeed module with WordPress in the near future. The results so far have been simply amazing.

More information

More information for Finnish-speaking readers available at wordpress-palvelu.fi.

Please feel free to ask us about our WordPress platform via email at wordpress@seravo.fi or in the comment section below.

Here’s how to patch Ubuntu 8.04 or anything where you have to build bash from source

without comments

UPDATED: I have updated the post to include the post from gb3 as well as additional patches and some tests

Just a quick post to help those who might be running older/unsupported distributions of linux, mainly Ubuntu 8.04 who need to patch their version of bash due to the recent exploit here:

http://thehackernews.com/2014/09/bash-shell-vulnerability-shellshock.html

I found this post and can confirm it works:

https://news.ycombinator.com/item?id=8364385

Here are the steps (make a backup of /bin/bash just in case):

#assume that your sources are in /src
cd /src
wget http://ftp.gnu.org/gnu/bash/bash-4.3.tar.gz
#download all patches
for i in $(seq -f "%03g" 1 28); do wget http://ftp.gnu.org/gnu/bash/bash-4.3-patches/bash43-$i; done
tar zxvf bash-4.3.tar.gz
cd bash-4.3
#apply all patches
for i in $(seq -f "%03g" 1 28);do patch -p0 < ../bash43-$i; done
#build and install
./configure --prefix=/ && make && make install
cd ../../
rm -r src

To test for exploits CVE-2014-6271, CVE-2014-7169, CVE-2014-7186, and CVE-2014-7187 I have found the following information at this link

To check for the CVE-2014-6271 vulnerability

env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

it should NOT echo back the word vulnerable.


To check for the CVE-2014-7169 vulnerability
(warning: if yours fails it will create or overwrite a file called /tmp/echo, which you can delete afterwards and need to delete before testing again)

cd /tmp; env X='() { (a)=>\' bash -c "echo date"; cat echo

it should print the word date and then complain with a message like cat: echo: No such file or directory. If instead it prints the current datetime, your system is vulnerable.


To check for CVE-2014-7186

bash -c 'true <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF <<EOF' || echo "CVE-2014-7186 vulnerable, redir_stack"

it should NOT echo back the text CVE-2014-7186 vulnerable, redir_stack.


To check for CVE-2014-7187

(for x in {1..200} ; do echo "for x$x in ; do :"; done; for x in {1..200} ; do echo done ; done) | bash || echo "CVE-2014-7187 vulnerable, word_lineno"

it should NOT echo back the text CVE-2014-7187 vulnerable, word_lineno.

Written by leftyfb

September 25th, 2014 at 11:03 am

Posted in Linux,technology,Ubuntu

A guide to modern WordPress deployment (part 1)

without comments

Screen-Shot-2014-08-29-at-09.47.20

Seravo & WordPress

As a Linux and open source specialist company, Seravo provides services to many companies that run Linux on their web servers. Not surprisingly, in many of these cases the top-level software running on the server is, of course, the world’s most popular CMS: WordPress. We love it!

In the process of administering and developing a number of WordPress sites for quite some time now, we’ve discovered an arsenal of useful ways to optimise and automate WordPress, some of which we’ve published right here on our blog:

Throughout 2014, we’ve expanded our WordPress expertise and in the process, combined our practices into a full WordPress deployment platform. We’re confident our solution is the next step forward from traditional WordPress hosting services.

In the spirit of openness in the WordPress community, we’re happy to present the details of our deployment platform and which technologies lie under it in this series of blog posts.

1. LXC – A full OS for every WordPress installation

As one of the starting points to our platform, we wanted every individual WordPress installation to have its own full Linux environment. Instead of going the traditional route to virtualisation with VMs seen in most generic hosting solutions, we chose a more recent technology called Linux containers or LXC for short.

Each WordPress instance resides within its own, robust Linux container which provides a lightweight, flexible way to sandbox applications. By using LXC as a means of virtualisation, we’ve greatly reduced the overhead required for hosting websites in a clustered environment, thus increasing overall server performance.

As each WordPress container is also a completely standalone system in itself, it has been extremely easy to clone and transfer instances between hosts and even other WordPress platforms.

2. Nginx, HHVM and MariaDB for amazing performance

Instead of a more traditional LAMP (Linux, Apache, MySQL and PHP) environment, we utilised the newest technologies for running WordPress:

  • Nginx, the fastest and most flexible HTTP server available
  • HHVM, a new and improved PHP engine developed and used by Facebook
  • MariaDB, a faster drop-in replacement for MySQL server

The combination of these technologies enables us to offer WordPress performance unheard of in LAMP environments. Additionally, all of these components are extremely configurable; fine-tuning their performance could be a blog post all on its own.

3. Secure administration with TLS on SPDY/3.0

The drawbacks of building an HTTPS-secured WordPress site have always been the inconvenience of acquiring an SSL certificate for each domain used and the increased server load from the additional computation required by secure protocols.

We didn’t want our users to throw away security for convenience, so we went in search of a solution.

First, we enabled an open networking protocol called SPDY, which is the basis for the upcoming HTTP/2 protocol. SPDY/3 is already supported by all major browsers and offers a significant increase in server-side performance compared to standard HTTPS. This allows us to effortlessly serve large amounts of secure HTTPS traffic with almost no performance penalty.

To avoid having to acquire separate SSL certificates for all our separate WordPress installations, we developed HTTPS Domain Alias – a WordPress plugin that allows the use of a separate domain name for wp-admin. All our clients now get their own subdomain for WordPress administration at *.seravo.fi, which can be accessed securely over HTTPS.
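Put together, the relevant parts of an Nginx server block might look like this sketch (the server name and certificate paths are illustrative, not our actual configuration):

```nginx
server {
    # TLS with SPDY/3 support (requires an Nginx build with the SPDY module)
    listen 443 ssl spdy;
    server_name example.seravo.fi;

    ssl_certificate     /etc/nginx/ssl/example.seravo.fi.crt;
    ssl_certificate_key /etc/nginx/ssl/example.seravo.fi.key;
}
```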

Keep reading

Read part 2 of this series, in which we discuss the management aspects of multiple WordPress installations and useful tools for general WordPress development and security.

More information for Finnish-speaking readers available at wordpress-palvelu.fi.

Written by antti

September 22nd, 2014 at 6:00 am

Turn any computer into a wireless access point with Hostapd

without comments

Do you want to make a computer function as a WLAN base station, so that other computers can use it as their wifi access point? This can easily be done using the open source software Hostapd and compatible wifi hardware.

This is a useful thing to do if a computer is already acting as a firewall or a server in the local network and you want to avoid adding new appliances that all require their own space and cables in your already crowded server closet. Hostapd gives you full control of your WLAN access point and also enhances security: every line of code can be audited, the source of all software can be verified, and all software can be updated easily. It is quite common that active network devices like wifi access points start out as fairly secure small appliances with Linux inside, but over time their vendors stop providing timely security updates, and local administrators don’t bother installing them via some clumsy firmware upgrade mechanism. With a proper Linux server, admins can simply SSH in and run upgrades using the familiar and trusted upgrade channels that Linux server distributions provide.

The first step in creating a wireless base station with Hostapd is to make sure the WLAN hardware supports running in access point mode. Examples are listed in the hostapd documentation. A good place to shop for WLAN cards with excellent Linux drivers is thinkpenguin.com, whose product descriptions nicely list each card’s supported operation modes.

The next step is to install Hostapd, written by Jouni Malinen and others. It is very widely used software and is most likely available in your Linux distribution by default. Many of the WLAN router appliances on the market are actually small Linux computers running hostapd inside, so running hostapd on a proper Linux computer gives you at least all the features available in wifi routers, including advanced authentication and logging.

Our example commands are for Ubuntu 14.04. You need to install both hostapd and dnsmasq. Dnsmasq is a small DNS/DHCP server which we’ll use in this setup. To start, simply run:

sudo apt-get install hostapd dnsmasq

After that you need to create and edit the configuration file:

zcat /usr/share/doc/hostapd/examples/hostapd.conf.gz | sudo tee -a /etc/hostapd/hostapd.conf

The configuration file /etc/hostapd/hostapd.conf is filled with configuration examples and documentation in comments. The relevant parts for a simple WPA2-protected 802.11g network with the SSID ‘Example-WLAN’ and password ‘PASS’ are:

interface=wlan0
ssid=Example-WLAN
hw_mode=g
wpa=2
wpa_passphrase=PASS
wpa_key_mgmt=WPA-PSK WPA-EAP WPA-PSK-SHA256 WPA-EAP-SHA256

Next you need to edit the network interfaces configuration to force the WLAN card to only run in the access point mode. Assuming that the access point network will use the address space 192.168.8.* the file /etc/network/interfaces should look something like this:

# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback

auto wlan0
iface wlan0 inet static
hostapd /etc/hostapd/hostapd.conf
address 192.168.8.1
netmask 255.255.255.0

Then we need to have a DNS relay and DHCP server on our wlan0 interface so the clients actually get a working Internet connection, and this can be accomplished by configuring dnsmasq. Like hostapd it also has a very verbose configuration file /etc/dnsmasq.conf, but the relevant parts look like this:

interface=lo,wlan0
no-dhcp-interface=lo
dhcp-range=192.168.8.20,192.168.8.254,255.255.255.0,12h

Next we need to make sure that the Linux kernel forwards traffic from our wireless network onto other destination networks. For that, edit the file /etc/sysctl.conf and make sure it contains this line:

net.ipv4.ip_forward=1

We need to activate NAT in the built-in firewall of Linux to make sure outgoing traffic uses the external address as its source address and can thus be routed back. This can be done, for example, by appending the following line to the file /etc/rc.local:

iptables -t nat -A POSTROUTING -s 192.168.8.0/24 ! -d 192.168.8.0/24 -j MASQUERADE

Some WLAN card hardware might have a virtual on/off switch. If you have such hardware you might need to also run rfkill to enable the hardware using a command like rfkill unblock 0.

If the same computer also runs Network Manager (as Ubuntu does by default), you need to edit its settings so that it won’t interfere with the new wifi access point. Make sure the file /etc/NetworkManager/NetworkManager.conf looks like this:

[main]
plugins=ifupdown,keyfile,ofono
dns=dnsmasq
 
[ifupdown]
managed=false

Now all configuration should be done. To be sure all changes take effect, finish by rebooting the computer.

If everything is working, a new WLAN network should be detected by other devices.
On the WLAN server you’ll see output similar to the following from these commands:

$ iw wlan0 info
Interface wlan0
        ifindex 3
        type AP
        wiphy 0

$ iwconfig 
wlan0     IEEE 802.11bgn  Mode:Master  Tx-Power=20 dBm   
          Retry  long limit:7   RTS thr:off   Fragment thr:off
          Power Management:off

$ ifconfig
wlan0     Link encap:Ethernet  HWaddr f4:ec:38:de:c8:d2  
          inet addr:192.168.8.1  Bcast:192.168.8.255  Mask:255.255.255.0
          inet6 addr: fe80::f6ec:38ff:fede:c8d2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:5463040 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8166528 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:861148382 (861.1 MB)  TX bytes:9489973056 (9.4 GB)

Written by Otto Kekäläinen

August 27th, 2014 at 9:25 am

Optimal Sailfish SDK workflow with QML auto-reloading

without comments

SailfishOS IDE open. Just press Ctrl+S to save and see the app reloading!

Sailfish is the Linux-based operating system used in Jolla phones. Those who develop apps for Jolla use the Sailfish SDK (software development kit), which is basically a customized version of Qt Creator. Sailfish OS apps are written using the Qt libraries, typically in the C++ programming language. The user interfaces of Sailfish apps, however, are written in a declarative language called QML. QML has a custom markup syntax and includes a subset of CSS and JavaScript to define style and actions. QML files are not compiled; they are distributed with the app binaries as plain text files and interpreted at run-time.

While SailfishOS IDE (Qt Creator) is probably pretty good for C++ programming with the Qt libraries, and the Sailfish flavour comes nicely bundled with complete Sailfish OS instances as virtual machines (one for building binaries and one for emulating them), the overall workflow is far from optimal from a QML development point of view. Each time a developer presses the Play button to launch their app, Qt Creator builds the app from scratch, packages it, deploys it on the emulator (or a real device if set up to do so) and only then actually runs it. After making changes to the source code, the developer needs to remember to press Stop and then Play to build, deploy and start the app again. Even on a super fast machine this cycle takes at least 10 seconds.

The workflow would be much better if relaunching the app after a QML source code change took only a second or less. With Entr, it is possible.

Enter Entr

Entr is a multi-platform tool which uses operating system facilities to watch for file changes and run a command the instant a watched file changes. To install Entr on a Sailfish OS emulator or device, ssh to the emulator or device, add the community repository Chum and install the package entr with the commands below (note that a Chum repository for 1.0.5.16 also exists, but it is empty):

ssh nemo@xxx.xxx.xxx.xxx
ssu ar chum http://repo.merproject.org/obs/sailfishos:/chum:/1.0.4.20/1.0.4.20_armv7hl/
pkcon refresh
pkcon install entr

After this, change to the directory where your app and its QML files reside and run entr:

cd /usr/share/harbour-seravo-news/qml/
find . -name '*.qml' | entr -r /usr/bin/harbour-seravo-news

The find command makes sure all QML files in the current directory or any subdirectory are watched (note the quoted glob, so that the shell does not expand it before find sees it). Running entr with the -r parameter makes it kill the program before running it again. The name of our app in this example is seravo-news (available in the Jolla store if you are interested).
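If entr cannot be installed on your device, a crude polling fallback can be sketched in plain shell using only find, sort and md5sum. The `snapshot_qml` function name is made up for this example:

```shell
# snapshot_qml DIR: print a single checksum covering every .qml file
# under DIR, so any edit to any file changes the snapshot.
snapshot_qml() {
    find "$1" -name '*.qml' -exec md5sum {} + | sort | md5sum
}

# Restart loop sketch (the app path follows the example above):
# APP=/usr/bin/harbour-seravo-news
# prev=$(snapshot_qml .)
# while sleep 1; do
#     cur=$(snapshot_qml .)
#     if [ "$cur" != "$prev" ]; then prev=$cur; killall "${APP##*/}"; "$APP" & fi
# done
```

Polling once a second is far less elegant than entr's kernel-level file watching, but it needs nothing beyond coreutils.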

With this, the app will automatically reload whenever any of the QML files change. To edit the files, mount the app directory on the emulator (or device) to your local system using SSH:

mkdir mountpoint
sshfs nemo@xxx.xxx.xxx.xxx:/usr/share/harbour-seravo-news mountpoint/

Then finally open Qt Creator, point it to the files in the mountpoint directory and start editing. Every time you have edited QML files and want to see what the result looks like, simply press Ctrl+S to save and watch the magic! It's even easier than the F5-to-reload cycle web developers are used to, because on the emulator (or device) there is nothing you need to do: just look at it while the app auto-restarts directly.

Remember to copy, or directly git commit, your files from the mountpoint directory when you have finished writing the QML files.

Entr has been packaged for SailfishOS by Seravo staff. Subscribe to our blog to get notified when we post about how to package and build RPM packages using the Open Build System and submit them for inclusion in SailfishOS Chum repository.

Written by Otto Kekäläinen

April 15th, 2014 at 2:37 am

Open source for office workers

without comments

Open source software is great, and not only for developers who can read the code and use the source directly. Open source is a philosophy. Open source is for technology what democracy is for society: it isn't magically superior right away, but it enables a process which over time leads to the best results – or at least avoids the worst ones. A totalitarian regime might be efficient and benevolent, but there is a big risk that it will become corrupt, and when a totalitarian regime goes bad, it can be really, really ugly.

Because of this philosophy even regular office workers should strive for maximizing their use of open source software. To help ordinary non-technical people Seravo has contributed to the VALO-CD project, which in 2008-2013 created a collection of the best Free and Open Source Software for Windows, which is available both in Finnish and English. The CD (contents suitable also for a USB stick) and related materials are still available for download.

We have also participated in promoting open standards. Most recently we helped the Free Software Foundation Europe publish a press release in Finland regarding the Document Freedom Day. The theme of our latest Seravo-salad was also the OpenDocument Format. Open standards are essential in making sure users can access their own data and open their files in different programs. Open standards are also about programs being able to communicate with each other directly using publicly defined protocols and interfaces.

Information technology is rather challenging, and understanding abstract principles like open source and open standards does not happen in one go. Seravo is proud to support the oldest open source competence center in Europe, the Finnish Center for Open Systems and Solutions COSS ry which has promoted open technologies in Finland since 2003.

When it comes down to details, training is needed. This and last year we have cooperated with the Visio educational centre in Helsinki to provide courses on how to utilize open source software in non-profit organizations.

Learn more

We have recently published the following presentations in Finnish so people can learn more by themselves:





Written by Otto Kekäläinen

March 31st, 2014 at 4:56 am

How’s (battery) life with Jolla?

without comments

Some years ago Nokia conducted a large survey among its customers on what people most liked about Nokia phones. One of the top features turned out to be their unrivaled battery life. Despite the hype around screen resolutions, processor performance and software versions, one of the most important features for a mobile device is simply how long until you have to charge it again.

Jolla phone uptime shows the device has continuously been turned on for 8 days and 13 hours

Back in 2012 we wrote about how to get an Android phone to last for a week without recharging. On a mobile phone, the single most significant power hog is the display. With the display turned off, on the other hand, the biggest energy hogs are the wireless radios. The Android phone in our example lasted for an entire week after locking it to 2G mode only, thus using only the most basic GSM network connection with all other forms of connectivity disabled.

The Nokia Maemo series devices and the MeeGo-based N9 smartphone already sported a feature where, if the device was not in active use, it would automatically downgrade the network connections or disable them. When the user opened an application that required network access, the network was re-enabled automatically without extra user intervention. This feature is also present in Jolla phones. It is the reason why Jolla users every now and then see the “Connecting” notification: the connections are disabled but are automatically brought back up upon a request for network access.

We tested this feature by having all networks (3G, WLAN, Bluetooth, GPS) enabled in the Jolla phone settings and by having e-mail updates active with instant message presence turned on, but with no further active usage of the device. The results showed that the Jolla phone battery lasted for over 8 days! The screenshot attached is from SailTime by Mikko Ahlroth, which visualises the output of the uptime Linux command.

Keeps on running…

But wait, that was not all! The unique hardware feature of the Jolla phone is, of course, The Other Half (abbreviated TOH), an exchangeable back cover for the device. One of the connectors of the TOH is an I2C connection, which is able to transfer both data and power. This makes it possible to build TOHs that supplement the main battery of the device. In fact, the main battery is also located on the back side, so it could be replaced completely by a TOH that connects directly to the connectors the original battery uses.

First examples of this have already emerged. Olli-Pekka Heinsuo has created two battery supplementing Other Halves: the Power Half, which holds an extra battery for increased capacity and the Solar Half, which hosts a solar panel that directly charges the device. Olli-Pekka is attending the Seravo sponsored Jolla and Sailfish Hack Day next Saturday. If you wish to attend, please make sure to swiftly register to the event as the attendance capacity is limited!

(Pictured: the Solar Half and the Power Half)

Written by Otto Kekäläinen

March 27th, 2014 at 6:20 am