LinuxPlanet Blogs

By Linux Geeks, For Linux Geeks.

Be a “Narendra Modi” Voter | My Appeal to Vote for Modi

without comments

First of all, sorry to all those who follow my blog only for Linux and open source how-tos and articles, and sorry to the CONgress and Aam Aadmi Party followers who follow my blog too! Hello Indians, I am writing this article to support Shri Narendra Modi ji as the next Prime Minister of India. Narendra Modi […]

Using sysdig to Troubleshoot like a boss

without comments

If you haven't seen it yet, there is a new troubleshooting tool out called sysdig. It has been touted as strace meets tcpdump and, well, it seems to be living up to the hype. I would actually rather compare sysdig to SystemTap meets tcpdump, as it has the command line syntax of tcpdump but the power of SystemTap.

In this article I am going to cover some basic and cool examples for sysdig; for a more complete list you can look over the sysdig wiki. However, it seems that even the official sysdig documentation only scratches the surface of what can be done with the tool.

Installation

In this article we will be installing sysdig on Ubuntu using apt-get. If you are running an RPM-based distribution, you can find details on installing via yum on sysdig's wiki.

Setting up the apt repository

To install sysdig via apt we will need to set up the apt repository maintained by Draios, the company behind sysdig. We can do this by running the following curl commands.

# curl -s https://s3.amazonaws.com/download.draios.com/DRAIOS-GPG-KEY.public | apt-key add -  
# curl -s -o /etc/apt/sources.list.d/draios.list http://download.draios.com/stable/deb/draios.list

The first command above downloads the Draios GPG key and adds it to apt's keyring. The second downloads an apt sources file from Draios and places it into the /etc/apt/sources.list.d/ directory.

Update apt's indexes

Once the sources list and GPG key are installed, we will need to re-sync apt's package indexes; this can be done by running apt-get update.

# apt-get update

Kernel headers package

The sysdig utility requires the kernel headers package. Before installing, we will need to validate that the kernel headers package is present.

Check if kernel headers is installed

The system that I am using for this example already had the kernel headers package installed; to check whether they are installed on your system, you can use the dpkg command.

    # dpkg --list | grep header
    ii  linux-generic                       3.11.0.12.13                     amd64        Complete Generic Linux kernel and headers
    ii  linux-headers-3.11.0-12             3.11.0-12.19                     all          Header files related to Linux kernel version 3.11.0
    ii  linux-headers-3.11.0-12-generic     3.11.0-12.19                     amd64        Linux kernel headers for version 3.11.0 on 64 bit x86 SMP
    ii  linux-headers-generic               3.11.0.12.13                     amd64        Generic Linux kernel headers

It is important to note that the kernel headers package must match the specific kernel version your system is running. In the output above you can see the linux-generic package is version 3.11.0.12 and the headers packages are for 3.11.0-12. If you have multiple kernels installed, you can check which version your system is running with the uname command.

# uname -r
3.11.0-12-generic

Installing the kernel headers package

To install the headers package for this specific kernel you can use apt-get. Keep in mind, you must specify the kernel version listed by uname -r.

# apt-get install linux-headers-<kernel version>

Example:

# apt-get install linux-headers-3.11.0-12-generic

Installing sysdig

Now that the apt repository is setup and we have the required dependencies, we can install the sysdig command.

# apt-get install sysdig

Using sysdig

Basic Usage

The syntax for sysdig is similar to tcpdump, in particular the saving and reading of trace files. All of sysdig's output can be saved to a file and read later, just like with tcpdump. This is useful if you are running a process or experiencing an issue and want to dig through the information later.

Writing trace files

To write a file we can use the -w flag with sysdig and specify the file name.

Syntax:

# sysdig -w <output file>

Example:

# sysdig -w tracefile.dump

Like tcpdump, the sysdig command can be stopped with CTRL+C.
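
If you would rather not stop the capture by hand, sysdig also accepts the -n flag to stop automatically after a fixed number of events; the event count and file name below are arbitrary.

Example:

# sysdig -n 5000 -w tracefile.dump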

Reading trace files

Once you have written the trace file, you will need to use sysdig to read it; this can be accomplished with the -r flag.

Syntax:

# sysdig -r <output file>

Example:

    # sysdig -r tracefile.dump
    1 23:44:57.964150879 0 <NA> (7) > switch next=6200(sysdig) 
    2 23:44:57.966700100 0 rsyslogd (358) < read res=414 data=<6>[ 3785.473354] sysdig_probe: starting capture.<6>[ 3785.473523] sysdig_probe: 
    3 23:44:57.966707800 0 rsyslogd (358) > gettimeofday 
    4 23:44:57.966708216 0 rsyslogd (358) < gettimeofday 
    5 23:44:57.966717424 0 rsyslogd (358) > futex addr=13892708 op=133(FUTEX_PRIVATE_FLAG|FUTEX_WAKE_OP) val=1 
    6 23:44:57.966721656 0 rsyslogd (358) < futex res=1 
    7 23:44:57.966724081 0 rsyslogd (358) > gettimeofday 
    8 23:44:57.966724305 0 rsyslogd (358) < gettimeofday 
    9 23:44:57.966726254 0 rsyslogd (358) > gettimeofday 
    10 23:44:57.966726456 0 rsyslogd (358) < gettimeofday

Output in ASCII

By default sysdig saves trace files in binary; however, you can use the -A flag to have sysdig output in ASCII.

Syntax:

# sysdig -A

Example:

# sysdig -A > /var/tmp/out.txt
# cat /var/tmp/out.txt
1 22:26:15.076829633 0 <NA> (7) > switch next=11920(sysdig)

The above example redirects the output to a file in plain text; this can be helpful if you want to save and review the data on a system that doesn't have sysdig installed.

sysdig filters

Much like tcpdump, the sysdig command has filters that allow you to narrow the output down to specific information. You can find a list of available filter fields by running sysdig with the -l flag.

Example:

    # sysdig -l

    ----------------------
    Field Class: fd

    fd.num            the unique number identifying the file descriptor.
    fd.type           type of FD. Can be 'file', 'ipv4', 'ipv6', 'unix', 'pipe', 'e
                      vent', 'signalfd', 'eventpoll', 'inotify' or 'signalfd'.
    fd.typechar       type of FD as a single character. Can be 'f' for file, 4 for 
                      IPv4 socket, 6 for IPv6 socket, 'u' for unix socket, p for pi
                      pe, 'e' for eventfd, 's' for signalfd, 'l' for eventpoll, 'i'
                       for inotify, 'o' for uknown.
    fd.name           FD full name. If the fd is a file, this field contains the fu
                      ll path. If the FD is a socket, this field contain the connec
                      tion tuple.
<truncated output>

Filter examples

Capturing a specific process

You can use the "proc.name" filter to capture all of the sysdig events for a specific process. In the example below I am filtering on any process named sshd.

Example:

    # sysdig -r tracefile.dump proc.name=sshd
    530 23:45:02.804469114 0 sshd (917) < select res=1 
    531 23:45:02.804476093 0 sshd (917) > rt_sigprocmask 
    532 23:45:02.804478942 0 sshd (917) < rt_sigprocmask 
    533 23:45:02.804479542 0 sshd (917) > rt_sigprocmask 
    534 23:45:02.804479767 0 sshd (917) < rt_sigprocmask 
    535 23:45:02.804487255 0 sshd (917) > read fd=3(<4t>10.0.0.12:55993->162.0.0.80:22) size=16384

Capturing all processes that open a specific file

The fd.name filter is used to filter events for a specific file name. This can be useful to see what processes are reading or writing a specific file or socket.

Example:

# sysdig fd.name=/dev/log
14 11:13:30.982445884 0 rsyslogd (357) < read res=414 data=<6>[  582.136312] sysdig_probe: starting capture.<6>[  582.136472] sysdig_probe:

Capturing all processes that open a specific filesystem

You can also use comparison operators with filters such as contains, =, !=, <=, >=, < and >.

Example:

    # sysdig fd.name contains /etc
    8675 11:16:18.424407754 0 apache2 (1287) < open fd=13(<f>/etc/apache2/.htpasswd) name=/etc/apache2/.htpasswd flags=1(O_RDONLY) mode=0 
    8678 11:16:18.424422599 0 apache2 (1287) > fstat fd=13(<f>/etc/apache2/.htpasswd) 
    8679 11:16:18.424423601 0 apache2 (1287) < fstat res=0 
    8680 11:16:18.424427497 0 apache2 (1287) > read fd=13(<f>/etc/apache2/.htpasswd) size=4096 
    8683 11:16:18.424606422 0 apache2 (1287) < read res=44 data=admin:$apr1$OXXed8Rc$rbXNhN/VqLCP.ojKu1aUN1. 
    8684 11:16:18.424623679 0 apache2 (1287) > close fd=13(<f>/etc/apache2/.htpasswd) 
    8685 11:16:18.424625424 0 apache2 (1287) < close res=0 
    9702 11:16:21.285934861 0 apache2 (1287) < open fd=13(<f>/etc/apache2/.htpasswd) name=/etc/apache2/.htpasswd flags=1(O_RDONLY) mode=0 
    9703 11:16:21.285936317 0 apache2 (1287) > fstat fd=13(<f>/etc/apache2/.htpasswd) 
    9704 11:16:21.285937024 0 apache2 (1287) < fstat res=0

As you can see from the above examples, filters can be used both when reading from a trace file and against the live event stream.
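
Filter expressions can also be combined with boolean operators such as and, or and not, and the same expression works against a saved trace file or the live stream. A quick sketch (the process name and event type here are chosen purely for illustration):

Example:

# sysdig -r tracefile.dump "proc.name=sshd and evt.type=read"
# sysdig "proc.name=sshd and evt.type=read"

The first command replays only sshd read events from the trace file; the second applies the same filter to the live event stream.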

Chisels

Earlier I compared sysdig to SystemTap; chisels are why I made that reference. Tools like SystemTap have their own scripting language that allows you to extend their functionality. In sysdig these extensions are called chisels, and they can be written in Lua, a common programming language. I personally think the choice of Lua was a good one, as it makes extending sysdig easy for newcomers.

List available chisels

To list the available chisels you can use the -cl flag with sysdig.

Example:

    # sysdig -cl

    Category: CPU Usage
    -------------------
    topprocs_cpu    Top processes by CPU usage

    Category: I/O
    -------------
    echo_fds        Print the data read and written by processes.
    fdbytes_by      I/O bytes, aggregated by an arbitrary filter field
    fdcount_by      FD count, aggregated by an arbitrary filter field
    iobytes         Sum of I/O bytes on any type of FD
    iobytes_file    Sum of file I/O bytes
    stderr          Print stderr of processes
    stdin           Print stdin of processes
    stdout          Print stdout of processes
    <truncated output>

The list is fairly long even though sysdig is still pretty new, and since sysdig is on GitHub you can easily contribute and extend sysdig with your own chisels.

Display chisel information

While the -cl flag gives only a short description of each chisel, you can display more information using the -i flag followed by the chisel name.

Example:

    # sysdig -i bottlenecks

    Category: Performance
    ---------------------
    bottlenecks     Slowest system calls

    Use the -i flag to get detailed information about a specific chisel

    Lists the 10 system calls that took the longest to return dur
    ing the capture interval.

    Args:
    (None)

Running a chisel

To run a chisel you can run sysdig with the -c flag and specify the chisel name.

Example:

    # sysdig -c topprocs_net
    Bytes     Process
    ------------------------------
    296B      sshd

Running a chisel with filters

Even with chisels you can still use filters to run chisels against specific events.

Capturing all network traffic from a specific process

The example below shows the echo_fds chisel being used against processes named apache2.

# sysdig -A -c echo_fds proc.name=apache2
------ Read 444B from 127.0.0.1:57793->162.243.109.80:80

GET /wp-admin/install.php HTTP/1.1
Host: 162.243.109.80
Connection: keep-alive
Cache-Control: max-age=0
Authorization: Basic YWRtaW46ZUNCM3lyZmRRcg==
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/33.0.1750.152 Safari/537.36
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8

Capturing network traffic exchanged with a specific IP

We can also use the echo_fds chisel to show all network traffic for a single IP by using the fd.cip filter.

# sysdig -A -c echo_fds fd.cip=127.0.0.1
------ Write 1.92KB to 127.0.0.1:58896->162.243.109.80:80

HTTP/1.1 200 OK
Date: Thu, 17 Apr 2014 03:11:33 GMT
Server: Apache
X-Powered-By: PHP/5.5.3-1ubuntu2.3
Vary: Accept-Encoding
Content-Encoding: gzip
Content-Length: 1698
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=utf-8

Originally Posted on BenCane.com: Go To Article

Written by Benjamin Cane

April 18th, 2014 at 3:30 am

Posted in Uncategorized

A Simple Netcat How-To for Beginners

without comments

There are tonnes of tutorials on Netcat already. This one is to remind me and my colleagues about the awesomeness of nc, which we forget on a regular basis.
Common situations where nc can be used:
  • Check connectivity between two nodes (see the example after this list). I had to learn the hard way that ping-based (read: all ICMP-based) checks are not always the best way to judge connectivity. ISPs often give ICMP lower priority and drop it.
  • Single file transfer (see the examples after the client list below).
  • Testing of network applications. I have written several clients and loggers for logstash and graphite which would have been much harder to test without nc.
  • Firing commands to remote servers where running a conventional TCP/HTTP server is not possible (like VMware ESXi).
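
A quick sketch of the connectivity-check case (the host names and ports below are only placeholders): the -z flag tells nc to probe without sending any data and -v makes it report the result.

nc -zv logstash.example.com 5044
nc -zuv logstash.example.com 514

The UDP variant is less conclusive, since there is no handshake; the absence of an ICMP "port unreachable" reply is all nc has to go on.
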
Basic Netcat servers:
  • nc -l <port>
    Netcat starts listening for TCP sockets at the specified port. A client can connect and write arbitrary strings to the socket which will be reflected here.
  • nc -u -l <port>
    Netcat starts listening for UDP sockets at the specified port. A client can write arbitrary strings to the socket which will be reflected here.
  • nc -l <port> -e /bin/bash
    Netcat starts listening for TCP sockets at the specified port. A client can connect and write arbitrary commands which will be passed to /bin/bash and executed. Use with extreme caution on remote servers. The security here is nil.
  • nc -l -k <port> -e /bin/bash
    The problem with the above command is that nc terminates as soon as the client disconnects. The -k option forces nc to stay alive and listen for subsequent connections as well.
Basic Netcat Clients:
  • nc <address> <port>
    Connect as client to the server running on <address>:<port> via TCP.
  • nc -u <address> <port>
    Connect as client to the server running on <address>:<port> via UDP.
  • nc -w <seconds> <address> <port>
    Connect as client to the server running on <address>:<port> via TCP and timeout after <seconds> of being idle. I used it a lot to send data to graphite using shell scripts.
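
Putting the client and server pieces together, here are two small sketches of the use cases mentioned above (host names, file names and the metric path are made up for illustration). Single file transfer, starting the receiver first and then the sender:

nc -l 1234 > backup.tar.gz
nc -w 10 receiver.example.com 1234 < backup.tar.gz

And pushing a single data point to Graphite's plaintext listener (port 2003 by default), timing out after 5 seconds of idleness:

echo "servers.web01.load 0.42 $(date +%s)" | nc -w 5 graphite.example.com 2003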

A cool example to stream any file's content live (mostly used for logs) can be found at commandlinefu.

Written by Aditya Patawari

April 16th, 2014 at 9:56 am

Posted in nc,netcat,network,tcp,UDP

Optimal Sailfish SDK workflow with QML auto-reloading

without comments

SailfishOS IDE open. Just press Ctrl+S to save and see the app reloading!

Sailfish is the Linux-based operating system used in Jolla phones. Those who develop apps for Jolla use the Sailfish SDK (software development kit), which is basically a customized version of Qt Creator. Sailfish OS apps are written using the Qt libraries, typically in the C++ programming language. The user interfaces of Sailfish apps are, however, written in a declarative language called QML. QML has its own markup syntax and includes a subset of CSS and JavaScript to define style and actions. QML files are not compiled but are distributed as plain text files with the app binaries and interpreted at run time.

While SailfishOS IDE (Qt Creator) is probably pretty good for C++ programming with the Qt libraries, and the Sailfish flavour comes nicely bundled with complete Sailfish OS instances as virtual machines (one for building binaries and one for emulating them), the overall workflow is not very optimal from a QML development point of view. Each time a developer presses the Play button to launch the app, Qt Creator builds the app from scratch, packages it, deploys it on the emulator (or a real device if set up to do so) and only then actually runs it. After making changes to the source code, the developer needs to remember to press Stop and then Play to build, deploy and start the app again. Even on a super fast machine this cycle takes at least 10 seconds.

It would be a much better workflow if relaunching the app after QML source code changes took only one second or even less. Using Entr, it is possible.

Enter Entr

Entr is a multi-platform utility that uses the operating system's file-watching facilities to run a command the instant a watched file changes. To install Entr on a Sailfish OS emulator or device, ssh to the emulator or device, add the community repository chum and install the entr package as follows (note that the chum repository for 1.0.5.16 also exists, but it is empty):

ssh nemo@xxx.xxx.xxx.xxx
ssu ar chum http://repo.merproject.org/obs/sailfishos:/chum:/1.0.4.20/1.0.4.20_armv7hl/
pkcon refresh
pkcon install entr

After this, change to the directory where your app and its QML files reside and run entr:

cd /usr/share/harbour-seravo-news/qml/
find . -name '*.qml' | entr -r /usr/bin/harbour-seravo-news

The find command makes sure all QML files in the current directory or any subdirectory are watched. Running entr with the -r parameter makes sure it kills the program before running it again. The name of our app in this example is seravo-news (available in the Jolla store if you are interested).

With this in place the app will automatically reload whenever any of the QML files change. To edit the files conveniently, mount the app directory on the emulator (or device) to your local system using SSH:

mkdir mountpoint
sshfs nemo@xxx.xxx.xxx.xxx:/usr/share/harbour-seravo-news mountpoint/

Then finally open Qt Creator, point it to the files in the mountpoint directory and start editing. Every time you've edited QML files and want to see what the result looks like, simply press Ctrl+S to save and watch the magic! It's even easier than the F5-to-reload habit web developers are used to, because on the emulator (or device) there is nothing you need to do, just look at it while the app auto-restarts.

Remember to copy, or directly git commit, your files from the mountpoint directory when you have finished writing the QML files.

Entr has been packaged for SailfishOS by Seravo staff. Subscribe to our blog to get notified when we post about how to package and build RPM packages using the Open Build System and submit them for inclusion in SailfishOS Chum repository.

Written by Otto Kekäläinen

April 15th, 2014 at 2:37 am

OUR SERVER IN HAVANA

without comments

Lostnbronx has a home file server for which he had high hopes.

Written by LOSTNBLOG

April 9th, 2014 at 8:12 pm

Posted in Uncategorized

The best replacement for Windows XP: Linux with LXDE

without comments

As of today, Microsoft has officially ended support for Windows XP and it will no longer receive any security updates. Even with the updates, XP was never a secure platform, and by now users should really stop using it. But what should people install instead? Our recommendation is Lubuntu.

Windows XP has finally reached its end of life. Most people have probably already bought new computers that come preinstalled with a newer version of Windows, and others had the wisdom to move to another operating system long ago. Considering the licensing model, performance and ease of use of newer Windows versions, it is completely understandable that a large number of people decided to stick with XP for a long time. But now they must upgrade, and the question is: to what?

It is obvious that the solution is to install Linux. It is the only option when the requirement is having a usable desktop environment on the same hardware that XP ran on. The hard choice is which Linux distribution to pick. For this purpose Seravo recommends Lubuntu version 14.04 (currently in beta 2, with the final release coming in just a few weeks).

Why? First of all, the underlying Linux distribution is Ubuntu, the world's third most popular desktop operating system (after Windows and Mac). The big market share guarantees that there are plenty of peer users, support and expertise available. Most software publishers have easy-to-install packages for Windows, Mac and Ubuntu. All major pre-installed Linux desktop offerings are based on Ubuntu. And when you count in Ubuntu's parent distribution Debian and all of the derivative Linux distributions, it is certainly the most widely used Linux desktop platform. There is safety in numbers, and a platform with lots of users is likely to stay maintained, so it is a safe choice. Ubuntu 14.04 is also a long term support (LTS) release, and the publishing company Canonical promises that the base of this Ubuntu release will receive updates and security fixes until 2019.

However we don’t recommend the default desktop environment Ubuntu provides. Instead we recommend to use the Ubuntu flavour Lubuntu, which comes with the desktop environment LXDE. This is a very lightweight graphical user interface, meaning it will be able to run on machines that have just 128 MB of RAM memory. On better machines LXDE will just be lightning fast to use, and it will leave more unused memory for other applications to use (e.g. Firefox, Thunderbird, LibreOffice etc). Also the default applications in Lubuntu are chosen to be lightweight ones, so the file manager and image viewers are fast. There are also some productivity software included like Abiword and Sylpheed, but most users will rather want to use the heavier but more popular equivalents like LibreOffice Writer and Mozilla Thunderbird. These can easily be installed in Lubuntu using the Lubuntu software center.

Note that even though Lubuntu will run on very old machines, since it gets by with so few resources, you might still have difficulties installing it if your machine does not support the PAE feature or if it has other hardware that is not supported by the Linux device drivers that Ubuntu ships with by default. If you live in Finland, you can buy professional support from Linux-tuki.fi and have an expert do the installation for you.

Why is Lubuntu the best Win XP replacement?

Classic menu

First of all, Lubuntu is very easy to install and maintain. It has all the ease of use and out-of-the-box security and usability enhancements that Ubuntu has engineered, including the installer, encrypted home folders, automatic updates, readable fonts, stability and other features that make up a good quality experience.

The second reason to use Lubuntu is that it is very easy to learn, use and support. Instead of Ubuntu's default Unity desktop environment, Lubuntu has LXDE, which looks and behaves much like the classic desktop. LXDE has a panel at the bottom, a tree-like application launcher in the lower left corner, a clock and notification area in the lower right corner and a panel for window visualization and switching in the middle. Individual windows have their manipulation buttons in the upper right corner, and application menus sit right inside the application windows and are always visible. Anybody who has used Windows XP will immediately feel comfortable: applications are easy to discover and launch, and there is no need to know their name or category in advance. It is easy to see which applications are open and to switch between them with classic mouse actions or the simple Alt+Tab shortcut. From a support perspective it is easy to ask users over the phone to open the File menu and choose Save As and so on, as users can easily see and choose the correct menu items for the application in question.

The third reason is that while LXDE is visually simple, users can always install any application available in the Ubuntu repositories and get productive with whatever complex productivity software they want. A terminal can be spawned in under a second with the shortcut Ctrl+Alt+T. Even though LXDE itself is simple, it won't hamper anybody's ability to be productive and do complex things.

The fourth reason is that when using Lubuntu, switching to a more modern desktop UI is easy. On top of a Lubuntu installation an admin can install the gnome-session package, and then users will be able to choose another session type in the login screen to get into GNOME 3.
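
Installing the package takes a single command in a terminal:

sudo apt-get install gnome-session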

Some might criticize LXDE for not having enough features. Yes, in LXDE pressing the Print Screen key will not automatically launch a screenshot tool, and dragging a window to the side of the screen will not automatically snap it into a perfectly aligned half-screen window. But it is still possible to achieve the same end results by other means in LXDE, and all the important features, like system settings, changing resolution, attaching an external screen, attaching a USB key with easy mounting and unmounting etc., are part of the standard feature set. In fact the lead developer of Lubuntu has said he will not add any new features and will only do bug fixes. It could be said that LXDE is feature complete and that the next development effort is rewriting it in Qt instead of the current GTK2 toolkit, a move that will open new technological horizons under the hood, but not necessarily change anything in the features visible to end users.

Another option with similar design ideas is XFCE and the Ubuntu flavour Xubuntu that is built around that desktop environment. Proponents of XFCE say it has more features than LXDE, but most of those features are not needed by average users, and some components, like the file manager and image viewer, are more featureful in LXDE than in XFCE, and the features in those apps are the ones more likely to actually be needed. However, the biggest and most striking difference is that XFCE isn't actually that lightweight, and to run smoothly it needs a computer that is more than twice as powerful as what LXDE needs.

Our fifth and final reason to recommend LXDE and Lubuntu is speed. It is simply fast. And fast is always better. Have you ever wondered how computers feel sluggish year after year even though processor speed doubles every 18 months according to Moore's law? Switch to LXDE and you'll have an environment that is lightning fast on any reasonably modern hardware.

Getting Linux and LXDE

LXDE is also available in many other Linux distributions, like Debian and openSUSE, but for the reasons stated above we recommend downloading Lubuntu 14.04, making a bootable USB key of it (following the simple installation instructions) and installing it on all of your old Windows XP machines. Remember though to copy your files from XP to an external USB key first, so that you can put them back on your computer once Lubuntu is installed.
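
If you already have a Linux machine at hand, one common way to create the bootable USB key is with dd. The ISO file name below is an assumption based on the release name, and /dev/sdX must be replaced with your USB device; double-check the device name first, because this overwrites the whole key:

sudo dd if=lubuntu-14.04-desktop-i386.iso of=/dev/sdX bs=4M
sync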

Screenshots: Lubuntu desktop, Lubuntu menu, Lubuntu windows

Written by Otto Kekäläinen

April 8th, 2014 at 2:34 am

Download Ubuntu 14.04 LTS Trusty Tahr ISO / CD / DVD / x86_64 / 32-Bit / MAC

without comments

Hello, this post contains links for downloading Ubuntu 14.04 LTS Trusty Tahr Final. Links to the final version of Ubuntu 14.04 LTS Trusty Tahr have been updated in this post. I always prefer new releases of Ubuntu and install them right away to experience the new world of Ubuntu; Ubuntu is very fast to recognize and fix […]

Upcoming Foresight Linux 3 information

without comments

It's been quiet from us regarding Foresight Linux 3 and what's happening. I've collected some information to give you some insight into what's going on.

Fedora respin?

First of all, we are importing the whole Fedora repository into Foresight Linux. That means you will be using conary as the package manager instead of yum. We are using a tool called mirrorball.

You will get the features of conary instead of yum. That means you get a rolling Linux distribution with the ability to roll back your system if something goes wrong after an update. I won't list everything that differs between conary and yum.
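
As a rough sketch of what that workflow looks like with conary (I have not verified the exact rollback argument syntax, so treat it as an assumption and check conary rollback --help on your system):

conary updateall
conary rollback 1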

Conary is pretty strict when it comes to dependency resolution. We have already found packaging issues in Fedora 20 just by importing and rebuilding it with conary.

Why?

The system model: a file that keeps track of what your system has installed or removed. It makes it very easy to remember all the packages you have installed over the years of running your Foresight Linux OS.

Of course there are other benefits to using conary instead of yum, but we will leave that information for now.

We will also be able to change the default applications, so it might not even look like Fedora out of the box.

What’s next?

We are currently creating groups from our repository to be able to turn your Fedora OS into Foresight Linux 3. We will also create other ways to get your system going.

If you want to read more about the import process and so on, please read our mailing list for even more information.

Foresight Linux is not dead, it is just taking a break from social media to focus on the upcoming Foresight Linux 3.

Our developer Mark wrote a blog post about this too. Read all about it here.

The post Upcoming Foresight Linux 3 information appeared first on Foresight Linux.

Written by Tomas Forsman

April 5th, 2014 at 12:47 am

Open Source as Last Resort

without comments

“Open Source as Last Resort” appears to be popular this week. First, Canonical, Ltd. will finally liberate UbuntuOne server-side code, but only after abandoning it entirely. Second, Microsoft announced a plan to release its .NET compiler platform, Roslyn, under the Apache License, spinning it into an (apparent, based on description) 501(c)(6) organization called the Dot Net Foundation.

This strategy is pretty bad for software freedom. It gives fodder to the idea that “open source doesn't work”, because these projects are likely to fail (or have already failed) when they're released. (I suspect, although I don't know of any studies on this, that) most software projects, like most start-up organizations, fail in the first five years. That's true whether they're proprietary software projects or not.

But, using code liberation as a last straw attempt to gain interest in a failing codebase only gives a bad name to the licensing and community-oriented governance that creates software freedom. I therefore think we should not laud these sorts of releases, even though they liberate more code. We should call them for what they are: too little, too late. (I said as much in the five year old bug ticket where community members have been complaining that UbuntuOne server-side is proprietary.)

Finally, a note on using a foundation to attempt to bolster a project community in these cases:

I must again point out that the type of organization matters greatly. Those who are interested in the liberated .NET codebase should be asking Microsoft if they're going to form a 501(c)(6) or a 501(c)(3) (and I suspect it's the former, which bodes badly).

I know some in our community glibly dismiss this distinction as some esoteric IRS issue, but it really matters with regard to how the organization treats the community. 501(c)(6) organizations are trade associations who serve for-profit businesses. 501(c)(3)'s serve the public at large. There's a huge difference in their behavior and activities. While it's possible for a 501(c)(3) to fail to serve all the public's interest, it's corruption when they so fail. When 501(c)(6)'s serve only their corporate members' interest, possibly at the detriment to the public, those 501(c)(6) organizations are just doing the job they are supposed to do — however distasteful it is.


Note: I said “open source” on purpose in this post in various places. I'm specifically saying that term because it's clear these companies' actions are not in the spirit of software freedom, nor even inspired therefrom, but are pure and simple strategy decisions.

Written by Bradley M. Kuhn

April 3rd, 2014 at 3:35 pm

Posted in Uncategorized

nginx rules to protect wordpress admin

without comments

Following my last post about how to secure our WordPress instance a bit more, today I implemented some basic nginx rules that can be useful to block automated brute force attacks against our WordPress administration panel generated by bots or vulnerability scanners, and to save some CPU time on our server. Obviously the best method is limiting access to the admin URLs to a few IPs, but that is not always possible.

Basically, the idea is to block requests like this:

xx.xx.xx.xx - - [01/Apr/2014:17:04:42 +0100] "POST /wp-login.php HTTP/1.0" 200 3974 "-" "-"

Basically, a variable called $bot is defined inside the server clause and initialized to 1. Several patterns of the HTTP request are then checked; each time a pattern matches, nginx appends a 0 to this variable. The patterns are:

1.- Check if it is an HTTP/1.0 request.
2.- Check if the request method is POST.
3.- Check if the requested URI is /wp-admin or /wp-login.php.
4.- Check if the Host header does not match our domain.
5.- Check if the Referer header does not match our domain.

Combining these patterns we can block some bots while avoiding false positives from old browsers that legitimately use HTTP/1.0. At the end of the checks, if the request matched at least 4 conditions, it is blocked by returning a 444 response code (nginx closes the connection without sending a response).

set $bot 1;
valid_referers server_names ~(yourdomain.com);

if ($server_protocol ~* "HTTP/1.0") {
    set $bot "${bot}0";
}
if ($request_method = POST) {
    set $bot "${bot}0";
}
if ($request_uri ~* ^/(wp-admin|wp-login\.php)){
    set $bot "${bot}0";
}
if ($http_host !~* ([a-z0-9]+\.)*yourdomain\.com ){
    set $bot "${bot}0";
}
if ($invalid_referer) {
    set $bot "${bot}0";
}

if ($bot ~* ^10000){
    return 444;
}
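
After reloading nginx, a quick way to sanity-check the rules is to replay a bot-style request with curl and compare it with a normal one (yourdomain.com and the bogus Host value below are placeholders):

nginx -s reload

# bot-style request: HTTP/1.0 POST to wp-login.php with a wrong Host header;
# nginx should close the connection without a response (that is what 444 does),
# so curl reports an empty reply
curl -v -0 -X POST -H 'Host: 203.0.113.7' http://yourdomain.com/wp-login.php

# a normal request from a browser-like client should still be answered
curl -I http://yourdomain.com/wp-login.php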

There are more sophisticated methods to block brute force requests, like using a WAF (web application firewall) such as mod_security for Apache or naxsi for nginx, or even using fail2ban. But as an easy and fast measure you can implement these rules. You can take a look at the different methods in more detail at this URL:

http://codex.wordpress.org/Brute_Force_Attacks

Source documentation:
http://wiki.nginx.org/HttpCoreModule

http://wiki.nginx.org/HttpRefererModule

http://wiki.nginx.org/IfIsEvil

Written by Ivan Mora Perez

April 2nd, 2014 at 1:03 pm