LinuxPlanet Blogs

By Linux Geeks, For Linux Geeks.

[Video] [HowTo] CentOS / RHEL 7 Installation with GUI / Custom Partition


Hello, today I installed CentOS 7 on my virtual machine. I am sharing a video of the CentOS 7 installation with the GUI and a custom LVM partition layout. I hope this video will help those who want to install CentOS 7 and are looking for an easy guide. CentOS / RHEL 7 Installation Video   Enjoy CentOS […]

RHEL 7 / CentOS 7 : How to get started with Systemd


Hello all, today I was trying to learn about systemd. I found a great article and am sharing it with you; it will help you understand this big and major change in RHEL and CentOS 7. This article is not mine; I found it on the internet and felt that it is wonderful […]

Setting the default document folder/directory in the KDE Kate Text Editor


By default Kate opens and saves files starting in the ~/Documents folder. While this may be convenient for some, I’d like to change it. Try as I might, I couldn’t find this option in Kate’s configuration. I remember this was once an option, but it was removed. Then it was a command-line option, but kate --help-all shows it has been removed.

The only way I found to change this is via KDE > System Settings > Account Details > Paths > Document Path:

Changing that to your desired directory works fine.

Written by ken_fallon

July 22nd, 2014 at 10:54 am

Posted in General

Speeding up Speech with mplayer


Mplayer is a fantastic media player and I have been using it as the default tool to play both music and speech for years now.

One of its lesser-known features is the ability to speed up or slow down whatever it is playing. This is not very useful for music, but very handy if you are listening to speech. In some cases you may wish to speed up podcasts, to get through more of them. In other cases you may want to slow down a recording so that you can transcribe the text. For people with dyslexia this is very empowering, as it gives them control over the rate of input.

You can use the {, [, Backspace, ], and } keys to control the speed.

  • { key will slow down by 50% of the current rate
  • [ key will slow down by 10% of the current rate
  • Backspace will return the speed to normal
  • ] key will speed up by 10% of the current rate
  • } key will speed up by 50% of the current rate
  • 9 key will decrease the volume
  • 0 key will increase the volume

I strongly recommend taking some time to review the keyboard controls in the manpage.

By default mplayer will not maintain pitch when you change the speed. So if you speed it up the speaker starts to sound like a chipmunk, and if you slow it down female voices start to sound like male voices.

You can change this by starting mplayer with the switch -af scaletempo
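For example, to play a recording at one and a half times normal speed with pitch preserved (the filename and rate here are illustrative):

```shell
# Play an episode at 1.5x speed, keeping the original pitch via scaletempo
mplayer -af scaletempo -speed 1.5 episode.ogg
```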

You can change this quickly by creating an alias

alias mplayer='mplayer -af scaletempo'

A more permanent way is to set this in your mplayer configuration file. Simply add the following in the “# audio settings #” section

af=scaletempo

See the Configuration Files section in the man page for more information.

The system-wide configuration file ‘mplayer.conf’ is in your configuration directory (e.g. /etc/mplayer or /usr/local/etc/mplayer), the user specific one is ~/.mplayer/config. User specific options override system-wide options and options given on the command line override either.
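A minimal sketch of making the change permanent for your user, assuming the default ~/.mplayer/config location described above:

```shell
# Add af=scaletempo to the per-user mplayer config if not already present
mkdir -p ~/.mplayer
grep -qx 'af=scaletempo' ~/.mplayer/config 2>/dev/null || \
    echo 'af=scaletempo' >> ~/.mplayer/config
```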

Written by ken_fallon

July 22nd, 2014 at 3:23 am

My New Page


This is my new page


Written by Dan

July 21st, 2014 at 6:54 am

Posted in Uncategorized

Carving text on image using gimp


We often want our images to have some text carved on them. Here is how we can do it easily using gimp.

Launch gimp and click on File->Create->Logos->Carved

[Image: gimp_carved_select.png]

The following menu will appear.

[Image: gimp_carved_text.png]

In the text field enter the text that has to be carved on the image.

Font Size: The size of the font to be engraved. Select this carefully depending on the size of the image.
Font: The font to be used for the characters of the text.
Background Image: Browse and select the image on which the text needs to be carved.
Carve raised text: Tick this check box when we want the text to stand out from the image in a raised manner.
Padding around text: How much space to leave between the image borders and the text.
For our example we will write the word Linux on the image of "tux", the Linux penguin.
After setting all the above fields, click on OK.

The gimp screen will come up with the carved text. The default color of the text is almost white as shown below.

The color can be changed with the bucket fill tool available in the toolbox: select the top-most layer among all the layers and fill the characters with the desired color.

[Image: gimp_carved_linux.png]

After coloring we will have to export the image using File->Export and choose the path and name for the final file. The following is the final result after coloring the text red.

[Image: linux_mbed_2.png]

Note that because the image is small, it has been tiled to accommodate the whole text.

Written by Tux Think

July 19th, 2014 at 5:36 am

Posted in gimp,Linux

Checkpoint SNX on Ubuntu 14.04 LTS (Trusty Tahr)


Life has conspired to bring me back to the open arms of Kubuntu, and with a new install comes the required update on getting Checkpoint Firewall AKA SNX working. This is part of the snx series here.


The first step remains the same: get your username, password, and the IP address or hostname of your snx server from your local administrator. Once you have done that you can log in and then press the settings link. This will give you a link to the various different clients; in our case we are looking for the “Download installation for Linux” link. Download that and then run it with the following command.

# sh +x snx_install.sh
Installation successfull

If you run this now you will get the error

snx: error while loading shared libraries: libpam.so.0: cannot open shared object file: No such file or directory

We can check which of the required libraries are missing.

# ldd /usr/bin/snx | grep "not found"
        libpam.so.0 => not found
        libstdc++.so.5 => not found

This is the 64-bit version of Ubuntu and I’m installing a 32-bit application, so you’ll need to install the 32-bit libraries and the older version of libstdc++ if you haven’t already. The old trick of simply installing ia32-libs will no longer work since MultiArch support has been added. Now the command is simply

apt-get install libstdc++5:i386 libpam0g:i386
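If the i386 foreign architecture has not yet been enabled on your system, the packages above will not be found; a hedged extra step (usually unnecessary on a stock 64-bit desktop install, and it requires root) is:

```shell
# Enable the i386 foreign architecture if missing, then refresh package lists
dpkg --print-foreign-architectures | grep -qx i386 || {
    dpkg --add-architecture i386
    apt-get update
}
```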

You should now be able to type snx without errors. You now only need to accept the VPN certificate by logging in via the command line and pressing “Y”.

user@pc:~$ snx -s my-checkpoint-server -u username
Check Point's Linux SNX
build 800007075
Please enter your password:
SNX authentication:
Please confirm the connection to gateway: my-checkpoint-server VPN Certificate
Root CA fingerprint: AAAA BBB CCCC DDD EEEE FFF GGGG HHH IIII JJJ KKKK
Do you accept? [y]es/[N]o:

Note the build number of 800007075. I had difficulty connecting with any version lower than this.

Written by ken_fallon

July 18th, 2014 at 10:10 am

Posted in General,snx

Creating a queue in linux kernel using list functions


In a previous post we saw how to create a stack using the list functions. In this post let us use the same list functions to implement a queue.

As we know, a queue is a data structure that stores data in first-in, first-out (FIFO) order; that is, the data that is entered first is read out first.

To create a queue using the list functions we can make use of the function list_add_tail, whose kernel prototype is

void list_add_tail(struct list_head *new, struct list_head *head);

new: The new node to be added
head: Pointer to the node before which the new node has to be added.

To make the list work as a queue we need to keep inserting every new node before the head node, that is, at the tail end of the list, and when we read the list, we read from the front end. Thus the entry that is added first will be accessed first.

To demonstrate the operation of a queue we will create a proc entry called queue and make it work like a queue.

To create the queue we will use a structure along the following lines (the data field is shown as an int for illustration):

struct queue {
        struct list_head queue_list;
        int data;
};

queue_list: Used to create the linked list using the list functions.
data: Will hold the data to be stored.

In the init function we need to first create the proc entry and then initialize the linked list that we will be using for the queue.



create_new_proc_entry: Function for creation of the proc entry.





Next we need to define the file operations for the proc entry. To manipulate the queue we need two operations push and pop.



The function push_queue will be called when data is written into the queue and pop_queue will be called when data is read from the queue.

In push_queue we will use the function list_add_tail to add every new node behind the head.



In the function pop_queue, before attempting to read the queue, we need to ensure that the queue has data. This can be done using the function list_empty

int list_empty(const struct list_head *head);

which returns true (non-zero) if the list is empty.

In case the list is not empty, we need to pop out the first entry from the list, which can be done using the macro list_first_entry

list_first_entry(ptr, type, member)

ptr: Pointer to the head node of the list.
type: The type of the structure that contains the list head.
member: The name of the list_head member within that structure.
Thus the pop function will look as below.

The flag and new_node variables are used to ensure that only one node is returned on every read of the proc entry. The read function from user space will continue to read the proc entry as long as the return value is not zero. In the first pass (new_node == 0) we return the number of bytes of data transferred from one node, and in the second pass (new_node == 1) we return 0 to terminate the read, thus making sure that on every read the data of exactly one node is returned. The flag variable works in a similar fashion when the queue is empty, to return the string "Queue empty".

Thus the full code of queue will be



To compile the module use the make file.



Compile and insert the module into the kernel



To see the output

Thus we have written the 1,2,3,4 as data into 4 nodes, with 4 being inserted last. Now to pop data out of the queue

Written by Tux Think

July 17th, 2014 at 1:23 am

Using salt-api to integrate SaltStack with other services


Recently I have been looking for ways to allow external tools and services to perform corrective actions across my infrastructure automatically. As an example, I want to allow a monitoring tool to monitor my nginx availability and if for whatever reason nginx is down I want that monitoring tool/service to do something to fix it.

While I was looking at how to implement this, I remembered that SaltStack has an API and that API can provide exactly the functionality I wanted. The below article will walk you through setting up salt-api and configuring it to allow third party services to initiate SaltStack executions.

Install salt-api

The first step to getting started with salt-api is to install it. In this article we will be installing salt-api on a single master server. It is possible to run salt-api in a multi-master setup; however, you will need to ensure the configuration is consistent across master servers.

Installation of the salt-api package is pretty easy and can be performed with apt-get on Debian/Ubuntu or yum on Red Hat variants.

# apt-get install salt-api

Generate a self signed certificate

By default salt-api utilizes HTTPS, and while it is possible to disable this, it is generally not a good idea. In this article we will be utilizing HTTPS with a self-signed certificate.

Depending on your intended usage of salt-api you may want to generate a certificate that is signed by a certificate authority; if you are only using salt-api internally, a self-signed certificate should be acceptable.

Generate the key

The first step in creating any SSL certificate is to generate a key; the below command will generate a 4096-bit key.

# openssl genrsa -out /etc/ssl/private/key.pem 4096

Sign the key and generate a certificate

After we have generated the key we can generate the certificate with the following command.

# openssl req -new -x509 -key /etc/ssl/private/key.pem -out /etc/ssl/private/cert.pem -days 1826

The -days option allows you to specify how many days this certificate is valid, the above command should generate a certificate that is valid for 5 years.
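As a self-contained sketch of the whole sequence (using a temporary directory instead of /etc/ssl/private so it can be run as a normal user), you can also verify the validity window of the result with openssl x509:

```shell
# Generate a key and self-signed certificate in a scratch directory,
# then print the certificate's notBefore/notAfter validity dates
tmpdir=$(mktemp -d)
openssl genrsa -out "$tmpdir/key.pem" 4096
openssl req -new -x509 -key "$tmpdir/key.pem" -out "$tmpdir/cert.pem" \
    -days 1826 -subj "/CN=salt-api-example"
openssl x509 -in "$tmpdir/cert.pem" -noout -dates
```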

Create the rest_cherrypy configuration

Now that we have installed salt-api and generated an SSL certificate, we can start configuring the salt-api service. In this article we will be placing the salt-api configuration into /etc/salt/master.d/salt-api.conf. You can also put this configuration directly in the /etc/salt/master configuration file; however, I prefer putting the configuration in master.d as I believe it allows for easier upkeep and better organization.

# vi /etc/salt/master.d/salt-api.conf

Basic salt-api configuration

Insert

rest_cherrypy:
  port: 8080
  host: 10.0.0.2
  ssl_crt: /etc/ssl/private/cert.pem
  ssl_key: /etc/ssl/private/key.pem

The above configuration enables a very basic salt-api instance, which utilizes SaltStack's external authentication system. This system requires requests to authenticate with user names and passwords or tokens; in reality not every system can or should utilize those methods of authentication.
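For reference, with this basic configuration a client would first authenticate against salt-api's /login URL using one of Salt's external authentication backends; the sketch below assumes the PAM backend, and the host name and credentials are placeholders:

```shell
# Authenticate against salt-api's /login endpoint using the PAM eauth
# backend; the host, username, and password below are placeholders
salt_api_login() {
    curl -sk "https://10.0.0.2:8080/login" \
        -H "Accept: application/json" \
        -d username="saltuser" \
        -d password="secret" \
        -d eauth=pam
}
```

The JSON response includes a token, which can then be supplied in the X-Auth-Token header on subsequent requests.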

Webhook salt-api configuration

Most systems and tools, however, do support utilizing a pre-shared API key for authentication. In this article we are going to use a pre-shared API key to authenticate our systems with salt-api. To do this we will add the webhook_url and webhook_disable_auth parameters.

Insert

rest_cherrypy:
  port: 8080
  host: <your hosts ip>
  ssl_crt: /etc/ssl/private/cert.pem
  ssl_key: /etc/ssl/private/key.pem
  webhook_disable_auth: True
  webhook_url: /hook

The webhook_url configuration parameter tells salt-api to listen for requests to the specified URI. The webhook_disable_auth configuration allows you to disable the external authentication requirement for the webhook URI. At this point in our configuration there is no authentication on that webhook URI, and any request sent to that webhook URI will be posted to SaltStack's event bus.

Actioning webhook requests with Salt's Reactor system

SaltStack's event system is an internal notification system for SaltStack; it allows external processes to listen for SaltStack events and potentially action them. A few examples of events that are published to this event system are jobs, authentication requests by minions, and salt-cloud related events. A common consumer of these events is the SaltStack Reactor system, which allows you to listen for events and perform specified actions when those events are seen.

Reactor configuration

We will be utilizing the Reactor system to listen for webhook events and execute specific commands when we see them. Reactor definitions need to be included in the master configuration; for this article we will define our configuration within the master.d directory.

# vi /etc/salt/master.d/reactor.conf

Insert

reactor:
  - 'salt/netapi/hook/services/restart':
    - /salt/reactor/services/restart.sls

The above configuration will listen for events tagged with salt/netapi/hook/services/restart, which means any API request targeting https://example.com:8080/hook/services/restart. When those events occur it will execute the items within /salt/reactor/services/restart.sls.

Defining what happens

Within the restart.sls file, we will need to define what we want to happen when the /hook/services/restart URI is requested.

# vi /salt/reactor/services/restart.sls

Simply restarting a service

To perform our simplest use case of restarting nginx when we get a request to /hook/services/restart, you could add the following.

Insert

restart_services:
  cmd.service.restart:
    - tgt: 'somehost'
    - arg:
      - nginx

This configuration is extremely specific and doesn't leave much chance for someone to exploit it for malicious purposes. This configuration is also extremely limited.

Restarting the specified services on the specified servers

When salt-api's webhook URL is called, the POST data sent with that request is included in the event message. In the below configuration we will be using that POST data to allow requests to the webhook URL to specify both the target servers and the service to be restarted.

Insert

{% set postdata = data.get('post', {}) %}

restart_services:
  cmd.service.restart:
    - tgt: '{{ postdata.tgt }}'
    - arg:
      - {{ postdata.service }}

In the above configuration we are taking the POST data sent to the URL and assigning it to the postdata object. We can then use that object to provide the tgt and arg values. This allows the same URL and webhook call to be used to restart any service on any or all minion nodes.

Since we disabled external authentication on the webhook URL there is currently no authentication with the above configuration. This means that anyone who knows our URL could send a well formatted request and restart any service on any minion they want.

Using a secret key to authenticate

To secure our webhook URL a little better, we can add a few lines to our reactor configuration that require requests to this reactor to include a valid secret key before executing.

Insert

{% set postdata = data.get('post', {}) %}

{% if postdata.secretkey == "replacethiswithsomethingbetter" %}
restart_services:
  cmd.service.restart:
    - tgt: '{{ postdata.tgt }}'
    - arg:
      - {{ postdata.service }}
{% endif %}

The above configuration requires that the requesting service include a POST key of secretkey, and the value of that key must be replacethiswithsomethingbetter. If the requester does not, the Reactor will simply perform no steps. The secret key in this configuration should be treated like any other API key: it should be validated before performing any action, and if you have more than one system or user making requests, you should ensure that only the correct users have the ability to execute the commands specified in the reactor SLS file.

Restarting salt-master and Starting salt-api

Before testing our new salt-api actions we will need to restart the salt-master service and start the salt-api service.

# service salt-master restart
# service salt-api start

Testing our configuration with curl

Once the two above services have finished restarting we can test our configuration with the following curl command.

# curl -H "Accept: application/json" -d tgt='*' -d service="nginx" -d secretkey="replacethiswithsomethingbetter" -k https://salt-api-hostname:8080/hook/services/restart

If the configuration is correct you should expect to see a message stating success.

Example

# curl -H "Accept: application/json" -d tgt='*' -d service="nginx" -d secretkey="replacethiswithsomethingbetter" -k https://10.0.0.2:8080/hook/services/restart
{"success": true}

At this point you can now integrate any third party tool or service that can send webhook requests with your SaltStack implementation. You can also use salt-api to have home grown tools initiate SaltStack executions.
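As a sketch of how a home-grown tool might wrap the webhook (the host and secret key are the placeholder values used earlier):

```shell
# restart_service TARGET SERVICE: fire the salt-api webhook defined above
# (host and secret key are the placeholder values from this article)
restart_service() {
    curl -sk -H "Accept: application/json" \
        -d tgt="$1" \
        -d service="$2" \
        -d secretkey="replacethiswithsomethingbetter" \
        "https://10.0.0.2:8080/hook/services/restart"
}

# Example: restart nginx on every minion
# restart_service '*' nginx
```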

Additional Considerations

Other netapi modules

In this article we implemented salt-api with the cherrypy netapi module; this is one of three options that SaltStack gives you. If you are more familiar with other application servers such as wsgi, then I would suggest looking through the documentation to find the appropriate module for your environment.

Restricting salt-api

While in this article we added a secret key for authentication and an SSL certificate for encrypted HTTPS traffic, there are additional options that can be used to restrict the salt-api service further. If you are deploying the salt-api service on a non-trusted network, it is a good idea to use tools such as iptables to restrict which hosts/IPs are able to utilize the salt-api service.

In addition, when implementing salt-api it is a good idea to think carefully about what you allow systems to request. A simple example of this would be the cmd.run module within SaltStack. If you allowed a server to perform dynamic cmd.run requests via salt-api and a malicious user was able to bypass the implemented restrictions, that user would theoretically be able to run any arbitrary command on your systems.

It is always best to only allow specific commands through the API system and avoid allowing potentially dangerous commands such as cmd.run.


Originally Posted on BenCane.com: Go To Article

Written by Benjamin Cane

July 16th, 2014 at 6:00 pm

Posted in Uncategorized

Review: Penetration Testing with the Bash shell by Keith Makan – Packt Pub.


Penetration Testing with the Bash shell

I’ll have to say that, for some reason, I thought this book was going to be some kind of guide to using only bash itself to do penetration testing. It’s not that at all. It’s really more like doing penetration testing FROM the bash shell, or command line if you like.

The first two chapters take you through a solid amount of background bash shell information. You cover topics like directory manipulation, grep, find, and understanding some regular expressions: all the sorts of things you will appreciate knowing if you are going to be spending some time at the command line, or at least a good topical smattering. There is also some time spent on customization of your environment, like prompts and colorization and that sort of thing. I am not sure it’s really terribly relevant to the book topic, but still, as I mentioned before, if you are going to be spending time at the command line, this is stuff that’s nice to know. I’ll admit that I got a little charge out of it because my foray into the command line was long ago on an amber phosphor serial terminal. We’ve come a long way, Baby :)

The remainder of the book deals with some command line utilities and how to use them in penetration testing. At this point I really need to mention that you should be using Kali Linux or BackTrack Linux because some of the utilities they reference are not immediately available as packages in other distributions. If you are into this topic, then you probably already know that, but I just happened to be reviewing this book while using a Mint system while away from my test machine and could not immediately find a package for dnsmap.

The book gets topically heavier as you go through, which is a good thing IMHO, and by the time you are nearing the end you have covered standard bash arsenal commands like dig and nmap. You have spent some significant time with metasploit and you end up with the really technical subjects of disassembly (reverse engineering code) and debugging. Once you are through that you dive right into network monitoring, attacks and spoofs. I think the networking info should have come before the code hacking but I can also see their logic in this roadmap as well. Either way, the information is solid and sensical, it’s well written and the examples work. You are also given plenty of topical reference information should you care to continue your research, and this is something I think people will really appreciate.

To sum it up, I like the book. Again, it wasn’t what I thought it was going to be, but it surely will prove to be a valuable reference, especially combined with some of Packt’s other fine books like those on BackTrack. Buy your copy today!

Written by linc

July 16th, 2014 at 7:07 am