LinuxPlanet Casts

Media from the Linux Moguls

Archive for the ‘hpr’ Category

The Techie Geek – Episode 92 – Show Notes


Written by Russ Wenner

October 16th, 2011 at 5:13 pm

Talk Geek To Me Interview


I was honoured to be asked to do an interview by Deep Geek on his site Talk Geek To Me. Although the interview focused on the plans for revitalizing Hacker Public Radio, there was time for a discussion about technology in general and my belief in Doug McIlroy’s Unix philosophy:

This is the Unix philosophy:

  • Write programs that do one thing and do it well.
  • Write programs to work together.
  • Write programs to handle text streams, because that is a universal interface.

I just hope I didn’t bring down the tone of his show.

Written by ken_fallon

October 12th, 2010 at 11:23 pm

HPR: Shot of Hack – Changing the time offset of a series of photos.


The problem: You have a series of photos whose timestamps are offset from the correct time but are still correct relative to each other.

The Hacker Public Radio logo: an old-style microphone with the letters HPR

Transcript(ish) of Hacker Public Radio Episode 546

Here are a few of the times that I’ve needed to do this:

  • Changing the battery on my camera reset the clock to a default date.
  • I wanted to synchronize the time on my camera to a GPS track so the photos matched the timestamped coordinates.
  • At a family event where images from different cameras were added together.

You can edit the timestamp using a GUI, and many photo manipulation applications like the GIMP support metadata editing.

For example, on KDE: Gwenview -> Plugins -> Images -> Metadata -> Edit EXIF


The problem is that this gets tiresome after a few images, and anyway the times are correct in relation to each other – I just need to add or subtract a time correction to them en masse.

The answer: exiv2 – Image metadata manipulation tool. It is a program to read and write Exif, IPTC and XMP image metadata and image comments.

user@pc:~$ exiv2 *.jpg
File name       : test.jpg
File size       : 323818 Bytes
MIME type       : image/jpeg
Image size      : 1280 x 960
Camera make     : FUJIFILM
Camera model    : MX-1200
Image timestamp : 2008:12:07 15:12:59
Image number    :
Exposure time   : 1/64 s
Aperture        : F4.5
Exposure bias   : 0 EV
Flash           : Fired
Flash bias      :
Focal length    : 5.8 mm
Subject distance:
ISO speed       : 160
Exposure mode   : Auto
Metering mode   : Multi-segment
Macro mode      :
Image quality   :
Exif Resolution : 1280 x 960
White balance   :
Thumbnail       : image/jpeg, 5950 Bytes
Copyright       :
Exif comment    :

The trick is to pick an image where you can figure out what the time should have been, and work out the offset from that. In my case I needed to adjust the date forward by six months and four days while changing the time back by seven hours. I used the command exiv2 -O 6 -D 4 -a -7 ad *.jpg (note the trailing ad: that is exiv2’s ‘adjust’ action; without an action, exiv2 just prints the metadata).

-a time
    Time adjustment in the format [-]HH[:MM[:SS]].
    This option is only used with the 'adjust' action. Examples:
        1 adds one hour,
        1:01 adds one hour and one minute,
        -0:00:30 subtracts 30 seconds.
-Y yrs
    Time adjustment by a positive or negative number of years, for the 'adjust' action.
-O mon
    Time adjustment by a positive or negative number of months, for the 'adjust' action.
-D day
    Time adjustment by a positive or negative number of days, for the 'adjust' action.
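
Putting those options together, here is a cautious way to apply the shift, working on copies so the originals stay untouched (the adjusted directory is just for illustration):

user@pc:~$ mkdir adjusted && cp *.jpg adjusted/
user@pc:~$ exiv2 -O 6 -D 4 -a -7 ad adjusted/*.jpg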

When we run this we can see that the timestamp has now changed.

user@pc:~$ exiv2 *.jpg | grep timestamp
Image timestamp : 2009:06:11 08:12:59

That’s it. Remember this is the end of the conversation – to give feedback you can either record a show for the HPR network and email it to admin@hackerpublicradio.org or write it on a post-it note and attach it to the windscreen of Dave Yates’s car as he’s recording his next show.

More Info

http://www.hackerpublicradio.org

http://kenfallon.com/?cat=12

Written by ken_fallon

June 3rd, 2010 at 11:04 pm

HPR: A private data cloud


The Hacker Public Radio logo: an old-style microphone with the letters HPR

Transcript(ish) of Hacker Public Radio Episode 544

Over the last two years I have stopped using analogue cameras for my photos and videos. As a result I also don’t print out photos any more when the roll is full, which goes some way to explaining why my mother has no recent pictures of the kids. Living in a digital world brings the realization that we need to take a lot more care when it comes to making backups.

In the past, if my PC’s hard disk blew up, virtually everything of importance could be recreated. That simply isn’t the case any more when the only copies of your cherished photos and videos are on your computer. Add to that the fact that, in an effort to keep costs decreasing and capacity increasing, hard disks are becoming more and more unreliable (pdf).

A major hurdle to efficient backups is that the capacity of data storage is exceeding what can be practically transferred to ‘traditional’ backup media. I now have a collection of media reaching 250 GB, where backing up to DVD is not feasible any more. Even if your collection is smaller, be aware that SD cards, USB sticks, DVDs and even tapes also degrade over time.

And yet hard disks are cheap. You can get a 1.5 TB disk from amazon.com for $95 or a 1 TB disk for €70 from mycom.nl. So the solution would appear to be a juggling act where you keep moving your data across a pool of disks and replace the drives as they fail. Probably the easiest solution is to get a hand-holding Drobo or a sub-$100/€100 low-power NAS solution. If you fancy doing it yourself, Linux has had support for fast software mirroring/RAID for years.
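
As a minimal sketch of that last point, a two-disk mirror with mdadm looks something like this (the device names /dev/sdb and /dev/sdc are illustrative, so double-check them before running anything destructive):

user@pc:~$ sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
user@pc:~$ sudo mkfs.ext3 /dev/md0
user@pc:~$ sudo mount /dev/md0 /data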

Problem solved ….

NASA Image of the Earth taken by Apollo 17 over green binary data

…well, not quite. What if your NAS is stolen or accidentally destroyed?

You need to consider a backup strategy that also mirrors your data to another geographic location. There are solutions out there to store data in the cloud (Ubuntu One, Dropbox, etc.). The problem is that these services are fine for ‘small’ amounts of data but get very expensive very quickly for the amounts we’re talking about.

The solution, well my solution, is to mirror data across the Internet using rsync over ssh to my brother’s NAS, and he mirrors his data to mine. This involves a degree of trust, as you are now putting your data into someone else’s care. In my case it’s not an issue, but if you are worried about this you can take the additional step of shipping them an entire PC. This might be a low-power device with just enough of an OS to get onto the Internet; from there you can ssh in to mount an encrypted partition. When hosting content for someone else, you should consider the security implications of another user having access to your network from behind your firewall. You would also need to be confident that they are not hosting anything, or doing anything, that would get you in trouble with the law.

Once you are happy to go ahead, what you need to do is start storing all your important data on the NAS in the first place. You will want to have all your PCs and other devices back up to it. It’s probably a good idea to mount the NAS on the client PCs directly using NFS, Samba, SSHFS, etc., so that data is saved there directly, as in the SSHFS sketch below. If you and your peering partner have enough space you can start replicating immediately, or you may need to purchase an additional disk for your remote peer to install. I suggest that you do the initial drop locally and transfer the data by sneakernet, which will be faster and avoids issues with the ISPs.
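
For example, mounting the NAS share over SSHFS might look like this (the host name and paths are made up):

user@pc:~$ mkdir -p ~/nas
user@pc:~$ sshfs user@nas.example.com:/data/AUTOSYNC ~/nas
user@pc:~$ fusermount -u ~/nas    # unmount again when finished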

It’s best to mirror between drives that support the same file attributes. For instance, copying files from ext3 to FAT32 will result in a loss of user and group permissions.

When testing, I usually create a test directory on the source and destination containing some files and directories that are identical, different and modified, so that I can confirm what the rsync operations do, as in the sketch below.
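
A throwaway test tree along those lines might look like this (the file names are made up):

user@pc:~$ mkdir -p /data/AUTOSYNC/test /media/disk/test
user@pc:~$ echo same > /data/AUTOSYNC/test/identical.txt
user@pc:~$ cp /data/AUTOSYNC/test/identical.txt /media/disk/test/
user@pc:~$ echo source-only > /data/AUTOSYNC/test/new.txt
user@pc:~$ echo old-version > /media/disk/test/modified.txt
user@pc:~$ echo new-version > /data/AUTOSYNC/test/modified.txt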

To synchronize between locally mounted disks you can use the command:

rsync -vva --dry-run --delete --force /data/AUTOSYNC/ /media/disk/

/data/AUTOSYNC/ is the source and /media/disk/ is the destination. The --dry-run option will go through the motions of copying the data without actually doing anything; this is very important when you start, so you know what’s going on. The -a option is the archive option and is equivalent to -rlptgoD. Here’s a quick run-through of the rsync options:

-n, --dry-run
    perform a trial run that doesn't make any changes
-v, --verbose
    increases the amount of information you are given during the transfer.
-r, --recursive
    copy directories recursively.
-l, --links
    recreate the symlink on the destination.
-p, --perms
    set the destination permissions to be the same as the source.
-t, --times
    set the destination modification times to be the same as the source.
-g, --group
    set the group of the destination file to be the same as the source.
-o, --owner
    set the owner of the destination file to be the same as the source.
-D
    transfer character, block device files, named sockets and fifos.
--delete
    delete extraneous files from dest dirs
--force
    force deletion of dirs even if not empty

For a complete list see the rsync web page.

Warning: Be careful when you are transferring data that you don’t accidentally delete or overwrite anything.

Once you are happy that the rsync is doing what you expect, you can drop the --dry-run and wait for the transfer to complete.

The next step might be to ship the disk off to the remote location and then set up the rsync over ssh. However, I prefer to have an additional testing step where I rsync over ssh to a PC in the home. This allows me to work out all the rsync-over-ssh issues before the disk is shipped. The steps are identical, so you can repeat this step once the disk has been shipped and installed at the remote end.

OpenBSD and OpenSSH mascot Puffy

OpenSSH

On your NAS server you will need to generate a new ssh public and private key pair that has no passphrase associated with it. The reason for this is that you want the synchronization to occur automatically, so you will need to be able to access the remote system securely without having to enter a password. There are security concerns with this approach, so again proceed with caution. You may wish to create a separate user for this, but I’ll leave that up to you. Now you can add the public key to the remote user’s .ssh/authorized_keys file. Jeremy Mates’ site has more information on this.
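
Generating such a key pair might look like this (rsync-key matches the file name used in the commands below, and -N "" requests an empty passphrase):

user@pc:~$ ssh-keygen -t rsa -f ~/.ssh/rsync-key -N ""
user@pc:~$ ssh-copy-id -i ~/.ssh/rsync-key.pub user@example.com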

To confirm the keys are working, you can try to open an ssh session using the key you just set up.

ssh -i /home/user/.ssh/rsync-key user@example.com

You may need to type yes to add the keys to the .ssh/known_hosts file, so it makes sense to run that command as the user that will be doing the rsyncing. All going well you should now be logged into the other system.

Once you are happy that secure shell is working all you now need to do is add the option to tell rsync to use secure shell as the transport.

rsync -va --delete --force -e "ssh -i /home/user/.ssh/rsync-key" /data/AUTOSYNC/ user@example.com:AUTOSYNC/

All going well there should be no updates, but you may want to try adding, deleting and modifying files on both ends to make sure the process is working correctly. When you are happy, you can ship the disk to the other side. The only requirement on the other network is that ssh is allowed through the firewall to your server and that you know the public IP address of the remote network. For those poor people without a fixed IP address, most systems provide a means to register a dynamic DNS entry. Once you can ssh to your server, you should also be able to rsync to it as we did before.

Of course the whole point is that the synchronization should be seamless so you want your rsync to be running constantly. The easiest way to do this is just to start a screen session and then run the command above in a simple loop. This has the advantage of allowing you to get going quickly but is not very resistant to reboots. I created a simple bash script to do the synchronization.

user@pc:~$ cat /usr/local/bin/autosync
#!/bin/bash
# Mirror the data, logging a timestamp before and after each run,
# then sleep for an hour so we don't swamp either side
while true
do
    date
    rsync -va --delete --force -e "ssh -i /home/user/.ssh/rsync-key" /data/AUTOSYNC/ user@example.com:AUTOSYNC/
    date
    sleep 3600
done
user@pc:~$ chmod +x /usr/local/bin/autosync

We wrap the rsync command in an infinite while loop that outputs a timestamp before and after each run. The script then pauses for an hour after each run so that I’m not swamping either side. After making the file executable, you can add it to the crontab of the user doing the rsync; see my episode on Cron for how to do that. This is a listing of the crontab file that I use.

user@pc:~$ crontab -l
MAILTO=""
0 1 * * * timeout 54000 /usr/local/bin/autosync > /tmp/autosync.log  2>&1

There are a few additions to what you might expect here. Were I to run the script directly from cron, it would spawn a new copy of the autosync script at one o’clock every morning; the script itself never terminates, so over time there would be many copies running simultaneously. That isn’t an issue here because cron actually calls the timeout command, which in turn runs the autosync script and kills it after 54000 seconds (15 hours, i.e. at four in the afternoon). The reason for this is that my brother doesn’t want me rsyncing in the evening when he is usually online. I could have throttled the amount of bandwidth I used as well, but he said not to bother.

--bwlimit=KBPS
    This option allows you to specify a maximum transfer rate in kilobytes per second.

As the timeout command runs in its own process, its output is not redirected to the logfile. In order to stop the cron owner’s email account getting a mail every time the timeout occurs, I added a blank MAILTO="" line at the start of the crontab file. Thanks to UnixCraft for that tip.

Well that’s it. Once anyone on your network saves a file it will be stored on their local NAS and over time it will be automatically replicated to the remote network. There’s nothing stopping you replicating to other sites as well.

An image from screencasters.heathenx.org episode 94

screencasters.heathenx.org

This month’s recommended podcast is screencasters at heathenx.org.
From the about page:

The goal of Screencasters.heathenx.org is to provide a means, through a simple website, of allowing new users in the Inkscape community to watch some basic and intermediate tutorials by the authors of this website.

heathenx and Richard Querin have produced a series of shows that put a lot of ‘professional tutorials’ to shame. Their instructions are clear and simple and have given me a good grounding in a complex and powerful graphics program, despite the fact that I have not even installed Inkscape yet. They even have mini tutorials on how to make your way around the interface and menus.

After watching the entire series I find myself looking at posters and advertisements knowing how the effect could be achieved in Inkscape. If you are interested in graphics you owe it to yourself to check out the series. If you know someone using Photoshop, then burn these onto DVD and install Inkscape for them. Even if you have no creative bone in your body, this series would allow you to bluff your way through graphic design.

Excellent work.

Written by ken_fallon

May 29th, 2010 at 2:12 am

HPR: Bash Loops


I have been thinking about doing a small episode on bash loops for some time, ever since I heard the guys at The Linux Cranks discussing the topic. Well, I found some time to record the show on my way to Ireland for the weekend. The show is recorded in an Airbus A320-200!

OK, so it’s not the best audio quality, but it follows in the long history of Linux podcasting.

The show is available on the Hacker Public Radio website.

Here are the examples I used in the show.

user@pc:~$ for number in 1 2 3
> do
> echo my number is $number
> done
my number is 1
my number is 2
my number is 3

user@pc:~$ for number in 1 2 3 ; do echo my number is $number; done
my number is 1
my number is 2
my number is 3

user@pc:~$ cat x.txt|while read line;do echo $line;done
one-long-line-with-no-spaces
one long line with spaces

user@pc:~$ for line in `cat x.txt`;do echo $line;done
one-long-line-with-no-spaces
one
long
line
with
spaces
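
The difference between those last two examples is word splitting: while read hands you one line at a time, whereas the for loop splits the file contents on any whitespace. As an aside (this one wasn’t in the show), bash also has a C-style for loop for counted iteration:

user@pc:~$ for ((number=1; number<=3; number++)); do echo my number is $number; done
my number is 1
my number is 2
my number is 3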

Written by ken_fallon

March 25th, 2010 at 12:44 am

Ready for the Desktop


I’ve been using many different desktop environments over the years: not only the many desktops available under Linux, but also the various iterations of Windows and Apple’s OS. Some allowed me to do my work quickly, while others just frustrated me to the point that I spent more time fighting the computer than doing actual work.

I have been thinking for a while about what causes this frustration. In today’s Hacker Public Radio episode I draw parallels between my frustrations at trying to start an automatic car and people’s frustrations with computer interfaces. I realize that my frustrations may be with my own expectations rather than with the interface itself.

In the computer world, when I hear people say that “Linux is not ready for the desktop”, I have to look at all the people around me who are having no problem using Linux as their only computing environment: people across the spectrum of ages and abilities, but the one thing they have in common is that they have no preconceived ideas of how a computer works. They approach it by asking “how do I…” as opposed to “why can’t I…”.

So if you are having a problem with the Linux desktop, perhaps it is because it’s too much like what you have used before. Fortunately, with free software there is a choice of how you interact with your computer; you’re not stuck with an automatic or a stick shift. Try out other desktop environments until you find one you like and, most importantly, ask questions.

Written by ken_fallon

October 2nd, 2009 at 3:09 pm

Posted in hpr,Linux,podcasts

HPR ep0386 :: SSH config file


This episode spawned from some feedback I sent to klatuu from The Bad Apples podcast. I’ve been using my .ssh/config to simplify long or commonly used ssh commands.

Say you want to log in to your home machine (mymachine.dynamicdns.org) as user homeuser, which is listening on the non-standard port 1234.

ssh -p 1234 homeuser@mymachine.dynamicdns.org

You can shorten this to

ssh home

by adding the following to your .ssh/config file

Host home
    User homeuser
    Hostname mymachine.dynamicdns.org
    Port 1234

Probably not worth setting up if you’re not going to be using it often but if you start doing a lot of port forwarding then your command line can quickly get unwieldy.

ssh -p 1234 -L 8080:localhost:80 \
homeuser@mymachine.dynamicdns.org

Just add the line below to the Host section to achieve the same result.

	LocalForward 8080 192.168.1.100:80

The nice thing is that you can add lots of LocalForward lines for a particular host. Another trick I use is to have different public/private key pairs for each group of servers that I use. Normally you would use the -i switch:

ssh -i ~/.ssh/work_id_dsa homeuser@mymachine.dynamicdns.org

Again, add the line below to the Host section to achieve the same result.

    IdentityFile ~/.ssh/work_id_dsa

You can apply options per host by placing them in the Host section, or to all hosts by placing them at the top of the file. Some common ones that I use are:

  • ForwardX11 yes Use instead of the -X switch to allow forwarding of X applications to run on your local X server.
  • ForwardAgent yes Use instead of the -A switch to allow forwarding of the ssh-agent/ssh-add.
  • Protocol 2 Use instead of -2 to ensure that only protocol 2 is used.
  • GSSAPIAuthentication no Use instead of -o GSSAPIAuthentication=no. This option provides Kerberos 5 authentication to ssh. Although the man pages say that GSSAPIAuthentication is off by default, check whether your distro maintainers have turned it on; this is the case with Debian- and Fedora-based distros.

I started using this option when I noticed that ssh connections were taking a long time to set up, and I discovered that it was due to:
The default Fedora ssh_config file comes with GSSAPIAuthentication set to “yes”. This causes a DNS query in an attempt to resolve _kerberos. whenever ssh is invoked. During periods when connectivity to the outside world is interrupted for whatever reason, the ssh session won’t proceed until the DNS query times out. Not really a problem, just more of an annoyance when trying to ssh to another machine on the LAN.

So putting it all together a sample ~/.ssh/config file might look like this:

GSSAPIAuthentication no
ForwardAgent yes
EscapeChar none
ForwardX11 yes
Protocol 2

Host hometunnel
    User homeuser
    Hostname mymachine.dynamicdns.org
    LocalForward 8080 192.168.1.100:80
    Port 1234

Host home
    User homeuser
    Hostname mymachine.dynamicdns.org
    Port 1234

Host work
    User workuser
    Hostname mywork.mycompany.com
    IdentityFile ~/.ssh/work_id_dsa

Host isp
    User ispuser
    Hostname isp.example.com
    IdentityFile ~/.ssh/isp_id_dsa

Written by ken_fallon

June 27th, 2009 at 11:24 am

HPR Episode on AutoNessus


I attended a presentation on AutoNessus at the Dutch Linux Users Group NLLGG on the 7th of February last. I managed to record an interview with the author, Frank Breedijk, who works as a Security Engineer for Schuberg Philis, and it has just been released as a Hacker Public Radio episode.

AutoNessus is a tool that you can use not only to automate your Nessus security scans but, more importantly, to help you digest the findings that are produced.

In the interview Frank gives some background on Nessus and explains why AutoNessus is a useful tool for helping you decipher the results of an initial scan. It is invaluable for those who regularly scan their own networks, as it allows you to focus on the issues that have changed. Whether a change is for the better or worse you will still have to decide yourself, but at least you don’t need to wade through any of the findings that are unchanged from the last scan.

Towards the end of the interview we go through the roadmap and discuss why it was released under the GPLv3. Frank is working on an English screencast demo and I’ll keep you posted once it’s available.

Written by admin

February 20th, 2009 at 6:57 am

HPR episode on using a squid proxy server locally


This month my HPR episode featured using a local squid proxy. You might want to run your own proxy server to provide yourself with a secure web connection when you are out and about, by tunnelling your traffic over ssh, as sketched below. Another good reason is to find out which URLs your browser is going to: on some sites URLs are deliberately hidden, or you may be interested in exactly where you are sending traffic. All is explained.
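
As a sketch of the tunnelling idea, assuming squid is already running at home on its default port 3128 (the host name here is made up):

user@pc:~$ ssh -L 3128:localhost:3128 user@homeserver.example.com

With the tunnel up, point your browser’s HTTP proxy at localhost:3128 and your web traffic travels over the ssh connection to the squid instance at home.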

Written by ken_fallon

November 14th, 2008 at 4:43 pm

Posted in hpr,Linux,podcasts,squid