LinuxPlanet Casts

Media from the Linux Moguls

Archive for the ‘ssh’ Category

Ultimate File Server | TechSNAP 25

without comments


Coming up on this week’s TechSNAP…

Have you ever been curious about how hackers pull off massive security breaches? This week we’ve got the details on a breach that exposed the private data of 35 million customers.

Plus an attack that spreads custom malware tailored just for your system, and the details are amazing!

On top of all that, we’ll share our insights on setting up the ultimate network file server!

Direct Download Links:

HD Video | Large Video | Mobile Video | WebM | MP3 Audio | OGG Audio | YouTube

Subscribe via RSS and iTunes:

Show Notes:

South Korea’s SK Telecom hacked, detailed forensics released

  • Between July 18th and 25th, SK Telecom’s systems were compromised, and all of their customer records (35 million customers) were exposed. The records included a wealth of information: username, password, national ID number, name, address, mobile phone number and email address.
  • The attack was classified as an Advanced Persistent Threat, the attackers compromised 60 computers at SK Telecom in total, biding their time until they could compromise the database. Data was exchanged between the compromised computers at SK Telecom, and a server at a Taiwanese publishing company that had been compromised by the attackers at an earlier date.
  • The attack was very sophisticated, specifically targeted, and also seems to indicate a degree of knowledge about the target. The well-organized attackers managed to compromise the software update server of another company (ESTsoft), whose software (ALTools) was used by SK Telecom, then piggyback a trojan into the secure systems that way. Only computers from SK Telecom received the malicious update.
  • The attackers sent the stolen data through a number of waypoints before retrieving it, masking the trail and the identities of the attackers. A similar pattern was seen with the RSA APT attack: the attackers uploaded the stolen data to a compromised web server, and once they had removed the data from there, destroyed the server and broke the trail back to themselves.
  • Proper code signing or GPG signing of the software updates could have prevented this attack
  • Original BBC Article about the attack

Mac OS X Lion may expose your hashed password

  • The Directory Services command allows users to search for data about other users on the machine. This is the intended function.
  • The problem is that the search results for the current user also include sensitive information, such as the user’s password hash. You are authorized to view this information, because you are the current user.
  • However, any application running as that user could also obtain that information and send it back to an attacker.
  • Using the hash, an attacker could perform an offline brute-force attack against the password. These attacks have become more common and less time-consuming with the advent of better parallel computing, cloud computing and high-performance GPGPUs.
  • My bitcoin mining rig could easily be converted into a password-hash-cracking rig, especially now that the current value of bitcoin is sagging. If there were a big enough market for cracking hashed passwords, there are now a huge number of highly specialized machines devoted to bitcoin that could easily be switched over.
  • The tool also allows the current user to overwrite their own password hash with a new one, without the need to provide the current plain-text password. This means that rather than spend time cracking the password, the attacker could just change the current user’s password, and take over the account that way.
  • These attacks would require some kind of exploit that allowed the attacker to perform the required actions; however, we have seen a number of Flash, Java and general browser exploits that could allow this.
  • The current recommended workaround is to chmod the dscl command such that it can only be used by root
  • Additional Article

MySQL.com compromised, visitors subject to drive-by infection

  • The front page was compromised and had malicious code injected in to it.
  • The code (usually an iframe) caused a Java exploit to be executed against the visitor. The exploit required no interaction or confirmation from the user. This type of attack is known as a ‘drive-by infection’, because the user does not have to take any action to become infected.
  • Two different trojans were detected being sent to users, Troj/WndRed-C and Troj/Agent-TNV
  • Because of the nature of the iframe attack and the redirect chain, the attackers could easily have varied the payload, or selected different payloads based on the platform the user was visiting the site from.
  • There are reports of Russian hackers offering to sell admin access to MySQL.com for $3000
  • Detailed Analysis with malicious source code, video of the infection process
  • Article about previous compromise
  • When the previous compromise was reported, it was also reported that the site was subject to an XSS (Cross-Site Scripting) attack, where content from another site could be injected into the MySQL site, subverting the browser’s usual ‘Same Origin’ policy. This vulnerability, if not repaired, could have been the source of this latest attack.


Continuing our Home Server Segment – This week we are covering file servers.
Some possible solutions:

  • Roll Your Own (UNIX)
  • Linux or FreeBSD Based
  • Install Samba for SMB Server (allow windows and other OS machines to see your shared files)
  • Setup FTP (unencrypted unless you do FTPS (ftp over ssl), high speed, doesn’t play well with NAT, not recommended)
  • Configure SSH (provides SCP and SFTP) (encrypted, slightly higher cpu usage, recommended for Internet access)
  • Install rsync (originally designed to keep mirrors of source code and websites up to date, allows you to transfer only the differences between files, rather than the entire file) (although it is recommended you do rsync over SSH not via the native protocol)
  • Configure NFS (default UNIX file sharing system)
  • Build your own iSCSI targets (allows you to mount a remote disk as if it were local; popular in virtualization as it removes a layer of abstraction, and required for virtual machines that can be transferred from one host to another)
  • Roll Your Own (Windows)
  • Windows provides built in support for SMB
  • Install FileZilla Server for FTP/FTPS (alternative client: Cyberduck)
  • There are some NFS alternatives for Windows, but most are not free
  • There is an rsync client for Windows, or you could use Cygwin; the same goes for SSH. There are also similar tools such as Robocopy and SyncToy
  • FreeNAS
  • FreeBSD Based. Provides: SMB, NFS, FTP, SFTP/SCP, iSCSI (and more)
  • Supports ZFS
  • Chris’ Previous Coverage of FreeNAS:
  • FreeNAS Vs. HP MediaSmart WHS
  • FreeNAS vs Drobo
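For the roll-your-own Samba route above, the server side boils down to a short share definition. A minimal sketch, assuming Samba is installed; the share name, path and user are hypothetical:

```ini
; /etc/samba/smb.conf -- minimal hypothetical share definition
[media]
   path = /srv/media
   valid users = alice
   read only = no
   browseable = yes
```

After restarting the Samba service, Windows machines on the LAN should see the share as \\servername\media.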

Round Up:

Bitcoin Blaster:

Smarter Google DNS | TechSNAP 21



Google and OpenDNS join forces to improve the speed of your downloads. Find out what they are doing and how it works!

Plus Gmail suffered another man-in-the-middle attack, and somebody ends up with egg on their face!

All that and more, on this week’s episode of TechSNAP!

Direct Download Links:

HD Video | Large Video | Mobile Video | WebM Video | MP3 Audio | OGG Audio | YouTube

Subscribe via RSS and iTunes:

Show Notes:

Another SSL Certificate Authority Compromised, MitM Attack on Gmail

  • Sometime before July 10th, the Dutch Certificate Authority DigiNotar was compromised, and the attackers were able to issue a number (apparently as many as 200) of fraudulent certificates, including a wildcard certificate for *.google.com. The attack was only detected by DigiNotar on July 19th. DigiNotar revoked the certificates, and an external security audit determined that all invalid certificates had been revoked. However, it seems that probably the most important certificate, the *.google.com wildcard, was in fact not revoked. This raises serious questions and seems to point to a coverup by DigiNotar. Detailed Article Additional Article
  • Newer versions of Chrome were not affected, because Google specifically listed a small subset of CAs that would ever be allowed to issue a certificate for Gmail. This also prevents self-signed certificates, which some users fall for regardless of the giant scary browser warning. Chrome Security Notes for June
  • Mozilla and the other browser vendors have taken more direct action than they did with the Comodo compromise. All major browsers have entirely removed the DigiNotar root certificate from their trust lists. With the Comodo compromise, the affected certificates were blacklisted, but the rest of the Comodo CA was left untouched. One wonders if this was done as a strong signal to all CAs that they must take security more seriously, or if DigiNotar was in fact cooperating with the Iranian government in its efforts to launch MitM attacks on its citizens. Mozilla Security Blog
  • Part of the issue is that some of the certificates issued were for the browser manufacturers themselves, such as addons.mozilla.org. With a fake Mozilla certificate, it is possible that the MitM attack could block updates to your browser, or worse, feed you a spyware-laden version of the browser.
  • Press Release from Parent Company VASCO
  • Pastebin of the fraudulent Certificate
  • Allan’s blog post about the previous CA compromise, and more detail than can fit even in an episode of TechSNAP

GoogleDNS and OpenDNS launch ‘A Faster Internet’

  • The site promoted a DNS protocol extension called edns-client-subnet that would have the recursive DNS server pass along the IP subnet (not the full IP, for privacy) of the requesting client, to allow the authoritative DNS server to make a better geo-targeting decision.
  • A number of large content distributors and CDNs rely on GeoIP technology at DNS time to direct users to the nearest (and as such, usually fastest) server. However this approach is often defeated when a large portion of users are using GoogleDNS and OpenDNS and all of those requests come from a specific IP range. As this technology takes hold, it should make it possible for the Authoritative DNS servers to target the user rather than the Recursive DNS Server, resulting in more accurate results.
  • Internet Engineering Task Force Draft Specification
  • This change has already started affecting users; many users of services such as iTunes had complained of much slower download speeds when using Google DNS or OpenDNS. This was a result of being sent to a far-away node, and that node getting a disproportionate amount of the total load. Now that this DNS extension has started to come online and is backed by a number of major CDNs, it should alleviate the problem.
  • ScaleEngine is in the process of implementing this, and already has some test edns enabled authoritative name servers online.
Kernel.org Compromised

  • Attackers were able to compromise a number of kernel.org machines
  • Attackers appear to have compromised a single user account, and then through unknown means, gained root access.
  • Attackers replaced the running OpenSSH server with a trojaned version, likely leaking the credentials of users who authenticated against it.
  • Kernel.org is working with the 448 people who have accounts there to replace their passwords and SSH keys.
  • The attack was only discovered due to an extraneous error message about /dev/mem
  • Additional Article


Q: (DreamsVoid) I have a server set up, and I am wondering what it would take to set up a backup server that would automatically take over if the first server were to go down. What are some of the ways I could accomplish this?

A: This is a rather lengthy answer, so I have broken it apart and given one possible answer each week, for the last few weeks. This week’s solution is Anycast. This is by far the most complicated and resource-intensive solution, but it is also the most scalable.

Standard connections on the Internet are Unicast, meaning they go from a single point to another single point (typically, from a client to a specific server). There are also Broadcast (sent to all nodes in the broadcast domain, such as your local LAN) and Multicast (sent to a group of subscribed peers; used extensively by routers to distribute routing table updates, but does not work on the Internet). Anycast is different: instead of sending the packet to a specific host, the packet is sent to the nearest host (in network terms, hops, not necessarily geographic terms).

The way Anycast works is that your BGP-enabled routers announce a route to your subnet to the Internet from each of your different locations, and the other routers on the Internet update their routing tables with the route to the location that is the fewest hops away. In this way, your traffic is diverted to the nearest location. If one of your locations goes down, when the other routers do not get an update from the downed router, they automatically change their route to the next nearest location. If you want only failover, and not to distribute traffic geographically, you can have your routers prepend their own AS number to their routes a sufficient number of times to make the backup location always more hops than the main location, so it is only used if the main is down.

There are some caveats with this solution. The first is that TCP packets were never meant to be suddenly redirected to another location; if a route change happens in the middle of an active session, that session will not exist at the second location, and the connection will be dropped. This makes Anycast unsuitable for long-lived connections, as routes on the Internet change constantly, routing around faults and congestion. Connections also cannot be made outbound from an Anycast IP, as the route back may end up going to a different server, and so a response would never be received; servers therefore require a regular Unicast address in addition to the Anycast address.

A common solution to overcome these limitations is to do DNS (which is primarily UDP) via Anycast, and have each location serve a different version of the authoritative zone, containing the local IP address of the web server. This way users are routed to the nearest DNS server, which then returns the regular IP of the web server at the same location (this solution suffers from the same problems mentioned above in the Google DNS story). Another limitation is that, due to the size of the routing tables on the Internet, most providers will not accept a route for a subnet smaller than a /24, meaning that an entire 256-address subnet must be dedicated to Anycast, and your servers will each require a regular address in a normal subnet. Announcing routes to the Internet also requires your own Autonomous System number, which is only granted to largish providers, or an ISP willing to announce your subnet on their AS number, though this requires a Letter of Authorization from the owner of the IP block.
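The fewest-hops selection and failover described in this answer can be illustrated with a toy shell model (the locations and hop counts are invented; real Anycast happens in BGP route selection, not in a script):

```shell
# each line is "location hops" as seen from some router
routes="amsterdam 3
newyork 5
tokyo 8"

# pick the route with the lowest hop count, like a BGP router
# choosing the shortest path
nearest() {
    printf '%s\n' "$routes" | sort -k2 -n | head -n1 | cut -d' ' -f1
}

echo "traffic goes to: $(nearest)"   # amsterdam

# the amsterdam location goes down and its route is withdrawn;
# traffic automatically moves to the next-nearest location
routes=$(printf '%s\n' "$routes" | grep -v '^amsterdam ')
echo "after failure:   $(nearest)"   # newyork
```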



Written by chris

September 2nd, 2011 at 12:42 am

Battery Malware | TechSNAP 16



Attackers take aim at Apple with an exploit that could brick your MacBook, or perhaps worse. Plus you need to patch against a 9-year-old SSL flaw.

Plus find out about a Google bug that could wipe a site from their index, and an excellent batch of your feedback!

All that and more, on this week’s TechSNAP!

Direct Download Links:

HD Video | Large Video | Mobile Video | MP3 Audio | OGG Audio | YouTube

Subscribe via RSS and iTunes:

Show Notes:

iPhones vulnerable to 9 year old SSL sniffing attack

  • A nine-year-old bug discovered and disclosed by Moxie Marlinspike in 2002 allows attackers to decrypt intercepted SSL sessions. Moxie Marlinspike released a newer, easier-to-use version of the tool on Monday, to coincide with Apple finally patching the flaw on the iPhone and other iOS devices.
  • Any unpatched iOS device can have all of its SSL traffic trivially intercepted and decrypted
  • This means anyone with this new easy-to-use tool sitting near a wifi hotspot can intercept encrypted login information (Gmail, Facebook), banking credentials, e-commerce transactions, or anything else people do from their phone.
  • The bug was in the way iOS interpreted the certificate chain. Apple failed to respect the ‘basicConstraints’ parameter, allowing an attacker to use any existing valid certificate to sign a certificate for any domain, a condition normally prevented by the constraint.
  • There are no known flaws in SSL itself; in this case, the attacker could perform a man-in-the-middle attack by feeding the improperly signed certificate to the iPhone, which would have accepted it and used the attacker’s key to encrypt the data.
  • Patch is out with a support doc and direct download links

Apple Notebook batteries vulnerable to firmware hack

  • After analyzing a battery firmware update that Apple pushed in 2009, researchers found that all patched batteries, and all batteries manufactured since, use the same password
  • With this password, it is possible to control the firmware on the battery
  • This means that an attacker can remotely brick your MacBook, or cause the battery to overheat and possibly even explode
  • The attacker can also falsify the data returned to the OS from the battery, causing odd system behaviour
  • The attacker could also completely replace the Apple firmware, with one designed to silently infect the machine with malware. Even if the malware is removed, the battery would be able to reinfect the machine, even after a complete OS wipe and reinstall.
  • Further research will be presented at this year’s Black Hat Security Conference
  • In the meantime, researchers have notified Apple of the vulnerability, and have created a utility that generates a completely random password for your Mac’s battery.
  • Additional Link

Facebook fixes glitch that let you see private video information

  • A glitch in Facebook allowed you to see the thumbnail preview and description of private videos posted by other users, even when they were not shared with you.
  • It was not possible to view the actual videos

Google was quick to shutdown Webmaster Tools after vulnerability found

  • Using Google Webmaster Tools, users were able to remove websites that did not belong to them from the Google index
  • By simply modifying the query string of a valid request to remove your own site from the Google index, and changing one of the two references to the target URL, you were able to remove an arbitrary site from the index
  • The issue was resolved within 7 hours of being reported to Google
  • Google restored sites that were improperly removed from its index.

Researchers find vulnerability in Skype

  • Improper input validation and output sanitization allowed attackers to inject code into their Skype profile
  • By entering HTML and JavaScript into the ‘mobile phone’ section of your profile, anyone who had you on their friends list would execute the injected code.
  • This vulnerability could have allowed attackers to hijack your session, steal your account, capture your payment data, and change your password


Q: (Sargoreth) I downloaded eclipse, and I didn’t bother to verify the md5 hash they publish on the download page, how big a security risk is this?
A: Downloadable software often has an MD5 hash published along with the downloadable file, as a measure to allow you to ensure that the file you downloaded is valid. Checking the downloaded file against this hash can ensure that the file was not corrupted during transfer. However, it is not a strong enough indicator that the file has not been tampered with; if the file was modified, the MD5 hash could just as easily have been updated along with it. In order to be sure that the file has not been tampered with, you need a hash that is provided out of band, from a trusted source (the FreeBSD Ports tree comes with the SHA256 hashes of all files, which are then verified once they are downloaded). SHA256 is much more secure, as MD5 has been defeated a number of times, with attackers able to craft two files with matching hashes. SHA-1 is no longer considered secure enough for cryptographic purposes. It should also be noted that SHA-512 is actually faster to calculate than SHA256 on 64-bit hardware; however, it is not as widely supported yet. The ultimate solution for ensuring the integrity of downloadable files is a GPG signature, verified against a trusted public key. Many package managers (such as yum) take this approach, and some websites offer a .asc file for verification. A number of projects have stopped publishing GPG signatures because the proportion of users who checked the signature was too low to justify the additional effort. Some open source projects have had backdoors injected into their downloadable archives on official mirrors, such as the UnrealIRCd project.
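As a concrete sketch of hash checking, assuming GNU coreutils `sha256sum` is available (the file name is made up):

```shell
# pretend this is the downloaded file
echo "example download contents" > /tmp/eclipse.tar.gz

# the publisher computes the hash and publishes it out of band
sha256sum /tmp/eclipse.tar.gz > /tmp/eclipse.tar.gz.sha256

# the downloader verifies the file against the published hash;
# any corruption or tampering makes this check fail
sha256sum -c /tmp/eclipse.tar.gz.sha256
```

Of course, when the hash is published on the same page as the download, an attacker who can tamper with one can tamper with both, which is exactly the weakness a GPG signature addresses.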

Q: (Christoper) I have a windows 7 laptop, and a Ubuntu desktop, what would be a cheap and easy way to share files between them?
A: The easiest and most secure way is to enable SSH on the Ubuntu machine, and then use an SFTP client like FileZilla (for Windows, Mac and Linux), and just log in to your Ubuntu machine using your Ubuntu username/password. Alternatively, if you have shared a folder on your Windows machine, you should be able to browse to it from the Nautilus file browser in Ubuntu. Optionally, you can also install Samba, to allow your Ubuntu machine to share files with Windows; it will appear as if it were another Windows machine in your Windows ‘network neighbourhood’.

Q: (Chad) I have a network of CentOS servers, and a central NFS/NIS server, however we are considering adding a FreeNAS box to provide ZFS. I need to be able to provide consistent centralized permissions control on this new file system. I don’t want to have to manually recreate the users on the FreeNAS box. Should I switch to LDAP?
A: FreeNAS is based on FreeBSD, so it has a native NIS client you can use (ypbind) to connect to your existing NIS system. This would allow the same users/groups to exist across your heterogeneous network. You may need to modify the /etc/nsswitch.conf file to configure the order in which local files and NIS are checked, and set your NIS domain in /etc/rc.conf. Optionally, you could use LDAP, again adding some additional parameters to nsswitch.conf and configuring LDAP. If you decide to use LDAP, I would recommend switching your CentOS machines to LDAP as well, allowing you to again maintain a single system for both Linux and BSD, instead of maintaining separate account databases. If you are worried about performance, you might consider setting the BSD machine up as an NIS slave, so that it maintains a local copy of the NIS database. The FreeBSD NIS server is called ypserv. You can find out more about configuring NIS on FreeBSD here
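A sketch of the NIS client hookup described above, with a placeholder NIS domain name:

```
# /etc/rc.conf -- enable the NIS client on FreeBSD/FreeNAS
nisdomainname="example.nis.domain"
nis_client_enable="YES"

# /etc/nsswitch.conf -- check local files first, then NIS
passwd: files nis
group: files nis
```

The same nsswitch.conf lines (with `ldap` in place of `nis`) apply if you go the LDAP route instead.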

Bitcoin Blaster


HPR: A private data cloud


The Hacker Public Radio logo of an old-style microphone with the letters HPR

Transcript(ish) of Hacker Public Radio Episode 544

Over the last two years I have stopped using analogue cameras for my photos and videos. As a result I also don’t print out photos any more when the roll is full. This goes some way to explaining why my mother has no recent pictures of the kids. Living in a digital world brings the realization that we need to take a lot more care when it comes to making backups.

In the past, if my PC’s hard disk blew up, virtually everything of importance could be recreated. That simply isn’t the case any more, when the only copy of your cherished photos and videos is on your computer. Add to that the fact that, in an effort to keep costs decreasing and capacity increasing, hard disks are becoming more and more unreliable (pdf).

A major hurdle to efficient backups is that the capacity of data storage is exceeding what can be practically transferred to ‘traditional’ backup media. I now have a collection of media reaching 250 GB, where backing up to DVD is not feasible any more. Even if your collection is smaller, be aware that SD cards, USB sticks, DVDs and even tapes also degrade over time.

And yet hard disks are cheap. You can get a 1.5 TB disk for $95, or a 1 TB for €70. So the solution would appear to be a juggling act where you keep moving your data around a pool of disks and replace the drives as they fail. Probably the easiest solution is to get a hand-holding Drobo or a sub-$100/€100 low-power NAS solution. If you fancy doing it yourself, Linux has had support for fast software mirroring/RAID for years.

Problem solved ….

NASA Image of the Earth taken by Apollo 17 over green binary data

…well, not quite. What if your NAS is stolen or accidentally destroyed?

You need to consider a backup strategy that also mirrors your data to another geographic location. There are solutions out there to store data in the cloud (Ubuntu One, Dropbox, etc.). The problem is that these services are fine for ‘small’ amounts of data, but get very expensive very quickly for the amount of data we’re talking about.

The solution, well my solution, is to mirror data across the Internet using rsync over ssh to my brother’s NAS, and he mirrors his data to mine. This involves a degree of trust, as you are now putting your data into someone else’s care. In my case it’s not an issue, but if you are worried about this you can take the additional step of shipping them an entire PC. This might be a low-power device that has just enough of an OS to get onto the Internet. From there you can ssh in to mount an encrypted partition. When hosting content for someone else you should consider the security implications of another user having access to your network from behind your firewall. You would also need to be confident that they are not hosting anything or doing anything that would lead you to get in trouble with the law.

Once you are happy to go ahead, what you need to do is start storing all your important data on the NAS in the first place. You will want to have all your PCs and other devices back up to it. It’s probably a good idea to mount the NAS on the client PCs directly using NFS, Samba, sshfs, etc., so that data is saved there directly. If you and your peering partner have enough space you can start replicating immediately, or you may need to purchase an additional disk for your remote peer to install. I suggest that you do the initial drop locally and transfer the data by sneakernet, which will be faster and avoids issues with the ISPs.
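For example, mounting the NAS export over NFS on each client can be a single fstab line. A sketch with hypothetical host and paths:

```
# /etc/fstab -- mount the NAS export at boot (hypothetical names)
nas.local:/data   /mnt/nas   nfs   rw,soft   0   0
```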

It’s best to mirror between drives that can support the same file attributes. For instance copying files from ext3 to fat32 will result in a loss of user and group permissions.

When testing I usually create a test directory on the source and destination that have some files and directories that are identical, different and modified so that I can confirm rsync operations.

To synchronize between locally mounted disks you can use the command:

rsync -vva --dry-run --delete --force /data/AUTOSYNC/ /media/disk/

/data/AUTOSYNC/ is the source and /media/disk/ is the destination. The --dry-run option will go through the motions of copying the data but not actually do anything; this is very important when you start, so you know what’s going on. The -a option is the archive option and is equivalent to -rlptgoD. Here’s a quick run through of the rsync options

-n, --dry-run
    perform a trial run that doesn't make any changes
-v, --verbose
    increases the amount of information you are given during the transfer.
-r, --recursive
    copy directories recursively.
-l, --links
    recreate the symlink on the destination.
-p, --perms
    set the destination permissions to be the same as the source.
-t, --times
    set the destination modification times to be the same as the source.
-g, --group
    set the group of the destination file to be the same as the source.
-o, --owner
    set the owner of the destination file to be the same as the source.
-D
    transfer character, block device files, named sockets and fifos.
--delete
    delete extraneous files from dest dirs
--force
    force deletion of dirs even if not empty

For a complete list see the rsync web page.

Warning: Be careful when you are transferring data that you don’t accidentally delete or overwrite anything.

Once you are happy that the rsync is doing what you expect, you can drop the --dry-run and wait for the transfer to complete.

The next step might be to ship the disk off to the remote location and then setup the rsync over ssh. However I prefer to have an additional testing step where I rsync over ssh to a pc in the home. This allows me to work out all the rsync ssh issues before the disk is shipped. The steps are identical so you can repeat this step once the disk has been shipped and installed at the remote end.

OpenBSD and OpenSSH mascot Puffy


On your NAS server you will need to generate a new ssh public and private key pair that has no password associated. The reason for this is that you want the synchronization to occur automatically, so you will need to be able to access the remote system securely without having to enter a password. There are security concerns with this approach, so again proceed with caution. You may wish to create a separate user for this, but I’ll leave that up to you. Now you can add the public key to the remote user’s .ssh/authorized_keys file. Jeremy Mates’ site has more information on this.
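Generating that passwordless key pair might look like this; a sketch using $HOME/.ssh/rsync-key, analogous to the /home/user/.ssh/rsync-key path used in the commands that follow:

```shell
# create an RSA key pair with an empty passphrase (-N "") so the
# synchronization can run unattended; guard the private key carefully
mkdir -p "$HOME/.ssh"
ssh-keygen -q -t rsa -b 2048 -f "$HOME/.ssh/rsync-key" -N ""

# two files are produced: the private key (rsync-key) stays here, and
# the public key (rsync-key.pub) goes into the remote
# .ssh/authorized_keys file
ls "$HOME/.ssh/rsync-key" "$HOME/.ssh/rsync-key.pub"
```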

To confirm the keys are working, you can try to open an ssh session using the key you just set up.

ssh -i /home/user/.ssh/rsync-key

You may need to type yes to add the keys to the .ssh/known_hosts file, so it makes sense to run that command as the user that will be doing the rsyncing. All going well you should now be logged into the other system.

Once you are happy that secure shell is working all you now need to do is add the option to tell rsync to use secure shell as the transport.

rsync -va --delete --force -e "ssh -i /home/user/.ssh/rsync-key" /data/AUTOSYNC/

All going well there should be no updates, but you may want to try adding, deleting and modifying files on both ends to make sure the process is working correctly. When you are happy you can ship the disk to the other side. The only requirement on the other network is that ssh is allowed through the firewall to your server, and that you know the public IP address of the remote network. For those poor people without a fixed IP address, most systems provide a means to register a dynamic DNS entry. Once you can ssh to your server you should also be able to rsync to it like we did before.

Of course the whole point is that the synchronization should be seamless, so you want your rsync to be running constantly. The easiest way to do this is just to start a screen session and then run the command above in a simple loop. This has the advantage of allowing you to get going quickly, but it is not very resistant to reboots. I created a simple bash script to do the synchronization instead.

user@pc:~$ cat /usr/local/bin/autosync
#!/bin/sh
while true; do
  date
  rsync -va --delete --force -e "ssh -i /home/user/.ssh/rsync-key" /data/AUTOSYNC/
  date
  sleep 3600
done
user@pc:~$ chmod +x /usr/local/bin/autosync

We wrap the rsync command in an infinite while loop that outputs a time stamp before and after it has run. I then pause the script for an hour after each run so that I’m not swamping either side. After making the file executable you can add it to the crontab of the user doing the rsync. See my episode on Cron for how to do that. This is a listing of the crontab file that I use.

user@pc:~$ crontab -l
0 1 * * * timeout 54000 /usr/local/bin/autosync > /tmp/autosync.log  2>&1

There are a few additions to what you might expect here. Were I to run the script directly from cron, it would spawn a new copy of the autosync script at one o’clock every morning. The script itself never terminates, so over time there would be many copies of the script running simultaneously. This isn’t an issue here, as I am actually calling the timeout command first, and it is the one that actually calls the autosync script. The reason for the timeout is that my brother doesn’t want me rsyncing in the evening when he is usually online. I could have throttled the amount of bandwidth I used as well, but he said not to bother.

    --bwlimit=KBPS: This option allows you to specify a maximum transfer rate in kilobytes per second.
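
The timeout behaviour itself is easy to verify in isolation: it kills the wrapped command once the limit expires and exits with status 124 to signal that the command was terminated rather than finishing on its own.

```shell
# timeout terminates the wrapped command after 2 seconds and exits with 124
timeout 2 sleep 10
echo "exit status: $?"
```

After about two seconds this prints exit status: 124. With the 54000-second limit in the crontab above, a job started at 01:00 is killed no later than 16:00.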

As the timeout command runs in its own process, its output is not redirected to the logfile. To stop the crontab owner's email account receiving a mail every time the timeout occurs, I added an empty MAILTO="" line at the start of the crontab file. Thanks to UnixCraft for that tip.
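
Putting both pieces together, the crontab file as described would look something like this:

```shell
MAILTO=""
# Start at 01:00; timeout kills the sync loop after 54000 s (15 hours), i.e. by 16:00
0 1 * * * timeout 54000 /usr/local/bin/autosync > /tmp/autosync.log 2>&1
```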

Well that’s it. Once anyone on your network saves a file it will be stored on their local NAS and over time it will be automatically replicated to the remote network. There’s nothing stopping you replicating to other sites as well.

An image from episode 94

This month's recommended podcast is screencasters at
From the about page:

The goal of is to provide a means, through a simple website, of allowing new users in the Inkscape community to watch some basic and intermediate tutorials by the authors of this website.

heathenx and Richard Querin have produced a series of shows that put a lot of ‘professional tutorials’ to shame. Their instructions are clear and simple and have given me a good grounding in a complex and powerful graphics program, despite the fact that I have not yet even installed Inkscape. They even have mini tutorials on how to find your way around the interface and menus.

After watching the entire series I find myself looking at posters and advertisements knowing how each effect could be achieved in Inkscape. If you are interested in graphics you owe it to yourself to check out the series. If you know someone using Photoshop, burn these onto DVD and install Inkscape for them. Even if you don't have a creative bone in your body, this series will let you bluff your way through graphic design.

Excellent work.

Written by ken_fallon

May 29th, 2010 at 2:12 am

HPR ep0386 :: SSH config file

without comments

This episode grew out of some feedback I sent to klatuu from The Bad Apples podcast. I’ve been using my ~/.ssh/config to simplify long or commonly used ssh commands.

Say you want to log in to your home machine as user homeuser, which is listening on the non-standard port 1234.

ssh -p 1234

You can shorten this to

ssh home

by adding the following to your .ssh/config file

Host home
    User homeuser
    Port 1234

Probably not worth setting up if you're not going to use it often, but once you start doing a lot of port forwarding your command line can quickly get unwieldy.

ssh -p 1234 -L 8080:localhost:80 \

Just add the line below to the Host section to achieve the same result.

	LocalForward 8080 localhost:80

The nice thing is that you can add many LocalForward lines for a particular host. Another trick I use is to have different public/private key files for each group of servers that I use. Normally you would use the -i switch

ssh -i ~/.ssh/

Just add the line below to the Host section to achieve the same result.

        IdentityFile ~/.ssh/
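
Creating a dedicated key pair for one group of servers is a single ssh-keygen call. The filename work-key is just an illustration, since the original key names are not shown:

```shell
# Generate a passphrase-less ed25519 key pair for one group of servers
# (the path ~/.ssh/work-key is hypothetical)
ssh-keygen -t ed25519 -f ~/.ssh/work-key -N '' -q
```

The matching Host block then refers to it with IdentityFile ~/.ssh/work-key.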

You can apply options per host by placing them in that Host section, or to all hosts by placing them at the top of the file. Some common ones that I use are

  • ForwardX11 yes Use instead of the -X switch to allow X applications to be forwarded to your local X server.
  • ForwardAgent yes Use instead of the -A switch to allow forwarding of the ssh-agent/ssh-add connection.
  • Protocol 2 Use instead of -2 to ensure that only protocol 2 is used.
  • GSSAPIAuthentication no Use instead of -o GSSAPIAuthentication=no. This option controls Kerberos 5 authentication in ssh. Although the man pages say that GSSAPIAuthentication is off by default, some distro maintainers turn it on; this is the case with Debian and Fedora based distros.
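
To check what your own distro ships as the system-wide client default (/etc/ssh/ssh_config is the standard OpenSSH location for it):

```shell
# Show any GSSAPI-related defaults shipped in the system-wide client config
grep -i gssapi /etc/ssh/ssh_config 2>/dev/null || echo "no GSSAPI settings found"
```

On Debian and Fedora based distros this typically reports GSSAPIAuthentication yes, which is exactly why overriding it in ~/.ssh/config is worthwhile.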

I started using this option when I noticed that ssh connections were taking a long time to set up, and I discovered that it was due to:
The default Fedora ssh_config file comes with GSSAPIAuthentication set to “yes”. This causes a DNS query in an attempt to resolve _kerberos. whenever ssh is invoked. During periods when connectivity to the outside world is interrupted for whatever reason, the ssh session won’t proceed until the DNS query times out. Not really a problem, just more of an annoyance when trying to ssh to another machine on the LAN.

So putting it all together a sample ~/.ssh/config file might look like this:

GSSAPIAuthentication no
ForwardAgent yes
EscapeChar none
ForwardX11 yes
Protocol 2
Host hometunnel
    User homeuser
    LocalForward 8080 localhost:80
    Port 1234
Host home
    User homeuser
    Port 1234
Host work
    User workuser
    IdentityFile ~/.ssh/
Host isp
    User ispuser
    IdentityFile ~/.ssh/
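
With the file in place you can ask ssh what it will actually do for an alias: the -G flag (available in OpenSSH 6.8 and later) prints the effective configuration without connecting.

```shell
# Print the options ssh would apply to the alias "home" without connecting
ssh -G home | grep -E '^(user|port) '
```

With the Host home block above, this should show user homeuser and port 1234.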

Written by ken_fallon

June 27th, 2009 at 11:24 am