LinuxPlanet Casts

Media from the Linux Moguls

Archive for the ‘rsync’ Category

Ultimate Backups | TechSNAP 26


We’ll tell you about AT&T leaving Android open to a hack so easy my two-year-old son could pull it off. Plus, Firefox goes to battle with McAfee, and is Bank of America under attack?

Then – We delve into backups, from the fundamentals to the very best tools!

All that and more, in this week’s TechSNAP!


Direct Download Links:

HD Video | Large Video | Mobile Video | WebM | MP3 Audio | OGG Audio | YouTube

Subscribe via RSS and iTunes:

Show Notes:

Security hole in AT&T Samsung Galaxy S II

  • Bug allows someone to bypass the security lockout screen, accessing the phone without the password
  • The flaw does not exist on the Sprint version of the Samsung Galaxy S II, the Epic Touch 4G
  • Press the lock button to wake the phone and you will be prompted with the unlock screen. Allow the phone to go back to sleep, then immediately tap the lock button again, and you will have access to the phone
  • This feature is likely designed for the situation where you are waiting for some interaction on the phone and it falls asleep; if you press a button to wake it within a few seconds, it doesn’t prompt you to re-unlock the phone. This is a useful feature; however, it should be predicated on the fact that you just recently unlocked the phone (don’t make me unlock the phone twice within 90 seconds, or something similar)
  • The flaw only affects phones that have been unlocked once since boot
  • Since the flaw only affects the AT&T version of the phone, it would seem to be based on software added to the phone by AT&T, which appears to cache your response to the unlock screen and use it to bypass the screen when you re-wake the phone immediately after it goes to sleep.
  • Another example of vendors messing with the core Google product.
  • Users with Microsoft Exchange security policies don’t seem to be affected
  • Users can adjust the settings on their phone via Settings -> Location and Security -> Screen unlock settings -> Timeout, setting the value to Immediately, which disables the ‘feature’ that presents the vulnerability.

Firefox advises users to disable McAfee Plugin

  • Firefox says the McAfee ScriptScan plugin causes stability and security problems
  • The problem only seems to affect the new Firefox 7; it is likely caused by a compatibility problem with versions of ScriptScan designed for older versions of Firefox
  • Firefox has started generating pop-up warnings for users running versions of McAfee older than 14.4.0, due to an incredibly high volume of crash reports
  • McAfee says it is working with Mozilla to solve the issue for the next version of the software
  • McAfee is very popular in corporate environments and is often enforced with an Active Directory Group Policy that makes it nearly impossible for the end user to disable the virus scanner

Bank of America – Unexplained Outages – Is it an attack?

  • The Bank of America website has been degraded (slow, returning errors, or down entirely) for more than six days
  • Bank of America (BofA) said its Web and mobile services have not been hit by hacking or denial-of-service attacks; however, it would not disclose what has been causing the online problems.
  • Quote: “I just want to be really clear. Every indication [is that] recent performance issues have not been the result of hacking, malware or denial of service,” said BofA spokeswoman Tara Burke. “We’ve had some intermittent or sporadic slowness. We don’t break out the root cause.”
  • The problems began Friday morning, a day after BofA announced it would charge a $5 monthly fee for account holders using their debit cards
  • Additional Coverage

Feedback:

Continuing our Home Server Segment – This week we are covering backups.
Before we cover some of the solutions, we should look at some of the concepts and obstacles to creating proper backups. There are a number of different ways to back things up, but the most popular involves using multiple ‘levels’ of backup.

  • Full backup

  • This is a backup of every file on a system (or a specific subset, or everything minus specific exclusions).

  • This is the base of higher level backups, and is also known as a level 0 backup

  • Full backups are the biggest and take the longest

  • Differential Backup

  • A differential backup is one that includes every file that has changed since the last full backup was started (this is important).

  • It is very important that higher level backups always be based on the START time of the lower level backup, rather than the last-modified or finish time. If a file changed during the last backup, after it was copied but before that backup completed, we want to be sure to include it in the next backup

  • Differential backups require only the most recent full backup to restore

  • Incremental Backup

  • An incremental backup consists of every file that has changed since the start of the last backup of any level

  • Incremental backups are the smallest and fastest

  • Incremental backups can take the longest to restore, as they can require access to each of the incremental (and any differential) backups taken since the last full backup, as well as that most recent full backup

  • Incremental backups offer a trade-off: they take less time and less storage, but they slow the recovery process.

  • Incremental backups, due to their smaller size, make it easier to keep ‘point in time’ copies of files, rather than just the most recent.

  • Some backup systems do away with the name designations, and allow even more granularity

  • A level 0 backup is a full backup

  • A level 1 is everything that has changed since the level 0

  • A level n is everything that has changed since the last backup of level n-1 or lower

  • Systems such as the UNIX ‘dump’ utility allow up to level 9 backups
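To make the levels concrete, here is a hedged sketch using the dump utility (the output filenames and the filesystem are made up; the -u flag records each run in /etc/dumpdates so later levels know what has changed):

dump -0u -f /backup/home.0.dump /home    # level 0: full backup
dump -1u -f /backup/home.1.dump /home    # level 1: changes since the level 0
dump -2u -f /backup/home.2.dump /home    # level 2: changes since the level 1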

  • Some backup systems, such as Bacula, support ‘synthetic full backups’

  • A synthetic backup is when you use a full backup, plus more recent differential and incremental backups to create a new, more recent full backup.

  • This can be especially advantageous in remote and off site backup systems, where transferring the full data set over the network can be very slow and costly.
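Bacula, for instance, implements synthetic fulls as the ‘VirtualFull’ backup level. Assuming a job is already defined in the Director configuration (the job name below is hypothetical), one can be requested from bconsole:

run job=HomeServer level=VirtualFull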

  • rsync

  • Not actually a backup tool, it just creates and synchronizes a copy of the files

  • Copies only the changes to the files, so is faster

  • snapshots

  • A point in time copy of the files in a filesystem (supported by LVM, UFS, ZFS, etc)

  • A good place to take a backup from, as it resolves issues with open files; a minimal LVM sketch follows
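A minimal sketch with LVM (the volume group, snapshot size and paths are all made up): freeze a point-in-time view, back it up, then throw the snapshot away:

lvcreate --size 5G --snapshot --name home_snap /dev/vg0/home
mount -o ro /dev/vg0/home_snap /mnt/snap
tar -czf /backup/home-$(date +%F).tar.gz -C /mnt/snap .    # archive the frozen view
umount /mnt/snap
lvremove -f /dev/vg0/home_snap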

  • bacula

  • Designed to back up a large number of machines

  • Quite a bit of setup (Director, Storage Daemon, SQL database, File Daemons (clients))

  • Cross platform

  • Powerful deduplication system, and ‘base backups’

  • Support for Windows Volume Shadow Copy (snapshots of open files)

  • flexbackup

  • Simple Perl script that creates archives (tar, cpio, etc.) with optional compression (gzip, bzip2, etc.)

  • Uses the ‘find’ command to create multi-level backups based on modified date

  • BackupPC

  • rsync based

  • Supports FTP, SCP, RCP, & SMB for Windows

  • Is very smart about how it handles portable devices that miss backups.

  • Its magic is its de-dupe hard-link mojo, which saves tons of space

  • Bit of a nerd project to get going, but it is bulletproof once it’s in

  • TarSnap – BSD Encrypted Cloud Backup

  • Mondo Rescue – GPL disaster recovery solution

  • CrashPlan – Online Backup Software, Disaster Recovery

  • Allan’s AppFail.com article about backups

Round Up:

Jupiter Broadcasting stats


  1. Firefox 42.66%
  2. Chrome 29.73%
  3. Internet Explorer 14.43%

Ultimate File Server | TechSNAP 25


Coming up on this week’s TechSNAP…

Have you ever been curious how hackers pull off massive security breaches? This week we’ve got the details on a breach that exposed the private data of 35 million customers.

Plus MySQL.com spreads custom malware tailored just for your system, and the details are amazing!

On top of all that, we’ll share our insights on setting up the ultimate network file server!


Direct Download Links:

HD Video | Large Video | Mobile Video | WebM | MP3 Audio | OGG Audio | YouTube

Subscribe via RSS and iTunes:

Show Notes:

South Korea’s SK Telecom hacked, detailed forensics released

  • Between July 18th and 25th, SK Telecom’s systems were breached, and all of their customer records (35 million customers) were compromised. The records included a wealth of information, including username, password, national ID number, name, address, mobile phone number and email address.
  • The attack was classified as an Advanced Persistent Threat. The attackers compromised 60 computers at SK Telecom in total, biding their time until they could compromise the database. Data was exchanged between the compromised computers at SK Telecom and a server at a Taiwanese publishing company that had been compromised by the attackers at an earlier date.
  • The attack was very sophisticated, specifically targeted, and also seems to indicate a degree of knowledge about the target. The well-organized attackers managed to compromise the software update server of another company (ESTsoft), whose software (ALTools) was used by SK Telecom, and piggybacked a trojan into the secure systems that way. Only computers from SK Telecom received the malicious update.
  • The attackers sent the compromised data through a number of waypoints before receiving it, masking the trail and the identities of the attackers. A similar pattern was seen with the RSA APT attack: the attackers uploaded the stolen data to a compromised web server, and once they had removed the data from there, destroyed the server and broke the trail back to themselves.
  • Proper code signing or GPG signing could have prevented this
  • Original BBC Article about the attack

Mac OS X Lion may expose your hashed password

  • The Directory Services command allows users to search for data about other users on the machine. This is the intended function.
  • The problem is that the search results for the current user also include sensitive information, such as the user’s password hash. You are authorized to view this information, because you are the current user.
  • However, any application running as that user could also gain that information and send it back to an attacker.
  • Using the hash, an attacker could perform an offline brute force attack against the password. These attacks have become more common and less time consuming with the advent of better parallel computing, cloud computing and high performance GPGPUs.
  • My bitcoin mining rig could easily be converted to a password hash cracking rig, especially now that the value of bitcoin is sagging. If there were a big enough market for cracking hashed passwords, there is now a huge number of highly specialized machines devoted to bitcoin that could easily be switched over.
  • The tool can also allow the current user to overwrite their own password hash with a new one, without the need to provide the current plain text password. This means that rather than spending time cracking the password, the attacker could just change the current user’s password and take over the account that way.
  • These attacks would require some kind of exploit that allowed the attacker to perform the required actions; however, we have seen a number of Flash, Java and general browser exploits that could allow this.
  • The current recommended workaround is to chmod the dscl command such that it can only be used by root (a minimal sketch follows this list)
  • Additional Article
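A minimal sketch of that workaround, assuming dscl is at its stock path of /usr/bin/dscl; mode 100 leaves the binary executable only by its owner, root:

sudo chmod 100 /usr/bin/dscl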

MySQL.com compromised, visitors subject to drive by infection

  • The MySQL.com front page was compromised and had malicious code injected in to it.
  • The code (usually an iframe) caused a Java exploit to be executed against the visitor. The exploit required no interaction or confirmation from the user. This type of attack is known as a ‘drive-by infection’, because the user does not have to take any action to become infected.
  • Two different trojans were detected being sent to users, Troj/WndRed-C and Troj/Agent-TNV
  • Because of the nature of the iframe attack and the redirect chain, the attackers could easily have varied the payload, or selected different payloads based on the platform the user was visiting the site from.
  • There are reports of Russian hackers offering to sell admin access to mysql.com for $3000
  • Detailed Analysis with malicious source code, video of the infection process
  • Article about previous compromise
  • When the previous compromise was reported, it was also reported that MySQL.com was subject to an XSS (Cross-Site Scripting) attack, where content from another site could be injected into the MySQL site, subverting the browser’s usual ‘Same Origin’ policy. This vulnerability, if not repaired, could have been the source of this latest attack.

Feedback:

Continuing our Home Server Segment – This week we are covering file servers.
Some possible solutions:

  • Roll Your Own (UNIX)
  • Linux or FreeBSD based
  • Install Samba for an SMB server (allows Windows and other OS machines to see your shared files); a minimal share definition is sketched after this list
  • Set up FTP (unencrypted unless you do FTPS (FTP over SSL); high speed; doesn’t play well with NAT; not recommended)
  • Configure SSH (provides SCP and SFTP; encrypted, slightly higher CPU usage, recommended for Internet access)
  • Install rsync (originally designed to keep mirrors of source code and websites up to date, it allows you to transfer only the differences between files, rather than the entire file; it is recommended you run rsync over SSH rather than via the native protocol)
  • Configure NFS (the default UNIX file sharing system)
  • Build your own iSCSI targets (allows you to mount a remote disk as if it were local; popular in virtualization as it removes a layer of abstraction, and required for virtual machines that can be transferred from one host to another)
  • Roll Your Own (Windows)
  • Windows provides built-in support for SMB
  • Install FileZilla Server for FTP/FTPS (alternative: CyberDuck)
  • There are some NFS alternatives for Windows, but most are not free
  • There is an rsync client for Windows, or you could use Cygwin; the same goes for SSH. Similar tools include Robocopy and SyncToy
  • FreeNAS
  • FreeBSD-based. Provides SMB, NFS, FTP, SFTP/SCP, iSCSI (and more)
  • Supports ZFS
  • Chris’ Previous Coverage of FreeNAS:
  • FreeNAS, IN DEPTH
  • FreeNAS Vs. HP MediaSmart WHS
  • FreeNAS vs Drobo
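To give a flavour of the roll-your-own route, here is a hedged sketch of a Samba share definition (the share name, path and user are placeholders) that would go in /etc/samba/smb.conf:

[media]
    path = /srv/media
    read only = no
    valid users = alice

The NFS equivalent is a one-line export in /etc/exports (the client subnet is an assumption), applied with exportfs -ra:

/srv/media 192.168.1.0/24(rw,sync)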

Round Up:

Bitcoin Blaster:

Backups & Server Hardware | TechSNAP 6


Every six hours the NSA collects as much data as exists in the entire Library of Congress, and we have a few practical notes on how a system like that could even function.

We follow up on Dropbox, and it looks like the FTC is getting involved with their recent snafus.

Plus we answer a big batch of your emails, and our backup tips for home, small business, and the enterprise!

Please send in more questions so we can continue doing the Q&A section every week! techsnap@jupiterbroadcasting.com


Direct Download Links:

HD Video | Large Video | Mobile Video | MP3 Audio | OGG Audio | YouTube

Subscribe via RSS and iTunes:

Show Notes:

Topic: NSA collects data on a massive scale

NSA Gathers 4x the Amount of Info than the Library of Congress, Daily

  • NSA gathers data at an incredible rate, equivalent to the entire content of the US Library of Congress every 6 hours.
  • The Library of Congress contains nearly 150,000,000 catalogued entries.
  • The Library of Congress ‘American Memory’ site contains tens of petabytes of public domain images and audio/video recordings.
  • The NSA has the ability to apply for patents under a gag order; only if another entity tries to patent the same process do the NSA patents become public. NSA patents never expire.
  • http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F6947978 – the NSA patented the technique of geo-locating a device by pinging a series of routers, which we discussed a few weeks ago during the iPhone GPS story.


Topic: new US Internet censorship bill, the ‘PROTECT IP’ Act

Revised ‘Net censorship bill requires search engines to block sites, too
http://arstechnica.com/tech-policy/news/2011/04/google-private-web-censorship-lawsuits-would-create-trolls.ars

  • The law is in part about attacking foreign sites that US law enforcement currently cannot target
  • Proposes to require search engines to remove results for sites at the request of not only the government, but also of rights holders. Have we not seen enough false positives and trolling via the DMCA?
  • Rights holders would not have to seek government assistance to have sites censored, but could seek court orders directly against payment processors and advertising networks (but not ISPs or search engines)
  • Actively encourages search engines and other sites to take action without any sort of court order
  • The Act would protect ad networks and payment processors from being sued by the customers they spurn if they “voluntarily cease doing business with infringing websites, outside of any court ordered action”. The definition of infringing is left up to the rights holder.

Book recommendation: The Master Switch (Audio Book / Audible Sign up)


Topic: Lying about security for a competitive edge

http://www.wired.com/threatlevel/2011/05/dropbox-ftc/
http://www.wired.com/images_blogs/threatlevel/2011/05/dropbox-ftc-complaint-final.pdf

  • A complaint has been filed with the Federal Trade Commission claiming that Dropbox engaged in Deceptive Trade Practices by claiming to securely store your data when they in fact do not store it according to industry best practices.
  • It is the belief of the complainant that the security claims made by Dropbox gave it a competitive advantage over other services; specifically, users might have chosen a more secure service had they been aware of the problems with Dropbox
  • At issue is a specific claim from the Dropbox website, since retracted when it was discovered to be false: “All files stored on Dropbox servers are encrypted (AES-256) and are inaccessible without your account password.”
  • Because Dropbox uses only a single AES-256 key, rather than a separate one for each user, employees and others at Dropbox may access your files at any time without your password. The Dropbox page has been updated to reflect the fact that Dropbox will turn over your files if requested by law enforcement or possibly other parties.

Topic: Q&A

Q: (akito) What do data centers use for fire suppression now that Halon is frowned upon?
A: Some data centers still use Halon; however, most have switched to using ‘clean agents’ such as FM-200 that are designed to remove the ‘heat’ from a fire. Unlike other agents, FM-200 does not leave an oily residue or otherwise degrade your equipment. Some systems use CO2 to displace the oxygen in the space and suppress the fire that way. Also, 3M has developed a non-conductive fluid that can be used in place of Halon without damaging equipment.
http://solutions.3m.com/wps/portal/3M/en_US/Novec/Home/Product_Information/Fire_Protection/
http://www.youtube.com/watch?v=1iz4o3W6IJM

War Story: No means none, not even a little bit

(Allan) Interesting story from when I worked at Ontario Power Generation. There was a problem with one of the CRAC (Computer Room Air Conditioner) units in the on-site data center, and a refrigeration technician was dispatched. Before we let him into the server room we specifically told him that he must come to us before he started any kind of soldering or welding, as it would set off the fire suppression system, which thankfully no longer flooded the room with Halon, but still triggered an emergency shutdown of all electrical systems in the entire IT wing of the North Admin building. Basically, when a fire is detected by the system, the klaxon sounds and you have 30 seconds to silence the alarm before it is escalated, at which time the power is cut and the Halon (had it not been disabled) would be deployed. I was down the hall from the server room in one of the test labs, working on the Windows NT4 to Win2000 migration. Out of nowhere, the fire alarm went off; at first I was startled, then it clicked: the repairman had forgotten to warn us that he was going to begin soldering. I took off at a dead run towards the alarm panel, and as I got closer I heard the alarm tone change; I only had 10 seconds left before the power to every server would be cut and the UPS system bypassed. We’d have spent hours cleaning up the mess and explaining what went wrong. Thankfully, I reached the panel in time and jammed the big red silence button, saving the day.

Q: (DreamsVoid) I would like to back up my Linux and Windows computers to my Linux server using rsync. How should I set this up?
A: rsync has many advantages, specifically the way it can compute the delta between files and significantly reduce the amount of data that has to be transferred during a backup. However, rsync by itself is not a good backup solution, because it only creates a copy of the file, not a true backup. In a true backup system, you retain multiple versions of each file from different dates. Say, for example, a file is corrupted: if you do not notice this right away, then during the next rsync the ‘backup’ copy of the file will be replaced with the corrupted one, and you will have no recourse. If all of your computers are on a LAN, you don’t have any real worries about the amount of bandwidth used transferring the files, and a proper backup solution is best (though see the rsync sketch after the links below).

rsync for Windows: http://itefix.no/cwrsync/
BackupPC – open source backup to disk: http://backuppc.sourceforge.net/
Bacula – high end open source network backup system: http://www.bacula.org
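If you do stick with plain rsync, one way to approximate point-in-time backups is its --link-dest option: unchanged files are hard-linked against the previous snapshot, so each dated directory looks like a full copy while only changed files consume new space (the paths and dates here are made up):

rsync -a --delete --link-dest=/backups/pc1/2011-05-22 /home/ /backups/pc1/2011-05-23/
ln -sfn /backups/pc1/2011-05-23 /backups/pc1/latest    # convenience pointer to the newest snapshot

This hard-link trick is essentially what BackupPC automates for you.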

Q: (Nean) What are the differences between a server and a normal desktop computer?
A: Generally they are not all that different, but some servers have additional features and capabilities that are not necessary in a regular desktop. Typically, higher end servers have redundant power supplies, both because they need to draw more power than a single power supply can provide, and to be able to continue operating in the event that one of the power supplies dies. Servers, and some high end desktops, also have redundant disks, taking advantage of various RAID configurations to allow the server to continue operating even if one or more disks stop functioning. Servers typically have dedicated RAID controllers that support more exotic forms of RAID than the typical on-board controller found in high end desktops. Servers also tend to have remote management cards that allow an administrator to access the BIOS and even manipulate the keyboard/mouse remotely, instead of having to be local to the machine.



Written by chris

May 23rd, 2011 at 2:20 am

HPR: A private data cloud


The Hacker Public Radio logo: an old-style microphone with the letters HPR

Transcript(ish) of Hacker Public Radio Episode 544

Over the last two years I have stopped using analogue cameras for my photos and videos. As a result, I also don’t print out photos any more when the roll is full, which goes some way to explaining why my mother has no recent pictures of the kids. With living in a digital world comes the realization that we need to take a lot more care when it comes to making backups.

In the past, if my PC’s hard disk blew up, virtually everything of importance could be recreated. That simply isn’t the case any more, when the only copy of your cherished photos and videos is on your computer. Add to that the fact that, in an effort to keep costs decreasing and capacity increasing, hard disks are becoming more and more unreliable (pdf).

A major hurdle to efficient backups is that the capacity of data storage is exceeding what can be practically transferred to ‘traditional’ backup media. I now have a collection of media approaching 250G, for which backing up to DVD is not feasible any more. Even if your collection is smaller, be aware that SD cards, USB sticks, DVDs and even tapes degrade over time.

And yet hard disks are cheap. You can get a 1.5 TB disk from amazon.com for $95, or a 1 TB for €70 from mycom.nl. So the solution would appear to be a juggling act where you keep moving your data around a pool of disks and replace the drives as they fail. Probably the easiest approach is to get a hand-holding Drobo or a sub-$100/€100 low-power NAS solution. If you fancy doing it yourself, Linux has had support for fast software mirroring/RAID for years.

Problem solved ….

NASA Image of the Earth taken by Apollo 17 over green binary data

…well, not quite. What if your NAS is stolen or accidentally destroyed?

You need to consider a backup strategy that also mirrors your data to another geographic location. There are solutions out there to store data in the cloud (Ubuntu One, Dropbox, etc.). The problem is that these services are fine for ‘small’ amounts of data, but get very expensive very quickly for the amount of data we’re talking about.

The solution, well my solution, is to mirror data across the Internet using rsync over ssh to my brother’s NAS, while he mirrors his data to mine. This involves a degree of trust, as you are now putting your data into someone else’s care. In my case it’s not an issue, but if you are worried about this you can take the additional step of shipping them an entire PC. This might be a low-power device with just enough of an OS to get onto the Internet, so that you can ssh in and mount an encrypted partition. When hosting content for someone else, you should consider the security implications of another user having access to your network from behind your firewall. You would also need to be confident that they are not hosting anything, or doing anything, that would get you in trouble with the law.

Once you are happy to go ahead, you need to start storing all your important data on the NAS in the first place. You will want to have all your PCs and other devices back up to it. It’s probably a good idea to mount the NAS on the client PCs directly using NFS, Samba, SSHFS, etc., so that data is saved there directly. If you and your peering partner have enough space, you can start replicating immediately; otherwise you may need to purchase an additional disk for your remote peer to install. I suggest that you do the initial drop locally and transfer the data by sneakernet, which will be faster and avoids issues with the ISPs.

It’s best to mirror between drives that support the same file attributes. For instance, copying files from ext3 to FAT32 will result in a loss of user and group permissions.

When testing, I usually create a test directory on the source and destination containing some files and directories that are identical, some that are different, and some that are modified, so that I can confirm what rsync will do.
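Something like the following (the file names are arbitrary) sets up one identical file, one modified file and one file that exists only on the source:

mkdir -p /data/AUTOSYNC/test /media/disk/test
echo same > /data/AUTOSYNC/test/identical
cp /data/AUTOSYNC/test/identical /media/disk/test/identical
echo new > /data/AUTOSYNC/test/modified
echo old > /media/disk/test/modified
echo source-only > /data/AUTOSYNC/test/added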

To synchronize between locally mounted disks you can use the command:

rsync -vva --dry-run --delete --force /data/AUTOSYNC/ /media/disk/

/data/AUTOSYNC/ is the source and /media/disk/ is the destination. The --dry-run option will go through the motions of copying the data but not actually do anything; this is very important when you start, so you know what’s going on. The -a option is the archive option and is equivalent to -rlptgoD. Here’s a quick run through the rsync options:

-n, --dry-run
    perform a trial run that doesn't make any changes
-v, --verbose
    increases the amount of information you are given during the transfer.
-r, --recursive
    copy directories recursively.
-l, --links
    recreate the symlink on the destination.
-p, --perms
    set the destination permissions to be the same as the source.
-t, --times
    set the destination modification times to be the same as the source.
-g, --group
    set the group of the destination file to be the same as the source.
-o, --owner
    set the owner of the destination file to be the same as the source.
-D
    transfer character, block device files, named sockets and fifos.
--delete
    delete extraneous files from dest dirs
--force
    force deletion of dirs even if not empty

For a complete list see the rsync web page.

Warning: Be careful when you are transferring data that you don’t accidentally delete or overwrite anything.

Once you are happy that the rsync is doing what you expect, you can drop the --dry-run and wait for the transfer to complete.

The next step might be to ship the disk off to the remote location and then set up the rsync over ssh. However, I prefer to have an additional testing step where I rsync over ssh to a PC in the home. This allows me to work out all the rsync-over-ssh issues before the disk is shipped. The steps are identical, so you can repeat this step once the disk has been shipped and installed at the remote end.

OpenBSD and OpenSSH mascot Puffy

OpenSSH

On your NAS server you will need to generate a new ssh public/private key pair with no password associated. The reason for this is that you want the synchronization to occur automatically, so you will need to be able to access the remote system securely without entering a password. There are security concerns with this approach, so again proceed with caution. You may wish to create a separate user for this, but I’ll leave that up to you. Now you can add the public key to the remote user’s .ssh/authorized_keys file. Jeremy Mates’ site has more information on this.
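A minimal sketch of that key setup (the key path matches the commands used below, and example.com stands in for the remote NAS):

ssh-keygen -t rsa -N "" -f /home/user/.ssh/rsync-key
ssh-copy-id -i /home/user/.ssh/rsync-key.pub user@example.com

The empty -N "" means no passphrase, which is what allows unattended runs.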

To confirm the keys are working, you can try to open an ssh session using the key you just set up.

ssh -i /home/user/.ssh/rsync-key user@example.com

You may need to type yes to add the key to the .ssh/known_hosts file, so it makes sense to run that command as the user that will be doing the rsyncing. All going well, you should now be logged into the other system.

Once you are happy that secure shell is working all you now need to do is add the option to tell rsync to use secure shell as the transport.

rsync -va --delete --force -e "ssh -i /home/user/.ssh/rsync-key" /data/AUTOSYNC/ user@example.com:AUTOSYNC/

All going well, there should be no updates, but you may want to try adding, deleting and modifying files on both ends to make sure the process is working correctly. When you are happy, you can ship the disk to the other side. The only requirement on the other network is that ssh is allowed through the firewall to your server, and that you know the public IP address of the remote network. For those poor people without a fixed IP address, most systems provide a means to register a dynamic DNS entry. Once you can ssh to your server, you should also be able to rsync to it like we did before.

Of course, the whole point is that the synchronization should be seamless, so you want your rsync to be running constantly. The easiest way to do this is just to start a screen session and run the command above in a simple loop. This has the advantage of letting you get going quickly, but it is not very resistant to reboots. So instead I created a simple bash script to do the synchronization.

user@pc:~$ cat /usr/local/bin/autosync
#!/bin/bash
while true
  do
  date
  rsync -va --delete --force -e "ssh -i /home/user/.ssh/rsync-key" /data/AUTOSYNC/ user@example.com:AUTOSYNC/
  date
  sleep 3600
done
user@pc:~$ chmod +x /usr/local/bin/autosync

We wrap the rsync command in an infinite while loop that outputs a time stamp before and after each run. The script then pauses for an hour after each run so that I’m not swamping either side. After making the file executable, you can add it to the crontab of the user doing the rsync. See my episode on Cron for how to do that. This is a listing of the crontab file that I use:

user@pc:~$ crontab -l
MAILTO=""
0 1 * * * timeout 54000 /usr/local/bin/autosync > /tmp/autosync.log  2>&1

There are a few additions to what you might expect here. Were I to run the script directly from cron, it would spawn a new copy of the autosync script at one o’clock every morning; the script itself never terminates, so over time there would be many copies running simultaneously. That isn’t an issue here, because cron actually calls the timeout command, which in turn calls the autosync script and kills it after 54000 seconds. The reason for this is that my brother doesn’t want me rsyncing in the evening when he is usually online. I could have throttled the amount of bandwidth I used as well, but he said not to bother.

--bwlimit=KBPS
    This option allows you to specify a maximum transfer rate in kilobytes per second.
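Had it been needed, the throttle is just one more option on the same rsync line; for example, to cap the transfer at 512 kilobytes per second:

rsync -va --delete --force --bwlimit=512 -e "ssh -i /home/user/.ssh/rsync-key" /data/AUTOSYNC/ user@example.com:AUTOSYNC/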

As the timeout command runs in its own process, its output is not redirected to the logfile. In order to stop the cron owner’s email account getting a mail every time the timeout occurs, I added a blank MAILTO="" line at the start of the crontab file. Thanks to UnixCraft for that tip.

Well, that’s it. Once anyone on your network saves a file, it will be stored on their local NAS, and over time it will be automatically replicated to the remote network. There’s nothing stopping you replicating to other sites as well.

An image from screencasters.heathenx.org episode 94

screencasters.heathenx.org

This month’s recommended podcast is Screencasters at heathenx.org.
From the about page:

The goal of Screencasters.heathenx.org is to provide a means, through a simple website, of allowing new users in the Inkscape community to watch some basic and intermediate tutorials by the authors of this website.

heathenx and Richard Querin have produced a series of shows that put a lot of ‘professional tutorials’ to shame. Their instructions are clear and simple, and have given me a good grounding in a complex and powerful graphics program, despite the fact that I have not yet even installed Inkscape. They even have mini tutorials on how to make your way around the interface and menus.

After watching the entire series, I find myself looking at posters and advertisements knowing how the effect could be achieved in Inkscape. If you are interested in graphics, you owe it to yourself to check out the series. If you know someone using Photoshop, burn these onto DVD and install Inkscape for them. Even if you have no creative bone in your body, this series would allow you to bluff your way through graphic design.

Excellent work.

Written by ken_fallon

May 29th, 2010 at 2:12 am