Archive for the ‘facebook’ Category
Facebook is fooled again, remote controlled voting machines, and Sony has another 93,000 accounts hacked, we’ll load you up on the details!
Then – We cover your best options for pimping your home network for speed!
Direct Download Links:
Subscribe via RSS and iTunes:
- Facebook has a malicious URL scanner that checks URLs linked in posts to make sure they do not contain content that could be harmful to users
- The simplest content cloaking technique, displaying different content to different users (i.e., checking for the Facebook bot's user-agent string), is enough to fool this system
- In the example proof-of-concept attack, the URL looks like a .jpg file and gets a thumbnail in the Facebook preview, but if you follow the link, you will be rickrolled
- Proof of Concept
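A minimal sketch of the cloaking check described above. The handler and response strings are made up for illustration; "facebookexternalhit" is the token Facebook's link crawler is known to send in its user-agent.

```python
def serve_link(request_headers):
    """Return different content depending on who is asking (cloaking).

    Facebook's link scanner identifies itself via its user-agent string,
    so a malicious page can show the scanner a harmless image while
    sending real visitors somewhere else entirely.
    """
    ua = request_headers.get("User-Agent", "")
    if "facebookexternalhit" in ua:
        # What the scanner sees: an innocent image for the preview thumbnail
        return "200 OK: innocent-looking .jpg"
    # What a real visitor gets: the actual destination
    return "302 Redirect: the real (malicious) page"

# The scanner and a real browser see different things:
print(serve_link({"User-Agent": "facebookexternalhit/1.1"}))
print(serve_link({"User-Agent": "Mozilla/5.0 (Windows NT 6.1)"}))
```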
- Sony has suspended 93,000 accounts that were successfully accessed during a massive wave of failed login attempts.
- This suggests that Sony does not have any automated systems for slowing or blocking such brute force attacks.
- The attack affected large numbers of users on both the PSN/SEN and SOE
- While Sony claims the attackers must have had a list of username/password combinations from some other site that was attacked, the fact that hundreds of thousands of accounts had attempts against them, and 93,000 succeeded, suggests one of a few hypotheses:
- The attack used user data from the original sony hack (and/or users reset their passwords back to the same stolen passwords)
- The flaw in the PSN password reset system that allowed attackers to reset other users’ passwords was more widespread than first thought
- Users were the victims of the multiple phishing attempts we saw around the PSN compromise
- Sony was compromised again
- Additional Article
- Sony CISO Statement
- As many as 25% of American voters in the 2012 election will use voting machines that can be compromised using just $10.50 worth of off-the-shelf hardware (or $26 if you want a remote control).
- This attack is the simplest known to date, as it requires far less programming or cyber-warfare skill than previous attacks
- The researchers developed three different types of attack
- Programmer under oath admits computers rig elections
- Insider Attack Against Diebold Voting Machines
Dominic emails in:
YOU’RE DOING IT WRONG
Q: When building physical network topology, say you have 5x 8 port switches, are you best to connect the router to port 1 of switch#1 then connect various other computers to the rest of the ports on switch#1 with the last port connecting to switch#2 which has one port to switch#3 and so on (essentially daisy chaining), or have one ‘master’ switch where each port of the switch connects to each of the other switches (2, 3, 4 and 5) then have the router and PCs plugged into those? (I know it’s a bit overkill for a home network, but it’s just in theory, as I’ve had to deal with stuff like network loops and such before and I’m wondering if there is any real advantage between the two methods.)
A: The second setup you described is a proper ‘hierarchical networking model’, which usually consists of three layers. The first layer is the Access Layer; this is where individual computers are connected to the network, typically via a (relatively) low-end switch. The next layer is the Distribution Layer; this is where a lot of routers and firewalls do their work, and it usually also acts as the separation between departments, locations and regions. Typically, computers in the same Access Layer can reach each other directly without going through a router. The top layer of the network is the Core Layer; this is the fastest part of the network, where data is exchanged between the different Distribution Layers. In your more limited setup, the ‘master’ switch would be the Core Layer, exchanging traffic between each of the different Access Layer switches.

However, for your home this may not be the best setup. If all of the switches are 100mbit, then the links between the Core Layer switch and the Access Layer switches can be a bottleneck. For example, if you had 2 pairs of clients communicating with each other on the same switch (so 4 machines, A<->B and C<->D), they could each communicate at 100mbit/second. However, if A and C are on Access Layer switch#2, and B and D are on Access Layer switch#3, then the bandwidth between #2 and #3 is limited to 100mbit total, and each stream would only be able to use 50mbit/sec. On the other hand, if A and B are on one switch, and C and D are on another, then no data is exchanged through the Core Layer at all. So a number of factors, especially your traffic patterns, must be considered when setting up your network topology. You do not have to worry about creating ‘loops’ or anything, as long as each switch only has a single path to each other switch.
Higher-end (managed) switches support ‘STP’ (Spanning Tree Protocol), which allows them to avoid loops even when they have multiple paths, while still adapting and using one of the extra paths if the preferred path is disconnected.
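The bottleneck arithmetic from the answer can be sketched as a quick back-of-the-envelope calculation, assuming the streams crossing an inter-switch link share its capacity fairly:

```python
def per_stream_bandwidth(link_mbit, streams_sharing_link):
    """Fair-share approximation: streams crossing the same inter-switch
    link split its capacity evenly."""
    return link_mbit / streams_sharing_link

# A<->B and C<->D on the same switch: each stream gets the full port speed
print(per_stream_bandwidth(100, 1))  # 100.0 Mbit/s each
# Both streams crossing the single 100mbit uplink between switch#2 and #3
print(per_stream_bandwidth(100, 2))  # 50.0 Mbit/s each
```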
At my house, I have a 5 port gigabit switch, and 3 100mbit switches. My PC, Router/File Server, and Media center connect to the gigabit switch, the 4th port goes to the wireless AP, and the 5th to the switch in my bedroom. The remaining 100mbit switch (used for the machines in the rack in my living room) is fed off the wired ports for the wireless AP.
- Apple removes DigiNotar root certificates from iOS 5
- Virus Scanner performance benchmarks
- Amazon In Talks With HP To Buy Palm
- FBI makes arrest after Johansson, Aguilera e-mails hacked
- Google Hands Wikileaks Volunteer’s Gmail Data to U.S. Government
- Blackberry service loss questions
- BlackBerry services return after historic global outage
- Microsoft Security Products Flag Google Chrome As a Virus
HP Getting Out of Computer Sales and Killing Off webOS
The Guardian is reporting that, “Why would the world’s biggest seller of PCs exit hardware sales? Because there isn’t enough profit in it. So what’s next? What do you call it when the world’s biggest PC manufacturer gets out of manufacturing PCs? Wise. Though people have been surprised by HP’s announcement on Thursday that it is getting out of all its hardware businesses – PCs, the TouchPad tablet and the smartphones that were to have followed – the inescapable conclusion is that… the new head of HP who came from the enterprise-focused SAP last September, is declining to throw good money after bad… and shifting HP’s focus towards the places where he sees profit: enterprise services.”
Seattle-PI explains that, “The overhaul will have three parts:  HP will stop making tablet computers and smartphones by October.  It will try to spin off or sell its PC business, the world’s largest. By the end of next year, HP computers could be sold under another company’s name.  The company plans to buy business software maker Autonomy Corp. for about $10 billion in one of the biggest takeovers in HP’s 72-year history.”
What does all this mean for the Linux based WebOS? TechCrunch is reporting that, “Brace yourselves, webOS fans. In the hours leading up to their Q3 conference call later today, HP has just confirmed that they will be discontinuing operations surrounding the TouchPad and all webOS phones. To quote their press release: HP reported that it plans to announce that it will discontinue operations for webOS devices, specifically the TouchPad and webOS phones. HP will continue to explore options to optimize the value of webOS software going forward…. HP’s Stephen DeWitt says “We are not walking away from webOS.” They will continue efforts to advance and perhaps license the OS, but its life as we have known it is certainly over.”
eBay Completes Acquisition of Magento
Web Pro News is reporting that, “eBay announced that it has officially completed its acquisition of Magento, which it announced in June. eBay has owned a minority stake in the company since 2010. Now, it owns the whole thing. Magento is an open source e-commerce platform…. The Magento platform serves tens of thousands of merchants worldwide and is supported by a global community of solution partners and third-party developers…. Magento is a feature-rich, open-source, enterprise-class commerce solution that offers merchants a high degree of flexibility and control over the user experience, catalog, content and functionality of their online stores. Magento Go, the company’s hosted software-as-a-service solution, provides small and growing merchants with the tools to help them succeed online – from payments to inventory management.” Terms of the deal were still not disclosed.
Google and Facebook Owe it All to Linux and Open Source
ZDnet is reporting that, “Google and Facebook owe their success largely to Linux — not the technology per se, but to the cheap innovation and mass collaboration it enables, Red Hat’s CEO says. Yes, free, as in freedom, but also free as in free beer, said Jim Whitehurst, CEO of Red Hat. Had it not been for the no cost software, open licensing and mass collaboration, all business models enabled by Linux and open source, none of the top Web 2.0 companies — including the cloud crowd — would have been able to lift off, scale and run their businesses.”
Ubuntu’s Next Unity Begins to Take Shape
PC World is reporting that, “Ubuntu 11.10 will ship with both the client and server components of Cloud Foundry, the “platform cloud” VMware open sourced this spring. On Wednesday, VMware and Ubuntu guardian Canonical announced that the next incarnation of the Linux distro – due for official release in October – will include Cloud Foundry packages built by real live Canonical engineers. Canonical claims 12 million active Ubuntu desktop users, and VMware boasts that with the Cloud Foundry client on the… [next release of Ubuntu], these millions will be only a few commands away from deploying an application on its existing Cloud Foundry service…. And with the Cloud Foundry server deployment tools bundled as well, Ubuntu users will have the ready option of building their own cloud based on the platform. Ubuntu is already the core OS behind VMware’s service.”
Linux Compliance Hits Milestone with SPDX 1.0
Linux Planet is reporting that “The issue of open source license compliance is not a difficult one to deal with if you know what to look for. That’s where the new Software Package Data Exchange (SPDX) standard comes into play. The SPDX 1.0 release is being made at the Linux Foundation’s LinuxCon event in Vancouver. SPDX is a working group of the Linux Foundation. According to the Linux Foundation, the SPDX standard defines a standard file format that lists detailed license and copyright information for a software package and each file it comprises. “SPDX solves a problem that came from big trends,” Jim Zemlin, Executive Director of the Linux Foundation told InternetNews.com. “One being the increased use of open source software to create devices and also the increased importance and complexity of software in general.”
Red Hat RHEV Freed From Windows Dependencies
PCWorld is reporting that “With the next release of its Red Hat Enterprise Virtualization (RHEV) package, Red Hat has finally rid itself of one of its most notorious dependencies, namely the use of Microsoft’s Windows Server and SQL Server. The beta of RHEV 3.0, released Tuesday, will be the first version of the virtualization package that does not require a copy of Microsoft Windows Server to run the management console…. The new beta version also shows that the company has put forth considerable effort in allowing the software to handle larger workloads, which should make it competitive with another chief rival of Red Hat in the virtualization space, namely VMware.”
Not Quite Dead Yet, Symbian Gains a Big Update
Gigaom is reporting that, “Facing a stark decline in handset sales, Nokia continues to support its current smartphone users with a major software update on Thursday. The Symbian Anna release is available for download and supports the Nokia N8, C6-01, C7 and E7 handsets. The updated firmware, which includes a number of new features and an improved user interface, follows an earlier but minor update that Nokia pushed to handsets back in February. And the company isn’t done yet with Symbian, as earlier this week, video surfaced showing the next software update.”
Non-profit Group Releases Open Source Mesh WiFi Network Software
Hot Hardware is reporting that, “The non-profit group Geeks Without Frontiers today released open source software based on an upcoming WiFi standard. It lets Linux machines be their own WiFi network, no hardware required. The software is based on the not-yet-ratified IEEE 802.11s, an extension to the 802.11 WiFi standard. 11s creates wireless “mesh” networks. Ratification is expected to happen by Q4 2011. 11s allows multiple wireless devices to connect with each other without having a hardware access point between them and to “multi-hop” to reach nodes that would otherwise be out of range.”
Backlash to Mozilla Dropping Firefox Version Number
Computer World is reporting that, “Mozilla’s decision to strip the version number from Firefox’s “About” dialog box has been greeted by a nearly unanimous thumbs down, according to a lengthy, and at times heated, debate on a company discussion list. The pushback was the second in as many months against Mozilla, which found itself the center of a late-June controversy over its apparent lack of interest in enterprise customers. Asa Dotzler, a director of Firefox, explained why Mozilla was dumping the version number. “We’re moving to a more Web-like convention where it’s simply not important what version you’re using as long as it’s the latest version…. We have a goal to make version numbers irrelevant to our consumer audience.”
Mozilla Patches 10 Serious Security Vulnerabilities in Firefox 6
eWeek is reporting that, “Mozilla fixed 10 “critical” and “high-risk” security vulnerabilities in its popular Firefox Web browser, several of which could have led to remote code execution by malicious attackers. Mozilla addressed vulnerabilities relating to memory management, heap overflows and unsigned scripts in Firefox 6, released Aug.17. The latest version arrived just two months after Firefox 5, and is more or less a cosmetic upgrade, albeit with 1,300 under-the-hood changes and fixes. Ten of the fixes closed critical or “high-risk” security flaws, according to the accompanying security advisory. Several of the bugs, if exploited, could have resulted in a remote attacker running code just by having the unsuspecting user browse on a malicious Website.”
Open Source Business Intelligence Heats Up
Enterprise Apps Today is reporting that, “Open source business intelligence is growing faster than the rest of the BI market, according to a recent study. Open source has been invading many segments of technology over the last decade. It started with Linux in the operating system space and has since spread to areas such as backup, storage and various applications. But it has taken quite some time to get going in the business intelligence software space. But that is changing. Most analysts concur that open source business intelligence… is very much on the rise.”
Recording App Audioboo Makes Its Android Effort Open Source
Paid Content is reporting that, “Audioboo, voice recording app, was once a darling of the app world. It has gone a little more quiet of late, as a rush of other apps, such as Sound Cloud, have also entered the space. Now, it is taking the Android version of its app open source, as it prepares to launch a premium, paid version of its app for the iOS public. Mark Rock, Audioboo’s founder and CEO, told paidContent that the decision to make its Android app open source was not a light one, but that it was a necessary step in managing the app for a company that only has five full-time employees.”
IBM Produces First Brain Chips
The BBC is reporting that, “IBM has developed a microprocessor which it claims comes closer than ever to replicating the human brain. The system is capable of “rewiring” its connections as it encounters new information, similar to the way biological synapses work. Researchers believe that by replicating that feature, the technology could start to learn. Cognitive computers may eventually be used for understanding human behaviour as well as environmental monitoring.”
Distrowatch is reporting the release of…
BlankOn Linux 7.0
Salix OS 13.37 “LXDE” edition
Puppy Linux 5.2.8
Bookstore – Get Linux software and books about Linux.
T-Shirts – Show your support with cool t-shirts, mugs, and more.
About Us – Introduces you to the podcast and the podcaster.
Contact – Compliments, problems, concerns, and suggestions welcomed.
Twitter Updates – Get the latest news updates and sneak peeks.
Identi.ca – An open source alternative to Twitter.
Facebook Page – Like the podcast and get the latest episodes in your friend stream.
Attackers take aim at Apple with an exploit that could brick your Macbook, or perhaps worse. Plus you need to patch against a 9 year old SSL flaw.
Plus find out about a Google bug that could wipe a site from their index, and an excellent batch of your feedback!
All that and more, on this week’s TechSNAP!
Direct Download Links:
Subscribe via RSS and iTunes:
- A nine-year-old bug discovered and disclosed by Moxie Marlinspike in 2002 allows attackers to decrypt intercepted SSL sessions. Moxie Marlinspike released a newer, easier to use version of the tool on Monday, to coincide with Apple finally patching the flaw on iPhone and other iOS devices.
- Any unpatched iOS device can have all of its SSL traffic trivially intercepted and decrypted
- This means anyone sitting near a WiFi hotspot with this new, easy-to-use tool can intercept encrypted login information (Gmail, Facebook), banking credentials, e-commerce transactions, or anything else people do from their phone.
- The bug was in the way iOS interpreted the certificate chain. Apple failed to respect the ‘basicConstraints’ parameter, allowing an attacker with an existing valid certificate to sign a certificate for any domain, a condition normally prevented by the constraint.
- There are no known flaws in SSL itself; in this case, the attacker could perform a man-in-the-middle attack by feeding the improperly signed certificate to the iPhone, which would have accepted it and used the attacker’s key to encrypt the data.
- Patch is out with a support doc and direct download links
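The chain validation logic at the heart of the bug can be modeled with a toy certificate type. There are no real signatures here, just the issuer/subject relationship and the CA flag that basicConstraints is supposed to enforce:

```python
from dataclasses import dataclass

@dataclass
class Cert:
    subject: str
    issuer: str
    is_ca: bool  # the basicConstraints "CA" flag

def chain_valid(chain, check_basic_constraints=True):
    """Walk a cert chain leaf-first; every issuing cert must have the CA
    flag set when basicConstraints is enforced (signature checks omitted)."""
    for child, parent in zip(chain, chain[1:]):
        if child.issuer != parent.subject:
            return False
        if check_basic_constraints and not parent.is_ca:
            return False  # a non-CA cert may not issue other certs
    return True

root = Cert("Root CA", "Root CA", True)
site = Cert("example.com", "Root CA", False)       # legitimate leaf cert
forged = Cert("gmail.com", "example.com", False)   # signed by the leaf!

print(chain_valid([forged, site, root]))                                  # False: rejected
print(chain_valid([forged, site, root], check_basic_constraints=False))   # True: the iOS bug
```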
- After analyzing a battery firmware update that Apple pushed in 2009, researchers found that all patched batteries, and all batteries manufactured since, use the same password
- With this password, it is possible to control the firmware on the battery
- This means that an attacker can remotely brick your Macbook, or cause the battery to overheat and possibly even explode
- The attacker can also falsify the data returned to the OS from the battery, causing odd system behaviour
- The attacker could also completely replace the Apple firmware, with one designed to silently infect the machine with malware. Even if the malware is removed, the battery would be able to reinfect the machine, even after a complete OS wipe and reinstall.
- Further research will be presented at this year’s Black Hat Security Conference
- In the meantime, researchers have notified Apple of the vulnerability, and have created a utility that generates a completely random password for your Mac’s battery.
- A glitch in Facebook allowed you to see the thumbnail preview and description of private videos posted by other users, even when they were not shared with you.
- It was not possible to view the actual videos
- Using the google webmaster tools, users were able to remove websites that did not belong to them from the Google Index
- By simply modifying the query string of a valid request to remove your own site from the Google index, changing one of the two references so that it pointed at the target URL instead, you were able to remove an arbitrary site from the Google index
- The issue was resolved within 7 hours of being reported to Google
- Google restored sites that were improperly removed from its index.
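A hedged illustration of that kind of query-string tampering; the endpoint and parameter names below are hypothetical, not Google's actual ones. The point is that the server trusted a value in the query string instead of re-checking ownership:

```python
from urllib.parse import urlparse, parse_qs, urlencode

def retarget(removal_request_url, victim_site):
    """Swap one of the two site references in a (hypothetical) removal
    request URL, pointing the removal at someone else's site."""
    parts = urlparse(removal_request_url)
    params = parse_qs(parts.query)
    params["urlt"] = [victim_site]  # hypothetical 'URL to remove' parameter
    return parts._replace(query=urlencode(params, doseq=True)).geturl()

# Original request removes your own site; the tampered one targets a victim
tampered = retarget(
    "https://example.invalid/removals?siteUrl=http://mysite.example&urlt=http://mysite.example",
    "http://victim.example")
print(tampered)
```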
- Improper input validation and output sanitization allowed attackers to inject code into their Skype profile
- By entering HTML and JavaScript into the ‘mobile phone’ section of your profile, you could cause anyone who had you on their friends list to execute the injected code.
- This vulnerability could have allowed attackers to hijack your session, steal your account, capture your payment data, and change your password
Q: (Sargoreth) I downloaded Eclipse, and I didn’t bother to verify the MD5 hash they publish on the download page; how big a security risk is this?
A: Downloadable software often has an MD5 hash published along with the downloadable file, as a measure to allow you to ensure that the file you downloaded is valid. Checking the downloaded file against this hash can ensure that the file was not corrupted during transfer. However, it is not a strong enough indicator that the file has not been tampered with: if the file was modified, the MD5 hash could just as easily have been updated along with it. In order to be sure that the file has not been tampered with, you need a hash that is provided out of band, from a trusted source (the FreeBSD Ports tree comes with the SHA256 hashes of all files, which are then verified once they are downloaded). SHA256 is much more secure, as MD5 has been defeated a number of times, with attackers able to craft two files with matching hashes. SHA-1 is no longer considered secure enough for cryptographic purposes. It should also be noted that SHA-512 is actually faster to calculate than SHA256 on 64bit hardware; however, it is not as widely supported yet. The ultimate solution for ensuring the integrity of downloadable files is a GPG signature, verified against a trusted public key. Many package managers (such as yum) take this approach, and some websites offer a .asc file for verification. A number of projects have stopped publishing the GPG signatures because the proportion of users who checked the signature was too low to justify the additional effort. Some open source projects have had backdoors injected into their downloadable archives on official mirrors, such as the UnrealIRCd project.
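Checking a download against a published digest is a few lines of Python; the file path and expected digest are whatever the project publishes, and the digest must come from a trusted, out-of-band source for the check to mean anything:

```python
import hashlib
import hmac

def file_matches_hash(path, expected_sha256):
    """Hash the downloaded file in 64 KB chunks and compare against the
    published hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    # constant-time comparison is overkill for this use, but a good habit
    return hmac.compare_digest(digest.hexdigest(), expected_sha256.lower())
```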
Q: (Christopher) I have a Windows 7 laptop and an Ubuntu desktop; what would be a cheap and easy way to share files between them?
A: The easiest and most secure way is to enable SSH on the Ubuntu machine, then use an SFTP client like FileZilla (for Windows, Mac and Linux), and just login to your Ubuntu machine using your Ubuntu username/password. Alternatively, if you have shared a folder on your Windows machine, you should be able to browse to it from the Nautilus file browser in Ubuntu. Optionally, you can also install Samba to allow your Ubuntu machine to share files with Windows; it will appear as if it were another Windows machine in your Windows ‘network neighbourhood’.
Q: (Chad) I have a network of CentOS servers, and a central NFS/NIS server, however we are considering adding a FreeNAS box to provide ZFS. I need to be able to provide consistent centralized permissions control on this new file system. I don’t want to have to manually recreate the users on the FreeNAS box. Should I switch to LDAP?
A: FreeNAS is based on FreeBSD, so it has a native NIS client you can use (ypbind) to connect to your existing NIS system. This would allow the same users/groups to exist across your heterogeneous network. You may need to modify the /etc/nsswitch.conf file to configure the order local files and NIS are checked in, and set your NIS domain in /etc/rc.conf. Optionally, you could use LDAP, again, adding some additional parameters to nsswitch.conf and configuring LDAP. If you decide to use LDAP, I would recommend switching your CentOS machines to using LDAP as well, allowing you to again maintain a single system for both Linux and BSD, instead of maintaining separate account databases. If you are worried about performance, you might consider setting the BSD machine up as an NIS slave, so that it maintains a local copy of the NIS database. The FreeBSD NIS server is called ypserv. You can find out more about configuring NIS on FreeBSD here
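If you go the NIS route, the key pieces are the nsswitch.conf lookup order and the rc.conf knobs that join the domain and start the NIS client at boot. A rough sketch (the domain name is an example; see the FreeBSD handbook for the full setup):

```
# /etc/nsswitch.conf: check local files first, then fall back to NIS
passwd: files nis
group:  files nis

# /etc/rc.conf: join the NIS domain and start ypbind at boot
nisdomainname="example.nis.domain"
nis_client_enable="YES"
```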
- Allan’s Bitcoin mining rig mined its 36th bitcoin today
- Research shows Bitcoin may be less anonymous than initially thought
- Buy Humble Bundle 3 with Bitcoins!
- Why We Are No Longer Accepting Dwolla « TradeHill
- Do It Yourself Dropbox Alternatives
- Attackers steal 8GB of data from the Italian Cybercrime unit
- Build your own 135 Terabyte storage server for under $8000
- Anonymous claims to have 1GB of stolen data from NATO and plans to release it
- Google is now actively warning users who it detects are infected with malware, especially attempts to hijack their search results
- The US Department of Defense lost 24k files via a compromised contractor
- Australian ISP’s wireless routers set up a second hidden, unprotected WiFi network
Coming up on This Week’s TechSNAP!
We’ll cover a story that really drives home how serious cell phone hijacking has gotten, and what new technology just made it a lot easier for the bad guys.
Plus find out why TrendJacking is more than a stupid buzz term, and we load up on a whole batch of audience questions!
All that and more, on this week’s TechSNAP!
Direct Download Links:
Subscribe via RSS and iTunes:
- Vodafone sells a 3G Signal Boosting appliance for home users to boost mobile reception in their homes. The device sells for 160GBP ($260 USD)
- The FemtoCell or SureSignal appliance connects to the Vodafone network via your home internet connection, and relays mobile phone signals
- The Hacker’s Choice (THC, developers of the well-known hacking tool Hydra) managed to reverse engineer the device and brute force the root password. THC has been actively working on exploiting various devices of this nature since 2009
- Once compromised, the device can be turned into a full-blown 3G/UMTS/WCDMA call interception device.
- The FemtoCell uses the internet connection to retrieve the private key of the handset that is attempting to use the cell, in order to create an encrypted connection.
- In its intended mode of operation, the FemtoCell can only be used by the person who purchased it
- The FemtoCell has a limited range of about 50 meters (165 feet)
- With a rooted device, an attacker can get the secret key of any Vodafone subscriber
- With a user’s secret key, you can decrypt their phone calls (if they are within range), but also masquerade as their phone and make calls at the victim’s expense.
- This attack also grants you access to the victim’s voicemail
- The root password on the Vodafone device was ‘newsys’
- Some question whether Vodafone should be held liable for not protecting their customers
- Quote from THC: “Who is liable if the brakes on my car malfunction? The driver or the manufacturer? Or the guys who tell us how insecure they are?”
THC Wiki page on the Vodafone device, includes diagrams
- When you visit the unofficial page for Google+ on Facebook, you are invited to allow the 3rd party app to access your facebook account (common requirement to use any facebook app)
- Specifically, this app requests access to post on your wall, allowing it to spam all of your friends, inviting them to join as well. It also requests access to all of your personal data
- You are then requested to ‘Like’ the app, and then invite all of your friends (Again, this is common with many Facebook apps, especially games, where inviting your friends can offer in-game rewards)
- Your friends then accept the invite, assuming it is legitimate because it came from you
- Now this application has managed to spread wildly and has complete access to your facebook profile, allowing it to scrape all of your personal information, as well as use your account to promote further fake and malicious applications.
- You need to watch what applications you are allowing access to your profile, and specifically which rights they are requesting. Does that game really need ‘access to your data at any time’, rather than only when you are using it? Do you trust it with access to post to your wall?
- This trend has been dubbed TrendJacking
Q: (Peter) While investigating different data centers to house our application, one of them mentioned that we should use physical servers to host our database, rather than hosting the database in a virtualized environment like VMware. Is this true?
A: There are a number of reasons that a physical server is better for a database. The first is pure I/O: in virtualization, there is always some level of overhead in accessing the physical storage medium, compared to doing it natively. There is also overhead, even with hardware virtualization, for CPU cycles, disk access, network access, etc. It is generally considered best practice to keep your database on physical hardware. That doesn’t mean you can’t virtualize it, but if you are worried about performance, I wouldn’t.
Q: (nikkor_f64) In the recent ‘usage based billing’ legal battles in Canada, the smaller ISPs are proposing to use 95th Percentile Billing, what is that?
A: 95th percentile billing is the way most carrier-grade Internet connections have been billed for as long as I have been in the business. The concept is quite simple: rather than charging the subscriber for the amount of bandwidth they use, such as pricing per gigabyte, the billing is based on peak usage. Typically, the rate of data up and down the link is measured every 5 minutes (routers count every bit as it goes through; by looking at that counter every 5 minutes and subtracting the value from 5 minutes ago, you can determine the average speed for the last 5 minutes). Then, as the name suggests, you take the 95th percentile of those values. This is done by sorting the list of measurements and deleting the top 5%; the highest measurement left is the 95th percentile, and you pay for that much bandwidth.

Some might argue: “but that is more than I actually used, my average was far less than that.” The key to why this system works is that it charges the subscriber for the peak amount of bandwidth they used, save for a small grace. This allows the ISP to properly budget for the capacity they need to serve that customer. Normally, your contract will be something like: a 5 megabit/second commitment, with 100 megabit burstable. This means you have a full 100/100 megabit connection, and you will pay for 5 megabits/second minimum at a fixed price. You will also be quoted a price for ‘overage’. If your 95th percentile is over 5 megabits, you pay the overage rate per megabit that you are over. You get a lower per-megabit rate on your commitment level, but that is a minimum: you have to buy at least that much each month, even if you don’t use it, but the more you buy, the cheaper it is. So, this means that during peak periods, you can use the full 100 megabits without having to pay extra, as long as your 95th percentile stays below 5 megabits. (5% of a month is about 36 hours, meaning you get the busiest 1 hour of each day for free.)
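The billing calculation described above can be sketched in a few lines; the sample values below are illustrative, modeling a link that idles at 4 Mbit/s but bursts to 100 Mbit/s for the busiest hour of each day:

```python
def ninety_fifth_percentile(samples_mbit):
    """Sort the 5-minute rate samples, discard the top 5%, and bill for
    the highest sample that remains."""
    ordered = sorted(samples_mbit)
    cutoff = int(len(ordered) * 0.95)  # how many samples survive the cut
    return ordered[cutoff - 1]

# A 30-day month has 8640 five-minute samples; 12 samples per burst hour
samples = [4] * (8640 - 30 * 12) + [100] * (30 * 12)
print(ninety_fifth_percentile(samples))  # → 4: the daily bursts fall inside the free 5%
```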
Q: (Justin) What would be the weaknesses of using GPG to encrypt my files before storing them in the cloud.
A: There are a few issues:
1. Key Security – You need to keep the keys safe, if they fall in to the wrong hands, then your data is no longer secure.
2. Key Management – You also have to have access to the key, wherever you are, in order to access your data. Unlike data that is protected with a simple passphrase, in order to access your data, you need the key. So if you are on your mobile and you need access to your data, how do you get access to your key? If you store a copy of your key on the mobile, is it secure? Also, if your key is lost or destroyed, then there is no way to access your data, so you have to safely back it up.
3. Key Lifecycle – How often should you change your key? How many different keys should you use? If you use multiple keys, less data is compromised in the event that one of your keys is exposed, but it also complicates Key Security and Key Management.
4. Speed – Asymmetric encryption, such as the public-key operations GPG uses, is far slower than symmetric encryption algorithms like AES. This is especially true with the newer Intel i7 processors having a dedicated AES instruction set that increases performance by about 8 times. This is why you will sometimes see a system where the data is encrypted with AES, and the AES key is then encrypted with GPG, giving you a hybrid: the strength of GPG with the speed of AES.
5. Incremental Changes – Even a one-byte edit to a file produces a completely different ciphertext, so the cloud provider cannot deduplicate the data or sync only the changed blocks; every small change means re-uploading the whole encrypted file.
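The incremental-changes problem can be demonstrated with a toy sketch (stdlib-only; the keystream function is a stand-in for a real cipher like AES, and the function names are invented for the example). GPG generates a fresh random session key for every encryption, so even an unchanged file yields an entirely different ciphertext each time:

```python
import hashlib
import secrets

def keystream(key, length):
    """Toy CTR-style keystream (SHA-256 of key || counter); a stand-in
    for a real cipher, used only to keep this sketch self-contained."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def envelope_encrypt(plaintext):
    """Envelope encryption: a fresh random session key per message.
    Real GPG would also encrypt this session key to the recipient's
    public key; here we simply return it alongside the ciphertext."""
    session_key = secrets.token_bytes(32)
    ks = keystream(session_key, len(plaintext))
    return session_key, bytes(p ^ k for p, k in zip(plaintext, ks))

data = b"spreadsheet contents " * 200
key1, ct1 = envelope_encrypt(data)
key2, ct2 = envelope_encrypt(data)

# Identical plaintext, yet the two ciphertexts share essentially nothing,
# so deduplication and rsync-style delta uploads see two unrelated files.
print(ct1 == ct2)  # False
```

This is exactly what you want for security, and exactly what you don't want for efficient cloud sync.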
- After 4 Years… New PuTTY update released!
- The fanless spinning heatsink: more efficient and immune to dust
- Follow-Up: More on Stuxnet
- “Artist” adds spyware to Apple Store computers to photograph customers
- Follow-Up: “Artist” Gets Secret Service Visit Over Apple Store Webcam Spying
- Silk Road is still kickin’, here’s a review
- Bitcoin Mining Update: Power Usage Costs Across the US
- Canadian Bitcoin Exchange, Buy BTC via Electronic Bill Payment
- Chris’ bitcoin linkroll on Pinboard
Download & Comment:
We’ll cover the dirty details of a Facebook flaw that exposes your private account info to snoops, look into the privacy issues around “Smart Meters” and discuss a few big tech rivals coming together to fight a bad law.
Plus Allan shares one of his many war stories in our first installment of our continuing series!
Topic: Facebook app platform flaw exposes personal data
- An older form of user authentication used by some Facebook apps returned a token that could be used for both read and write access to a user’s account
- Access to personal information
- Write wall posts and private messages
- Send invites and RSVPs
- Access photographs and other objects
- These tokens can leak via the HTTP referrer field or be handed to advertisers and other third parties you never authorized, when they should only be usable by the application you did authorize
- The flaw was fixed when Facebook switched to the standardized OAuth API, which works differently and requires public and private keys, but the old API is still supported to avoid breaking existing apps
- Facebook is phasing out the old method: by September 1st all apps must use OAuth 2.0, and by October 1st all apps must have an SSL certificate.
- Changing your password invalidates all old tokens
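The referrer leak works roughly like this (a hedged sketch; the URL, domain, and parameter names are invented for illustration). If the page URL carries the token in its query string, the browser sends that full URL as the Referer header whenever the page loads a third-party resource such as an ad banner or tracking pixel:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical legacy-style app URL: the access token rides in the query string
page_url = "https://apps.example.com/some-game/canvas?auth_token=abc123&user=42"

# When the page embeds a third-party ad or image, the browser sends the
# full page URL above as the Referer header on that request.
referer_header = page_url

# The third party only has to parse the header it was handed:
leaked = parse_qs(urlparse(referer_header).query).get("auth_token", [None])[0]
print(leaked)  # abc123 -- a read/write token from a user who never authorized them
```

This is why the move to OAuth matters: token handling stops depending on keeping a URL secret.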
Topic: Google and Facebook oppose California Law against tracking cookies
- The companies argue the law “would create an unnecessary, unenforceable and unconstitutional regulatory burden on Internet commerce”
- Would require sites to allow users to opt-out of storing information such as:
- date and hour of online access
- the location from which the information was accessed
- The device and its operating system used for access
- IP addresses
- Would kill Google Analytics and other such products
- Google also claims it would reduce online safety and increase fraud
- Many e-commerce providers use details such as where the user is located, previous access patterns, referral information etc, to help determine the validity of online transactions.
- The Federal Trade Commission and Department of Commerce have also promoted the idea of self-regulation
Topic: Is your electric meter spying on you?
- The California Public Utilities Commission proposes new privacy regulations to prevent utilities from sharing or selling your smart meter data
- Such data could be used to build a detailed behavioural profile of you
- when you wake up
- when you shower
- when you leave for and return from work
- when you go on vacation
- Where is this information stored? How is it protected? Could it be compromised like PSN/SOE was?
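As a toy illustration of how much a load profile gives away (the readings and threshold below are invented for the example), even coarse hourly data reveals when a household wakes up:

```python
# Hypothetical hourly kWh readings for one day, midnight to 23:00
hourly_kwh = [0.2] * 6 + [1.5, 2.0] + [0.3] * 9 + [1.8, 2.2, 1.9, 1.1, 0.9, 0.4, 0.3]

# Baseline: a low-usage reference level (the 25th-percentile reading)
baseline = sorted(hourly_kwh)[len(hourly_kwh) // 4]

# First hour where usage jumps well above baseline: someone got up
wakeup_hour = next(h for h, kwh in enumerate(hourly_kwh) if kwh > 3 * baseline)
print(wakeup_hour)  # 6 -- the 06:00 reading: lights, kettle, electric shower
```

Real smart meters sample far more often than hourly, so the same trivial analysis resolves individual appliances, not just wake-up times.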
War Story: When it goes wrong, it goes very wrong
- Background: We were co-located with the local cable company, which had been offering data center services for years and was rapidly expanding, becoming a major force in the area. We particularly liked the data center because it was only a block from my house, handy in the event of a problem. However, after a few years there, the local cable company was bought out by a major national cable operator. The new company decided that data center services were outside its business focus and requested that all customers find a new home.
- So we signed a contract with a new data center and arranged with our customers to physically move half of our equipment over two consecutive weekends. In preparation for this we shifted load away from the gear we were going to move and adjusted our router configuration, leaving the redundant router as the primary at the old location. However, to save reconfiguration time during the final phase of the move, the new configuration on the backup router was only the ‘running’ config, not the ‘boot’ config. The boot config was set up so that when the router was physically moved the next weekend, it would be ready to go online immediately.
- Things were all going according to plan until about an hour before we were scheduled to begin our move. A massive power event knocked out utility power to about a 40-block radius that included the data center. This was obviously nothing to be concerned about; this was a real data center, with redundant battery backups and a generator system.
- The servers kicked over to the battery backup, and things went well for the first few minutes. Once the system decided that utility power was not going to be restored quickly, it spooled up the generator, which came online cleanly and started generating power. The system then attempted to switch to generator power, but this failed, so it fell back to battery power and sent out an alarm to the data center staff. The system attempted the transfer again, but again power was not getting from the generator to the battery backup system.
- Eventually, after about 15 minutes, the batteries ran flat and the entire data center went dark. The catastrophic loss of power also took out my home Internet connection from the same provider. Now all of my servers were down, but I was unaware of the issue because my own Internet connection was down as well. I quickly became aware of the situation when my offsite monitoring started pinging my phone.
- I called the ISP/Data Center and they said they had suffered a power loss, this explained why my home internet was down, but surely the data center should be fully functional. It wasn’t.
- Once utility power was restored after a total of about 30 minutes, things started to come back online, however because our routers and servers had modified ‘boot’ configs in preparation for the move that was going to happen later that day, things did not come back online cleanly. We rushed to the data center and started reconfiguring the gear that was to remain in the old data center for another week.
- Root Cause Analysis:
- Due to lack of maintenance by the new management of the data center, the transfer cables between the generators and the battery banks had corroded and failed to transfer power
- Due to the closing of the business unit, the data center was understaffed to deal with the catastrophic event
- Lessons Learned:
- Don’t change your boot config until the last possible minute. The idea was to save time at the far end during the move, but it ended up costing us when the power prematurely changed our configuration
- Internal directory services need to be redundant; reconfiguring the servers took an excessive amount of time because the servers were unable to look up user-to-UID mappings
- Make sure your SSH server does not wait for DNS
- When a data center is going downhill, GET OUT. We were one of the first customers to leave, but we wish we had left 2 weeks sooner.
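On the SSH-and-DNS lesson above: sshd in many OpenSSH versions does a reverse-DNS lookup on every connecting client by default, so when the resolver is unreachable (as it was here) each login hangs until the lookup times out. The fix is one line (a config sketch, assuming OpenSSH):

```
# /etc/ssh/sshd_config
UseDNS no    # do not block logins on reverse-DNS lookups of the client
```

Reload sshd after the change; existing sessions are unaffected.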