Archive for the ‘PSN’ Category
Facebook is fooled again, remote controlled voting machines, and Sony has another 93,000 accounts hacked, we’ll load you up on the details!
Then – We cover your best options for pimping your home network for speed!
Direct Download Links:
Subscribe via RSS and iTunes:
- Facebook has a malicious URL scanner that checks urls linked to in posts to make sure they do not contain content that could be harmful to users
- The simplest content cloaking technique, displaying different content to different users (e.g., checking for the Facebook bot's user-agent string), fools this system
- In the example proof-of-concept attack, the URL appears to be a .jpg file and gets a thumbnail in the Facebook preview, but if you follow the link, you will be rickrolled
- Proof of Concept
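The cloaking trick above can be sketched in a few lines. This is a minimal illustration, not the researchers' actual proof of concept: the user-agent check against "facebookexternalhit" (Facebook's crawler identifier) and both URLs are placeholder assumptions.

```python
# Minimal sketch of user-agent cloaking: serve an innocuous image to
# Facebook's link scanner, but redirect real visitors elsewhere.
# The URLs here are placeholders, not the real PoC's.

def respond(user_agent: str) -> tuple:
    """Return (status, body-or-location) depending on who is asking."""
    if "facebookexternalhit" in user_agent.lower():
        # The scanner sees a harmless .jpg and generates a thumbnail.
        return (200, "harmless.jpg")
    # Everyone else gets sent to the real destination.
    return (302, "https://example.com/rickroll")

print(respond("facebookexternalhit/1.1"))  # (200, 'harmless.jpg')
print(respond("Mozilla/5.0"))              # (302, 'https://example.com/rickroll')
```

The same server-side branch defeats any scanner that identifies itself honestly, which is why Facebook's URL checker is so easy to fool.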
- Sony has suspended 93,000 accounts that were successfully accessed during a massive wave of failed login attempts.
- This suggests that Sony does not have any automated systems for slowing or blocking such brute-force attacks.
- The attack affected large numbers of users on both PSN/SEN and SOE
- While Sony claims the attackers must have had a list of username/password combinations from some other compromised site, the fact that hundreds of thousands of accounts had attempts against them, and 93,000 succeeded, suggests one of a few hypotheses:
- The attack used user data from the original sony hack (and/or users reset their passwords back to the same stolen passwords)
- The flaw in the PSN password reset system that allowed attackers to reset other users’ passwords was more widespread than first thought
- Users were the victims of the multiple phishing attempts we saw around the PSN compromise
- Sony was compromised again
- Additional Article
- Sony CISO Statement
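The automated defense Sony apparently lacked can be as simple as a sliding-window failure counter. A minimal sketch (the window size and failure budget are made-up values, not anything Sony or any standard prescribes):

```python
from collections import defaultdict, deque
import time

WINDOW = 300      # seconds of history to consider (assumed value)
MAX_FAILURES = 5  # failed attempts allowed per window (assumed value)

failures = defaultdict(deque)  # account -> timestamps of failed logins

def allow_attempt(account, now=None):
    """Return False (block the login) once an account exceeds its failure budget."""
    now = time.time() if now is None else now
    q = failures[account]
    while q and now - q[0] > WINDOW:   # drop failures outside the window
        q.popleft()
    return len(q) < MAX_FAILURES

def record_failure(account, now=None):
    failures[account].append(time.time() if now is None else now)

# Simulate a brute-force run: one failed attempt per second.
for i in range(7):
    if allow_attempt("alice", now=i):
        record_failure("alice", now=i)
print(allow_attempt("alice", now=8))  # False: attempts 6 and 7 were already blocked
```

Even this crude throttle would have turned a "massive wave of failed login attempts" into a trickle, and 93,000 successes into far fewer.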
- As many as 25% of American voters in the 2012 election will use voting machines that can be compromised using just $10.50 worth of off-the-shelf hardware (or $26 if you want a remote control).
- This attack is the simplest yet known, as it requires far less programming or cyber-warfare skill than previous ones
- The researchers developed three different types of attack
- Programmer under oath admits computers rig elections
- Insider Attack Against Diebold Voting Machines
Dominic emails in:
YOU’RE DOING IT WRONG
Q: When building physical network topology, say you have 5x 8 port switches, are you best to connect the router to port 1 of switch #1, then connect various other computers to the rest of the ports on switch #1, with the last port connecting to switch #2, which has one port to switch #3, and so on (essentially daisy chaining)? Or have one ‘master’ switch where each port of the switch connects to each of the other switches (2, 3, 4 and 5), then have the router and PCs plugged into those? (I know it’s a bit overkill for a home network, but it’s just in theory, as I’ve had to deal with things like network loops before and I’m wondering if there is any real advantage between the two methods.)
A: The second setup you described is a proper ‘hierarchical networking model’, which usually consists of three layers. The first layer is the Access Layer, where individual computers are connected to the network; this is typically just a (relatively) low-end switch. The next layer is the Distribution Layer, where a lot of routers and firewalls do their work; it usually also acts as the separation between departments, locations, and regions. Typically, computers on the same Access Layer switch can reach each other directly without going through a router. The top layer of the network is the Core Layer, the fastest part of the network, where data is exchanged between the different Distribution Layers.

In your more limited setup, the ‘master’ switch would be the Core Layer, exchanging traffic between each of the Access Layer switches. However, for your home this may not be the best setup. If all of the switches are 100mbit, then the links between the Core Layer switch and the Access Layer switches can become a bottleneck. For example, if you had 2 pairs of clients communicating with each other on the same switch (so 4 machines, A<->B and C<->D), they could each communicate at 100mbit/second. However, if A and C are on Access Layer switch #2, and B and D are on Access Layer switch #3, then the bandwidth between #2 and #3 is limited to 100mbit total, so each stream would only get 50mbit/sec. On the other hand, if A and B are on one switch, and C and D are on another, then no data is exchanged through the Core Layer at all. So a number of factors, especially your traffic patterns, must be considered when setting up your network topology. You do not have to worry about creating ‘loops’ as long as each switch has only a single path to each other switch.

Higher-end (managed) switches support ‘STP’ (Spanning Tree Protocol), which allows them to avoid loops even when they have multiple paths, while still adapting and using one of the extra paths if the preferred path is disconnected.
At my house, I have a 5 port gigabit switch, and 3 100mbit switches. My PC, Router/File Server, and Media center connect to the gigabit switch, the 4th port goes to the wireless AP, and the 5th to the switch in my bedroom. The remaining 100mbit switch (used for the machines in the rack in my living room) is fed off the wired ports for the wireless AP.
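The bandwidth math in the answer above reduces to a one-line fair-share calculation. A quick sketch of it (the 100mbit link speed and the stream placements come straight from the example):

```python
LINK_MBIT = 100  # capacity of each inter-switch uplink, from the example

def per_stream_mbit(streams_on_uplink):
    """Fair-share bandwidth for each stream crossing one shared uplink."""
    if streams_on_uplink == 0:
        # Nothing crosses the uplink: traffic stays on the local switch
        # and each stream gets full wire speed.
        return LINK_MBIT
    return LINK_MBIT / streams_on_uplink

# A<->B and C<->D both cross the link between switch #2 and switch #3:
print(per_stream_mbit(2))  # 50.0 Mbit/s each
# A and B on one switch, C and D on another: no uplink traffic at all:
print(per_stream_mbit(0))  # 100 Mbit/s each, switched locally
```

This is why keeping chatty machines on the same Access Layer switch matters more than the topology diagram itself.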
- Apple removes DigiNotar root certificates from iOS 5
- Virus Scanner performance benchmarks
- Amazon In Talks With HP To Buy Palm
- FBI makes arrest after Johansson, Aguilera e-mails hacked
- Google Hands Wikileaks Volunteer’s Gmail Data to U.S. Government
- BlackBerry service loss questions
- BlackBerry services return after historic global outage
- Microsoft Security Products Flag Google Chrome As a Virus
The guys focus on the recent major network compromises and outages, and what was at the core of each failure: Sony’s PSN and SOE attacks, and the recent Amazon EC2 outages. What do these very separate events have in common?
Find out what simple mistakes snowballed into full-on network meltdowns. Plus the EU’s nutty plans to convince websites to prompt every user to sign a EULA for their cookies!
Topic: SOE Breached as well, 24 million records stolen
- Old database from 2007 compromised, 12,700 credit cards with expiry dates and 10,700 direct debit accounts
- Old data was not destroyed, why?
- Was this data not encrypted, as Sony claims the PSN credit card database was?
- most of these cards are likely expired, but some banks use extended expiration dates
- direct debit accounts are likely more at risk, although harder to exploit
- Sony says that PSN and SOE are isolated systems, but it seems the attacks are related
- Data was stolen as part of the original compromise on April 16-17th (earlier than previously reported), not a separate compromise
- If the data is separate, how were both databases compromised?
- If the data is not isolated, why were SOE customers not notified weeks ago when the breach was discovered? More attempted cover-up by Sony.
- SOE passwords are hashed (no specifics on algorithm or if they were salted)
- Data includes: name, address, e-mail, birthdate, gender, phone number, username, and hashed password
- Unconfirmed rumours that the credit card lists have been offered for sale, or offered back to Sony
- Sony offering customers from Massachusetts free identity theft protection service, as required by state law in the event of such a breach
- It later came to light in congressional hearings in the US (which Sony declined to attend) that Sony was using outdated, known-vulnerable software, and that this fact had been reported to them by security researchers months before these attacks
- Sony says that it has added automated monitoring and encryption to its systems in the wake of the recent attacks.
Topic: Wikileaks may have forced the US Government’s Hand
- US knew that someone was hiding in the compound since at least last summer
- US was unsure who was in the compound; they believed it was UBL, but were unwilling to risk disclosing the depth of their penetration of the opposition’s security
- Classic intelligence paradox: what use is having the information if you cannot use it, when using it will expose your sources and methods?
- The wikileaks release of Guantanamo documents exposed the US’s penetration of UBL’s courier network
- US likely decided to move immediately to avoid squandering the opportunity
Topic: Stupid EU law of the week
- The law will essentially result in users being met with a mini-EULA asking them to opt in to cookies before entering every site on the internet
- Law has a specific provision to allow cookies to be used to track the contents of your shopping cart
- Cookies are an important part of web applications. HTTP is stateless, and cookies are the easiest and most convenient way to maintain state
- Controls for cookies are best left to the browser, which decides and enforces policies on cookies
- There already exists the ‘same-domain’ policy in all browsers, cookies can only be read by the site that set them
- There exists a better alternative already supported by Google and Mozilla, the DNT (Do Not Track) opt-out system asks advertisers to not use or not collect behavioural data. Google’s system works slightly differently but accomplishes the same goal.
- This is yet another example of governments passing laws without considering the technical implications of their implementation. Governments seem to purposefully avoid consulting actual experts and instead hire consultants that will agree with their position.
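The "HTTP is stateless" point above is easy to demonstrate with the standard library: the server hands the browser a cookie, and the browser echoes it back on every later request, which is the only way the server recognizes a returning visitor. The session value here is a placeholder:

```python
from http.cookies import SimpleCookie

# Server side: issue a session identifier via a Set-Cookie header.
out = SimpleCookie()
out["session_id"] = "abc123"          # placeholder value, not a real session
out["session_id"]["httponly"] = True  # keep it away from page scripts
out["session_id"]["path"] = "/"
print(out.output())  # e.g. Set-Cookie: session_id=abc123; Path=/; HttpOnly

# Next request: parse the Cookie header the browser sends back.
# Without this echo, every request would look like a brand-new visitor.
incoming = SimpleCookie("session_id=abc123")
print(incoming["session_id"].value)   # abc123
```

This round trip is exactly what the proposed law would gate behind an opt-in prompt, and why browser-level controls are the more natural place for the policy.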
Topic: Image authentication system cracked
- Digital SLR camera technology that signs photos with a private key when they are taken to allow their originality to be verified.
- The image and the meta data are both hashed with SHA-1 (this is possibly insufficient, SHA-256 or better should be used for cryptographic security and future proofing)
- The two hash values are then encrypted separately using a 1024-bit RSA key (again, insufficient key size, even SSL requires 2048 bit keys now) and stored in the EXIF data
- The verification software then validates the signature and compares the hashes
- Very similar system with a similar flaw found in the Canon Original Data Security system. Neither Canon nor Nikon have responded or indicated they will address the issues
- ElcomSoft managed to extract the private key and sign forged images that then passed verification
- It seems all Nikon cameras use the SAME key, not separate keys per camera, so once the key is exposed, the entire system is compromised, not just the single camera
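The structural flaw is worth seeing in miniature: with one private key baked into every camera, extracting it once lets an attacker sign arbitrary forgeries that pass verification everywhere. This toy uses textbook RSA with tiny demonstration numbers and an unpadded hash, purely to illustrate the shared-key problem; it is nothing like the real firmware and is NOT secure cryptography:

```python
import hashlib

# Toy key pair "baked into every camera" (textbook RSA, n = 61 * 53).
# Real keys are 1024+ bits and real signatures pad the hash.
n, e, d = 3233, 17, 2753

def sign(image: bytes) -> int:
    """Camera side: sign the image hash with the shared private exponent d."""
    h = int.from_bytes(hashlib.sha256(image).digest(), "big") % n
    return pow(h, d, n)

def verify(image: bytes, sig: int) -> bool:
    """Verifier side: recompute the hash and check it with the public e."""
    h = int.from_bytes(hashlib.sha256(image).digest(), "big") % n
    return pow(sig, e, n) == h

photo = b"original image data"
print(verify(photo, sign(photo)))                      # True
# An attacker who extracts (n, d) from ANY camera can do the same:
print(verify(b"forged image", sign(b"forged image")))  # True: forgery verifies
```

Per-camera keys would have limited the damage of one extraction to one camera; a shared key makes a single ElcomSoft-style extraction fatal to the whole system.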
Topic: Amazon Post Mortem, some data loss
- Original failure was caused by network operator error
- Failure caused some data loss, a small portion but still significant
- Online services hosted on EC2, such as Chartbeat, lost data
- Replica system had no rate limiting, so when a large number of EBS volumes failed, the creation of replicas to replace them overloaded the centralized management system (the only shared part of the EBS infrastructure)
- All Availability zones ran out of capacity, new replicas of data could not be created
- EBS nodes that needed to create replicas, as well as EC2 and RDS nodes backed by them, became ‘stuck’ waiting for capacity to store replicas. Affected about 13% of all nodes in the availability zone.
- Create Volume API calls have a long timeout, caused thread starvation as the requests continued to back up on the shared centralized management system (EBS Control Plane)
- The overload of the control plane caused all EBS nodes in US-EAST to experience latency and higher error rates
- To combat this, amazon disabled all ‘Create Volume’ API calls to restore service to the unaffected Availability zones
- EBS control plane again became overwhelmed with other API calls caused by the degradation of the affected availability zone; all communications between the broken EBS volumes and the control plane were disabled to restore service to other customers
- Lessons going forward:
- Rate limiting on all API calls
- Limit any one availability zone from dominating the control plane
- Move some operations into separate control planes in each availability zone
- Increase stand-by capacity to better accommodate growth and failure scenarios
- Increase automation in network configuration to prevent human error
- Additional intelligence to prevent and detect ‘re-mirroring storms’
- Increase back off timers more aggressively in a failure scenario
- Focus on re-establishing connections with existing replicas instead of making new ones
- Educate customers about using multiple-AZ (Availability Zone) setups to reduce the impact of partial failures of the cloud
- Improve communications and Service Health Monitoring tools
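The "increase back off timers more aggressively" lesson has a standard shape: exponential backoff with random jitter, so thousands of recovering nodes do not all retry in lockstep and re-overload the control plane. A minimal sketch (the base delay and cap are invented values, not Amazon's):

```python
import random

BASE, CAP = 1.0, 60.0  # seconds; assumed values for illustration

def backoff(attempt, rng=random.random):
    """Delay before retry N: a random point in [0, min(CAP, BASE * 2^N)].

    Doubling the ceiling each attempt backs off aggressively; the random
    factor ("full jitter") spreads simultaneous retries apart in time.
    """
    return rng() * min(CAP, BASE * (2 ** attempt))

delays = [round(backoff(n), 2) for n in range(6)]
print(delays)  # delay ceilings grow 1, 2, 4, 8, 16, 32 s; actual values are random
```

Without the jitter term, every ‘stuck’ EBS node would wake at the same instant and recreate exactly the re-mirroring storm the post-mortem describes.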