LinuxPlanet Casts

Media from the Linux Moguls

Archive for the ‘dns’ Category

How Malware Makes Money | TechSNAP 31

without comments

post thumbnail

The FBI shuts down a cybercrime syndicate, and we’ll tell you just how much profit they were bringing in.

Plus we’ll cover how to securely erase your hard drive, Xbox Live’s minor password leak, and how researchers remotely opened prison cell doors, in my own state!

All that and more, on this week’s episode of TechSNAP!

Thanks to:
GoDaddy.com: Use our code TechSNAP10 to save 10% at checkout, or TechSNAP20 to save 20% on hosting!


Pick your code and save:

  • techsnap7: $7.49 .com
  • techsnap10: 10% off
  • techsnap20: 20% off 1, 2, or 3 year hosting plans
  • techsnap40: $10 off $40
  • techsnap25: 25% off new Virtual DataCenter plans


    Direct Download Links:

    HD Video | Large Video | Mobile Video | MP3 Audio | OGG Audio | YouTube

    Subscribe via RSS and iTunes:

       

    Show Notes:

    FBI takes out malware operation that illicitly made 14 million dollars

    • The malware was said to have infected as many as 4 million computers in 100 countries
    • At least 500,000 infected machines in the USA alone
    • Operation Ghost Click resulted in indictments against six Estonian nationals and one Russian national. The Estonians were taken into custody by local authorities, and the US is seeking to extradite them.
    • The malware, called DNSChanger, changed the user’s DNS servers to rogue servers run by the botnet operators, allowing the attackers to perform man-in-the-middle attacks against any site they wished (a quick resolver check is sketched after this list).
    • The attackers redirected all traffic related to Apple and iTunes to a site that sold fake Apple software and pirated music.
    • The attackers also stole traffic from legitimate advertising networks and replaced it with their own network, charging advertisers for their ill gotten traffic.
    • The malware also blocked Windows Update and most known virus scanners and help sites.
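
    As a quick check for infection, you can compare a machine’s configured resolvers against the rogue DNSChanger address ranges. A minimal Python sketch follows; the ranges below are the ones reported in coverage of the takedown, so verify them against the FBI’s published list before relying on them.

        # Flag resolvers that fall inside the reported rogue DNSChanger
        # ranges (verify the ranges against the FBI advisory).
        import ipaddress

        ROGUE_RANGES = [ipaddress.ip_network(n) for n in (
            "85.255.112.0/20", "67.210.0.0/20", "93.188.160.0/21",
            "77.67.83.0/24", "213.109.64.0/20", "64.28.176.0/20",
        )]

        def is_rogue(resolver_ip):
            addr = ipaddress.ip_address(resolver_ip)
            return any(addr in net for net in ROGUE_RANGES)

        # Check the resolvers this machine is actually using.
        with open("/etc/resolv.conf") as f:
            for line in f:
                if line.startswith("nameserver"):
                    ip = line.split()[1]
                    print(ip, "ROGUE" if is_rogue(ip) else "ok")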

    Pastebin of Xbox Live IDs and passwords published

    • The pastebin contained 90 gamertags, passwords, and possibly email addresses
    • Microsoft says that they do not believe their network was compromised, and that this list is the result of a small-scale phishing attack
    • The size of the credential dump seems to support that conclusion
    • Regardless, it is recommended that you change your Xbox Live password, along with the password on any other service that shared it, especially the email account tied to your Xbox Live ID.

    Researchers Uncover ‘Massive Security Flaws’ In Amazon Cloud

    • The vulnerability (since fixed) allowed an attacker to completely take over administrative rights on another AWS account, including starting new EC2 instances and S3 storage, and deleting existing instances and storage
    • An attacker could have run up a huge bill very quickly, and it would appear legitimate.
    • Using EC2 to crack passwords becomes even more effective when someone else is paying for your instances
    • The vulnerability was exploited using an XML signature wrapping attack, which allowed the researchers to modify a signed message while it still verified as unmodified (a toy illustration of the technique follows this list).
    • Amazon said “customers fully implementing the AWS security best practices were not susceptible to these vulnerabilities”
    • Previous Article about Amazon AWS Security
    • The previous article mostly covers vulnerabilities created by users of AWS, including people publicly publishing AMIs with their SSH keys still in them.
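
    To make the signature-wrapping idea concrete, here is a toy Python sketch (not real XML-DSig, and not the researchers’ code): the verifier checks a MAC over the element carrying the signed ID, while the application naively acts on the first Body element it finds, so a signed message can be rearranged without breaking verification.

        # Toy XML signature wrapping: verification passes, yet the
        # application processes an attacker-supplied element.
        import hashlib
        import hmac
        import xml.etree.ElementTree as ET

        KEY = b"shared-secret"

        def mac(elem):
            return hmac.new(KEY, ET.tostring(elem), hashlib.sha256).hexdigest()

        def verify(root):
            # Verifier: find the element whose ID is referenced, check its MAC.
            signed = root.find(".//*[@ID='%s']" % root.get("signed-ref"))
            return hmac.compare_digest(root.get("signature"), mac(signed))

        def process(root):
            # Application: naively acts on the first <Body> in document order.
            return root.find(".//Body").get("action")

        # Legitimate message: the signature covers the Body with ID="b1".
        body = ET.Element("Body", ID="b1", action="read")
        msg = ET.Element("Envelope", attrib={"signed-ref": "b1"})
        msg.append(body)
        msg.set("signature", mac(body))
        print(verify(msg), process(msg))        # True read

        # Wrapped message: the signed Body is moved under a wrapper and an
        # attacker-chosen Body is inserted first. Verification still passes.
        attack = ET.Element("Envelope", attrib={
            "signed-ref": "b1", "signature": msg.get("signature")})
        attack.append(ET.Element("Body", action="delete-everything"))
        ET.SubElement(attack, "Wrapper").append(body)
        print(verify(attack), process(attack))  # True delete-everything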

    Prison SCADA systems vulnerable to compromise

    • Researchers were able to compromise the SCADA systems and open or close cell doors, overload door mechanisms so they cannot be opened or closed, and disable the internal communications systems.
    • The researchers worked in one of their basements, spent less than $2,500, and had no previous experience with these technologies.
    • A Washington Times article confirms that the research was delivered to state and prison authorities, and that Homeland Security has verified the research.
    • Researchers were called in after an incident where all of the cell doors on death row at one prison opened spontaneously.
    • While the SCADA systems are not supposed to be connected to the Internet, it was found that many of them were.
    • Some were used by prison staff to browse the Internet, leaving them open to malware and other such attacks.
    • Others had been connected to the Internet so they could be remotely managed by consultants and software vendors.
    • Even without the Internet, researchers found that the system could be compromised by an infected USB drive, connected to the SCADA system either via social engineering or bribery of prison employees.

    Feedback:

    Simon asks about destroying your data before recycling/selling your used hard drives

    • There are a number of tools that will overwrite the contents of your hard drive several times in various patterns. The goal is to ensure that any data that was on the drive cannot be recovered, though there is never a guarantee that the data will be unrecoverable.
    • Allan Recommends: DBAN – Darik’s Boot And Nuke
    • It is still a very good idea to overwrite the data on your disks before you recycle or sell them. The methods have changed slightly over time; in particular, some methods such as the ‘Gutmann wipe’, which was designed for a specific type of disk encoding no longer used in modern hard drives, are no longer effective.
    • DBAN supports a number of methods:
    • PRNG Stream (recommended) – literally overwrites the entire drive with a stream of data from a pseudo-random number generator. It is recommended that you use 4 passes for medium security, and 8 or more passes for high security (a bare-bones sketch of this approach appears after this list).
    • DoD 5220.22-M – the US Department of Defense 7-pass standard. The default in DBAN is the DoD Short, which consists of passes 1, 2 and 7 from the full DoD wipe.
    • RCMP TSSIT OPS-II – the Canadian government’s “Technical Security Standard for Information Technology” media sanitization procedure (8 passes).
    • Quick Erase (not recommended) – overwrites the entire drive with 0s, in only 1 pass. This is designed for when you are going to reuse the drive internally, and is not considered secure at all.
    • DBAN also verifies that the data was overwritten properly, by reading back the data from the drive and verifying that the correct pattern is found.
    • I am not certain about the answer to your question concerning SD cards and other flash storage not in the form of a hard disk. A file-erasure utility may be the only option if the device does not actually accept ATA/SCSI commands (be careful: some USB devices pretend to accept the commands but simply ignore the ones they do not understand).
    • Simon’s method of using the shred utility (designed to overwrite an individual file) on the block device is not recommended. A proper utility like DBAN uses ATA/SCSI commands to tell the disk to securely erase itself, which involves disabling write caching and erasing unaddressable storage, such as sectors that have been remapped because they went bad.
    • Special consideration should be given to SSDs, as they usually contain more storage than advertised, and as the flash media wears out, it is replaced from this additional storage. You want to be sure your overwrite utility overwrites the no-longer-used sectors as they will still contain your data. This is why a utility that uses the proper ATA/SCSI commands is so important.
    • A utility like DBAN is also required if the disk contained business or customer data. Under legislation such as PIPEDA (Personal Information Protection and Electronic Documents Act, Canada), HIPAA and Sarbanes-Oxley (USA), the information must be properly destroyed.
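
    For illustration only, a bare-bones multi-pass PRNG overwrite could look like the Python sketch below. It is destructive, the device path is a placeholder, and unlike DBAN it neither issues the ATA/SCSI secure-erase commands discussed above nor verifies the result, so treat it as a model of the idea rather than a wipe tool.

        # DESTRUCTIVE sketch: overwrite an entire block device with
        # pseudo-random data, several passes. Run only against a disk
        # you truly intend to destroy.
        import os
        import sys

        DEV = "/dev/sdX"       # placeholder: the disk to destroy
        PASSES = 4             # 4 = medium security, 8+ = high
        CHUNK = 1024 * 1024    # write 1 MiB at a time

        fd = os.open(DEV, os.O_RDONLY)
        size = os.lseek(fd, 0, os.SEEK_END)   # total size of the device
        os.close(fd)

        for p in range(PASSES):
            with open(DEV, "wb") as disk:
                written = 0
                while written < size:
                    n = min(CHUNK, size - written)
                    disk.write(os.urandom(n))
                    written += n
                disk.flush()
                os.fsync(disk.fileno())
            print("pass %d/%d complete" % (p + 1, PASSES), file=sys.stderr)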

    Round Up:

    ZFS Server Build Progress:

    • Finalized Parts List
    • Parts Summary:
    • Supermicro CSE-829TQ-R920UB chassis
      • 8 hot-swappable SAS bays
      • dual redundant 920 watt high-efficiency PSUs
    • Supermicro X8DTU-6F+ motherboard
      • Dual Socket LGA 1366
      • 18x 240-pin DDR3 1333 slots (max 288GB RAM)
      • Intel 5520 Tylersburg Chipset, ICH10R
      • LSI 6Gb/s SAS Hardware RAID controller
      • Intel ICH10R SATA 3Gb/s SATA Controller
      • IPMI 2.0 with Virtual Media and KVM over LAN
      • Dual Intel 82576 Gigabit Ethernet Controller
    • Dual Intel Xeon E5620 processors (4x 2.4GHz, HT, 12MB cache, 80W)
    • 48GB DDR3 1333MHz ECC Registered RAM
    • 2x Seagate Barracuda XT 2TB SATA 6Gb/s 7200rpm Drives (for OS)
    • 9x Seagate Constellation ES 2TB SAS 6Gb/s 7200rpm drives (8x for RAID-Z2, 1x cold spare)
    • Adaptec RAID 6805 Controller (8 Internal drives, supports up to 256 drives, 512mb DDR2 667 cache)
    • Adaptec AFM-600 Flash Module (alternative to a BBU; provides 4GB of NAND flash powered by a supercapacitor for zero-maintenance battery backup)

    Written by chris

    November 10th, 2011 at 8:18 pm

    Rooted Trust | TechSNAP 22

    without comments

    post thumbnail

    Remember the man-in-the-middle attack on Google from last week? It turns out it was far worse than thought: we now have more details on the DigiNotar compromise, and a number of other important sites have had their DNS hijacked.

    Plus we cover the advantages of running your own DNS server at home, and how Allan and Chris got their start in the world of IT!

    All that and more, in this week’s TechSNAP!

    Direct Download Links:

    HD Video | Large Video | Mobile Video | MP3 Audio | OGG Audio | YouTube

    Subscribe via RSS and iTunes:

    Show Notes:

    DigiNotar Hack Details

    • A company spokesman said that “several dozen” certificates had been acquired by the attackers.
    • The confirmed count of fraudulently issued SSL (Secure Sockets Layer) certificates now stands at 531.
    • The first known-bad certificate, for Google.com, was created by attackers on July 10, 2011. Between July 19 and July 29, DigiNotar began discovering bad certificates during routine security operations, and blocking them.
    • But the attack didn’t come to light until August 27
    • Comodohacker said the attack against DigiNotar was payback for the Srebrenica massacre.
    • He also suggested that he wasn’t operating under the auspices of Iranian authorities, but that he may have given them the certificates.
    • Comodohacker also posted additional proof that he had the private key for the invalid google.com certificate, by using it to sign a copy of calc.exe, something a regular website SSL certificate should not be able to do.
    • The DigiNotar hack has already had wide-ranging repercussions for the 9 million Dutch citizens (in a country with a population of 17 million) who use DigiD, a government website for accessing services such as paying taxes.
    • According to news reports, the country’s lawyers have been forced to switch to fax and mail, to handle many activities that were supported by an intranet.
    • The Netherlands has also indefinitely extended the country’s tax deadline until DigiD can again be declared secure.
    • Mozilla has made this public statement: “This is not a temporary suspension, it is a complete removal from our trusted root program.” Such harsh action was taken because DigiNotar did NOT notify everyone when the breach was discovered.
    • The F-Secure Weblog says DigiNotar was hacked by someone connected to “ComodoGate”, the hacking of another certificate authority earlier this year by an Iranian attacker.

    Removing the DigiNotar Root CA certificate : Ubuntu

    Microsoft out-of-cycle patch to fix DigiNotar bogus certificates

    Hacker claims to have compromised Other SSL Cert Authorities

    • Soon after the Comodo forged-certificate hack, an Iranian using the handle Comodohacker posted a series of messages via a Pastebin account providing evidence that he carried out the attack.

    • The hacker boasted he still has access to four other (unnamed) “high-profile” CAs and retains the ability to issue new rogue certificates, including code signing certificates.

    • ComodoHacker also claims to have compromised StartSSL; however, issuance of invalid certificates was prevented by a policy change that required the CEO to manually approve each issued certificate offline. Keeping the HSM (Hardware Signing Module) offline seems like the only way to be entirely sure that invalid certificates are not issued, and a proper policy should demand more than just rubber-stamping any certificate that doesn’t say google.com on it.

    • GlobalSign, the fifth-largest CA, announced on Tuesday that it would temporarily cease issuing any new certificates. “GlobalSign takes this claim very seriously and is currently investigating,” according to a statement released by the company.

    • GlobalSign Suspends Issuance of SSL Certificates

    • BBC Article

    DNS hack hits popular websites: Telegraph, Register, UPS, etc

    • Further websites affected by the DNS hack include National Geographic, BetFair, Vodafone and Acer.
    • Instead of breaching the websites themselves, the hackers managed to change the DNS records for the various sites affected (a simple delegation-monitoring script is sketched after this list).
    • Because of the way that DNS works, it may take some time for corrected DNS entries for the affected websites to propagate worldwide – meaning there could be problems for some hours even after the fix.
    • The attack was against the domain registrars Ascio and NetNames, both owned by the same parent company.
    • Apparently the attacker managed to use an SQL injection attack to gain access to the domain accounts and change the name servers.
    • BBC Article
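
    A simple way to catch this kind of registrar-level hijack early is to poll your own domain’s NS records and compare them against the set you expect. A sketch using the third-party dnspython package (the domain and name servers are placeholders):

        # Alert when a domain's delegated name servers change
        # (pip install dnspython; older versions use resolver.query()).
        import dns.resolver

        DOMAIN = "example.com"                                 # placeholder
        EXPECTED = {"ns1.example-dns.net.", "ns2.example-dns.net."}

        answer = dns.resolver.resolve(DOMAIN, "NS")
        actual = {rr.target.to_text() for rr in answer}

        if actual != EXPECTED:
            print("WARNING: NS records for %s changed: %s"
                  % (DOMAIN, sorted(actual)))
        else:
            print("NS records match the expected set")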

    Feedback:

    Home DNS Software:

    A different kind of question for TechSNAP! : techsnap

    Round-Up:

    Bitcoin-Blaster:

    Smarter Google DNS | TechSNAP 21

    without comments

    post thumbnail

    Google and OpenDNS join forces to improve the speed of your downloads. Find out what they are doing and how it works!

    Plus Gmail suffered another man-in-the-middle attack, and Kernel.org gets some egg on its face!

    All that and more, on this week’s episode of TechSNAP!

    Direct Download Links:

    HD Video | Large Video | Mobile Video | WebM Video | MP3 Audio | OGG Audio | YouTube

    Subscribe via RSS and iTunes:

    Show Notes:

    Another SSL Certificate Authority Compromised, MitM Attack on Gmail

    • Sometime before July 10th, the Dutch certificate authority DigiNotar was compromised, and the attackers were able to issue a number of fraudulent certificates (apparently as many as 200), including a wildcard certificate for *.google.com. The attack was only detected by DigiNotar on July 19th. DigiNotar revoked the certificates, and an external security audit determined that all invalid certificates had been revoked. However, it seems that probably the most important certificate, *.google.com, was in fact not revoked. This raises serious questions and seems to point to a coverup by DigiNotar. Detailed Article Additional Article
    • Newer versions of Chrome were not affected, because Google specifically listed a small subset of CAs that would ever be allowed to issue a certificate for Gmail. This also prevents self-signed certificates, which some users fall for regardless of the giant scary browser warning (a simplified sketch of such a pinning check appears after this list). Chrome Security Notes for June
    • Mozilla and the other browser makers have taken more direct action than they did with the Comodo compromise: all major browsers have entirely removed the DigiNotar root certificate from their trust lists. With the Comodo compromise, the affected certificates were blacklisted, but the rest of the Comodo CA was left untouched. One wonders if this was done as a strong signal to all CAs that they must take security more seriously, or if DigiNotar was in fact cooperating with the Iranian government in its efforts to launch MitM attacks on its citizens. Mozilla Security Blog
    • Part of the issue is that some of the certificates issued were for the browser manufacturers themselves, such as Mozilla.org. With a fake certificate for Mozilla, it is possible that a MitM attack could block updates to your browser, or worse, feed you a spyware-laden version of the browser.
    • Press Release from Parent Company VASCO
    • Pastebin of the fraudulent Certificate
    • Allan’s blog post about the previous CA compromise, and more detail than can fit even in an episode of TechSNAP
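
    A greatly simplified version of the kind of pinning Chrome used might look like the Python sketch below: after the normal chain validation, also require the certificate’s issuer organization to be on a short allowlist. Real pinning matches public-key hashes rather than issuer names, and the allowlist values here are only examples.

        # Simplified "pinning": reject the connection if the validated
        # certificate was issued by a CA outside a short allowlist.
        import socket
        import ssl

        HOST = "www.google.com"
        ALLOWED_ISSUER_ORGS = {"Google Trust Services", "GlobalSign"}  # examples

        ctx = ssl.create_default_context()    # normal chain + hostname checks
        with ctx.wrap_socket(socket.create_connection((HOST, 443)),
                             server_hostname=HOST) as s:
            cert = s.getpeercert()

        issuer = dict(rdn[0] for rdn in cert["issuer"])
        org = issuer.get("organizationName", "")
        print("issuer:", org)
        if org not in ALLOWED_ISSUER_ORGS:
            raise SystemExit("issuer not in pinned set -- possible MitM")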

      GoogleDNS and OpenDNS launch ‘A Faster Internet’

    • The site promoted a DNS protocol extension called edns-client-subnet that would have the recursive DNS server pass along the IP subnet (not the full IP, for privacy) of the requesting client, allowing the authoritative DNS server to make a better geo-targeting decision (an example query using this option appears after this list).
    • A number of large content distributors and CDNs rely on GeoIP technology at DNS time to direct users to the nearest (and as such, usually fastest) server. However, this approach is often defeated when a large portion of users use GoogleDNS and OpenDNS, because all of those requests come from a specific IP range. As this technology takes hold, it should allow the authoritative DNS servers to target the user rather than the recursive DNS server, resulting in more accurate results.
    • Internet Engineering Task Force Draft Specification
    • This change has already started affecting users: many users of services such as iTunes had complained of much slower download speeds when using Google or OpenDNS. This was a result of being sent to a far-away node, and that node getting a disproportionate amount of the total load. Now that this DNS extension has started to come online and is backed by a number of major CDNs, it should alleviate the problem.
    • ScaleEngine is in the process of implementing this, and already has some test edns-enabled authoritative name servers online.
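
    For the curious, here is roughly what such a query looks like from the client side, using a recent version of the third-party dnspython package (the subnet, query name, and resolver address are examples):

        # Send a DNS query carrying an EDNS client-subnet option, so the
        # authoritative server can geo-target the end user rather than
        # the recursive resolver (pip install dnspython).
        import dns.edns
        import dns.message
        import dns.query

        # Pretend the end user sits in 198.51.100.0/24 (documentation prefix).
        ecs = dns.edns.ECSOption("198.51.100.0", 24)
        q = dns.message.make_query("www.example.com", "A",
                                   use_edns=0, options=[ecs])
        resp = dns.query.udp(q, "8.8.8.8", timeout=5)
        for rrset in resp.answer:
            print(rrset)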

      Kernel.org Compromised

    • Attackers were able to compromise a number of Kernel.org machines
    • Attackers appear to have compromised a single user account, and then through unknown means, gained root access.
    • Attackers replaced the running OpenSSH server with a trojaned version, likely leaking the credentials of users who authenticated against it (a sketch of a simple binary-integrity check appears after this list).
    • Kernel.org is working with the 448 people who have accounts there, to replace their passwords and SSH keys.
    • The attack was only discovered due to an extraneous error message about /dev/mem
    • Additional Article
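
    The standard defence against a quietly trojaned binary is comparing installed files against known-good hashes kept offline (package tools such as rpm -V, or tripwire-style tools, automate this). A bare-bones sketch of the idea, with a placeholder hash:

        # Compare installed binaries against known-good SHA-256 hashes
        # recorded offline; a trojaned sshd would show up as a mismatch.
        import hashlib

        KNOWN_GOOD = {
            "/usr/sbin/sshd": "0" * 64,   # placeholder: record real hashes
        }

        def sha256_of(path, chunk=1 << 20):
            h = hashlib.sha256()
            with open(path, "rb") as f:
                while True:
                    block = f.read(chunk)
                    if not block:
                        break
                    h.update(block)
            return h.hexdigest()

        for path, expected in KNOWN_GOOD.items():
            status = "OK" if sha256_of(path) == expected else "MISMATCH"
            print(status, path)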

    Feedback:

    Q: (DreamsVoid) I have a server setup, and I am wondering what it would take to setup a backup server, that would automatically take over if the first server were to go down. What are some of the ways I could accomplish this?

    A: This is a rather lengthy answer, so I have broken it apart and given one possible answer each week for the last few weeks. This week’s solution is Anycast. This is by far the most complicated and resource-intensive solution, but it is also the most scalable.

    Standard connections on the Internet are Unicast, meaning they go from a single point to another single point (typically, from a client to a specific server). There are also Broadcast (sent to all nodes in the broadcast domain, such as your local LAN) and Multicast (sent to a group of subscribed peers, used extensively by routers to distribute routing table updates, but not workable on the Internet). Anycast is different: instead of sending the packet to a specific host, the packet is sent to the nearest host (in network terms, hops, not necessarily geographic terms).

    The way Anycast works is that your BGP-enabled routers announce a route to your subnet from each of your different locations, and the other routers on the Internet update their routing tables with the route to the location that is the fewest hops away. In this way, your traffic is diverted to the nearest location. If one of your locations goes down, when the other routers stop getting updates from the downed router, they automatically change their route to the next nearest location. If you want only failover, and not to distribute traffic geographically, you can have your routers prefix their routes with their own AS number a sufficient number of times to make the backup location always more hops than the main location, so it is only used when the main is down (a toy model of this follows below).

    There are some caveats with this solution. The first is that TCP connections were never meant to be randomly redirected to another location: if a route change happens in the middle of an active session, that session will not exist at the second location, and the connection will be dropped. This makes Anycast unsuitable for long-lived connections, as routes on the Internet change constantly, routing around faults and congestion. Connections also cannot be made outbound from an Anycast IP, as the route back may end up going to a different server and the response would never be received, so servers require a regular Unicast address in addition to the Anycast address.

    A common way to work around these limitations is to do DNS (which is primarily UDP) via Anycast, and have each location serve a different version of the authoritative zone, with the local IP address of the web server. This way, users are routed to the nearest DNS server, which then returns the regular IP of the web server at the same location (this solution suffers from the same problems mentioned above in the Google DNS story).

    Another limitation is that, because of the size of the routing tables on the Internet, most providers will not accept a route for a subnet smaller than a /24, meaning that an entire 256-address subnet must be dedicated to Anycast, and your servers will each still require a regular address in a normal subnet. Announcing routes to the Internet also requires your own Autonomous System number, which is only granted to largish providers, or an ISP willing to announce your subnet on their AS number, which requires a Letter of Authorization from the owner of the IP block.
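
    To make the prepending trick concrete, here is a toy model in Python (example private AS numbers, and a caricature of BGP’s shortest-AS-path preference):

        # Toy model of AS-path prepending: remote routers prefer the
        # shortest AS path, so the padded backup route is only used once
        # the primary's announcement disappears.
        announcements = {
            "primary": [64500],                  # normal announcement
            "backup":  [64501, 64501, 64501],    # prepended three times
        }

        def best_site(visible):
            return min(visible, key=lambda site: len(announcements[site]))

        print(best_site(["primary", "backup"]))  # primary, while announced
        print(best_site(["backup"]))             # backup takes over on failure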

    ROUND-UP:

    Bitcoin-Blaster:

    Written by chris

    September 2nd, 2011 at 12:42 am

    Keeping it Up | TechSNAP 20

    without comments

    post thumbnail

    Apache and PHP have hooked up at the fail party, and we’ll share all the details to motivate you to patch your box!

    Then Microsoft takes a stab at AES, and we wrap it all up with a complete rundown of Nagios, and how this amazing tool can alert you to a potential disaster!

    All that and more, on this week’s TechSNAP!

    Direct Download Links:

    HD Video | Large Video | Mobile Video | WebM Video | MP3 Audio | OGG Audio | YouTube

    Subscribe via RSS and iTunes:

    Show Notes:


    All versions of the Apache web server are vulnerable to a resource-exhaustion DoS attack

    • A single attacker with even a slow internet connection can entirely cripple a massive Apache server
    • The attack uses the ‘Range’ header, requesting 1300 different segments of the file, causing the web server to create many separate memory allocations. The existing attack script defaults to running 50 concurrent threads of this attack, which will quickly exhaust all of the RAM on the server and drive the server load very high (the shape of the malicious header is shown after this list)
    • Apache 1.3 is past its end of life and will not receive an official patch
    • A different aspect of this bug (using it to exhaust bandwidth) was pointed out by a Google security engineer over 4 years ago
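
    For reference, the malicious header is just an absurdly long list of overlapping ranges. The sketch below builds one and sends a single request, so you can see whether a server you operate still honors it; it is a probe against your own machine, not the attack script itself.

        # Probe YOUR OWN server for the Apache Range-header weakness: a
        # vulnerable server answers 206 and expands the overlapping ranges,
        # while a patched one ignores or rejects the absurd range list.
        import http.client

        HOST = "test.example.com"   # placeholder: a server you operate
        ranges = "bytes=0-" + ",".join("5-%d" % i for i in range(1300))

        conn = http.client.HTTPConnection(HOST, 80, timeout=10)
        conn.request("HEAD", "/", headers={"Range": ranges})
        resp = conn.getresponse()
        print(resp.status, resp.reason)
        conn.close()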

    PHP 5.3.7 contains a critical vulnerability in crypt()

    • Official Bug Report
    • The crypt() function used for hashing passwords received much attention in this latest version of PHP, and a bug was inadvertently introduced: when you hash a password with MD5, only the salt is returned. When validating a login attempt, the hash of the attempt is compared to the stored hash, and since only the salt matches, the login fails. However, if the user changes their password, or a new user registers, the stored hash will be only the salt, and in that case any attempted password results in a successful login (a toy model of this failure mode follows this list).
    • PHP 5.3.7’s headline bug fix was an issue with the way blowfish crypt() was implemented on Linux (it worked correctly on BSD). Some passwords that contained invalid UTF-8 would result in very weak hashes.
    • It seems that this error was caught by the PHP unit-testing framework, so the fact that it made it into a production release means that the testing was likely not properly completed before the release was made.
    • 5.3.7 was released on August 18th. The release was pulled on August 22nd, and 5.3.8 was released on August 23rd.
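
    A toy model of the failure mode, in Python rather than PHP (the ‘crypt’ functions here are simplified stand-ins, not the real algorithms):

        # Model of the PHP 5.3.7 crypt() regression: if the stored "hash"
        # is really just the salt, the usual compare-on-login check
        # accepts ANY password for accounts created with the broken code.
        import hashlib

        def working_crypt(password, salt):
            # stand-in for a correct salted hash: salt + digest
            return salt + hashlib.md5((salt + password).encode()).hexdigest()

        def broken_crypt(password, salt):
            return salt   # the bug, modeled: only the salt is returned

        def login(stored, attempt, crypt_fn):
            salt = stored[:8]             # recover the salt from storage
            return crypt_fn(attempt, salt) == stored

        good = working_crypt("hunter2", "$1$abcd$")
        print(login(good, "hunter2", working_crypt))    # True
        print(login(good, "wrong", working_crypt))      # False

        bad = broken_crypt("hunter2", "$1$abcd$")       # created on 5.3.7
        print(login(bad, "hunter2", broken_crypt))      # True
        print(login(bad, "anything", broken_crypt))     # True: any password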

    Researchers have developed a new attack against AES

    • Researchers from a Belgian university (Katholieke Universiteit Leuven) and a French one (École Normale Supérieure), working with Microsoft Research, have developed a new attack against AES that allows an encryption key to be recovered 3 to 5 times faster than all previous attacks (the arithmetic is sketched after this list)
    • The attack would still take billions of years of CPU time with currently existing hardware
    • Full Paper with Details
    • Comments by Bruce Schneier
    • Additional Article
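
    To put numbers on “3 to 5 times faster”: the paper reports a key-recovery complexity of roughly 2^126.1 for AES-128, versus 2^128 for brute force (treat these constants as the paper’s reported figures). Quick arithmetic:

        # Scale of the biclique improvement over brute force for AES-128.
        speedup = 2 ** (128 - 126.1)
        print("speedup: about %.1fx" % speedup)       # ~3.7x

        # Even so, the attack remains purely academic.
        seconds = 2 ** 126.1 / 1e12        # at a trillion keys per second
        years = seconds / (3600 * 24 * 365)
        print("roughly %.1e years at 10^12 keys/sec" % years)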

    Feedback

    Q: (DreamsVoid) I have a server setup, and I am wondering what it would take to setup a backup server, that would automatically take over if the first server were to go down. What are some of the ways I could accomplish this?

    A: This is a rather lengthy answer, so I will actually break it apart and give one possible answer each week for the next few weeks. This week’s solution is DNS failover. For this feature, I personally use a third-party DNS service called DNS Made Easy. Once you are hosting your DNS with them, you can enable Monitoring and DNS Failover. This allows you to enter the IPs of more than one server for a DNS entry such as www.mysite.com. Only one IP is used at a time, so it is not the same as a ‘round robin’ setup; this avoids problems with sessions and other data that would need to be shared between all of the servers if they were used at the same time.

    DNS Made Easy monitors the website every minute from locations all over the world, and if the site is unreachable, it automatically updates your DNS record to point traffic to the next server on your list. It will successively fail over to each server on the list until it finds one that is up. When the primary server comes back, it can automatically switch back (the basic logic is sketched below).

    We use this for the front page of ScaleEngine.com: if the site were ever down, it would fail over to a backup server we have at a different hosting provider. This backup copy of the site is still reliant on a connection to our centralized CMS (which also uses DNS failover), and if that were down too, it fails over to a flat-HTML copy of our website that is updated once per day. This way, our website remains online even if both our primary and secondary hosting are offline, or if all 3 failover servers for the CMS are down as well.
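
    The core loop of such a failover monitor is simple. In this sketch, update_dns_record() is a hypothetical stand-in for whatever API your DNS provider exposes (services like DNS Made Easy handle all of this for you), and the addresses are placeholders:

        # Bare-bones DNS failover: point the record at the first server
        # on the priority list that answers an HTTP health check.
        import urllib.request

        SERVERS = ["192.0.2.10", "192.0.2.20", "192.0.2.30"]  # priority order

        def is_up(ip, timeout=5):
            try:
                urllib.request.urlopen("http://%s/" % ip, timeout=timeout)
                return True
            except OSError:
                return False

        def update_dns_record(name, ip):
            # hypothetical provider API call
            print("would point %s at %s" % (name, ip))

        for ip in SERVERS:
            if is_up(ip):
                update_dns_record("www.mysite.com", ip)
                break
        else:
            print("all servers down; leaving DNS unchanged")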


    Q: (Al Reid) Nagios seems to be a very good open source and widely used network monitoring software solution. Is it possible that you guys could discuss the topic of network monitoring for services, hosts, routers, switches and other uses?

    A: Nagios is an open source network monitoring system that can be used to monitor a number of different aspects of both hosts (physical and virtual servers, routers) and the services on those hosts (programs like Apache, MySQL, etc). The most basic monitoring is just pinging the host, and entering an alert state if the host does not respond, or if the latency or packet loss exceeds a specific threshold. However, the real power of a network monitoring system comes not only from alerting you (via email, text message, audible alarm) when something is down, but from actually monitoring and graphing performance over time. For example, with my MySQL servers, Nagios monitors not only that they are accessible, but graphs the number of queries per second and the number of concurrent connections. This way, if I notice higher than expected load on one of the servers, I can pull up the graph and see that, yes, a few hours ago the number of queries per second jumped by 30%, and that is obviously what is causing the additional load.

    A huge number of things can be monitored using a combination of the Nagios tools and the SNMP (Simple Network Management Protocol) interfaces exposed by many devices. For example, we monitor power utilization from our PDUs and traffic through each of our switch ports. Some of the main metrics we monitor on each server are: CPU load, load averages, CPU temperature, free memory, swap usage, number of running processes, uptime (which alerts us when a device reboots unexpectedly), free disk space, etc. We also monitor our web servers closely: the number of connections, requests per second, number of requests waiting on read or write, etc. Nagios monitoring can be taken even further; more advanced SNMP daemons on servers can list the packages that are installed, and a Nagios tool could be set up to alert you when a known-vulnerable package is detected, prompting you to upgrade it. Nagios can also monitor your SSL certificates and domain names, and alert you when they are nearing their expiration dates (Chris should have this so he doesn’t forget to renew JupiterBroadcasting.com every year).

    Nagios supports two different methods of monitoring. The first is ‘active’, which is the most commonly used: Nagios connects to the server or service, checks that it is running, and collects the performance data, if any (a minimal active check is sketched below). Nagios also supports ‘passive’ data collection, where the server or service pushes performance data to Nagios, and Nagios can trigger an alert if an update is not received within a specific time frame. This can help solve a common issue we have discussed before, where the monitoring server is a weak point in the security of the network: a single host that is able to connect to even the most secure hosts in your network. With passive monitoring, you can have secure hosts or unroutable LAN hosts push their monitoring and performance data to Nagios from behind the firewall, even when Nagios cannot connect to those hosts. Other alternatives to Nagios are Zabbix, SpiceWorks and Cacti, but I have never used them.
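
    Writing an active check of your own is easy, because the plugin contract is just an exit code (0 OK, 1 WARNING, 2 CRITICAL) plus a line of output, with optional performance data after a pipe. A minimal disk-space check (the thresholds are examples):

        #!/usr/bin/env python3
        # Minimal Nagios-style active check: the exit code signals the
        # state, and everything after the | is perfdata for graphing.
        import os
        import sys

        WARN, CRIT = 80.0, 95.0   # example thresholds: % of space used

        st = os.statvfs("/")
        used_pct = 100.0 * (1 - st.f_bavail / st.f_blocks)

        if used_pct >= CRIT:
            state, label = 2, "CRITICAL"
        elif used_pct >= WARN:
            state, label = 1, "WARNING"
        else:
            state, label = 0, "OK"

        print("DISK %s - root fs %.1f%% used | used_pct=%.1f%%"
              % (label, used_pct, used_pct))
        sys.exit(state)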


    Random SQL Injection Comic

    Round Up:

    Bitcoin Blaster:

    Written by chris

    August 25th, 2011 at 11:33 pm