LinuxPlanet Blogs

By Linux Geeks, For Linux Geeks.

Initial Support for Freedreno on Android

without comments

I now have Android-x86 running Freedreno on my IFC6410 \o/!

Freedreno on Android

The setup:

on an IFC6410 (A320 GPU) with an ancient CRT


The gralloc module carries two responsibilities: providing an interface to /dev/graphics/fb0 for creating FrameBufferNativeWindow instances, and handling buffer requests that come in through a singleton, system-wide GraphicBufferAllocator object. Each request is stamped with a usage flag to help gralloc decide how to serve it.

The current setup uses the default eglSwapBuffers()-based HWC, i.e. all-GPU composition for now:

$ dumpsys SurfaceFlinger
[...]
numHwLayers=3, flags=00000000
type    |  handle  |   hints  |   flags  | tr | blend |  format  |          source crop            |           frame           name
------------+----------+----------+----------+----+-------+----------+---------------------------------+--------------------------------
GLES | 2a2a31e0 | 00000000 | 00000000 | 00 | 00100 | 00000005 | [      0,     13,   1024,    744] | [    0,   13, 1024,  744] com.example.android.home/com.example.android.home.Home
GLES | 2a060ad8 | 00000000 | 00000000 | 00 | 00105 | 00000005 | [      0,      0,   1024,     13] | [    0,    0, 1024,   13] StatusBar
GLES | 2a0614d8 | 00000000 | 00000000 | 00 | 00105 | 00000005 | [      0,      0,   1024,     24] | [    0,  744, 1024,  768] NavigationBar

Shortlog of issues faced:

I started by getting bare kitkat-x86 up on the IFC6410 with an ARM build setup. Fast-forward through multiple tries at figuring out the build system and setting up the board-specific init scripts, and I had Mesa added to the build. The default HWC doesn't implement blank(), which caused a segfault that took some time to track down and fix. With all the parts in place, I had the boot logo, though with two boot flows due to a race condition. Disabling GPU authentication was another hack that needed to be put in.

Finally, I had Skia causing trouble while starting the UI. This turned out to be a simple fix in drm_gralloc, but it had me looking everywhere through the sources and mistaking it for a NEON-specific issue with Skia.

Quirks:

Currently, Android only renders a few windows before SurfaceFlinger crashes in gralloc::unregisterBuffer() and restarts. Also, dexopt dies, causing app installations to fail, so there aren't many apps to play around with.

TODOs:

  • Add a KMS-based Hardware Composer.
  • Add GBM support to drm_gralloc.

Here is my trello board: https://trello.com/b/bIXu6QL6/freedreno-android

Thanks Rob, Emil, #freedreno, #android-x86 and #linaro!



Written by Varad

June 27th, 2015 at 2:22 am

Posted in android,freedreno,X.Org

John Oliver Falls For Software Patent Trade Association Messaging

without comments

I've been otherwise impressed with John Oliver and his ability on Last Week Tonight to find key issues that don't have enough attention and give reasonably good information about them in an entertaining way — I even lauded Oliver's discussion of non-profit organizational corruption last year. I suppose that's why I'm particularly sad (as I caught up last weekend on an old episode) to find that John Oliver basically fell for the large patent holders' pro-software-patent rhetoric on so-called “software patents”.

In short, Oliver mimics the trade association and for-profit software industry rhetoric of software patent reform rather than abolition — because trolls are the only problem. I hope the world's largest software patent holders send Oliver's writing staff a nice gift basket, as such might be the only thing that would signal to them that they fell into this PR trap. Although, it's admittedly slightly unfair to blame Oliver and his writers; the situation is subtle.

Indeed, someone not particularly versed in the situation can easily fall for this manipulation. It's just so easy to criticize non-practicing entities. Plus, the idea that the sole inventor might get funded on Shark Tank has a certain appeal, and fits a USAmerican sensibility of personal capitalistic success. Thus, the first-order conclusion is often, as Oliver's piece concludes, maybe if we got rid of trolls, things wouldn't be so bad.

And then there's also the focus on the patent quality issue; it's easy to convince the public that higher-quality patents will make it ok to restrict software sharing and improvement with patents. It's great rhetoric for pro-patent entities to generate outrage among the technology-using public by pointing to, say, an example of a patent that reads on every Android application and telling a few jokes about patent quality. In fact, at nearly every FLOSS conference I've gone to in the last year, OIN has sponsored a speaker to talk about that very issue. The jokes at such talks aren't as good as John Oliver's, but they still get laughs and technologists upset about patent quality and trolls — but through careful cultural engineering, not about software patents themselves.

In fact, I don't think I've seen a for-profit industry and its trade associations do so well at public outrage distraction since the “tort reform” battles of the 1980s and 1990s, which were produced in part by George H. W. Bush's beloved M.C. Rove himself. I really encourage those who want to understand how the anti-troll messaging manipulation works to study how and why the tort reform issue played out the way it did. (As I mentioned on the Free as in Freedom audcast, Episode 0x13, the documentary film Hot Coffee is a good resource for that.)

I've literally been laughed at publicly by OIN representatives when I point out that IBM, Microsoft, and other practicing entities do software patent shake-downs, too — just like the trolls. They're part of a well-trained and well-funded (by trade associations and companies) PR machine out there in our community to convince us that trolls and so-called “poor patent quality” are the only problems. Yet, nary a year has gone in my adult life where I don't see some incident where a so-called legitimate, non-obvious software patent causes serious trouble for a Free Software project. From RSA, to the codec patents, to Microsoft FAT patent shakedowns, to IBM's shakedown of the Hercules open source project, to exfat — and that's just a few choice examples from the public tip of the practicing entity shakedown iceberg. IMO, the practicing entities are just trolls with more expensive suits and proprietary software licenses for sale. We should politically oppose the companies and trade associations that bolster them — and call for an end to software patents.

Written by Bradley M. Kuhn

June 26th, 2015 at 1:25 pm

Posted in Uncategorized

LinuxQuestions.org Turns Fifteen

without comments

WOW. Fifteen years ago today I made the first post ever at LQ, introducing it to the world. 15 Years. I know I’ve said it before, but 5,354,618 posts later the site and community have exceeded my wildest expectations in every way. The community that has formed around LQ is simply amazing. The dedication that the members and mod team have shown is both inspiring and truly humbling. I’d like to once again thank each and every LQ member for their participation and feedback. While there is always room for improvement, that LQ has remained a friendly and welcoming place for new Linux users despite its size is a testament to the community. Reaching this milestone has served to energize and refocus my efforts on making sure the next fifteen years are even better than the first fifteen. Visit this thread for more on how we plan to do that. We can’t do it without you.

–jeremy


Written by jeremy

June 25th, 2015 at 10:16 am

Posted in LinuxQuestions.org

HTC One M9 Review

without comments

Here’s my review from the latest episode of Bad Voltage. Note that a slightly longer version, with some pictures and a quote is available at LQ.

It’s no secret that I’m a fan of Android. As a result, I use and test a lot of different Android phones, and I plan to start actually reviewing more of them. First up is the HTC One M9. You may remember that I mentioned the One M8 when I reviewed the Nexus 5. HTC’s 2015 top-of-the-line phone builds on the same sleek design as last year’s M8, sticking with the luxurious all-metal case and 5-inch Super LCD3 1080p HD screen while incorporating some key spec improvements, such as an upgraded octa-core Snapdragon 810 processor, a 20-megapixel camera and a 2840 mAh battery. While it’s a bit heavy at 157g, especially compared to the iPhone 6 or Galaxy S6, I prefer the weight and balance HTC has created. At 5.69 by 2.74 inches, it’s about as large as I prefer a phone to be (for comparison, the iPhone 6+ is 6.22 x 3.06 and the Nexus 6 is 6.27 x 3.27). The M9 is one of the few flagship phones to still feature expandable storage via SD card, and it offers a unique Uh Oh one-year replacement program in the US. While the phone ships with Android 5.0, I’d expect a 5.1 roll-out in the next month or so. The device is priced at $649 unlocked in the US, with on-contract pricing starting at $199.

With the specs out of the way, let’s get to what’s important: how does the One M9 perform on a day-to-day basis? Let’s start with the first thing you’ll notice if you’re coming from a non-HTC phone, which is Sense 7. Sense is the UI skin that HTC puts on its Android phones. If you’re a Samsung user, it’s the equivalent of TouchWiz. My last couple of full-time phones have been from the Nexus family, and I tend to prefer the stock Android experience. That said, Sense 7 is actually quite nice. It’s clean, performs well and has a few little touches that would be welcome additions to Android proper. An interesting new feature is a home-screen widget which dynamically changes which apps are displayed within it, depending on your location (work, home, on the go). The theme generator is also pretty cool: you can take a snap of anything and the phone will analyze the image and create a full palette of colors to use with icons and app headers. Even the font and icon shapes will be altered to match the overall feel of the new theme.

While the screen doesn’t have the density or resolution of the Galaxy S6 or LG G4, its 441-pixels-per-inch screen looks better than the similarly spec’d Nexus 5’s. HTC has once again eschewed playing the numbers game here and opted for a non-2K experience, which offers almost no discernible benefit to me at this screen size while eating up more of your limited battery. While the speakers haven’t changed much since the previous version, they are still far and away the best available in any phone. The camera is one area that has had a big change since the previous model: the 4-megapixel Ultrapixel sensor has been moved to the front of the phone, and the aforementioned 20-megapixel camera now sits on the back. The phone produced quality photos in my tests, although low-light scenarios are a bit of a weak point. I did notice some shutter lag at times, but there are similar lags on my Nexus 5.

While the battery is slightly more capacious than the previous One’s and HTC estimates you should get a full day of use out of the phone, I’d say that’s ambitious. To be fair, most Android flagship phones seem to be roughly equivalent in this regard, and it’s really an area manufacturers need to focus on in my opinion. One other thing that’s changed, and this time not for the better, is the power button moving to the right-hand side of the phone. This may be a more natural place for it to be positioned, and some people seem to prefer it, but the fact that it’s the same size and shape as the volume buttons above it results in me inadvertently hitting the incorrect button at times. Its placement has also resulted in me accidentally powering the screen off. Perhaps I hold my phone in a different position than most people, but I suspect it’s something I’d get used to over time.

One frustrating thing about the phone is that, while it supports QuickCharge 2.0, which can charge the phone 60% in just 30 minutes, the charger that ships with the phone is not QuickCharge enabled. That seems ludicrous for a phone in this price range. It should also be noted that during serious use, the phone tends to get a bit hotter than other phones I’ve used.

So, what’s the Bad Voltage verdict? The One M8 was one of my favorite phones last year. The slick design of the M9 is still amazing, but I will say the competition has upped its game considerably. While the M8 had the plasticky S5 and the small iPhone to contend with, the M9 has to compete with the also well designed S6 and the newer updated iPhone 6. A flagship phone has to score well in a lot of areas for me to consider it a phone worth recommending. It has to have solid performance, gorgeous design, a camera that will capture memories accurately and expediently, last through a full day of use and be reasonably priced. That’s a tall order to be sure. I think the HTC One M9 makes the short list (along with the Samsung Galaxy S6 and if you don’t mind a giant phone, the Nexus 6 or LG G4). If you’re looking for an Android phone I’d recommend you look at those phones and pick the one that suits your personal tastes best. As the Nexus 6 is too big for me, my personal choice would currently be the One M9. As a testament to just how good the phone is, I lent my review device to an iPhone user so they could get a feel for Android. They’re no longer an iPhone user.

–jeremy


Written by jeremy

June 25th, 2015 at 9:49 am

Posted in android

Catching up!

without comments

A lot has changed since the last blog post (more than three years ago). I was happily running a successful business around Videocache till Google decided to push HTTPS really hard and enforced SSL even for video content. That rendered Videocache completely useless, as YouTube video caching was its unique selling point. Though people are still using it for other websites (whatever is supported and not HTTPS yet), I personally didn’t find it good enough to keep selling. To add to the trouble, Mozilla and friends announced that there will be free certs for everyone. That took away whatever motivation was left to keep working on Videocache. I decided to open source Videocache, and the source is now available on GitHub. If you have better ideas or you are looking forward to making things work by forging certs etc., fork it and give it a shot.

After all that mess with Videocache, I am left with a contract job which is not working out well. So, the learning hat is back on and I am trying to catch up with the tech world. I didn’t really want to get into the whole JavaScript framework mess, because there is no clear winner and there are too many of them, but it looks rather unavoidable, or Unflushable (if you remember Jeff from Coupling). I have been trying to make simple apps with AngularJS. There is little documentation and you have to dig really deep at times, but you can still make things work if you persist. Once you spend some quality time with it, you may actually start liking AngularJS. So, I did give it some time and implemented an Angular frontend for Pixomatix-Api (another learning project I am working on), and I think I sort of like it now.

In 2015, if you are a web developer, you must know how APIs work and you should be able to consume them. So, to learn to expose APIs and version them properly, I fired up a small project, Pixomatix. Being a Rails developer, you get obsessed with Rails and try to implement everything using it. Even when you want an API with 2-3 endpoints, you tend to make the horrible mistake of doing it in Rails. This kept bugging me, and a few weeks later I decided to freshen up my Sinatra memories. But working with Sinatra is not all that easy, especially if you are used to all the niceties of Rails. I dug up my attempt at implementing Videocache in Ruby and extracted a few tasks and configurations I had automated a long time ago. I ended up working a lot more on it and packaged it into a template app with almost all the essential stuff. Though I need to document it a little more, the app has got everything needed to expose a versioned API via Sinatra.

On the other hand, I tried to use the devise gem for authentication in Pixomatix. It was all good for integration with standard web apps and APIs, but it sort of failed me when I tried to version the API. Devise turned out to be a black hole when I tried to dig deeper to make things work. I tried a few other gems which supported token authentication, but they were also no good for versioning. Generally, you may not need to version the authentication part of your API, but what if you do? Since this was just a learning exercise, I was hell-bent on implementing it. So, I just reinvented the wheel and coded basic authentication (including token authentication) for the API.

That’s it for this post. I am looking forward to posting regularly about the new stuff I learn.

Written by Kulbir Saini

June 18th, 2015 at 8:17 am

Why Greet Apple’s Swift 2.0 With Open Arms?

without comments

Apple announced last week that its Swift programming language — a currently fully proprietary software successor to Objective C — will probably be partially released under an OSI-approved license eventually. Apple explicitly stated though that such released software will not be copylefted. (Apple's pathological hatred of copyleft is reasonably well documented.) Apple's announcement remained completely silent on patents, and we should expect the chosen non-copyleft license will not contain a patent grant. (I've explained at great length in the past why software patents are a particularly dangerous threat to programming language infrastructure.)

Apple's dogged pursuit of non-copyleft replacements for copylefted software is far from new. For example, Apple has worked to create replacements for Samba so they need not ship Samba in OSX. But, their anti-copyleft witch hunt goes back much further. It began when Richard Stallman himself famously led the world's first GPL enforcement effort against NeXT, and Objective-C was liberated. For a time, NeXT and Apple worked upstream with GCC to make Objective-C better for the community. But, that whole time, Apple was carefully plotting its escape from the copyleft world. Fortuitously, Apple eventually discovered a technically brilliant (but sadly non-copylefted) research programming language and compiler system called LLVM. Since then, Apple has sunk millions of dollars into making LLVM better. On the surface, that seems like a win for software freedom, until you look at the bigger picture: their goal is to end copyleft compilers. Their goal is to pick and choose when and how programming language software is liberated. Swift is not a shining example of Apple joining us in software freedom; rather, it's a recent example of Apple's long-term strategy to manipulate open source — giving our community occasional software freedom on Apple's own terms. Apple gives us no bread but says let them eat cake instead.

Apple's got PR talent. They understand that merely announcing the possibility of liberating proprietary software gets press. They know that few people will follow through and determine how it went. Meanwhile, the standing story becomes: Wait, didn't Apple open source Swift anyway? Already, that false soundbite's grip strengthens, even though the answer remains a resounding No!. However, I suspect that Apple will probably meet most of their public pledges. We'll likely see pieces of Swift 2.0 thrown over the wall. But the best stuff will be kept proprietary. That's already happening with LLVM, anyway; Apple already ships a no-source-available fork of LLVM.

Thus, Apple's announcement incident hasn't happened in a void. Apple didn't just discover open source after years of neutrality on the topic. Apple's move is calculated, which led various industry pundits like O'Grady and Weinberg to ask hard questions (some of which are similar to mine). Yet, Apple's hype is so good, that it did convince one trade association leader.

To me, Apple's not-yet-executed move to liberate some of the Swift 2.0 code seems a tactical stunt to win over developers who currently prefer the relatively more open nature of the Android/Linux platform. While nearly all the Android userspace applications are proprietary, and GPL violations on Android devices abound, at least the copyleft license of Linux itself provides the opportunity to keep the core operating system of Android liberated. No matter how much Swift code is released, such will never be true with Apple.

I'm often pointing out in my recent talks how complex and treacherous the Open Source and Free Software political climate became in the last decade. Here's a great example: Apple is a wily opponent, able to use Open Source (the co-option of Free Software) to manipulate the press and hoodwink the would-be spokespeople for Linux into supporting them. Many of us software freedom advocates have predicted for years that Free Software unfriendly companies like Apple would liberate more and more code under non-copyleft licenses in an effort to create walled gardens of seeming software freedom. I don't revel in my past accuracy of such predictions; rather, I feel simply the hefty weight of Cassandra's curse.

Written by Bradley M. Kuhn

June 15th, 2015 at 1:00 pm

Posted in Uncategorized

My Frustration with Mozilla

without comments

I recently decided to stop using Firefox as my main browser. I’m not alone there. While browser statistics are notoriously difficult to track and hotly debated, all sources seem to point toward a downward trend for Firefox. At LQ, they actually aren’t doing too badly. In 2010 Firefox had a roughly 57% market share, and so far this year they’re at 37%. LQ is a highly technical site, however, and the broader numbers don’t look quite so good. Over a similar period, for example, Wikipedia has Firefox dropping from over 30% to just over 15%. At the current rate NetMarketShare is tracking, Firefox will be in the single digits some time this year. You get the idea. So what’s going on, and what does that mean for Mozilla? And why did I choose now to make a switch personally?

First, let me say it’s not all technical. While it’s troubling that they have not been able to track down some of the memory leaks and other issues for years, Firefox is an incredibly complex piece of software and overall it runs fine for me. Australis didn’t bother me as much as it did many, nor did the Pocket integration. I understand that the decision to include EME was a pragmatic one. I think the recent additional add-ons rules were as well. Despite these issues, I remained an ardent Firefox supporter who actively promoted its adoption. Taking a step back now, though, it is surprising to see just how many of the technical decisions they’re making are not being well received by the Firefox community. I think part of that is because Firefox started as the browser of the early adopter and power user; as it gained in popularity, Mozilla felt pressure to make a more mainstream product, and recently that pressure has manifested itself in Firefox looking more like Chrome. I think they’ve lost their way a little bit technically and have forgotten what actually made them popular, but that was not enough for me to stop using Firefox.

On a recent Bad Voltage episode, we discussed some of these issues (and more), with the intention of having someone from Mozilla on the next show to give feedback on our thoughts. After reaching out to Mozilla, they not only declined to participate, they declined to even provide a statement (there is a fair bit more to the story, but it’s off record and unfortunately I can’t provide further details at this time). This made me step back a bit and reassess what I thought about Mozilla as a whole. Something I hadn’t done in a while to be honest. Mozilla used to be a place where you were encouraged to speak your mind. What happened?

For context, I held Mozilla in the highest regard. It’s not hyperbole to say that I genuinely believe the Open Web would not be where it is today without what Mozilla has been able to accomplish. I consider their goals and the Mozilla Manifesto to be extremely important to the future of the web and it would be a shame to see us lose the freedom and openness we’ve fought so hard to gain. But somewhere along the line it appears to me Mozilla either forgot who they were, or who they were changed. Mozilla’s mission is “to promote openness, innovation & opportunity on the Web”. Looking at their actions recently, and I’m not just referring to the Bad Voltage-related decision, they don’t appear willing to be open or transparent about themselves. Their responses to incidents like the Pocket one resemble the response of a large stodgy corporation, not one of the Open Source spirited Mozilla I was accustomed to dealing with.

Maybe part of the issue is my perception. Many people, myself included, look at Mozilla as a bastion of freedom; the torch bearer for the free and Open Web. But the reality is that Mozilla is now a corporation, and one with over 1,000 employees. Emailing their PR department will get you a response from someone who used to work for CNN and the BBC. As companies grow, the culture often changes. The small, scrappy, steward of the Open Web may not exist any more. At least not in the pure concentrated form it used to; I know there is a solid core of it that very much burns within the larger organization. But this puts Mozilla in a really difficult position. They are not only losing market share rapidly, but doing so to a browser that is a product of the company that used to represent the vast majority of their revenue. With both revenue and market share declining, does Mozilla still have the clout it needs to direct the evolution of the web in a direction that is open and transparent?

I am a firm believer that the web would be a worse place without Mozilla. One of my largest concerns is that many higher-level Mozillians don’t seem to think anything is wrong. Perhaps they are too close to the issue, or so focused on the cause that it’s difficult or impossible to take a step back and assess where the organization came from, where they are and where they are going. Perhaps the organization is a little lost internally… struggling with decreasing market share of their main project, less than stellar adoption on mobile, interesting projects such as Rust and Servo taking resources, and internal conflict about which direction is the best path forward. Whatever the case, it appears externally, based on the number of people leaving and the decreasing willingness to discuss anything, that something is systemically, culturally amiss.

Or perhaps I’m wrong here and everything really is fine. Perhaps this is simply the result of an organization that has seen tremendous growth and this new grown up and more corporate Mozilla really is the best organization to move the Open Web forward. I’m interested in hearing what others think on this topic. Has Mozilla lost its way and if so, how? More importantly if so, how do we move forward and pragmatically address the issue(s)? I think Mozilla is too important to the future of the web to not at least ask these questions.

NOTE: We also discussed this topic on the most recent episode of Bad Voltage. You should listen to the entire episode, but I’ve included just the Mozilla segment here for your convenience.

–jeremy

PS: I have reached out to a few people at Mozilla to get their take on this. Ideally I’d like to have an interview with one or more of them up at LQ next week, but I don’t have any firm confirmations yet. If you work or worked at Mozilla and have something to add, feel free to post here or contact me directly so we can set something up. We need you Mozilla; let’s get this fixed.


Written by jeremy

June 12th, 2015 at 9:51 am

Bad Voltage Season 1 Episode 44 Has Been Released

without comments

Jono Bacon, Bryan Lunduke (he returns!), Stuart Langridge and myself present Bad Voltage, in which all books are signed from now on, we reveal that we are coming to Europe in September and you can come to the live show, and:

  • 00:01:39 In the last show, Bad Voltage fixed Mozilla, or at least proposed what we think they might want to do to fix themselves. We asked Mozilla PR for comment or a statement, and they declined. This leads into a discussion about Mozilla’s internal culture, and how their relationships with the community have changed
  • 00:18:14 Stuart reviews Seveneves, the new book by Neal Stephenson
  • 00:29:28 Bad Voltage Fixes the F$*%ing World: we pick a technology or company or thing that we think isn’t doing what it should be, and discuss what it should be doing instead. We look at a company who have been in the news recently, but maybe wish they weren’t: Sourceforge
  • 00:51:30 Does social media advertising work? We tried a challenge: we’d each spend fifty dollars on advertising Bad Voltage on Twitter, Reddit, Facebook, and the like, and see how we got on and whether it’s worth the money. Is it? Maybe you can do better?

Listen to 1×44: Bad Voltage

As mentioned here, Bad Voltage is a project I’m proud to be a part of. From the Bad Voltage site: Every two weeks Bad Voltage delivers an amusing take on technology, Open Source, politics, music, and anything else we think is interesting, as well as interviews and reviews. Do note that Bad Voltage is in no way related to LinuxQuestions.org, and unlike LQ it will be decidedly NSFW. That said, head over to the Bad Voltage website, take a listen and let us know what you think.

–jeremy

 


Written by jeremy

June 12th, 2015 at 8:21 am

Posted in Bad Voltage

Application Password Best Practice – CEH Certified Ethical Hacker

without comments

Our Certified Ethical Hacker CEH training course is attended by all types, from system administrators and application developers to cyber security professionals. A common question from developers is how to implement secure passwords in applications when there is no LDAP, Kerberos or Active Directory integration.

This question comes up in the sections of the CEH training covering password cracking techniques, password algorithms and best practice, so it's appropriate to cover what these are first.

Password History

Back in the day it was thought to be enough to merely hash user passwords with a one-way algorithm such as MD5. If the stored password was revealed, for example via someone getting read access to the password database, or if it was observed on an unencrypted network connection (assuming the password is transferred in its hashed form), the perpetrator would still not have the password itself.

A lot of protocols and applications still use the password authentication protocol (PAP), where plain-text passwords are passed around until they reach the password database, where they are encrypted and then compared to the stored encrypted password for authentication.

In early versions of Unix, for example, encrypted passwords were stored in the world-readable /etc/passwd file, until they were moved out to the shadow password file, readable only by root. It was soon realised that world-readable hashes were not a good idea. The first step is never to let anyone get hold of the encrypted password :)
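The store-the-hash approach above can be sketched in a few lines of Python. SHA-256 from the standard library stands in for whatever algorithm a real system would choose, and the password and function names are purely illustrative:

```python
import hashlib

def hash_password(password: str) -> str:
    # One-way hash: cheap to compute, infeasible to invert directly.
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

# At registration, only the digest is stored -- never the plain text.
stored_digest = hash_password("hunter2")

# At login, the submitted password is hashed and the digests compared.
assert hash_password("hunter2") == stored_digest
assert hash_password("Hunter2") != stored_digest
```

Note that the same password always produces the same digest, a property the rainbow-table attacks described later exploit.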

Password Encryption

The principle is: given a sound encryption algorithm, the amount of computing time required to crack a password encrypted with that algorithm should make it infeasible or uneconomic to do so. But as computing power increases, the algorithms used need to be enhanced or changed to keep the cost curve high enough. Additionally, some formerly "secure" algorithms are found to have fatal flaws some time after adoption.

MD5, for example, is now considered insecure due to a flaw in the algorithm, as a way has been found to generate collisions, i.e. the same hash is generated for different inputs.

The SHA1 algorithm is still considered secure, but as computing power increases it is considered a matter of time (years) before computers that can crack a SHA1 password are commonly available. Everyone is upgrading to SHA2.

"Uncrackable" Encryption Algorithm

So, assuming you have an "uncrackable" algorithm, is that enough for password security? The answer is, of course, no. If someone has a password hash they can simply attempt to brute-force it, i.e. guess the password by trying different combinations of characters until they get a matching hash; hence those minimum password requirements.

Even if proper password policies are enforced, such as a minimum length of 10 characters or more with a combination of upper and lower case etc., passwords are still vulnerable, simply because people tend to use common combinations of letters and characters, thereby limiting the key space to a size that becomes feasible to crack.
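A brute-force attack is just an exhaustive search of the key space. The sketch below (hypothetical class and method names, and a deliberately tiny lowercase-only key space so it runs in milliseconds) shows why a small key space is fatal:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class BruteForce {

    static String sha256Hex(String s) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest(s.getBytes(StandardCharsets.UTF_8))) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Try every lowercase string of up to maxLen characters until one
    // hashes to the target; returns the recovered password or null.
    public static String crack(String targetHash, int maxLen) {
        return search(targetHash, "", maxLen);
    }

    private static String search(String target, String prefix, int left) {
        if (sha256Hex(prefix).equals(target)) return prefix;
        if (left == 0) return null;
        for (char c = 'a'; c <= 'z'; c++) {
            String hit = search(target, prefix + c, left - 1);
            if (hit != null) return hit;
        }
        return null;
    }
}
```

A three-letter lowercase "password" falls after at most 26^3 = 17,576 hash operations; each extra character class and each extra character multiplies that count, which is the whole point of the complexity requirements.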

Rainbow Tables

Still, brute-forcing passwords protected by a good policy is time consuming. To speed up attacks, password hashes can be pre-generated and then compared against any stolen hash via lookup to check for a match. This speeds up password "cracking" by orders of magnitude, as one no longer needs to laboriously hash every combination in the key space and compare it to the actual password hash, aka brute forcing.

These pre-generated tables of passwords are commonly called rainbow tables and are one of the first things run against a password database once it is obtained.
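Real rainbow tables use hash chains to trade storage for computation, but the core pre-computation idea can be sketched with a plain lookup table (hypothetical class name, tiny dictionary, SHA-256 assumed for illustration):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;

public class LookupTable {

    static String sha256Hex(String s) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest(s.getBytes(StandardCharsets.UTF_8))) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Pre-compute hash -> password once; every later lookup is O(1).
    public static Map<String, String> build(String[] dictionary) {
        Map<String, String> table = new HashMap<>();
        for (String word : dictionary) {
            table.put(sha256Hex(word), word);
        }
        return table;
    }
}
```

Once the table is built, "reversing" any stolen hash of a dictionary word is a single map lookup rather than a fresh search of the key space, which is why unsalted hashes of common passwords offer almost no protection.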

Password Salting and Hashing

So even if you force your users to use strange combinations of keys, they will probably stick to the common special characters "!" and "#" plus digits, and reuse the same password across applications and services. If I obtain your password from a less secure service, I can simply hash it with another service's algorithm and compare it against that service's database for a match.

A technique that works around rainbow tables and reused or common passwords, and that heightens the cost of cracking, is to use what is referred to as a salt. A salt is a random combination of characters used as a prefix or suffix to a user's password, which is then hashed and stored. The salt also needs to be stored along with the password hash, otherwise it would not be possible to generate the same hash again for verification.

The benefit of this is that rainbow tables cannot be used, forcing the person trying to crack the password back to brute-forcing it. The important point to note is that the salt is kept along with the password hash, so if the database is stolen the perpetrator has access to the salt as well. They can still brute-force passwords, just not use rainbow tables. (Unless of course your salt generator has a limited key space too, in which case they can simply pre-generate all combinations thereof.)
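The salt-then-hash scheme described above can be sketched as follows (hypothetical class name, SHA-256 plus a 16-byte random salt stored as "salt$hash"; this illustrates the storage layout only, and a real application should prefer an adaptive algorithm such as bcrypt, covered below):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;

public class SaltedHash {

    private static final SecureRandom RNG = new SecureRandom();

    static String sha256Hex(String s) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest(s.getBytes(StandardCharsets.UTF_8))) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    static String randomSalt() {
        byte[] raw = new byte[16];
        RNG.nextBytes(raw);
        StringBuilder sb = new StringBuilder();
        for (byte b : raw) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    // Store "salt$hash" so verification can re-derive the same hash.
    public static String hash(String password) {
        String salt = randomSalt();
        return salt + "$" + sha256Hex(salt + password);
    }

    public static boolean verify(String password, String stored) {
        String[] parts = stored.split("\\$", 2);
        return sha256Hex(parts[0] + password).equals(parts[1]);
    }
}
```

Because a fresh salt is drawn each time, hashing the same password twice yields two different stored values, so a single pre-computed table can no longer cover all users at once.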

Application Password Best Practice

So to implement a good password scheme for your application you need:

  • A good algorithm, preferably one whose computational cost can be increased over time without requiring algorithm changes
  • A good salt generator, and
  • A scheme for storing the salt and the hashed password together

Bcrypt Algorithm and Password API

A state-of-the-art algorithm for password hashing, which is also easy to use, is the bcrypt algorithm.

Bcrypt is based on the Blowfish cipher, uses salts and is an adaptive algorithm: it incorporates an iteration count, the number of times the password is hashed, which can be increased over time. This ensures that the cost of computing a password hash keeps rising even as computing power increases, simply by increasing the number of iterations.

As an application developer you will not have to change any code to generate stronger hashes: simply read the number of iterations from a configuration file when generating or regenerating a password hash and you are good to go. Of course, this assumes no flaw is discovered in Blowfish itself at some point.

There are many bcrypt libraries out there for your favourite language. We use the PHP and Java libraries, and using them couldn't be simpler. Below is an example in Java.


import org.mindrot.jbcrypt.BCrypt;

public class UserService {

        // Work factor (log rounds); in practice read this from
        // configuration so it can be raised over time without code
        // changes. jBCrypt's default is 10; each increment doubles
        // the cost, so 20 would take minutes per hash.
        private static final int LOG_ROUNDS = 12;

        public boolean savePassword(User user, String password) {
                String enc = BCrypt.hashpw(password, BCrypt.gensalt(LOG_ROUNDS));
                // ... persist the hash against the user ...
        }

        public boolean authenticate(String username, String password) {
                // ... fetch the stored salt+hash string for this user ...
                String enc = getUserPassword(username);
                if (enc != null) {
                        return BCrypt.checkpw(password, enc);
                } else {
                        return false;
                }
        }
}

Just remember to make it hard to get to the user database in the first place.

Written by Mark Clarke

June 10th, 2015 at 12:57 am

Posted in Uncategorized
