A question on Twitter [1] [2] prompted us to take a look at the password hashing mechanisms available to the .NET Framework, and specifically to the standard SqlMembershipProvider.
For those who don't work with this aspect of ASP.NET, the .NET framework provides a simple, SQL Server-based store for web application user data, which includes user details like logon ID and email address, logon count, password failures, plus the password salt and password hash.
The membership provider can be configured to use any CLR class that derives from System.Security.Cryptography.HashAlgorithm, always with a 16-byte salt, and the out-of-the-box hash algorithms are the familiar general-purpose digests: MD5, SHA-1 (the default), SHA-256, SHA-384, and SHA-512.
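To make the later configuration step more concrete: when passwordFormat is set to Hashed, the provider resolves the configured algorithm by name through the framework's crypto configuration and hashes the salt together with the password bytes. The snippet below approximates that flow; the exact byte layout is our illustration, not lifted from the provider's source.

using System;
using System.Security.Cryptography;
using System.Text;

class NameMappingDemo
{
    static void Main()
    {
        // Generate a 16-byte salt, as the membership provider does.
        byte[] salt = new byte[16];
        new RNGCryptoServiceProvider().GetBytes(salt);

        // Approximation of the provider's input: salt bytes followed by the
        // Unicode bytes of the password.
        byte[] password = Encoding.Unicode.GetBytes("correct horse battery staple");
        byte[] input = new byte[salt.Length + password.Length];
        Buffer.BlockCopy(salt, 0, input, 0, salt.Length);
        Buffer.BlockCopy(password, 0, input, salt.Length, password.Length);

        // The configured hashAlgorithmType is resolved by name; any name registered
        // with the runtime's crypto configuration will work here.
        HashAlgorithm alg = HashAlgorithm.Create("SHA1"); // or "SHA256", "MD5", ...
        Console.WriteLine(Convert.ToBase64String(alg.ComputeHash(input)));
    }
}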
These algorithms are fine for verifying data integrity, but they aren't well-suited to password hashing because they can be computed at extremely high speed -- millions or even hundreds of millions of hashes per second on a modern GPU -- which makes the overall cost and effort of cracking a leaked table of password hashes low, despite salting. See here for Hacker News's favorite article on why these are unacceptable for hashing passwords.
In short, if an attacker were to gain access to the SQL database, it would be feasible to brute-force many of the passwords within, because all of these hash algorithms are simply too fast. An attacker could then use the recovered plaintext passwords to try to access other sites, impersonating your users (who, we suspect, have not diligently used a unique, random password at each site... all the more reason they should use STRIP).
Now, there are alternatives, one of which is already built in: the .NET Framework has included an implementation of Password-Based Key Derivation Function 2 (PBKDF2) in the Rfc2898DeriveBytes class, going all the way back to .NET Framework 2.0. However, Rfc2898DeriveBytes does not derive from HashAlgorithm, which is what would make it compatible with the ASP.NET SqlMembershipProvider and with other general-purpose .NET hashing interfaces.
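If you just need PBKDF2 outside the membership provider, Rfc2898DeriveBytes is easy to use directly. Here's a minimal sketch; the 5,000 iterations and 20-byte output are illustrative choices, not requirements.

using System;
using System.Security.Cryptography;

class Pbkdf2Demo
{
    static void Main()
    {
        // Generate a random 16-byte salt and run 5,000 rounds of PBKDF2 (HMAC-SHA1).
        var kdf = new Rfc2898DeriveBytes("correct horse battery staple", 16, 5000);
        byte[] salt = kdf.Salt;         // store this alongside the hash
        byte[] hash = kdf.GetBytes(20); // 20-byte derived key

        Console.WriteLine(Convert.ToBase64String(salt));
        Console.WriteLine(Convert.ToBase64String(hash));
    }
}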
The bcrypt algorithm is even more resistant to brute-force attacks (i.e., it's more computationally expensive), and there is already a .NET implementation of bcrypt, but it doesn't derive from HashAlgorithm either.
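For comparison, here's roughly what using a bcrypt port directly looks like. The calls below assume the BCrypt.Net package's API; other .NET bcrypt ports name things differently, so treat this as illustrative rather than canonical.

using System;

class BcryptDemo
{
    static void Main()
    {
        // Work factor 10 means 2^10 key-expansion rounds; raise it to make hashing slower.
        string hash = BCrypt.Net.BCrypt.HashPassword("correct horse battery staple", 10);
        bool ok = BCrypt.Net.BCrypt.Verify("correct horse battery staple", hash);

        Console.WriteLine(hash);
        Console.WriteLine(ok);
    }
}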
Importantly, both PBKDF2 and bcrypt are adaptive algorithms: scaling up the effort needed to compute them is built into their design, such that if computers were 10x faster, you could ratchet up their work factors to make them do 10x more computation.
Taking all this into account, we decided to build a simple .NET library that makes PBKDF2 and bcrypt work with SqlMembershipProvider and other areas within the .NET crypto API.
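To give a sense of the approach, here is a rough sketch of what wrapping PBKDF2 in a HashAlgorithm can look like. This is not the library's actual code -- the class name and defaults are placeholders -- but it shows the shape: buffer whatever ComputeHash is fed (the 16-byte salt followed by the password bytes), then run the key derivation in HashFinal.

using System;
using System.IO;
using System.Security.Cryptography;

// Illustrative sketch only. It assumes, as SqlMembershipProvider does, that the
// bytes passed to ComputeHash are the 16-byte salt followed by the password bytes.
public class Pbkdf2HashSketch : HashAlgorithm
{
    private const int SaltSize = 16;
    private readonly MemoryStream _buffer = new MemoryStream();

    // Work factor; a heavier variant can override this in a subclass.
    protected virtual int Iterations { get { return 5000; } }

    public Pbkdf2HashSketch()
    {
        HashSizeValue = 160; // emit a 20-byte derived key, the same size as SHA-1
    }

    public override void Initialize()
    {
        _buffer.SetLength(0);
    }

    protected override void HashCore(byte[] array, int ibStart, int cbSize)
    {
        // Accumulate the input; we need the full salt + password before PBKDF2 can run.
        _buffer.Write(array, ibStart, cbSize);
    }

    protected override byte[] HashFinal()
    {
        byte[] all = _buffer.ToArray();
        byte[] salt = new byte[SaltSize];
        byte[] password = new byte[all.Length - SaltSize];
        Buffer.BlockCopy(all, 0, salt, 0, SaltSize);
        Buffer.BlockCopy(all, SaltSize, password, 0, password.Length);

        var kdf = new Rfc2898DeriveBytes(password, salt, Iterations);
        return kdf.GetBytes(HashSizeValue / 8);
    }
}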
First, install Zetetic.Security.dll into the .NET Global Assembly Cache: you can either drag the DLL into the GAC with Windows Explorer or, more reliably, register it from an elevated command prompt with gacutil /i Zetetic.Security.dll (gacutil ships with the Windows SDK).
Next, you'll need to register the new algorithms, and aliases for them, in the .NET Framework's machine.config file. For example, if you want to use the new algorithms with 64-bit .NET 4 applications, launch an elevated command prompt and edit the file C:\Windows\Microsoft.NET\Framework64\v4.0.30319\machine.config. (Do this for each .NET Framework version that needs to take advantage of the new hash algorithms.) Add the following section just before the end of the file -- anywhere after the configSections area, which must always come first:
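The registration uses the framework's standard crypto name-mapping elements. The type names, assembly version, public key token, and friendly names below are placeholders -- substitute the exact values for the assembly you installed and whatever friendly names you prefer.

<mscorlib>
  <cryptographySettings>
    <cryptoNameMapping>
      <cryptoClasses>
        <!-- Placeholder type names and assembly attributes; use the real ones
             for the Zetetic.Security assembly in your GAC. -->
        <cryptoClass Pbkdf2Hash="Zetetic.Security.Pbkdf2Hash, Zetetic.Security, Version=1.0.0.0, Culture=neutral, PublicKeyToken=xxxxxxxxxxxxxxxx" />
        <cryptoClass BCryptHash="Zetetic.Security.BCryptHash, Zetetic.Security, Version=1.0.0.0, Culture=neutral, PublicKeyToken=xxxxxxxxxxxxxxxx" />
      </cryptoClasses>
      <!-- These friendly names are what you'll reference from Web.config. -->
      <nameEntry name="pbkdf2_local" class="Pbkdf2Hash" />
      <nameEntry name="bcrypt_local" class="BCryptHash" />
    </cryptoNameMapping>
  </cryptographySettings>
</mscorlib>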
Almost there -- the only remaining task is to associate the new hash algorithm with your SqlMembershipProvider in the web application's Web.config file:
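For example, with a provider definition along these lines (the provider name, connection string name, and application name are placeholders), the hashAlgorithmType attribute on the membership element should match the friendly name registered in machine.config:

<system.web>
  <!-- hashAlgorithmType must match a name registered in machine.config. -->
  <membership defaultProvider="SqlProvider" hashAlgorithmType="pbkdf2_local">
    <providers>
      <clear />
      <add name="SqlProvider"
           type="System.Web.Security.SqlMembershipProvider"
           connectionStringName="LocalSqlServer"
           passwordFormat="Hashed"
           applicationName="/" />
    </providers>
  </membership>
</system.web>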
And that's all there is to it. Bear in mind, though, that any pre-existing users in the database will need to reset their passwords: the SqlMembershipProvider doesn't store any per-password record of which hash algorithm created a given hash, so applying this new configuration to an existing user database will cause every login attempt to fail, since those hashes were created with the default salted SHA-1 (or SHA-256).
One important note: to strike a balance between server performance and security, the current version of Zetetic.Security uses 5,000 iterations of PBKDF2 and 2^10 (1,024) rounds of bcrypt. It is certainly possible to increase these work factors, but we'd opt to do so in separate classes, so that no easily-forgotten configuration settings are needed to keep hash results consistent.
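If you do want a heavier work factor, a separate class keeps things unambiguous. Continuing the hypothetical sketch from earlier, such a variant might look like this, registered in machine.config under its own friendly name:

// Hypothetical heavier variant, following the "separate class per work factor" idea;
// it would get its own nameEntry in machine.config (e.g. "pbkdf2_strong_local").
public class Pbkdf2StrongHashSketch : Pbkdf2HashSketch
{
    protected override int Iterations { get { return 20000; } }
}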