Hash

A hash is a number that's computed from some data in a chaotic way and which is used for many different purposes, e.g. for quick comparisons (instead of comparing big data structures we just compare their hashes) or for mapping data structures to table indices.

A hash is computed by a hash function, a function that takes some data and turns it into a number (the hash) that's much smaller than the data itself (in terms of bit count), has a fixed size (number of bits) and which has additional properties such as being completely different from hash values computed from very similar (but slightly different) data. Thanks to these properties hashes have a very wide use in computer science -- they are often used to quickly compare whether two pieces of bigger data, such as documents, are the same, they are used in indexing structures such as hash tables which allow for quick search of data, and they find a great use in cryptocurrencies and security, e.g. for digital signatures or storing passwords (for security reasons in databases of users we store just hashes of their passwords, never the passwords themselves). Hashing is extremely important and as a programmer you won't be able to avoid encountering hashes somewhere in the wild.

{ Talking about wilderness, hyenas have their specific smells that are determined by bacteria in them and are unique to each individual depending on the exact mix of the bacteria. They use these smells to quickly identify each other. The smell is kind of like the animal's hash. But of course the analogy isn't perfect, for example similar mixes of bacteria may produce similar smells, which is not how hashes should behave. ~drummyfish }

It is good to know that we distinguish between "normal" hashes used for things such as indexing data and cryptographic hashes that are used in computer security and have to satisfy some stricter mathematical criteria. For the sake of simplicity we will sometimes ignore this distinction here. Just know it exists.

It is generally given that a hash (or hash function) should satisfy the following criteria:

  • Have a fixed size (given in bits), even for data that's potentially of variable size (e.g. text strings).
  • Be fast to compute. This is mostly important for non-security uses, cryptographic hashes may prioritize other properties to guarantee the hash safety. But a hash function certainly can't take 10 minutes to compute :)
  • Have uniform mapping. That is if we hash a lot of different data the hashes we get should be uniformly spread over the space of the hashes, i.e. NOT be centered around some number. This is in order for hash tables to be balanced, and it's also required in security (non-uniform hashes can be easier to reverse).
  • Behave in a chaotic manner, i.e. hashes of similar data should be completely different. This is similar to the point above; a hash should kind of appear as a "random" number associated with the data (but of course, the hash of the same data has to always be the same when computed repeatedly, i.e. be deterministic). So if you change just one bit in the hashed data, you should get a completely different hash from it. A small sketch of testing this property is shown right after this list.
  • Minimize collisions, i.e. the probability of two different values giving the same hash. Mathematically collisions are always possible if we're mapping a big space onto a smaller one, but we should try to reduce collisions that happen in practice. This property should follow from the principle of uniformity and chaotic behavior mentioned above.
  • Be difficult to reverse (mainly for security related hashes). Lots of times this comes naturally from the fact that a hash maps a big space onto a smaller space (i.e. it is a non-injective function) and from their chaotic nature. Hashes can typically be reversed only by brute force.
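
To make the "chaotic behavior" point more tangible, here is a small self-contained sketch of how one might test it: flip single input bits and count how many output bits change (ideally, on average about half of them should). The demoHash function inside is just an arbitrary throwaway hash made up for this demonstration; nicer hashes are shown later in this article.

#include <stdint.h>
#include <stdio.h>

// arbitrary small 32 bit hash, used only for this demonstration
uint32_t demoHash(uint32_t n)
{
  n = (n ^ (n >> 16)) * 2654435769u;
  n = (n ^ (n >> 16)) * 2654435769u;
  return n ^ (n >> 16);
}

int main(void)
{
  uint32_t x = 123456; // some arbitrary input value

  for (int bit = 0; bit < 32; ++bit)
  {
    uint32_t diff = demoHash(x) ^ demoHash(x ^ (((uint32_t) 1) << bit));
    int flipped = 0;

    while (diff) // count the set bits, i.e. the differing output bits
    {
      flipped += diff & 1;
      diff >>= 1;
    }

    printf("flipping input bit %2d flips %2d output bits\n",bit,flipped);
  }

  return 0;
}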

Hashes are similar to checksums but are different: checksums are simpler because their only purpose is checking data integrity, they don't have to behave chaotically or map uniformly, and they are often easy to reverse. Hashes are also different from database IDs: IDs are just sequentially assigned numbers that aren't derived from the data itself, they don't satisfy the hash properties and they have to be absolutely unique. The term pseudohash may also be encountered; it seems to be used for values similar to true hashes which however don't quite satisfy the definition.

{ I wasn't able to find an exact definition of pseudohash, but I've used the term myself e.g. when I needed a function to make a string into a corresponding fixed length string ID: I took the first N characters of the string and appended M characters representing some characteristic of the original string such as its length or checksum -- this is what I called the string's pseudohash. ~drummyfish }

Some common uses of hashes are:

  • Hash tables, data structures that allow for quick search and access of data. For example in chess programs and databases hashes of chess positions are used to quickly identify a position and look up information associated with it. (A tiny sketch of such a table follows after this list.)
  • Passwords in user databases are for security reasons not stored as plain text, instead only password hashes are stored. When a user enters a password, the system computes its hash and compares it to that stored in the database: if the hashes match, the password was correct. This is a way of allowing password authentication without giving the system the knowledge of user passwords.
  • In digital signatures hashes of documents are used to prove a document hasn't been modified by a third party.
  • Digital fingerprints are hashes computed from known data about a user. The fingerprint is a small number that identifies a tracked user.
  • In blockchains based on proof of work the computational difficulty of reversing a hash is used in the process of mining as a puzzle whose solution is rewarded. Miners compete in finding bits that, when appended to a newly added block, make the block's hash satisfy some defined condition (typically being smaller than a given target number).
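
As a concrete illustration of the first point, here is a minimal sketch of a hash table storing string keys with int values, using open addressing (linear probing) to handle collisions. Everything here (the names hashStr, insert, get, the table size of 64) is made up just for the demonstration and the throwaway hash inside is intentionally simple; nicer hash functions are shown later in the Nice Hashes section. Note that for brevity the sketch doesn't handle a full table.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define TABLE_SIZE 64 // number of buckets

typedef struct
{
  char key[32];
  int value;
  int used;
} Record;

Record table[TABLE_SIZE];

// simple throwaway string hash, just for this demonstration
uint32_t hashStr(const char *s)
{
  uint32_t r = 7;

  while (*s)
  {
    r = r * 31 + *s;
    s++;
  }

  return r;
}

// stores a key/value pair, resolving collisions by linear probing
void insert(const char *key, int value)
{
  uint32_t i = hashStr(key) % TABLE_SIZE;

  while (table[i].used && strcmp(table[i].key,key) != 0)
    i = (i + 1) % TABLE_SIZE; // bucket taken by another key, try the next one

  strncpy(table[i].key,key,31);
  table[i].key[31] = 0;
  table[i].value = value;
  table[i].used = 1;
}

// returns the value stored under key, or -1 if the key isn't present
int get(const char *key)
{
  uint32_t i = hashStr(key) % TABLE_SIZE;

  while (table[i].used)
  {
    if (strcmp(table[i].key,key) == 0)
      return table[i].value;

    i = (i + 1) % TABLE_SIZE;
  }

  return -1;
}

int main(void)
{
  insert("apple",10);
  insert("banana",20);

  printf("%d %d %d\n",get("apple"),get("banana"),get("cherry")); // 10 20 -1

  return 0;
}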

Example

Let's say we want a hash function for strings, one that for any ASCII string will output a 32 bit hash. How do we do this? We need to make sure that every character of the string affects the resulting hash.

The first thought that may come to mind could be to simply multiply the ASCII values of all the characters in the string. However there are at least two mistakes in this: firstly, short strings will result in small values as we'll get a product of fewer numbers (so similar strings such as "A" and "B" will give similar hashes, which we don't want). Secondly, reordering the characters of a string (i.e. taking its permutations) will not change the hash at all (as order doesn't matter in multiplication)! These violate the properties we want in a hash function. If we used this function to implement a hash table and then tried to store strings such as "abc", "bca" and "cab", all of them would map to the same hash and cause collisions that would negate the benefits of the hash table.
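
To see the second flaw concretely, here is a tiny sketch of the naive "multiply all the characters" hash described above; all permutations of the same characters collide (each of the three calls below prints the same number, 97 * 98 * 99 = 941094):

#include <stdint.h>
#include <stdio.h>

// the naive (bad) string hash: just multiply all the character values
uint32_t badHash(const char *s)
{
  uint32_t r = 1;

  while (*s)
  {
    r *= *s;
    s++;
  }

  return r;
}

int main(void)
{
  printf("%u %u %u\n",badHash("abc"),badHash("bca"),badHash("cab"));
  return 0;
}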

A better hash function for strings is shown in the section below.

Nice Hashes

{ Reminder: I make sure everything on this Wiki is pretty copy-paste safe, from the code I find on the Internet I only copy extremely short (probably uncopyrightable) snippets of public domain (or at least free) code and additionally also reformat and change them a bit, so don't be afraid of the snippets. ~drummyfish }

Here is a simple and pretty nice 8 bit hash; it outputs all possible values and all its bits look quite random: { Made by me. ~drummyfish }

uint8_t hash(uint8_t n)
{
  n *= 23;                         // multiply by an odd constant
  n = ((n >> 4) | (n << 4)) * 11;  // swap the two nibbles, multiply again
  n = ((n >> 1) | (n << 7)) * 9;   // rotate right by one bit, multiply again

  return n;
}
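
The claim that it outputs all possible values can be checked simply by brute force over all 256 inputs, e.g. with a quick sketch like this (assuming the hash function above is in the same file):

#include <stdio.h>

int main(void)
{
  int seen[256] = {0}, distinct = 0;

  for (int i = 0; i < 256; ++i)
    if (!seen[hash(i)])
    {
      seen[hash(i)] = 1;
      distinct++;
    }

  printf("%d distinct values out of 256\n",distinct); // prints 256

  return 0;
}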

The hash prospector project (released under the Unlicense) created a way to automatically generate integer hash functions with nice statistical properties, which work by XORing the input value with a bit shift of itself, then multiplying it by a constant and repeating this a few times. The functions are of the format:

uint32_t hash(uint32_t n)
{
  n = A * (n ^ (n >> S1));
  n = B * (n ^ (n >> S2));
  return n ^ (n >> S3);
}

Where A, B, S1, S2 and S3 are constants specific to each function. Some nice constants found by the project are:

A           B           S1  S2  S3
303484085   985455785   15  15  15
88290731    342730379   16  15  16
2626628917  1561544373  16  15  17
3699747495  1717085643  16  15  15
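
For example, plugging in the first row of constants gives this concrete function (the name hash32 is arbitrary):

uint32_t hash32(uint32_t n)
{
  n = 303484085 * (n ^ (n >> 15));
  n = 985455785 * (n ^ (n >> 15));
  return n ^ (n >> 15);
}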

The project also explores 16 bit hashes; here is a nice one that doesn't even use multiplication!

uint16_t hash(uint16_t n)
{
  n = n + (n << 7);   // same as n *= 129, but without a multiplication
  n = n ^ (n >> 8);
  n = n + (n << 3);   // same as n *= 9
  n = n ^ (n >> 2);
  n = n + (n << 4);   // same as n *= 17
  return n ^ (n >> 8);
}

Here is a simple string hash; it works well even for short strings and all its bits look pretty random: { Made by me. Tested this on my dataset of 70000 programming identifiers, got no collisions. ~drummyfish }

uint32_t strHash(const char *s)
{
  uint32_t r = 11;

  while (*s)
  {
    r = (r * 101) + *s; // mix in the next character
    s++;
  }

  r = r * 251;
  r = ((r << 19) | (r >> 13)) * 113; // rotate left by 19 bits, multiply

  return r;
}
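
A typical way to use a hash like this is to speed up string comparisons, e.g. in a symbol table where each string is stored along with its hash: two strings can only be equal if their hashes are equal, so the expensive character-by-character comparison only has to be done when the hashes match. A small sketch (the helper name strEqualQuick is made up):

#include <stdint.h>
#include <string.h>

// returns 1 if the strings are equal; compares the precomputed hashes first
// and falls back to strcmp only when the hashes match
int strEqualQuick(const char *a, uint32_t aHash, const char *b, uint32_t bHash)
{
  if (aHash != bHash)
    return 0; // different hashes: the strings are definitely different

  return strcmp(a,b) == 0; // same hash: could be a collision, verify
}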

TODO: more

BONUS: Here is a kind of string pseudohash for identifiers made only of characters a-z, A-Z, 0-9 and _, not starting with a digit -- it may be useful for symbol tables in compilers. It is parameterized by length n, which must be greater than 4. It takes an arbitrary length identifier in this format and outputs another string, also in this format (i.e. also being this kind of identifier), of maximum length n - 1 (the last place being reserved for the terminating zero), which remains somewhat human readable (and is the same as the input if it fits under the limit length), which may be good e.g. for debugging and transpiling (in transpilation you can just directly use these pseudohashes from the table as identifiers). In principle it works something like this: the input characters are cyclically written over and over to a buffer, and once the limit length is exceeded, a three character hash (made of a checksum, a "checkproduct" and the string length) is written to positions 1, 2 and 3 (keeping the first character at position 0 the same). This means e.g. that the last characters will always be recorded, so if input identifiers differ in their last characters (like myvar1 and myvar2), they will always give different pseudohashes. Also if they differ in the first character, length (modulo 64), checksum or "checkproduct", their pseudohashes are guaranteed to differ. Basically it should be hard to find a collision. Here is the code: { I found no collisions in my dataset of over 5000 identifiers, for n = 16. ~drummyfish }

// maps an arbitrary byte to a character of the identifier alphabet
char numPseudohash(unsigned char c)
{
  c %= 64;

  if (c < 26)
    return 'a' + c;
  else if (c < 52)
    return 'A' + (c - 26);
  else if (c < 62)
    return '0' + (c - 52);
 
  return '_';
}

void pseudohash(char *s, int n)
{
  unsigned char
    v1 = 0,     // checksum
    v2 = 0,     // "checkproduct"
    v3 = 0,     // character count
    pos = 0;

  const char *s2 = s;

  while (*s2)
  {
    if (pos >= n - 1)
      pos = 4; // wrap around, positions 0 to 3 are reserved (first char + the 3 hash chars)

    v1 += *s2;
    v2 = (v2 + 1) * (*s2);
    v3++;

    s[pos] = *s2;

    pos++;
    s2++;
  }

  if (v3 != pos) // the string didn't fit: record the three hash characters
  {
    s[1] = numPseudohash(v1);
    s[2] = numPseudohash(v2);
    s[3] = numPseudohash(v3);
  }

  s[n - 1] = 0;
}

Here are some example inputs and output strings:

"CMN_DES"                             -> "CMN_DES"
"CMN_currentInstrTypeEnv"             -> "CBcxrTypeEnvnst"
"LONG_prefix_my_variable1"            -> "L4kyvariable1y_"
"TPE_DISTANCE"                        -> "TPE_DISTANCE"
"TPE_bodyEnvironmentResolveCollision" -> "TxMJCollisionve"
"_TPE_body2Index"                     -> "_TPE_body2Index"
"_SAF_preprocessPosSize"              -> "_RpwPosSizecess"