Block ciphers, as the name suggests, encrypt a cleartext by splitting it into individual blocks. A key property of a block cipher is therefore its block size, which describes how much data the cipher encrypts at a time. For example, the Advanced Encryption Standard (AES) has a block size of 128 bits regardless of key size: AES128, AES192, and AES256 all describe the key size in bits, but the block size is always 128 bits. (As a side note, the Rijndael algorithm, which was chosen as the Advanced Encryption Standard, is capable of operating at other block sizes; the standard simply fixed the block size at 128 bits.) Its predecessors, the Data Encryption Standard (DES) and Triple DES (3DES), both have block sizes of 64 bits. This property of block ciphers naturally leads one to ask, “How does a block cipher encrypt more than a single block of data?”

# Electronic Code Book

The answer is that block ciphers need to use an algorithm which allows them to iterate over a series of blocks, encrypting each one. Such an algorithm is called a *mode of operation*. The most obvious mode is to simply encrypt a block of data, emit its ciphertext, and then move on to the next block, continuing until all the blocks have been encrypted. Unfortunately, this algorithm, known as the *Electronic Code Book* (ECB) mode of operation, does not fully obscure the cleartext.
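
The block-by-block iteration can be sketched in Python. The "cipher" below is a toy keyed byte substitution invented purely for illustration; a real mode of operation would wrap an actual block cipher such as AES:

```python
# Illustration only: a toy 8-byte "block cipher" (a keyed byte substitution),
# NOT a real cipher -- it just makes ECB's structure visible.
BLOCK_SIZE = 8

def toy_encrypt_block(block: bytes, key: int) -> bytes:
    # A key-dependent permutation of byte values (167 is coprime with 256).
    return bytes((b * 167 + key) % 256 for b in block)

def ecb_encrypt(cleartext: bytes, key: int) -> bytes:
    # ECB: encrypt each block independently and concatenate the results.
    assert len(cleartext) % BLOCK_SIZE == 0
    return b"".join(
        toy_encrypt_block(cleartext[i:i + BLOCK_SIZE], key)
        for i in range(0, len(cleartext), BLOCK_SIZE)
    )

# Two identical cleartext blocks produce two identical ciphertext blocks,
# so repetition in the cleartext survives into the ciphertext.
ciphertext = ecb_encrypt(b"SAMEDATA" * 2, key=42)
assert ciphertext[:8] == ciphertext[8:]
```

Note the final assertion: because each block is encrypted in isolation, repeated cleartext blocks yield repeated ciphertext blocks, which is exactly the weakness the experiment below makes visible.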

To demonstrate this shortcoming of ECB, let’s perform an experiment. We’ll start with a simple bitmap of the Greek letter lambda:

Now, let’s encrypt it by operating AES in ECB mode using the OpenSSL `enc` command:

```
openssl enc -aes-128-ecb -in lambda-clear.bmp -out lambda-ecb.bmp
```

Obviously, the output file isn’t a valid bitmap file since it’s been encrypted, but we can remedy that problem by copying the bitmap header from our unencrypted bitmap into the encrypted version using `dd`. All we need to know is that a bitmap header is 54 bytes in length:

```
dd bs=54 count=1 conv=notrunc if=lambda-clear.bmp of=lambda-ecb.bmp
```

`bs=54` and `count=1` indicate we want to copy precisely one block of exactly 54 bytes from the input file (`if`) to the output file (`of`). `conv=notrunc` instructs `dd` to leave the rest of the output file intact rather than truncating it. This results in:

To understand how we can so clearly discern the original cleartext from the ciphertext, we need to familiarize ourselves with two properties of strong encryption algorithms, *confusion* and *diffusion*, identified by Claude Shannon, the American mathematician and cryptographer widely regarded as the father of information theory.

# Confusion and Diffusion

An algorithm which shows good confusion obfuscates the relationship between the encryption key and the ciphertext. This requires that each symbol in the ciphertext (in modern cryptography, a symbol is a bit) depends on multiple parts of the key. An algorithm which exhibits good diffusion ensures a single symbol (or bit) in the cleartext influences a disproportionately large number of symbols in the ciphertext. Taken together, confusion and diffusion ensure that a cipher successfully prevents patterns in the cleartext from being discernible in the ciphertext. AES is known to have excellent confusion and diffusion, so what’s going on?

The answer is that ECB exhibits extremely poor confusion and diffusion because the encryption of each block is completely independent of the encryption of every other block. This limits how much of the overall ciphertext a single bit of cleartext can influence: with AES, a single bit in one block of cleartext can only affect the 128 bits of the corresponding ciphertext block, hence poor diffusion. Confusion is similarly limited. While each ciphertext bit does depend on multiple parts of the key, this effect only operates within each block rather than across the whole ciphertext. This means that ECB is incapable of obscuring patterns at the block level.
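
This block-boundary limit on diffusion is easy to see with the same toy byte-substitution "cipher" used for illustration (not a real cipher): flipping a single bit in the second cleartext block changes only the second ciphertext block.

```python
# Illustration only: a toy 8-byte "block cipher" (a keyed byte substitution).
def toy_encrypt_block(block, key):
    return bytes((b * 167 + key) % 256 for b in block)

def ecb_encrypt(cleartext, key, block_size=8):
    # ECB: each block is encrypted independently of all the others.
    return b"".join(toy_encrypt_block(cleartext[i:i + block_size], key)
                    for i in range(0, len(cleartext), block_size))

original = ecb_encrypt(b"AAAAAAAABBBBBBBB", 42)
# 'C' differs from 'B' by exactly one bit, in the second block.
tampered = ecb_encrypt(b"AAAAAAAACBBBBBBB", 42)
assert original[:8] == tampered[:8]   # first ciphertext block unaffected
assert original[8:] != tampered[8:]   # only the second block changed
```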

Additionally, every time a given cleartext is encrypted using ECB mode, the exact same ciphertext is emitted. So, if the cleartext contains any identical blocks, the ciphertext will contain identical blocks as well, further preserving patterns. (It just so happens that our cleartext contains a large number of repeated blocks due to its large regions of white and black.) This is because ECB does not allow for the use of an *initialization vector*. An initialization vector, or IV, is a random bit string equal in size to the block size of the cipher, used to initialize the encryption of a cleartext. It ensures a unique ciphertext every time, even when encrypting the same cleartext repeatedly. While it would be possible to use initialization vectors with ECB, this would be space inefficient, as a new IV would have to be generated and stored for every block of cleartext. The resulting ciphertext would be twice as large as the original cleartext while still exhibiting poor confusion and diffusion.

# Cipher Block Chaining

Obviously, a mode of operation which permits full confusion and diffusion is necessary. One of the simplest modes which displays such properties is the Cipher Block Chaining (CBC) mode. CBC accomplishes this by requiring each block of ciphertext to depend on the previous block of ciphertext (and therefore all the previous blocks) thus allowing the dissemination of influence across an entire cleartext:

1. CBC starts with the generation of an IV. The IV itself does not need to be encrypted when it is stored. Because it is randomly generated, it has no relationship to the cleartext and thus divulges nothing about the cleartext.
2. This IV is then XORed (exclusive or) with the first block of cleartext. XOR is frequently used in cryptographic algorithms because it is an *involution*, i.e. it is its own inverse. To undo an XOR operation, you simply XOR the output with one of the original operands and get the other operand back. For example:

   ```
     11011010      10101101
   + 01110111    + 01110111
   ----------    ----------
     10101101      11011010
   ```

   When it comes time to decrypt the first block, the addition of the IV will need to be undone. Because XOR is an involution, this is a simple operation: just XOR the decrypted block with the IV.
3. The result of adding the IV to the first cleartext block is then encrypted.
4. Now, the resulting block of ciphertext is XORed with the next block of cleartext.
5. That block is then encrypted, producing a further block of ciphertext.
6. To encrypt the entire cleartext, steps 4 and 5 are repeated.
7. Should the last block of cleartext be too short, i.e. less than the block size of the cipher, it is padded out to the full block size of the cipher being used. Then steps 4 and 5 are applied to the result.
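
The involution property relied on above can be checked directly, using the byte values from the worked example:

```python
# XOR is its own inverse: applying the same mask twice returns the input.
block = 0b11011010
mask = 0b01110111

masked = block ^ mask
assert masked == 0b10101101       # matches the worked example above
assert masked ^ mask == block     # XORing again recovers the original
```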

Thus CBC causes each block of ciphertext to depend on all the previous blocks of ciphertext, achieving confusion and diffusion across the entire cleartext rather than within isolated blocks.

Now we repeat our experiment above using CBC mode instead:

```
openssl enc -aes-128-cbc -in lambda-clear.bmp -out lambda-cbc.bmp
dd bs=54 count=1 conv=notrunc if=lambda-clear.bmp of=lambda-cbc.bmp
```

And we get the following result:

Electronic Code Book and Cipher Block Chaining are only two of a wide array of modes of operation. There are many others which provide additional properties and features. Some allow decryption to be parallelized while others implement *authenticated encryption*. But that is a discussion for another time.

*Originally authored by Travis S.*