Computational complexity

This is a general question about the practical use of computational complexity in security. Wikipedia has a good article about the theoretical background of computational complexity. In conversations with colleagues, a topic that comes up occasionally is the security of one algorithm relative to another, and I would like to make sure I have an accurate understanding of computational complexity for these situations.

Suppose I have an encryption algorithm like AES that can use a 256-bit key. My understanding is that the computational complexity would then be 2^256 for a brute-force attack to recover the key. A "crack" of the algorithm would be anything that reduces this complexity to less than 2^256. It would seem to follow that a program would have to guess a maximum of 2^256 keys before it has tried every possible key (and, on average, would succeed after about half that many guesses), and therefore should have gained access at some point along the way. Is this the correct way to think about computational complexity in this context?
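As a sanity check on my own reasoning, here is a quick Python sketch of the key-space arithmetic (the numbers are just the figures from this example, not measurements of anything):

```python
# Brute-force bounds for a 256-bit key (illustrative arithmetic only).
KEY_BITS = 256
keyspace = 2 ** KEY_BITS       # total number of possible keys

worst_case = keyspace          # guesses needed to try every single key
average_case = keyspace // 2   # expected guesses before hitting the right one

print(f"keyspace     ~ {keyspace:.3e} keys")
print(f"average case ~ {average_case:.3e} guesses")
```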

To explain my line of thinking in an example: While reading about the 256-bit AES "crack" published a few months back, one of the things I remember was that while the computational complexity of guessing 256-bit AES keys could be reduced to 2^119, this additionally required a size complexity of 2^119. (To my understanding, the size complexity of brute force attacks on any algorithm can be as low as 1 -- one key is guessed, rejected, and iterated at a time, so only storage space for one key is necessary since that space is re-used for each key.) My understanding of a "size complexity of 2^119" is that 2^119 keys need to be stored at one time in order for the attack to work. Given this, if I assume a key length of 256 bits and I need to store 2^119 keys simultaneously, I should be able to calculate the total storage size required to make an attempt at this particular attack:

In bits: 256 bits * 2^119 = 1.7 * 10^38 bits (rounded)

Converted to terabytes (taking a terabyte as 2^40 bytes): 1.9 * 10^25 terabytes (rounded)
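For what it's worth, the conversion can be checked in a few lines of Python; whether you get roughly 1.9 * 10^25 or 2.1 * 10^25 depends on whether a terabyte is taken as 2^40 or 10^12 bytes:

```python
# Storage needed for 2^119 keys of 256 bits each (checking the figures above).
KEY_BITS = 256
NUM_KEYS = 2 ** 119

total_bits = KEY_BITS * NUM_KEYS     # 2^8 * 2^119 = 2^127 bits
total_bytes = total_bits // 8        # 2^124 bytes

tb_decimal = total_bytes / 10 ** 12  # terabytes as 10^12 bytes
tb_binary = total_bytes / 2 ** 40    # terabytes as 2^40 bytes

print(f"total bits ~ {total_bits:.2e}")   # ~ 1.70e+38
print(f"TB (10^12) ~ {tb_decimal:.2e}")   # ~ 2.13e+25
print(f"TB (2^40)  ~ {tb_binary:.2e}")    # ~ 1.93e+25
```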

My question is this: When someone says "computational complexity" and "size complexity" in relation to the strength of an algorithm, am I thinking about the meaning correctly from a practical perspective?

Thanks for the help.

I would read this by Bruce Schneier:
Schneier on Security: New Attack on AES

His real point:
Large computational complexities, even when reduced by cracks, still mean that cracking a given algorithm is way, way beyond practical.

People who don't get what encryption is really used for are going to assume that greater complexity == better protection. Security transcends computational complexity -- people and procedures are the weakest points, not decent algorithms. (From one of Schneier's books.)

So "strength" beyond a reasonable limit is pointless. I know this is not what you asked, but it was implied.

jim,

Thanks for the reply. The point you and Schneier make is an excellent one; the human is certainly the weakest link of the equation, and the recently published 256-bit AES "crack" referred to in the article has no reason to dissuade people from the use of AES. (Another interesting reference to the security vulnerability posed by the human factor is one of Kevin Mitnick's books on social engineering, The Art of Deception.)

The point you make is that the computational complexity requirements of cracking such encryption are beyond feasibility -- and that a high level of "infeasibility" is what matters where the algorithm is concerned. To someone asking about a practical measure of "safety" given by an algorithm, as I did in my original post, this is a good answer -- it is plenty safe!

My real interest is to understand the degree to which such a task is infeasible. From my example, if the 256-bit AES crack required 1.9 * 10^25 terabytes (rounded) of storage in some combination of RAM and/or disk space, plus the computation required to perform the attack, that would certainly be well beyond the realm of feasibility. If I am to say that something is infeasible, I have to learn the "why" behind it -- otherwise, I have no personal understanding of it; I would be reciting "something I heard on the Internet". Not very convincing in conversation, even if there has never been a truer statement! Having a quantifiable amount like the above, so I can say something like, "You would need 1.9 * 10^25 terabytes of storage to implement the attack," is how one knows that something is infeasible. For educational purposes, I am trying to find the ground where the theory meets reality.

So, for example, am I arriving at the figure "1.9 * 10^25 terabytes" correctly, from this explanation in my first post? I know I did the math correctly, but am I missing any fundamental ideas, or would this be a correct derivation?

Similarly, is my statement of 2^256 key guesses to completely exhaust all possible keys (using a key length of 256 bits) where the idea of a computational complexity of 2^256 comes from? For instance, this would let me figure out how many processing cycles a single guess attempt might take using a specific piece of software on a given platform, then crunch some numbers to arrive at a measure of time (however obscene and unimaginable it might be).
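To make that concrete, here is the kind of back-of-the-envelope estimate I mean; the guess rate of 10^9 guesses per second is a made-up assumption, not a benchmark of any real software or hardware:

```python
# Time to exhaust a 256-bit keyspace at an assumed 10^9 guesses/second.
GUESSES_PER_SECOND = 10 ** 9           # hypothetical rate, not measured
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

keyspace = 2 ** 256
years = keyspace / (GUESSES_PER_SECOND * SECONDS_PER_YEAR)
print(f"years to try every key ~ {years:.2e}")  # an obscenely large number
```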

Thanks again for your help.