This isn’t exactly a step-by-step guide to generating alpha and beating investing benchmarks – obviously, I wouldn’t share the exact details if I could still exploit a specific strategy. No, today we’re going to talk about how you can **try** to beat investing or gambling by generating alpha with clever strategies – by exploiting ‘randomness’ that is anything but, known as pseudorandomness. This becomes particularly powerful when *everyone else* fails to see pseudorandomness masquerading as randomness.

At DQYDJ, we’re fans of Nassim Nicholas Taleb’s philosophy on there being ‘*unknown unknowns*’.

He may not have been the first person to use the phrase ‘Black Swans’ to describe previously unknown ideas suddenly becoming known, but he certainly jammed it into the investing lexicon pretty forcefully. We’re going to play off the title of his 2001 book, *Fooled By Randomness*, for this piece.

Here goes: just like it is possible for people to not appreciate the randomness of risks, it is also possible for people to assume perfect randomness *where it doesn’t exist*.

## Generate Alpha: Beat Investing, Beat Gambling, and… Beat Computer Games?

In some articles, the fact that I’m a computer engineer actually becomes relevant: in Computer Science, we draw a very bright line between random numbers and pseudo-random numbers. Early on in Computer Engineering, you will get a list looking like this:

- 11111111111111111111111111111111111111111
- 1010101010101010101010101010101010
- 00110111001011101111000100110101011

…and have to systematically explain why each sequence *can’t* be trusted as random. Usually the answers go something like:

- Can’t be random – it’s 100% 1s
- Can’t be random – the 1s and 0s are really just oscillating
- Can’t be random – you’re counting in binary (01 – “one”, 10 – “two”, 11 – “three”, 100 – “four”).

Essentially, the list goes from ‘*definitely not random*’ to ‘*much harder to tell*’. But randomness, stated simply, boils down to one fact: *nobody can predict what number comes next in a sequence*. (The next numbers for the sequences above? 1, 1, and 1.)

Notice… I didn’t say *you* can’t tell what number comes next.

I said *nobody* can tell.

I should have also said *nobody will ever tell* – but let’s not skip too far ahead! As we’ve seen many times here on DQYDJ, someone will beat the casinos or generate alpha because the general population assumed some process was random when it was actually quite predictable: Ed Thorp recognizing that a computer could do the math to beat roulette? People beating lotteries? The Price to Sales ratio?

Over and over again, relationships people assumed were random were broken because:

- **Someone didn’t work off the assumption that a process was truly random**
- **New technology or new discoveries made a** *previously thought unpredictable* **process** *predictable*

Which brings us to a quick note on computer gaming, and software in general.

When true randomness isn’t absolutely necessary, engineers rely on pseudo-random numbers (true computer randomness isn’t impossible, but pseudo-random generation is far easier). These generators take a seed, such as the current time, and push it through equations to come up with ‘random-looking’ numbers. In some cases, that’s good enough. In other places it isn’t – *if you’re going to make a lot of money on an idea, in essence you want to figure out whether a process labeled ‘random’ is actually ‘pseudo-random’*.
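To make that concrete, here’s a minimal sketch of the oldest trick in the book: a linear congruential generator (LCG), which is exactly the ‘seed pushed through an equation’ scheme described above. The function name and the specific constants (well-known textbook LCG parameters) are illustrative, not any particular system’s generator:

```python
import time

# A minimal linear congruential generator (LCG) sketch: each new 'random'
# value is just (a * previous + c) mod m. The constants below are classic
# textbook parameters; the function itself is purely illustrative.
def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    """Return n pseudo-random values generated from the given seed."""
    state = seed
    values = []
    for _ in range(n):
        state = (a * state + c) % m
        values.append(state)
    return values

# Seeding with the current time looks random to an outside observer...
seed = int(time.time())
sample = lcg(seed, 5)

# ...but anyone who knows (or guesses) the seed reproduces it exactly.
print(lcg(seed, 5) == sample)  # True -- same seed, same 'random' sequence
```

That last line is the whole point: the output isn’t random at all, it’s a deterministic function of the seed – and seeds like ‘the current time’ are eminently guessable.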

If you think this doesn’t matter, surely you’ve heard of JavaScript. Google’s V8 JavaScript engine shipped a flawed `Math.random()` pseudorandom generator for six years – the bug was only fixed at the end of 2015.

## Another Example: Encryption

In this era when we’ve still got the NSA on our minds (and the NSA has us on their monitors!), let’s look at encryption.

Stated simply, encryption is the process of making things impossible to read (except by the intended recipient) by using mathematical properties of so-called one-way functions. The general goal in encryption is to make it so that the only vulnerability is ‘brute force’ – literally trying every combination until the lock is broken. An encryption method is ‘broken’ when it takes much less time than brute force to crack it. If, for example, you’ve got an 8-digit password, but every password starts with ‘111111’… well, that certainly narrows the problem set!
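Just how much does that narrow it? A back-of-the-envelope sketch, using the hypothetical 8-digit example above:

```python
# Hypothetical illustration: how a leaked prefix guts a brute-force search.
# All numbers match the 8-digit password example in the text.
digits = 8
full_space = 10 ** digits            # every possible 8-digit numeric password

leaked_prefix = "111111"             # suppose every password starts with this
remaining_space = 10 ** (digits - len(leaked_prefix))

print(full_space)       # 100000000 candidates
print(remaining_space)  # 100 candidates -- a million-fold reduction
```

Knowing six of eight digits collapses 100 million candidates to just 100 – which is why ‘much less time than brute force’ is the working definition of a broken scheme.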

Why do I bring this up? Well, *because of the NSA*, of course! In the 1970s, IBM released what was to become one of the first widely deployed encryption standards, known as Data Encryption Standard. Cryptically (has there ever been a more perfect word?), the NSA told IBM to make a few changes to how DES worked:

- IBM should order their keys in a very specific pattern (this was to make it easier to produce hardware implementations of the standard – yes, semiconductors were expensive at the time!)
- Curiously, IBM should shorten the key length and change how the “S-Boxes” worked.

For the purposes of this basic discussion, know that it later came out that the original S-Boxes were vulnerable to an exploit which, while known to the NSA, wouldn’t be known to the general public for another (whopping) 15 years: differential cryptanalysis. Yes, those 15 years were full of plenty of paranoia, but it turned out the NSA had helped harden DES against an attack that would later render the original design obsolete.

And so it goes in cryptography. I learned about the subject in class from Leonard Adleman, who happens to be the ‘A’ in RSA. Encryption algorithms boil down to mathematical properties – one, the idea that some problems in computer science cannot be reduced in complexity (whether they can is one of the great unsolved problems of Mathematics, ‘P = NP?’), and two, that some problems are definitely NP-complete (the hardest problems in NP).

In RSA, specifically, the assumption is that factoring large integers (specifically, the product of two large primes) is now and always will be an intensely difficult task. Yes, computers will get more powerful, but the idea is that *it’s easier to encrypt than to decrypt* – a 1024-bit key can simply grow in size to counteract a faster computer (even your cell phone can break small keys with brute force nowadays).
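Here’s a toy sketch of that asymmetry, using the standard textbook primes 61 and 53 (purely illustrative – real keys use primes hundreds of digits long, and real implementations add padding and much more):

```python
# Toy RSA with textbook-sized primes -- illustration only, not real crypto.
p, q = 61, 53                  # the two secret primes
n = p * q                      # public modulus: 3233 (easy to publish...)
phi = (p - 1) * (q - 1)        # 3120 (...hard to find without p and q)
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent: modular inverse (Python 3.8+)

message = 65
ciphertext = pow(message, e, n)    # encrypt: m^e mod n -- cheap for anyone
recovered = pow(ciphertext, d, n)  # decrypt: c^d mod n -- needs the secret d
print(recovered == message)        # True
```

Encrypting only needs the public `n` and `e`; recovering `d` without knowing `p` and `q` means factoring `n` – trivial for 3233, but (so far) intractable at real key sizes.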

Of course, the future will again break what’s impossible today with factorization problems – Peter Shor’s algorithm, run on a quantum computer, can factor large integers much faster than any classical algorithm known today. But unless there is an exploit out there I don’t know about, or you’ve got a few thousand qubits in a quantum computer, you can’t crack RSA today without brute forcing it. (Many of your internet transactions are protected by a symmetric cipher called AES.)

And so it goes – things we assume are difficult today might be trivial tomorrow. For an investing example? High Frequency Trading – impossible in the 1960s, pervasive today.

Thus goes technology – and you need to take away another lesson here: *difficulty today alone may not prevent you from cracking the code – it might only delay you*.

## My Modification: Fooled By *Pseudorandomness*

… and *pseudodifficulty*, I suppose.

Seriously though, in all of these areas the key is the same: people **assume** something is too difficult, or assume a process is random… when it actually isn’t. If you’re able to cut through the noise and **the doubts** – potentially even from close friends and family – you could be well on your way to untold riches. Or, hey, you could just be wasting your time.

But really, if you’re going to beat the system… and people will, in the future… these theoretical underpinnings explain exactly how. File this away as the ‘Generate Alpha Manifesto’, and go out there and beat investing and gambling!