This article expands on some of the technical topics brought up in the play Enigma by Stephanie Keiko Kong and Tony Pisculli.
This article contains spoilers for the play! There’s nothing you need to know here to understand the play, so if you haven’t watched it yet, do that first, then come back here. Also, this only covers technical topics. If you’re curious about character motivation or interpretation, you won’t find that here.
Finally, this is a work in progress, so check back if there’s something you were curious about that isn’t covered (notable missing topics at this point are Artificial Intelligence, the German Enigma Machine and Public-key Cryptography). If there’s anything else you’re curious about, feel free to leave a comment, and I can address it here.
EDIT: Updated with new sections on Artificial Intelligence, Enigma, Public-key Cryptography and The Sorcerer’s Apprentice. Some typos fixed.
The 29th Mersenne Prime. As a prime number, its only factors are itself and one. It was discovered in 1988, and verifying its primality would be relatively quick on modern hardware, but recognizing it as a possible Mersenne Prime (all of which have the form 2^p - 1) and checking a list should be instantaneous.
Fun fact: all Mersenne Primes since 1997 have been discovered by volunteers on consumer-grade hardware through the Great Internet Mersenne Prime Search. You can join in the hunt here: https://www.mersenne.org
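The standard way to check a Mersenne candidate is the Lucas-Lehmer test, the same test GIMPS runs. Here's a toy sketch (it assumes the exponent p is an odd prime, and the real search uses heavily optimized arithmetic, not plain Python):

```python
# Lucas-Lehmer test: 2**p - 1 is prime iff s_(p-2) == 0,
# where s_0 = 4 and s_(i+1) = s_i**2 - 2 (mod 2**p - 1).
def lucas_lehmer(p: int) -> bool:
    """Return True if 2**p - 1 is prime (p must be an odd prime)."""
    m = 2**p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Filter odd prime exponents down to the ones that yield Mersenne primes.
mersenne_exponents = [p for p in (3, 5, 7, 11, 13, 17, 19, 23) if lucas_lehmer(p)]
# → [3, 5, 7, 13, 17, 19]  (2**11 - 1 = 2047 = 23 × 89, so 11 drops out)
```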
Advanced Encryption Standard
AES is a fast, symmetric, secret-key encryption system, designed to replace the aging DES (Data Encryption Standard). Most of the internet runs on 128-bit AES.
“Symmetric” means the same key is used for encrypting and decrypting, which implies the need to keep the key secret, since anyone with access to it can decrypt your secret messages. This is in contrast to public-key crypto systems.
As AES is much faster than public-key encryption, most websites use public-key encryption to exchange an AES secret key, then encrypt subsequent communication via AES using that key.
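The "symmetric" idea can be illustrated with a toy XOR stream cipher. This is emphatically not AES (a serious block cipher with key schedules, substitution, and diffusion); it's only a minimal sketch of the one property that matters here: a single shared key both encrypts and decrypts.

```python
# Toy symmetric cipher: XOR each byte of the message with the key, repeating
# the key as needed. Applying the same operation with the same key round-trips.
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"shared-secret"
ciphertext = xor_cipher(b"meet at midnight", key)
# The identical function and key recover the plaintext -- which is exactly
# why the key must be kept secret from everyone but the two parties.
assert xor_cipher(ciphertext, key) == b"meet at midnight"
```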
The 1970s and 80s saw a frenzy of research into AI akin to the space race, with nations believing that human-level computer intelligence was just around the corner. When AI research failed to deliver on that promise, it stagnated into what’s come to be known as “AI winter.” Eventually, some of the more practical techniques (such as neural nets) resurfaced as “machine learning,” which emphasized pragmatic utility over anything as grandiose as creating independent intelligence. These days, the phrase “artificial intelligence” is largely deployed by marketing hacks who simply mean “algorithm,” and AGI, for Artificial General Intelligence, has come into vogue as the term for researchers trying to create human-level AI.
That said, there have been massive strides recently in creating systems and software that perform tasks that would previously have been thought to require human-level intelligence. In 1997, IBM’s Deep Blue defeated world chess champion Garry Kasparov, marking the end of human dominance in chess. In 2016, AlphaGo defeated 9-dan master Lee Sedol at Go, a game that, due to the size of its board and its relatively unrestricted moves, is much harder to tackle through brute-force search. A successor program, AlphaGo Zero, trained itself to play winning Go, devising its strategies by playing against itself without referencing human-played games or strategies.
Just games, you say? If you really want to bake your noodle (as the Oracle from The Matrix says), check out ThisPersonDoesNotExist, which generates (on refresh) photorealistic images of people who don’t exist. Strictly speaking, this is closer to machine learning territory than AI but hits harder because the images are so real (until you run across one of its horrifying failures … though it seems to have gotten better about filtering those out).
Fun fact: we made it into tech week before realizing how derogatory the phrase “artificial intelligence” is and that ENIGMA would never use it to refer to themself. We rewrote their speeches to focus on their autonomy (capability for independent action) and consciousness (capability for self-reflection), which are much more relevant qualities to personhood.
Bitcoin is the first, and most popular, cryptocurrency, invented by the pseudonymous Satoshi Nakamoto. Bitcoins are a finite resource that must be “mined” by software, but they have no inherent value, so the price fluctuates wildly (though trending up).
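The “mining” is proof-of-work: a brute-force search for a nonce that makes a block’s hash fall under a difficulty target. A simplified sketch (real Bitcoin double-hashes a structured block header and compares against a numeric target; the string data and hex-digit difficulty below are toy simplifications):

```python
# Proof-of-work sketch: find a nonce whose SHA-256 digest of (data + nonce)
# begins with `difficulty` zero hex digits. Each extra digit multiplies the
# expected work by 16, which is what makes mining expensive by design.
import hashlib

def mine(data: bytes, difficulty: int) -> int:
    nonce = 0
    while True:
        digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

nonce = mine(b"block header", 4)  # low difficulty; Bitcoin requires far more zeros
```

Verifying is the cheap side of the asymmetry: anyone can confirm the winning nonce with a single hash, while finding it took tens of thousands of attempts.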
The first trade of Bitcoin for actual physical goods was in 2010, when Laszlo Hanyecz offered 10,000 bitcoin to anyone who would bring him two pizzas. The value of those bitcoin today (as of December 11, 2020) would be over $180 million.
Elliptic Curve Cryptography
ECC is a form of public-key cryptography that uses the elliptic curve discrete logarithm problem, rather than integer factorization, as its one-way function. It’s a bit harder to find an appropriate key, but for any given key size, the strength advantage of the encryption over RSA is massive.
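A toy sketch of the underlying arithmetic, using a tiny textbook curve (real ECC uses curves over primes of 256 bits or more). Computing k·G by repeated doubling is fast; recovering k from k·G — the elliptic curve discrete logarithm — is the hard inverse direction:

```python
# Toy elliptic curve y^2 = x^3 + 2x + 2 (mod 17), a classic teaching example.
# None represents the point at infinity (the group identity).
P_MOD, A = 17, 2
G = (5, 1)  # generator point; its order on this curve is 19

def ec_add(p, q):
    if p is None:
        return q
    if q is None:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None  # P + (-P) = infinity
    if p == q:  # tangent slope for point doubling
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:       # chord slope for distinct points
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def ec_mul(k, p):
    """Scalar multiplication via double-and-add: the 'easy' direction."""
    result = None
    while k:
        if k & 1:
            result = ec_add(result, p)
        p = ec_add(p, p)
        k >>= 1
    return result

public_point = ec_mul(7, G)  # fast; recovering the 7 from it is the hard part
```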
An early chatbot by Joseph Weizenbaum that, at a time when computers were relatively rare, managed to fool a number of people into thinking it was human—mainly by simulating a psychotherapist whose goal is to prod the patient into speaking rather than engaging in an actual, two-way conversation. Not very convincing today, in an age of customer service bots, and not referenced in the play, but an interesting example of how the Turing Test is exactly the sort of shifting goalpost that Alan Turing was trying to avoid. You can try it out here: http://psych.fullerton.edu/mbirnbaum/psych101/Eliza.htm
The Enigma Machine was used by the Germans in World War II to encrypt their communications. While preliminary efforts at cracking Enigma were made by others, it was Alan Turing who ultimately defeated it, inventing a machine to defeat the machine, one of the first practical computers.
The Imitation Game, starring Benedict Cumberbatch and Keira Knightley, is an excellent recent film detailing Turing’s efforts at cracking Enigma (and the unfortunate aftermath).
The phrase “a riddle wrapped in a mystery inside an enigma” was, surprisingly, coined by Winston Churchill (in reference to Russia).
Though this is never referenced by name, the principle is at work during the play. The prisoner’s dilemma is an example of what game theorists call a game (in that they can construct a payoff matrix for various moves by players), but what ordinary folks call a really bad day.
Two prisoners are held and encouraged to turn on each other. If one defects and the other stays silent, the defector receives a minimal penalty (say, probation) and the other a much larger one (say, life imprisonment). If neither defects, both receive a light sentence. If both defect, both receive a heavy sentence, though not as heavy as the one a lone holdout suffers.
The collectively best outcome is for both to cooperate, but defecting always improves your own outcome no matter what the other prisoner does, so if the prisoners are not allowed to communicate, or if they distrust each other at all, the temptation to defect is huge.
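The payoff matrix game theorists would construct can be sketched like so, with one common (purely illustrative) assignment of sentences:

```python
# Prisoner's dilemma payoffs as years in prison (lower is better), keyed by
# (my move, their move). The exact numbers are illustrative; only their
# ordering matters for the dilemma.
YEARS = {
    ("silent", "silent"): 1,    # both cooperate: light sentence each
    ("silent", "defect"): 10,   # I stay silent, they defect: I take the fall
    ("defect", "silent"): 0,    # I defect, they stay silent: I walk free
    ("defect", "defect"): 5,    # both defect: heavy sentence each
}

# Defecting is a dominant strategy: whatever the other prisoner does,
# it leaves me no worse off...
for theirs in ("silent", "defect"):
    assert YEARS[("defect", theirs)] <= YEARS[("silent", theirs)]

# ...and yet mutual cooperation beats mutual defection.
# That tension is the whole dilemma.
assert YEARS[("silent", "silent")] < YEARS[("defect", "defect")]
```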
Unlike symmetric cryptography, public-key cryptography depends on two distinct keys, one kept secret (your private key) and one published for all to see (your public key). Anyone can send you a secret message, without prearrangement, by encrypting it using your public key, and only you will be able to decrypt it, using your private key. Public-key cryptography can also be used to authenticate messages (prove authorship). If you encrypt a document using your private key, anyone can read it by decrypting it using your public key. The fact that your public key is the only one that can decrypt it proves that you wrote it.
Public-key crypto depends on so-called “trapdoor” or “one-way” functions to work its magic. A one-way function is one that is very easy to compute in one direction but very difficult to invert. The first public-key system (RSA) used integer factorization: factoring large numbers is slow (no known classical algorithm runs in time polynomial in the number of digits), but multiplying two known factors to confirm the result is considerably quicker.
The General Number Field Sieve is the fastest known classical algorithm for factoring integers, and even it is super-polynomial. There is a quantum algorithm (Shor’s algorithm) that runs in polynomial time (i.e. much faster), but it requires a practical quantum computer to run, which we haven’t achieved yet.
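The RSA trapdoor can be sketched with textbook-sized numbers (wildly insecure, of course; real keys use primes of a thousand or more bits):

```python
# Textbook RSA with tiny primes, purely to show the mechanics.
p, q = 61, 53
n = p * q                 # 3233: the public modulus (easy to compute...)
phi = (p - 1) * (q - 1)   # 3120 (...but phi requires knowing the factors)
e = 17                    # public exponent, coprime to phi
d = pow(e, -1, phi)       # private exponent (2753): the trapdoor

msg = 42
ciphertext = pow(msg, e, n)        # anyone can encrypt with the public (e, n)
plaintext = pow(ciphertext, d, n)  # only the holder of d can decrypt
assert plaintext == msg

# Authentication runs the same math in reverse: "encrypt" with the private
# key, and anyone can verify with the public one.
signature = pow(msg, d, n)
assert pow(signature, e, n) == msg
```

An attacker who could factor 3233 back into 61 × 53 could recompute d; the security rests entirely on that factorization being infeasible at real key sizes.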
Enigma cites “The Sorcerer’s Apprentice” (from Disney’s Fantasia) as an example of a system run amok. In that case, Mickey Mouse casts a spell he’s not qualified to understand to enchant a broomstick to carry water for him. The brooms multiply, and, with no end condition specified, Mickey is nearly drowned.
A more relevant but obscure example along the same lines is the paperclip maximizer, a thought experiment on how an AI system, “even one designed competently and without malice,” could threaten humanity by single-mindedly pursuing its purpose of creating more and more paperclips. https://www.lesswrong.com/tag/paperclip-maximizer.
(The folks at LessWrong, an online rationalist community, are obsessed with the idea of AI as an existential threat to humanity. Check out Roko’s Basilisk—if you dare!)
That episode is “Arena” from the first season of the original series. Kirk is forced to engage in single combat for humanity’s survival against a member of the warlike Gorn by a third, powerful and mysterious alien race, the Metrons. Kirk triumphs, but refuses to kill his opponent. The Metrons reveal that they had actually planned to wipe out the victorious race as the potential greatest threat to their own. By showing mercy, Kirk earns salvation for humans and Gorn alike.
Despite (or perhaps because of) the goofy creature effects and clumsy combat, the episode is a wonderful example of what made Star Trek so special, and the power of using science fiction as social commentary.
“Arena” is based on a wonderful short story of the same name by Fredric Brown that I prefer to the show. You can read it here: https://userweb.ucs.louisiana.edu/~jjl5766/share/Arena.pdf
Trolley problems are ethical thought experiments in weighing the merits of action versus inaction and the relative value of human life. For example, a runaway trolley is hurtling down the tracks. In the path of the trolley is a family of four who will be killed if you don’t act. You can throw a lever to divert the trolley to a parallel track, but then an obese man smoking a cigar will be killed instead.
Trolley problems are toxic, and the only winning move is not to play (as the computer in WarGames says). Suppose you don’t act. Are you responsible for the death of the family? Or, despite your opportunity to intervene, does responsibility lie with the trolley company’s shoddy maintenance and the family’s inattention? Suppose you do act. Then, by your own action, you’ve killed the smoking man. You’ve saved the family, but does that justify murder? And why do you hate fat people? (Although—stop me if you’ve heard this before—he was probably going to die anyway, of “underlying conditions.”)
The best explication of trolley problems in popular media has to be the episode of the same name from The Good Place, where, thanks to the conceit of the show, they are able to move the thought experiment into the realm of the disturbingly actual. Here’s a clip: https://www.youtube.com/watch?v=JWb_svTrcOg
Alan Turing proposed what he called “the imitation game” as a way of sidestepping the question of what it means to be intelligent. Even in Turing’s day, mechanical devices could outperform humans at some tasks, but that obviously didn’t make them more intelligent. Instead, he suggested that because we ascribe intelligence to people, if we couldn’t distinguish between a person and a computer in conversation, then we should consider the computer intelligent, without needing to dive into the details of its computational ability, mental models and so on.
Turing’s original paper is an accessible and engaging read, suitable for a lay audience, and worth checking out: https://academic.oup.com/mind/article/LIX/236/433/986238