The Kindle edition of my third novel, Brain Storm, is now available.
Words, words, words
Authors and writers of all stripes can learn a lot about creating and managing words from computer programmers, beginning with an appreciation for the simple, durable efficiencies of plain text. Anybody running Unix, Linux, or BSD already knows all about text, because it’s the third prong of the Unix Tools Philosophy:
- Write programs that do one thing and do it well;
- Write programs that work together;
- Write programs to handle text streams, because that is a universal interface.
Other writers discover the efficiencies of a good text editor by chance, because they dabble in writing code, even if it’s just HTML, or because they need to manage gigantic, book-length text files without the clunky overhead of a word processor (like Microsoft Word, iWork’s Pages, or OpenOffice Writer). Unlike word processors, text editors are fast and capable of opening and editing multiple gigabyte-sized files that would cause a word processor to choke. True geeks prefer plain text for many reasons, some of which are nicely summarized by the folks at the 43 Folders wiki.1 Popular text editors include Vim, Emacs, UltraEdit, TextPad, NoteTab, TextEdit, TextMate, and BBEdit, to name only a handful; Wikipedia has a much longer list.
In the Beginning Were the Words
Writers and authors who rely mainly on word processing programs instead of a text editor may wonder: What’s plain text, how is it a “universal interface,” and why do I care? Plain text is unformatted characters, program-independent and requiring little processing. Text means words, sentences, paragraphs and, yes, computer code. It’s called plain when it’s stored as unformatted, unadorned text characters in a plain text file, which sometimes has a .txt extension on the filename. A file named readme.txt probably contains plain text; if you double-click on the file icon it might even open in your text editor (probably Notepad or WordPad if you are on Windows and have not yet downloaded a real text editor).
Plain text does not light up, blink, or spontaneously create hyperlinks to Microsoft Live Search if you happen to type in a zip code, or to Bing if you type in a proper noun. You can’t insert your favorite YouTube video or MSNBC news item into a plain text file.
Plain text means words separated by spaces; sentences separated by periods; paragraphs usually separated by single blank lines. If you are in the writing business, even the publishing or screenwriting business, it’s often all you need.
To the PC user raised on word processors, these spartan virtues sound like deficits, that is, until you want to access the text in your file using a program different from the one you used to create it. Open a Microsoft Word file in a simple text editor (like Notepad, or TextMate, or TextEdit, or BBEdit or UltraEdit), and you’ll see gobbledygook, not words. Open a plain text file with almost any program, including Microsoft Word, Corel WordPerfect, Apple iWork or any of the hundreds of text editors and word processors on any computer, and you will be able to view and edit that text, just as you could have viewed and edited it twenty or thirty years ago, just as you’ll probably be able to view and edit it twenty or thirty years from now, whether Microsoft still exists or not.
The geeks who made Unix nearly 40 years ago made plain text the universal interface because they believed in economy, simplicity, and reliability. Instead of making big, complicated, bloated programs that tried to do everything (“It looks like you are writing a suicide note! Enter your zip code or area code, and we’ll show you any local laws you may have to comply with before offing yourself.”), Unix programmers prided themselves on making small, well-designed, text-oriented programs or tools that each did one job well. One program found your files, another could open them, another could pipe the text back and forth between programs, another could count the words in a file, another could search files for matching strings of text, and so on. These programs accepted plain text as input, and produced plain text as output. Programming ingenuity meant discovering new ways to combine tools to accomplish a given task, then pass the results along (in plain text) to the next program, which could also capture, process, and produce more plain text, until you ended up with the results you sought.
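To make that concrete, here is a minimal sketch of such a pipeline (the file pattern and search string are invented for illustration): one small program finds the files, the next filters the matching lines, and the last counts the words, with plain text flowing between them.

```sh
# Find every .txt file under the current directory, pull out the
# lines that mention "plain text," and count the words in those lines.
# Each program does one small job and hands plain text to the next.
find . -name '*.txt' -print0 \
  | xargs -0 grep -h 'plain text' \
  | wc -w
```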
During the Unix era, only an idiot would have proposed creating programs that couldn’t talk to other programs. Why would anyone create files that can be edited and viewed only by the program that created them? Say, Adobe InDesign, or Microsoft Word? To Unix programmers and computer scientists the whole point was to make another tool for the Unix toolbox, then share your work with others, who in turn did likewise, and gradually Unix grew into the perfect computer geek workbench, a collection of small, efficient programs sharing a common file format and universal interface: plain text. As novelist and uber-geek Neal Stephenson2 put it in his manifesto, In The Beginning Was The Command Line:
Unix … is not so much a product as it is a painstakingly compiled oral history of the hacker subculture. It is our Gilgamesh epic . . . . What made old epics like Gilgamesh so powerful and so long-lived was that they were living bodies of narrative that many people knew by heart, and told over and over again—making their own personal embellishments whenever it struck their fancy. The bad embellishments were shouted down, the good ones picked up by others, polished, improved, and, over time, incorporated into the story. Likewise, Unix is known, loved, and understood by so many hackers that it can be re-created from scratch whenever someone needs it. This is very difficult to understand for people who are accustomed to thinking of OSes as things that absolutely have to be bought.
If Unix is the geek Gilgamesh epic, it’s a tale told in plain text. On a Unix or Linux command line, “cat readme.txt” will print the contents of readme.txt to the screen. From a Windows command line, entering the command “TYPE readme.txt” will do the same. However, if readme.doc is a Microsoft Word document, issuing the command “TYPE readme.doc” will produce a string of illegible symbols, because readme.doc is stored in a proprietary format, in this case, a Microsoft Word file.
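You can watch the difference from a shell; in this sketch, readme.doc stands in for any Word file you happen to have lying around:

```sh
cat readme.txt    # legible words, on Unix, Linux, or BSD
cat readme.doc    # illegible binary gibberish in the terminal

# The Windows command-line equivalents:
#   TYPE readme.txt
#   TYPE readme.doc
```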
Okay, so who cares? Most of us own a license to use Microsoft Word (on one machine, for a certain length of time), or else we can download various readers provided by Microsoft to read Word document files even if we don’t have a big honking Microsoft Word program on our computer. That’s true, for today, anyway. But what about ten years from now? What about forty years from now? If the past is any guide, when 2019 or 2029 comes around, you will not be able to open, read, and edit a Microsoft Word file that you created in 2012 and left in some remote sector of your capacious hard drive. Why? Because programs change. Companies that make proprietary programs come and go. Yes, even monster companies with the lion’s share of the word processing market. Just ask any customer of Wang Laboratories (the ruling vendor of word processors during the 1980s). Even if the company still exists, it is in the business of selling newer, bigger, more complicated, more sophisticated, and more expensive programs every other year or so. Those newer, “better” programs come with newer, proprietary file formats, to keep you purchasing those updates.
Tales of Woe from the Elder Geeks
It takes geeks, and especially geek writers of a certain age, to bring home the hazards of storing information in proprietary file formats. Consider first the Seer of the Singularity himself, Ray Kurzweil, as he looks back over almost forty years of his love affair with technology and the data formats he has accumulated along the way. In a plaintive, downright sad section of his otherwise generally upbeat take on the future of technology, Kurzweil includes a subsection called “The Longevity of Information” in a chapter of his Singularity book. How do you access the data contained on a circa-1960 IBM tape drive or a circa-1973 Data General Nova I? First, Kurzweil explains, you need to find the old equipment and hope it still works. Then you need software and an operating system to run it. Are those still around somewhere? What about tech support? Hah! You can’t get a help desk worker to call you back about the latest glitch running Microsoft Office, much less a program from forty years ago. “Even at the Computer History Museum most of the devices on display stopped functioning many years ago.”3
Kurzweil uses his own archival horror stories as “a microcosm of the exponentially expanding knowledge base that human civilization is accumulating,” then asks the terrible question: What if we are “writing” all of this knowledge in disappearing ink? The upshot of Kurzweil’s elegy to lost data is: “Information lasts only so long as someone cares about it.”
Do you care about your writings? The first order of business is to back up! As Kurzweil sees it, the only way data will remain alive and accessible “is if it is continually upgraded and ported to the latest hardware and software standards.” That’s one way to do it. Another way is to try to use formats that don’t go out of style. I have a 1983 Kaypro computer down in the basement that still works. All of the files I created with text editors or converted to plain text are still legible and formatted just as I left them. Here in 2012, almost 30 years after I created them, I can open them in a different text editor and work on them. The files I created using Wordstar, a proprietary word processor, are lost.
Consider another elegy to a lost file, reprinted here with permission from Neal Stephenson:
I began using Microsoft Word as soon as the first version was released around 1985. After some initial hassles I found it to be a better tool than MacWrite, which was its only competition at the time. I wrote a lot of stuff in early versions of Word, storing it all on floppies, and transferred the contents of all my floppies to my first hard drive, which I acquired around 1987. As new versions of Word came out I faithfully upgraded, reasoning that as a writer it made sense for me to spend a certain amount of money on tools.
Sometime in the mid-1990's I attempted to open one of my old, circa-1985 Word documents using the version of Word then current: 6.0. It didn’t work. Word 6.0 did not recognize a document created by an earlier version of itself. By opening it as a text file, I was able to recover the sequences of letters that made up the text of the document. My words were still there. But the formatting had been run through a log chipper–the words I’d written were interrupted by spates of empty rectangular boxes and gibberish.
Now, in the context of a business (the chief market for Word) this sort of thing is only an annoyance–one of the routine hassles that go along with using computers. It’s easy to buy little file converter programs that will take care of this problem. But if you are a writer whose career is words, whose professional identity is a corpus of written documents, this kind of thing is extremely disquieting. There are very few fixed assumptions in my line of work, but one of them is that once you have written a word, it is written, and cannot be unwritten. The ink stains the paper, the chisel cuts the stone, the stylus marks the clay, and something has irrevocably happened (my brother-in-law is a theologian who reads 3,250-year-old cuneiform tablets–he can recognize the handwriting of particular scribes, and identify them by name). But word-processing software–particularly the sort that employs special, complex file formats–has the eldritch power to unwrite things. A small change in file formats, or a few twiddled bits, and months’ or years’ literary output can cease to exist.
Now this was technically a fault in the application (Word 6.0 for the Macintosh) not the operating system (MacOS 7 point something) and so the initial target of my annoyance was the people who were responsible for Word. But. On the other hand, I could have chosen the “save as text” option in Word and saved all of my documents as simple telegrams, and this problem would not have arisen. Instead I had allowed myself to be seduced by all of those flashy formatting options that hadn’t even existed until GUIs had come along to make them practicable. I had gotten into the habit of using them to make my documents look pretty (perhaps prettier than they deserved to look; all of the old documents on those floppies turned out to be more or less crap). Now I was paying the price for that self-indulgence. Technology had moved on and found ways to make my documents look even prettier, and the consequence of it was that all old ugly documents had ceased to exist.4
Microsoft Word vs. Plain Text
If longevity of information isn’t high on your list, consider storage requirements. Open a text editor (Notepad if that’s all you have) and type two words: “Hello World!” then save the file and call it hello.txt. Now open a Microsoft Word document, type two words: “Hello World!” then save the document and call it hello.doc.
Now let’s compare the storage requirements for these two two-word files.
- hello.txt — ASCII plain text — 12 bytes;
- hello.doc — Microsoft Word — 19,968 bytes.
The two words “hello world!” saved in a plain text file take up 12 bytes of storage space. Storing the same words in a Microsoft Word file requires roughly 1,664 times as much disk space. If you want to see the kind of information that is embedded in a Word document file, read the article Binary versus ASCII (Plain Text) Files.
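On a Unix-flavored system you can reproduce the comparison from the command line. A sketch, assuming you have already saved hello.doc from Word (its exact size will vary with the Word version):

```sh
# Create the plain text file without a trailing newline: exactly 12 bytes
printf 'Hello World!' > hello.txt
wc -c hello.txt    # prints: 12 hello.txt

# Compare against the Word version of the same two words
wc -c hello.doc    # tens of thousands of bytes, depending on Word version
```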
Not only is the two-word Microsoft Word document file hello.doc a monster, you also need a $200 word processing program to edit it properly. On my newish MacBook Pro, it still takes 45 seconds for the Microsoft Word beast to lumber into operation. By comparison, my favorite text editor, Vim (or MacVim in my case), can open a file containing all of the text of one of my novels in less than three seconds. Not only is Vim fast and rock solid, but I can create files, from the tiny hello.txt to 150,000-word novels, that can be read with hundreds, nay, thousands of different, free programs on any kind of computer in the world.
Why should we care how big the file is and whether we need special programs to read it? For starters, multiply our little file exercise by billions and trillions of files on hundreds of millions of computers all over the world. Electronic storage costs, at least at the corporate level, are soaring. Business email alone is estimated to be growing by 25-30% annually.5
Moore’s Law as applied to hard drives has lulled us into thinking that storage is not a problem, at least not on the home front. But just ask your IT officer or CIO how he or she feels about it. Hard drives are cheap, but secure, off-site, redundant back-ups of massive accumulations of email files bloated by Word files, music files, and video files cost each company millions of dollars each year. It’s called “data proliferation”6 and it’s bringing one corporation after another to its knees in the courts. Companies incur massive legal fines if they are unable to produce emails in litigation, so they err on the side of keeping everything. That policy results in huge electronic storage bills and an inability to find the needles in the data haystacks. These problems are all compounded by proprietary file formats. Not only are proprietary files monstrosities, but finding data in those files requires search and indexing programs capable of accessing dozens if not hundreds of different file formats, all created by different versions of dozens if not hundreds of different programs.
At some point we will have mandatory controls on CO2 emissions, and all of the power plants powering all of the data storage centers will be ripe targets. Is it time to rethink how and why we store gargantuan Microsoft Outlook .pst files for the sake of a few hundred emails that might be relevant to a future lawsuit? Are you beginning to think that those wise men who brought us the Unix Tools Philosophy and its adamant insistence on TEXT as the universal interface were onto something? They were: Plain text — universally readable since the days of Unix in 1970, and still universally readable, using free programs, probably forever.
The data storage crisis is complex and can’t be solved by converting fat PowerPoint files to text files, but let’s go back to our own laptops where this little experiment in plain text began. File size and electronic storage are not a problem at home . . . yet. The founding fathers of Unix did not glorify plain text only because they were worried about storage costs. No, they called plain text “universal” because it’s so easy to read, scan, search, access, pipe back and forth, share. Now. Forty years ago, and forty years from now.
Plain text it is! And in true Unix fashion, the best tools for creating and managing text (text editors and file search programs) are often not the same as the best tools for presenting text for the consumption of others (word processors, LaTeX, and other document preparation programs). I hope to post two more articles: One on text editors and Unix file utilities; and another on what might be called document presentation programs: Microsoft Word, LaTeX, Final Draft and Movie Magic Screenwriter. Programs like Highland and pandoc, Markdown and Fountain, and other mark-up and conversion systems allow writers and authors to type text once, and then convert it as needed for the Internet, for print, for e-book, screenplay, or manuscript format.
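For instance, with pandoc, one of the converters mentioned above, a single plain text Markdown master can be rendered into several presentation formats; the filenames here are invented for illustration:

```sh
# One Markdown source, many output formats (pandoc infers each
# format from the output file's extension)
pandoc novel.md -o novel.html    # for the web
pandoc novel.md -o novel.epub    # for e-readers
pandoc novel.md -o novel.docx    # for editors who insist on Word
pandoc novel.md -o novel.pdf     # for print (requires a LaTeX installation)
```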
(Excerpted from Rapture for the Geeks: When AI Outsmarts IQ, by Richard Dooling. This is the first of three articles I plan to write on plain text, text editors and other writing tools, including Fountain and Highland for screenwriters and Markdown for novelists.)
- One of the 43 Folders Life Hacks is to keep your to-do list and even your Rolodex in plain text files, as opposed to configuring one of the dozens of to-do widgets and databases du jour that come and go, often with price tags attached. For Linux and Mac users, Michael Stutz, author of the popular Linux Cookbook, 2nd Ed. (No Starch Press: San Francisco, 2004), has an excellent HOWTO called CLI Magic: Command-line Contact Management. ↩
- Author of Snow Crash, Cryptonomicon, and the Baroque Cycle trilogy of books. ↩
- The Singularity Is Near, p. 327 ↩
- In The Beginning Was The Command Line. Available at Neal Stephenson’s website (reprinted here with permission from the author). ↩
- IBM Whitepaper, The Toxic Terabyte: How data-dumping threatens business efficiency. ↩
- Data proliferation. ↩
The Kindle edition of my short story is now free for Amazon Prime members in Amazon’s Kindle store.
Originally published in the New Yorker, this harrowing tale of reverse culture shock is a cult favorite among expats who wander abroad and are unprepared for the shock that awaits them upon return to the first world.
After three years in the bush, a Peace Corps Volunteer is evacuated from war-torn Sierra Leone and sent home to Omaha, Nebraska, where he attempts to celebrate his return in a steak house. What happens next is called reverse culture shock. G.K. Chesterton put it this way: “The whole object of traveling abroad is not to set foot on foreign land; it is to set foot on one’s own country as a foreign land when one returns.”
By Richard Dooling, author of White Man’s Grave, a novel.
I wrote this opinion piece for the New York Times in the fall of 2008. Since then I’ve become addicted to financial crisis entertainment and parables of the second gilded age: books, movies, documentaries, Matt Taibbi in Rolling Stone, and the incomparable Gretchen Morgenson in the New York Times business section.
The gateway drugs were William D. Cohan’s House of Cards and the Oscar-winning documentary Inside Job by Charles Ferguson, followed by Michael Lewis’s The Big Short, Andrew Ross Sorkin’s Too Big To Fail, and Reckless Endangerment by Gretchen Morgenson and Joshua Rosner. The first serious crisis film that made me feel the fear was HBO’s adaptation of Too Big To Fail; unfortunately it’s not out yet on DVD. But Margin Call, released in theaters and via Video On Demand through Amazon and others, looks quite promising.
As far as I can tell, the $65 trillion is still missing. Nobody has been prosecuted. And the Fed and the Treasury are still trying to pretend that the money will show up one day, if they can just keep up appearances until it happens.
Rise of the Machines, by Richard Dooling, from the New York Times, October 11, 2008.
“BEWARE of geeks bearing formulas.” So saith Warren Buffett, the Wizard of Omaha. Words to bear in mind as we bail out banks and buy up mortgages and tweak interest rates and nothing, nothing seems to make any difference on Wall Street or Main Street. Years ago, Mr. Buffett called derivatives “weapons of financial mass destruction” — an apt metaphor considering that the Manhattan Project’s math and physics geeks bearing formulas brought us the original weapon of mass destruction, at Trinity in New Mexico on July 16, 1945.
In a 1981 documentary called “The Day After Trinity,” Freeman Dyson, a reigning gray eminence of math and theoretical physics, as well as an ardent proponent of nuclear disarmament, described the seductive power that brought us the ability to create atomic energy out of nothing.
“I have felt it myself,” he warned. “The glitter of nuclear weapons. It is irresistible if you come to them as a scientist. To feel it’s there in your hands, to release this energy that fuels the stars, to let it do your bidding. To perform these miracles, to lift a million tons of rock into the sky. It is something that gives people an illusion of illimitable power, and it is, in some ways, responsible for all our troubles — this, what you might call technical arrogance, that overcomes people when they see what they can do with their minds.”
The Wall Street geeks, the quantitative analysts (“quants”) and masters of “algo trading” probably felt the same irresistible lure of “illimitable power” when they discovered “evolutionary algorithms” that allowed them to create vast empires of wealth by deriving the dependence structures of portfolio credit derivatives.
What does that mean? You’ll never know. Over and over again, financial experts and wonkish talking heads endeavor to explain these mysterious, “toxic” financial instruments to us lay folk. Over and over, they ignobly fail, because we all know that no one understands credit default obligations and derivatives, except perhaps Mr. Buffett and the computers who created them.
Somehow the genius quants — the best and brightest geeks Wall Street firms could buy — fed $1 trillion in subprime mortgage debt into their supercomputers, added some derivatives, massaged the arrangements with computer algorithms and — poof! — created $62 trillion in imaginary wealth. It’s not much of a stretch to imagine that all of that imaginary wealth is locked up somewhere inside the computers, and that we humans, led by the silverback males of the financial world, Ben Bernanke and Henry Paulson, are frantically beseeching the monolith for answers. Or maybe we are lost in space, with Dave the astronaut pleading, “Open the bank vault doors, Hal.”
As the current financial crisis spreads (like a computer virus) on the earth’s nervous system (the Internet), it’s worth asking if we have somehow managed to colossally outsmart ourselves using computers. After all, the Wall Street titans loved swaps and derivatives because they were totally unregulated by humans. That left nobody but the machines in charge.
How fitting then, that almost 30 years after Freeman Dyson described the almost unspeakable urges of the nuclear geeks creating illimitable energy out of equations, his son, George Dyson, has written an essay (published at Edge.org) warning about a different strain of technical arrogance that has brought the entire planet to the brink of financial destruction. George Dyson is an historian of technology and the author of “Darwin Among the Machines,” a book that warned us a decade ago that it was only a matter of time before technology out-evolves us and takes over.
His essay — Economic Dis-Equilibrium: Can You Have Your House and Spend It Too? — begins with a history of “stock,” originally a stick of hazel, willow or alder wood, inscribed with notches indicating monetary amounts and dates. When funds were transferred, the stick was split into identical halves — with one side going to the depositor and the other to the party safeguarding the money — and represented proof positive that gold had been deposited somewhere to back it up. That was good enough for 600 years, until we decided that we needed more speed and efficiency.
Making money, it seems, is all about the velocity of moving it around, so that it can exist in Hong Kong one moment and Wall Street a split second later. “The unlimited replication of information is generally a public good,” George Dyson writes. “The problem starts, as the current crisis demonstrates, when unregulated replication is applied to money itself. Highly complex computer-generated financial instruments (known as derivatives) are being produced, not from natural factors of production or other goods, but purely from other financial instruments.”
It was easy enough for us humans to understand a stick or a dollar bill when it was backed by something tangible somewhere, but only computers can understand and derive a correlation structure from observed collateralized debt obligation tranche spreads. Which leads us to the next question: Just how much of the world’s financial stability now lies in the “hands” of computerized trading algorithms?
Here’s a frightening party trick that I learned from the futurist Ray Kurzweil. Read this excerpt and then I’ll tell you who wrote it:
But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. … Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.
Brace yourself. It comes from the Unabomber’s manifesto.
Yes, Theodore Kaczynski was a homicidal psychopath and a paranoid kook, but he was also a bloodhound when it came to scenting all of the horrors technology holds in store for us. Hence his mission to kill technologists before machines commenced what he believed would be their inevitable reign of terror.
We are living, we have long been told, in the Information Age. Yet now we are faced with the sickening suspicion that technology has run ahead of us. Man is a fire-stealing animal, and we can’t help building machines and machine intelligences, even if, from time to time, we use them not only to outsmart ourselves but to bring us right up to the doorstep of Doom.
We are still fearful, superstitious and all-too-human creatures. At times, we forget the magnitude of the havoc we can wreak by off-loading our minds onto super-intelligent machines, that is, until they run away from us, like mad sorcerers’ apprentices, and drag us up to the precipice for a look down into the abyss.
As the financial experts all over the world use machines to unwind Gordian knots of financial arrangements so complex that only machines can make — “derive” — and trade them, we have to wonder: Are we living in a bad sci-fi movie? Is the Matrix made of credit default swaps?
When Treasury Secretary Paulson (looking very much like a frightened primate) came to Congress seeking an emergency loan, Senator Jon Tester of Montana, a Democrat still living on his family homestead, asked him: “I’m a dirt farmer. Why do we have one week to determine that $700 billion has to be appropriated or this country’s financial system goes down the pipes?”
“Well, sir,” Mr. Paulson could well have responded, “the computers have demanded it.”
Richard Dooling is the author of “Rapture for the Geeks: When A.I. Outsmarts I.Q.”
Law students spend the better part of three years beetling their brows over the study of constitutional law—a mercurial, opaque, highly theoretical system of textual exegesis, which nobody but the tenured and long-winded professor pretends to understand. And the capsheaf of con-law contwistification is First Amendment law. The First Amendment protects “the freedom of speech” and has spawned an absorbing delusional system of case law, because the harder you work to understand it, the more complex and inscrutable it becomes, until its tracts and tiers and modes of analyses, its time, place, and manner restrictions, its public and private figures and forums, its symbolic expressions and invasions of privacy–all evanesce into vaporous metaphysics.
The average citizen knows only that the First Amendment does not mean what it says (i.e., “Congress shall make NO law . . .”), because Congress in fact makes laws abridging the freedom of speech (laws against child pornography, obscenity, fraud, so-called “fighting words,” and so on). To the layperson, the First Amendment must mean whatever nine robed Platonic Guardians say it means: “This political speech is good, we’ll protect it. This obscene speech is bad, we’ll call it ‘unprotected speech’ and let governments ban it.” Voila! And if that’s the case, why can’t The Nine read the First Amendment to ban the speech of the Westboro Baptist Church and protect the grieving families of fallen soldiers from lunatic religious delusions on the most painful day of their lives? Or why can’t the Court allow Congress to regulate campaign financing to protect us from wealthy Wall Street corporations and moneyed special interests?
When the Supreme Court addresses these questions, the justices steer between two competing principles of First Amendment jurisprudence that will never be reconciled (think Scylla and Charybdis, a rock and a hard place, the horns of a dilemma).
One principle is called the marketplace of ideas, first formulated by Justice Oliver Wendell Holmes, who wrote:
“The best test of truth is the power of the thought to get itself accepted in the competition of the market.” –Abrams v. United States (1919)
To Justice Holmes, there’s no such thing as good or bad speech, only speech that competes in the Darwinian marketplace and lives or dies. The Court’s job is to strike down any attempts by the government to regulate the marketplace of ideas. Of course, there are the inevitable exceptions, because Holmes was also the Justice who wrote: “The most stringent protection of free speech would not protect a man falsely shouting fire in a theater and causing a panic.”
Justice Louis Brandeis offered a parallel and sometimes competing principle of First Amendment theory eight years later, when he articulated what might be called the purpose of the First Amendment:
“Those who won our independence . . . believed that freedom to think as you will and to speak as you think are means indispensable to the discovery and spread of political truth; that without free speech and assembly discussion would be futile . . . ” –Whitney v. California (1927)
Note how Justice Brandeis suggests that the First Amendment does not exist to protect all speech; it exists to protect speech that leads to the discovery and spread of “political truth.” Brandeis wants to protect speech not because it’s an absolute value or good for the soul, but because it’s essential for democracy and civic republicanism. Quite a different concern from Justice Holmes’s marketplace, where it’s every idea for itself and survival of the fittest. In fact, Holmes would probably say that there is no such thing as “political truth.” Truth is whatever the marketplace says it is. If the Gangnam Style YouTube video beats out President Obama’s State of the Union address, so be it.
And the best description of these two ideas as applied to a single case is Stanley Fish’s Opinionator column “What Is the First Amendment For?,” published in the New York Times shortly after the Supreme Court’s opinion in Citizens United v. Federal Election Commission. Fish sees Justice Kennedy’s majority opinion and Justice Stevens’s dissent in Citizens United as a replay of the age-old war between the marketplace of ideas and the protection of “political truth”:
Kennedy, along with Justices Roberts, Alito, Thomas and Scalia (the usual suspects), is worried that the restrictions on campaign expenditures imposed by the statute he strikes down will “chill” speech, that is, prevent some of it from entering the marketplace of ideas that must, he believes, be open to all voices if the First Amendment’s stricture against the abridging of speech is to be honored. (“[A] statute which chills speech can and must be invalidated.”) Stevens is worried — no, he is certain — that the form of speech Kennedy celebrates will corrupt the free flow of information so crucial to the health of a democratic society. “[T]he distinctive potential of corporations to corrupt the electoral process [has] long been recognized.”
So who wins? Well, for now, it’s the marketplace of ideas. The Roberts Court is poised to strike down another Arizona campaign finance law. The idea behind the Arizona statute is to “level the playing field” by providing matching funds to candidates who accept public financing. But to Chief Justice Roberts, leveling the playing field is government meddling in the marketplace of ideas, as evidenced by his questions at oral argument.
“I checked the Citizens Clean Elections Commission Web site this morning,” the chief justice said, “and it says that this act was passed to, quote, ‘level the playing field’ when it comes to running for office. Why isn’t that clear evidence that it’s unconstitutional?”
For the moment, at least five justices on the Court list toward the marketplace of ideas when it comes to First Amendment analysis, but when new Platonic Guardians arrive on the bench, they may have a different opinion.