Daniel Suarez's books use today's world as the background for a multi-faceted story centered on the heritage of a computer genius, who uses relatively dumb (also called „narrow") artificial intelligence systems, derived from the „learning" algorithms that control characters in computer games, to completely change the world.
Frank Rieger: When I started reading „Daemon" I expected the occasional uncomfortable cringe that seems inevitable for me, as someone deeply immersed in the development of technology, when reading technology-heavy near-future SciFi. Many authors seem to have only a quasi-magical grasp of technology, for instance of how computer hacking is actually done. Your books are entirely different: they are very well written for a general audience and still contain almost no inaccuracies or grossly implausible technology details.
However, in your books, the main „non-character" is a vastly complex and implausibly accurate conglomerate of artificial intelligence systems, which seems far beyond what is doable in software development today. Sobol, the dead computer-game programmer genius who left them behind, must have possessed supernatural powers to write all that software, test it, and model the strands of possible outcomes and events. Do you expect major breakthroughs in the productivity and quality of software design that would make such a hyper-complex system even remotely possible to design and program? Or do you think that self-improving algorithms can, within the next few years, gain traction in what has so far been hard intellectual labor in software and algorithm development?
Daniel Suarez: The Daemon is, of course, fiction, but our world is increasingly automated, interconnected, and data-driven. Narrow artificial intelligence bots already make life-changing decisions about and for large segments of the human population -- be they high-frequency stock trading bots or the black-box algorithms that determine individual credit scores. These proprietary systems alter human behavior as we strive to improve or maintain our scores within their framework -- in much the same way players are driven to reach higher levels in games. And when these systems err, it is very often humans (not the bots) who suffer. As long as they are profitable, these systems eventually become institutions unto themselves, attended by a caste of high-tech priests who alone know their dark mysteries. Not unlike the Daemon in my books.
So no, I don't think a *major* breakthrough would be required - just an incremental one. The Daemon is a transmedia news-reading, human-manipulation engine. At its heart the Daemon is a logic tree -- albeit a distributed and complex one. In its initial, non-crowd-sourced incarnation, the Daemon had a short list of goals: 1.) to infect corporate networks, 2.) to attain human followers (using consumer data and social networks as a map), and 3.) to manage the activities of those followers to achieve tasks. The Daemon poses tasks for humans to achieve, provides incentives (or delivers threats) to get them done, and then scans multiple public newsfeeds to determine when/if those tasks are completed. In no case does the Daemon actually 'understand' the events it's monitoring; it relies instead on its human network to serve as its eyes and ears -- people invested in the survival of the system.
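Stripped of its fictional trappings, the loop described here (pose a task, attach an incentive, scan public newsfeeds for evidence of completion) is simple enough to sketch in a few lines. Everything below -- the task structure, the keyword matching, the payouts -- is an invented illustration, not anything taken from the novels:

```python
# Toy sketch of the task loop described above: the system never
# 'understands' events, it only matches keywords in public newsfeeds.
def run_task_loop(tasks, newsfeed_lines):
    """Mark a task complete when all its evidence keywords appear in a
    single newsfeed line; record the attached incentive as a payout."""
    payouts = {}
    for line in newsfeed_lines:
        text = line.lower()
        for task in tasks:
            if task["done"]:
                continue
            if all(kw in text for kw in task["evidence_keywords"]):
                task["done"] = True
                payouts[task["id"]] = task["incentive"]
    return payouts

tasks = [
    {"id": "t1", "evidence_keywords": ["server farm", "offline"],
     "incentive": 500, "done": False},
    {"id": "t2", "evidence_keywords": ["merger", "announced"],
     "incentive": 200, "done": False},
]
feed = [
    "Regional server farm goes offline after power fault",
    "Weather: sunny with a chance of rain",
]
print(run_task_loop(tasks, feed))  # → {'t1': 500}
```

The point of the sketch is how little intelligence is required: completion detection is keyword matching over feeds that humans, not the software, make meaningful.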
Certainly development of a distributed narrow artificial intelligence like the Daemon is beyond the capabilities of a single human, but so is most software. The fictional antagonist in my books, Matthew Sobol, was the CTO of a successful game company and harnessed the skills of numerous developers who were unaware of the real purpose behind their work (e.g., Joseph Pavlos, whom we meet in the very first chapter). Sobol initially seeded the system with tasks, but as new Daemon members gained power, they too began to issue and manage tasks of their own -- likewise expanding upon the Daemon's logic tree. By the time the Daemon had millions of followers (some at high levels), it was no longer the same construct that Sobol envisioned. It was evolving under the auspices of its own priestly caste.
The major breakthrough in productivity and (arguably) quality you're thinking of, then, is already here: crowd-sourcing. The 'self-improving algorithm' I would point to is the constantly evolving labor of individual humans working within a reputation system -- in the case of the Daemon, one that will kill you if you try to wreck the system but which will richly reward you for improving it. Rather than decide on its own, the Daemon gauges success by the upvotes/downvotes of millions of its human members.
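The reputation mechanic described here -- richly rewarding improvement while punishing sabotage harder -- can be sketched as a toy scoring function. The weights and thresholds are invented purely for illustration:

```python
# Toy sketch of an asymmetric reputation update driven by member votes.
def apply_votes(reputation, votes, reward=10, penalty=25):
    """votes: list of +1/-1 from members. Net-positive contributions
    are rewarded; net-negative ones are punished more heavily."""
    net = sum(votes)
    if net > 0:
        reputation += net * reward
    elif net < 0:
        reputation += net * penalty   # net is negative: reputation drops
    return reputation

print(apply_votes(100, [+1, +1, +1, -1]))  # net +2 → 120
print(apply_votes(100, [-1, -1]))          # net -2 → 50
```

The asymmetry is the design choice worth noting: a system that penalizes wrecking more steeply than it rewards improvement biases its members toward preserving it.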
Would such a system work without all sorts of errors? No, but then again the Daemon has no central core. Sobol strove to remove single points of failure. So while failures might be fatal for one thread (or for one human user, for that matter), they would not affect the overall organism. I did, in fact, write scenes where the Daemon erred or left individual followers marooned in a hopeless logic loop. However, these lapses (accurate though they may be) detracted from the thrust of the overall storyline, and I edited them out of the final manuscript.
Frank Rieger: In your books, the humans who submit to the Daemon, as well as those who resist it, become part of a pre-determined story. They move through scripted sequences of pre-arranged, pre-conceived events where their choices are just branches in the modeling trees of Sobol's software. Taken to its conclusion: who is the storyteller in this universe, if reality itself has become the narrative? Is everything becoming part of the novel; is poetic logic, in the end, the same as algorithmic design? Who is telling the new stories in Sobol's universe if everything has become part of his game-narrative?
Daniel Suarez: Humankind has always told stories -- these are the recurring myths that depict our hopes and fears, and it's only fitting that these myths inform the worlds we create as well. In the case of the Daemon, Sobol created an epic cycle where the heroes and villains vie for supremacy of a virtual world overlaid upon the GPS grid of the real world. And yet, it's also a world where once mastery is attained, one can change the story - to add or subtract from it, or create entirely new myths (as the darknet operatives in fact do with one character).
So the Daemon's darknet quickly outgrows even its creator. Sobol could not have predicted what the users would do with the world he set in motion. In a very real sense, not even Sobol knew how the story would end. In fact, he hoped it would never end.
Frank Rieger: I first encountered algorithms that „learn" from human behavior when looking at companies like Google and Amazon. They treat every search request, combined with the information about which link in the result list you actually click, as an answer. Humans get classified based on their actions and reactions, sorted into virtual drawers alongside individuals who seem similar. Where do you think the limits of human behavior prediction based on these techniques lie, given that we interact more and more with these systems?
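The „virtual drawers" described here amount to nearest-neighbor classification over behavioral data. A minimal sketch, with invented topic categories and click counts, that sorts a newcomer into the drawer of the most similar existing user via cosine similarity:

```python
# Toy illustration of behavioral classification: each user is a vector
# of click counts per topic; a newcomer is filed next to the most
# similar existing user. All profiles and numbers are invented.
import math

def cosine(u, v):
    """Cosine similarity between two click-count vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# click counts per topic: [tech, sports, politics]
profiles = {
    "alice": [9, 0, 3],
    "bob":   [0, 8, 1],
}
newcomer = [7, 1, 2]

best = max(profiles, key=lambda name: cosine(profiles[name], newcomer))
print(best)  # → alice
```

Real systems use far higher-dimensional vectors and more sophisticated models, but the principle -- you are predicted by the people your clicks resemble -- is the same.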
Daniel Suarez: I don't think something along the lines of Asimov's 'psychohistory' from his Foundation Trilogy is looming on the horizon, but I will say that human behavior in the modern world is on the whole surprisingly predictable. And where prediction fails, there's always manipulation. Advertising is, in the final analysis, manipulation, and we are steeped in advertising from the moment we wake up until the moment we close our eyes each night.
The human brain evolved over hundreds of thousands of years to cope with its environment, and we can't fundamentally change our 'wiring' overnight. However, in a rapidly evolving technological world our slow, biological version cycle puts us at a disadvantage against those who'd like to push our mental buttons. We're a stationary target. In some ways this is akin to being forced to run an unpatched version of Windows even as malware authors are scanning our source code for flaws. What drives humans? What are our weaknesses, predilections, and passions? As marketers and others delve into the oceans of consumer and social media data now at their disposal, fundamental knowledge about ourselves that even *we* don't know will be bought and sold on a daily basis.
Likewise, functional magnetic resonance imaging (fMRI) is being used to map the activity centers of the brain in real-time. This is meaningful scientific inquiry, but we should also be aware that we're charting the operating system of the human brain, and odds are that the early adopters in this arena will be folks who want to sell you stuff -- whether it's skin cream or political ideology.
So predictions about human behavior will improve, but they will never be completely accurate because of the variations in individual humans. However, the fat part of the bell curve, where predictions match reality, will nonetheless make prediction a profitable field of study. My concern is that, as with other widely adopted systems for measuring human performance, human outliers may eventually find themselves viewed as 'suspect' because they don't fit the model -- rather than the other way around.
Incidentally, for everyone who lists their kids' or pets' names on their Facebook page, and then uses those names (possibly combined with the current year) in any of their online banking or other passwords -- all I can say is: there are malware bots out there that will make you a believer in prediction. And if I just convinced you to change your passwords -- that's an example of manipulation...
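The prediction warned about here needs no intelligence at all: if a password is built from a publicly listed pet name plus a year, the candidate space is tiny. A toy sketch (names and year range invented) that simply enumerates it:

```python
# Why 'pet name + year' passwords fall quickly: the candidate space
# is small enough to enumerate exhaustively in an instant.
from itertools import product

def candidates(names, years=range(2005, 2012)):
    """Yield every name+year combination in two common casings."""
    for name, year in product(names, years):
        yield f"{name}{year}"
        yield f"{name.capitalize()}{year}"

scraped = ["rex", "bella"]          # e.g., pet names from a public profile
guesses = list(candidates(scraped))
print(len(guesses))                 # 2 names x 7 years x 2 casings = 28
```

Twenty-eight guesses is nothing to an automated attacker; the lesson is that any password derivable from public data is effectively already known.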
Frank Rieger: The society you describe in your books is essentially that of today's world. A shadowy caste of the super-rich use whatever means they deem necessary to keep a status quo that is designed to channel money and power only one way: upwards. The ruthlessness of the ruling circles is breathtaking, but no longer surprising in the context of what has happened to our world in the last ten years. Do you think that the majority of „security laws" enacted in our time are just preparation for the day when our current system is no longer stable? Where do you think this growing divide between the ultra-rich and the really poor will eventually lead?
Daniel Suarez: I'm convinced the sudden fixation on 'security laws' (both the cyber and physical variety) stems from a realization of how vulnerable the modern world - and specifically the global economy - has become to even minor disruptions. Hand in hand with this trend goes the abandonment of the rule of law itself in the name of 'national security' (e.g., the loss of habeas corpus here in the United States, the ongoing warrantless wiretapping, etc.). The swift abandonment of cherished and hard-won rights points to something seriously wrong.
Historically a society abandons or perverts the rule of law when the sovereign authority finds itself under threat, but I would argue that in many nations government itself is no longer the supreme political authority, but merely an agent of the real authority - which is the multinational corporate ecosystem. And it is that system which is under threat.
How? In a word: efficiency.
In the past century corporatism has grown to dominate every industry throughout the world, and the guiding principle of the corporate form is efficiency. Corporate consolidations in agriculture, media, finance, telecom, energy, etc. have eliminated large amounts of 'fat' from our infrastructure, increasing profits and centralizing management. But a reasonable amount of 'fat' actually serves a purpose in the natural world -- it helps an organism survive sudden disruptions. And sudden disruptions come sooner or later -- whether a disruption of raw material inputs (e.g., oil, fresh water, capital etc.), major natural and manmade disasters, subversion by outsiders (a terrorist bombing), or unscrupulous insiders (Wall Street bankers).
A truly competitive marketplace served by diverse, competing concerns - reasonably regulated to ensure social standards for workers and consumers - once supplied this 'fat'. It was not supremely efficient, but neither could a single corrupt company nearly bring down the global economy, nor could a malware infection paralyze an entire society's supply chain or financial markets. Rampant deregulation in the U.S. and elsewhere removed limitations on monopoly ownership of critical industries, gutted the tax base used to maintain infrastructure, and shredded the social contract between employees and their employers. The result was a precision system for transferring wealth upward that has made itself vulnerable to subversion from within and without.
There are numerous single points of failure in our infrastructure that no one in power wants to admit to, and no one wants to pay to fix. So I think the powers-that-be have begun to hunker down in anticipation of the social disruption that will occur should the public infrastructure or the economy fail. Obviously, the authorities would like to delay this failure as long as possible, but in the absence of a concerted effort to address the structural flaws, the global economy will collapse sooner rather than later.
What follows will be both a time of danger and of opportunity. Demagogues will likely rise seeking to capitalize on popular anger and resentment, and they are likely to scapegoat one or more social groups for the collapse. But centralized control of media and the Internet makes it likely that the plutocratic class will not be the focus of popular anger. Instead, look to illegal immigration, Islamic radicalism, secular humanism, or some other demographic segment to take the blame.
Will this create a series of high-tech neofeudalist city-states where the wealthy hop from walled lily-pad to walled lily-pad via NetJet subscriptions? Only time will tell.
To clarify: I am not against 'wealth'. If someone invents something or does something innovative or inspiring, they deserve to be rewarded. But there is something seriously wrong when the most common source of great wealth today involves 'gaming' the financial system, destroying middle-class jobs, and creating nothing of physical value.
For those interested in an excellent book on the trajectory of civilizations, I'd recommend 'The Evolution of Civilizations', written by the U.S. historian Carroll Quigley back in 1961. In it, he posits that civilizations arise from a unique advantage -- be it political, environmental, economic, or military in nature. This is termed a social 'instrument'. Over time the 'instrument' becomes an 'institution', with a whole caste dedicated to maintaining and preserving that unique advantage. This group does everything to preserve the status quo -- from which it draws all of its power and prestige. However, the world always changes, and it is this resistance to change among those who wield power that helps topple civilizations, by preventing nimble adaptation to new conditions.
Frank Rieger: I fully agree that the mad drive for efficiency is at the core of the problems that have beset our planet in recent years. I came to a similar conclusion from the question of what has made our western societies so increasingly inhumane and uncaring. One answer, for me, is that all types of human labor that can be standardized, so that they become digitally quantifiable, analyzable, optimizable, and finally structured as parameters in algorithmic frameworks, are bound to become mindless low-wage jobs, with very few exceptions. The employees in franchise „restaurants" are one of the more visible examples. Only if the human element, the randomness, the creativity, is contained or eliminated as much as possible can reliable predictions be made and business-process optimization be run to its full extent. When once asked „what is the most damaging piece of software written so far?", I answered: Excel, because it encourages the de-humanization of humans into parameters in abstract profitability models - „human resources".
You begin with a variant of that thought in the first pages of Daemon, where someone is killed by an electronic work order issued by a narrow artificial intelligence construct. The order is dutifully executed by a janitor who does not even know what the ultimate effect of fulfilling it will be - the execution of a human being. Later on, Sobol's Daemon uses the same technique time and again for its interactions with the real world. Stanislaw Lem once advocated the idea that any work that could be done by a machine should be done by a machine, to free humans for more interesting and creative work. What actually happened seems to be something entirely different, with even „think-work" getting shaped under assembly-line paradigms, based on text-mining algorithms, semantic analysis, and machine-learning artificial intelligence techniques. Do you see a way out of this de-humanization tendency, short of an all-out collapse of current civilization? Is there, for instance, sensible regulation that could realistically stop society and economy from crossing the boundary from healthy-lean to vulnerably-skinny even further (or even roll back the trend) without massive disruptions?
Daniel Suarez: I do see a way out short of collapse, but entrenched interests will not willingly make the leap to a less efficient, more resilient society. Even if the officers of multinational corporations recognized the risks inherent in running too lean, the market would likely punish their stock price if they pulled back from hyper-efficiency. In response shareholders would sue, and their board would eject them from the corner suite before meaningful change had been implemented.
Instead, the lead must be taken by populist effort -- not mere protest, but building and experimenting with new economies, digital currencies, augmented reality, and open-source mesh networks to weave a new economic and social fabric that doesn't so much topple self-appointed gatekeepers, lobbyists, and legacy power-centers as *circumvent* them. Such a system would be launched first in embryonic form, initially attracting adherents as they fall out of the existing economy, then catching on as a critical mass joins the new system. One might imagine a transitional phase where people keep one foot in the old economy and one in the new, providing a chance for a smoother transition. Think how many skilled unemployed people there are in the world who would desire a fresh start in a world where their debt -- the original sin of free markets -- is washed away.
Critically, the construction and maintenance of network nodes must be a responsibility of individual communities. The 'sensible regulation' then is achieved by a society whose citizens physically control their network infrastructure -- an infrastructure that resists over-centralization by its very design.
Frank Rieger: In your second book „Freedom™", you introduce the idea of self-sustaining communities, based on science and tech breakthroughs that are already viable but go unused because they would not be profitable in the current system. I was strongly reminded of the settler communities that formed in Europe between the wars (mostly around religious ideas), precisely to provide an exit from the deep crisis that had befallen society. You seem to have invested quite a bit of time looking into promising technologies like the CR5 process to create liquid fuel from thin air and solar energy.
Do you think this will actually happen? Building alternatives to the current system, communities based on the idea of sustainable decentralized high-tech economies? Does mankind have enough unused or unpublished scientific advance accumulated that could be used to sustain such a system?
Daniel Suarez: The CR5 technology (short for Counter Rotating Ring Receiver Reactor Recuperator) that I depict in „Freedom™“ uses a ferrite material and solar energy to chemically reenergize carbon dioxide into carbon monoxide (essentially reversing combustion). This creates the building blocks necessary to synthesize a liquid combustible fuel like methanol or other petrochemicals from the air.
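The underlying chemistry can be summarized as the generic two-step metal-oxide redox cycle commonly used to describe such solar thermochemical reactors; the actual CR5 prototype's ferrite composition and stoichiometry may differ, so treat this as a schematic:

```latex
% Step 1: concentrated solar heat thermally reduces the metal oxide,
% releasing oxygen
MO_x \;\xrightarrow{\;\text{solar heat}\;}\; MO_{x-\delta} + \tfrac{\delta}{2}\,O_2

% Step 2: the oxygen-hungry reduced oxide strips an oxygen atom from CO2
MO_{x-\delta} + \delta\,CO_2 \;\longrightarrow\; MO_x + \delta\,CO

% Net effect: combustion run in reverse
CO_2 \;\longrightarrow\; CO + \tfrac{1}{2}\,O_2
```

The resulting carbon monoxide, combined with hydrogen, is the standard feedstock for synthesizing methanol and other liquid fuels.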
Whether there are enough of these little-known technologies to support alternative communities is a fair question. There is already a revolution underway in micro-manufacturing -- so-called 'fab (or fabrication) labs'. These computer-controlled milling and deposition systems can create custom parts from computer models, and when linked to wider networks give a community the ability to produce custom equipment and electrical components beyond the sophistication of local designers. It is definitely *not* cost-effective in a mass-production market, but in a world of $350/barrel oil and political upheaval, shipping tools 10,000 miles from low-wage production centers becomes impractical.
Whether micro-manufacturing and local agriculture is viable really comes down to a question of local energy supplies -- because energy lies at the root of modern societies, and with enough energy even matter itself can be transformed (as with the CR5). What's required then is more local, sustainable power generation. Toward that end the free market is actively examining any and all clean energy alternatives able to compete with fossil fuels on price and yield, but so far fossil fuels outperform just about everything except nuclear power in cost/energy-unit (unless, of course, you factor in the astronomical costs of a nuclear accident which might render broad swaths of land permanently uninhabitable).
However, we should be careful about using the metrics of the current economic system to weigh the costs and benefits of alternative energy sources. Powerful financial interests run effective lobbying campaigns worldwide, securing tax breaks and waivers of environmental regulations for themselves while heaping burdensome restrictions on potential competitors. Likewise, a number of 'costs' of fossil fuel production are externalized onto the public at large and don't factor into the price per unit of energy: for example, the cost of climate and environmental damage (like the BP spill in the Gulf of Mexico), or the prohibitive cost of global military operations to secure distant fuel sources -- with the attendant loss of life, torn social fabric, etc. When these are factored in, it becomes apparent that the status quo carries hidden costs, and the cost per kilowatt for wind, solar, etc. looks less outlandish.
Finally, if popular action restructures our society into local economies (so that a head of lettuce doesn't travel on average 1100 miles to get to market), and people begin to live and work locally again -- connecting to the world at large via mesh networks -- then energy consumption from transportation declines further, and the increased cost per unit of clean energy is less of a factor.
Will we be able to maintain our current living standard without some miraculous new technology to save the day? We're about to find out. However, you need to be *living* to have a chance at a living standard.
Frank Rieger: The question of what system of government is adequate to guide mankind through the coming crisis runs through Freedom (TM) as a literal thread. You introduce the „Scale of Themis", a measure of power distribution in the darknet communities, showing the degree to which power is held by individuals versus the populace, as a gauge of the political climate in these communities. All the while, the Daemon assumes the role of an automated benevolent dictator that enforces the basic rules initially set by Sobol and later by the collective will of its subjects. To eradicate any doubt about the effectiveness and justness of this basic-rule enforcement, you introduce the fMRI brain-scanner system as the ultimate investigator and judge of people's intentions. That enforcement is centered on the concept that privacy has become virtually nonexistent for the citizens of the Daemon's darknet -- all voluntarily, in the name of survival, and because they realize the extent to which they are made transparent by the data about them anyway. For me, as a firm believer in the role of privacy as a safeguard of the individual, this concept is rather dystopian.
Your vision is seemingly inspired by the game-masters in today's multiplayer online games, like World of Warcraft, who wield god-like powers, can look into any interaction, and are the ultimate arbiters in any conflict.
Do you really think mankind is in such dire straits that we need to relinquish the enforcement of our basic social rules to algorithms, in the hope that they will be better than us at keeping things just and sane? Is your mode of thinking that if we are subservient to machines anyway, they had better be good machines?
Daniel Suarez: I certainly don't think we should be subservient to machines (or the software algorithms that power them) -- but that transformation is already well underway. The computer network and the data moving across it have quite suddenly *become* the platform for human society, and our society cannot help but reflect the structure of this network. It is not a democratic design, for there are vast power imbalances within it and invisible, unaccountable powers lurking there as well (be they corporate, government, or criminal in nature). And it is within this ocean of data that software bots 'swim', as in the primordial sea. They hold power over the data (sorting, changing, and filtering it) -- and that data holds power over us. For society views us as the sum of our data.
In this sense algorithms are the laws of the 21st century -- the numberless, obedient enforcers of a burgeoning, hierarchical social system. They parallel our official legal framework and increasingly have much the same authority over us (although their rise to power went largely unnoticed). Algorithms codify the will of their creators (the authority), and by establishing rules and parameters they limit human initiative, structuring our interactions and channeling the spectrum of possibilities into finite pathways. If our input doesn't secure a satisfactory result in this new system, we have no choice but to change it -- because it's *us*, not the algorithm, that must change.
Laws and algorithms share some de-humanizing similarities in that respect. But there is one fundamental difference between judicial and 'algorithmic' law: the human is not presumed *innocent* in this new algorithmic framework. Instead, algorithms are presumed benign (or 'innocent') until proven wrong ('guilty') -- and even then, the tendency is to believe that the system has factored in some subtle complexity those viewing it from outside don't comprehend. In my seventeen-year career in IT, I've seen a great deal of human decision-making already ceded to algorithms, and I expect that trend will continue.
In both Daemon and its sequel, Freedom(TM) (or 'Darknet' in Germany), I postulate a world where software bots enforce the social order, up to and including the power of 'high justice.' However, the framework exists in that world for humanity to pick up the mantle of justice from these bots (thus, the symbolic form of Themis -- the blind goddess of justice). The charge is laid upon humanity to justify its freedom, and the path is quite literally set before them.
In real life that path is not so clear, but the looming threat of techno-empowered despotism is very real indeed.
I think the fundamental question of our time is whether technology will liberate us or enslave us. We need to get busy answering that question and not leave the outcome to chance. Countless future generations are depending on us to take the right path at this crossroads. As with all complex systems, the decisions we make early on will create inertia that makes later revisions more difficult. Thus, if we continue to erect a system that centralizes authority and decision-making, and that relies upon ever-increasing economies of scale and uniformity to maximize yield and ensure 'security', then we're building something at odds with democracy, something that resists necessary change. It's vital that we design systems that complement and support democratic structures -- that we hard-code our values into the very DNA of this new technological world to ensure that incremental change, diversity, and distributed decision-making thrive in the 21st century.
With regard to the dystopian aspects of functional magnetic resonance imaging (fMRI) brain-scanners in both of my books, I should point out that these are not *blueprints* for a society. They are a cautionary tale intended to get people thinking about both the consequences and opportunities now before us. fMRI as a means of reading human intent is something that is *coming*, and no amount of wishful thinking can stop a technology once its time has come.
We can only influence how that technology is used. Should we then contemplate a Bill of Rights for the 21st century (a Bill of Rights 2.0)? Would such a document assert the right that a human brain be secure against unlawful search without a court-ordered search warrant, specifically detailing the items to be 'asked' under fMRI interrogation? Likewise, do we add a clause asserting that all living organisms are intrinsically self-owning -- that their genes cannot be copyrighted and exclusively controlled by biotech firms (which would otherwise hint at a future 'legal' basis for slavery, etc.)?
While we're at it, we should probably re-issue the original Bill of Rights, since some folks in power seem to have lost their copy...
Frank Rieger: Today, even data-driven algorithms are still controlled by humans. Sometimes - as with automated trading mechanisms - that control extends only to the question of whether they fulfill their master's general goal or not: a sort of fire-and-forget system where, due to the overall complexity, the human master has only a kill switch as the ultimate means of control. But still, human intent and the human programmer are at the core of these algorithms. Isn't it necessary to focus precisely on these humans, on the implicit power they wield by either programming the algorithms or (more often) setting the goals for the programmers and designing business systems and rules around them? Aren't the ethical and moral guidelines these humans have (or lack) becoming supremely important due to the huge amplification of power that artificial intelligence techniques provide?
Daniel Suarez: Talented programmers theoretically wield great power, but software of any complexity is developed by teams, and those teams have timelines and budgets. Most of the danger arises, in my opinion, not from malicious intent but from recklessness. Free-market pressures cause companies to announce overly aggressive development timelines and low-ball budgets that reduce code quality and unit testing. High staff turnover, spaghetti code -- all of these things tend to introduce gaping security holes into modern software without any nefarious plot required. That plot comes later, when a zero-day exploit uncovered by anyone from a blackhat hacker to a government intelligence agent is used to penetrate ten million machines running a popular, though flawed, program.
Whom do you focus on in that scenario? Certainly a focus on quality software development and ethics would include programmers, but it would necessarily have to include IT executives as well as company leadership -- and while you're at it, include Wall Street analysts and investors who need to think twice before punishing a company's stock price on news of a delayed software delivery date (which might have been the rational decision, after all). The list of potential culprits soon gets so long that it's no longer a focus.
And then there's the issue of *emergent behavior* -- software interactions that produce behavior no one anticipated (there have been several such 'hiccups' on Wall Street that vaporized billions of dollars in minutes). This raises the question of whether algorithms are consciously controlled by humans at all. Is it even possible for us to comprehend this level of complexity? I've noticed a tendency among humans to accept what machines tell them -- 'black-box' thinking. When a given algorithm or system of algorithms delivers a benefit, over time its output becomes accepted as 'fact'; worse still, its purview tends to expand beyond the realm originally envisioned by the development team. This expansion is often initiated by factions outside the development team, in response to business or political pressure to capitalize on success. When you have the world's best hammer, after all, everything starts resembling a nail.
As one example, many of the quantitative analysts (or 'quants') on Wall Street designed their equity risk-assessment models for in-house research, but in some cases the sales or marketing divisions pushed to have these models assign public 'risk' values to equities, to differentiate the firm to investors and garner more business -- as if the numbers themselves were facts based on thinking too sophisticated for mere mortals to contemplate. In such a scenario the programmers might have no knowledge of how their systems are later used (or misused). They may no longer even be with the firm, yet their logic lives on, with their system used in ways for which it was never intended. In the case of equity risk analysis, the result damn near brought down the global economy. It might still.
Rather than rely upon ever-tighter ethical vetting of programmers (or for that matter, fMRI brain scans!), we need to do what nature does -- compartmentalize and diversify our systems. Nature punishes single points of failure because some level of failure is inevitable. Our focus needs to be on containing cascades of failure, and increasing our ability to swiftly recover from those failures. That limits the damage when we're wrong, and we need to acknowledge the fact that we will sometimes be wrong.