
Yuval Noah Harari

Nexus: A Brief History of Information Networks from the Stone Age to AI

Nonfiction | Book | Adult | Published in 2024


Summary and Study Guide

Overview

Nexus: A Brief History of Information Networks from the Stone Age to AI (2024) by Yuval Noah Harari is a nonfiction book that explores the profound impact of information networks throughout human history, tracing their development from the Stone Age to the present AI era. Nexus examines how societies have utilized information to shape civilizations, influence public opinion, and maintain power. 

This guide uses the 2024 Fern Press edition of Nexus.

Summary

In the Prologue to Nexus, Harari presents his concern that the rise of artificial intelligence (AI) will create an existential crisis for humanity. Myths and legends have long warned humans against summoning powers that they cannot understand or control, and AI may be the latest iteration of this danger. However, Harari also takes apart the received wisdom about information itself: the naive view that information is fundamentally good and that more information inevitably brings more truth and wisdom. That assumption, he argues, is untenable at a moment when humanity's self-destructive tendencies are more pronounced than ever.

In Part 1, Harari explores the fundamental concept of information. He posits that information is a building block of reality, which makes its use contentious, as people vie to establish competing versions of reality. Information does not always reflect reality, Harari suggests, and it is not inherently the same thing as truth. The primary purpose of information is connection: Regardless of whether it is true or false, information connects people. The networks built by connected people are the foundation of society and have shaped the course of human history. Change and progress in human society rely more on connectivity than on accuracy or truth, even if greater connection does not automatically lead to greater wisdom or truthfulness.

Harari believes that humanity is so successful because of its ability to cooperate in large numbers. This cooperation is enabled by the stories people tell one another, from myths to fictions to religions. By agreeing on these shared narratives, humans can connect and organize; stories, in Harari's view, are one of the principal ways in which humans organize information. He cites religions, totalitarian personality cults, and historical narratives as examples. Harari argues that such narratives should be imbued with self-correcting mechanisms; otherwise, they cannot be rewritten according to contemporary needs. He points to the self-correcting role of amendments in the United States Constitution, in contrast to the supposedly infallible rules established by holy books like the Bible.

Harari then examines the role of the document throughout human history. Like AI, the document was a major technological leap that changed the way humans process information. Bureaucracy was a similar leap, and many of the problems that have plagued bureaucracy are likely to carry over into the age of AI. Harari draws on his own family history: His grandfather was forced to flee his homeland when the Romanian state's bureaucracy was turned against its Jewish population.

The fallibility of humans demonstrates the need for self-correcting mechanisms in many parts of society. While humans have often turned to God as a way to evade their own responsibility, such supposedly infallible narratives cause problems of their own: The word of God is relayed and interpreted by priests and other figures, who appropriate power for themselves and manipulate religious followers. This human search for an infallible source of truth, Harari suggests, heralds the potential for an unquestioning reliance on AI in the future, a reliance that he believes should be avoided.

He cites the way holy books were assembled by human editors and then presented as the word of God as evidence that seemingly infallible systems can be riddled with flaws. By contrast, Harari points to the peer-review system in academia and science as an example of how self-correcting mechanisms can move humanity closer to the truth. Harari also examines the contrast between democracies and dictatorships, comparing the two systems to competing types of information network: Dictators seek to centralize the flow of information through themselves, often with disastrous results, whereas democracies distribute information across many independent channels. He also criticizes populist politicians who offer simple narratives about complex issues. The public veers toward these simple narratives even when they are incorrect, and the self-correcting mechanisms of democracy should protect against such populist lies.

In Part 2, Harari turns to the information revolution of the present day. Computers have transformed society in many ways, but Harari frames this as a gradual social evolution running from stone tablets through bureaucracy to AI. He is concerned, however, about AI's potential power to act independently of human input. This may undermine democracy and even the foundations of human interaction if people can no longer trust that the world around them is the product of genuine human (rather than computer) action. This change has also transformed the possibilities for surveillance. Previously, dictatorships faced a hard limit on their capacity to surveil people because surveillance required human agents; now, unsleeping computers can monitor the public every second of the day.

Similarly, social credit systems may change human behavior in harmful ways and eliminate what remains of privacy in the modern age. Harari compares this potential for surveillance to the totalitarianism of the past, particularly the Soviet Union under Joseph Stalin. Computers must be carefully controlled and monitored so that they do not inadvertently develop totalitarian or discriminatory biases. Harari cites recent cases in which AI and algorithms have fueled outrage and hate; in particular, he points to Facebook's role in the ethnic cleansing of the Rohingya people in Myanmar, where violent lies were amplified by unthinking computers, leading to real-world suffering.

In Part 3, Harari considers the ways in which AI will affect society in the future. He compares the potential social upheaval to the Industrial Revolution of the 19th century: Like industrialization, AI offers both benefits and threats. Democracy itself could be threatened by the relentless pace of AI, he suggests, and he criticizes the rise of politicians like Donald Trump who attach little importance to truth or democracy. Harari advocates for transparency in the creation of algorithms that could come to govern the lives of many millions of people. He also compares a potential ban on fake humans, such as bots posing as people, to the longstanding ban on counterfeit currency.

If all power is entrusted to algorithms, Harari warns, totalitarianism may be the result. Modern dictators pose a particular threat, especially because governing emerging AI technology demands global unity: The threats of AI transcend national borders, so international cooperation is essential. Harari notes that change is the only constant in human history, so in one sense the upheaval brought about by AI is nothing new. In a short Epilogue, he urges humanity to remain vigilant so that the development of AI is handled with due care.
