In 2003, the Oxford philosopher Nick Bostrom published a fifteen-page paper in Philosophical Quarterly titled "Are You Living in a Computer Simulation?" The paper did not claim that we are living in a simulation. It claimed something more precise and more unsettling: that a specific logical trilemma forces us to accept one of three options, the third of which is the simulation hypothesis itself, and that the other two are surprisingly difficult to defend. The argument was not science fiction. It was probability theory applied to the plausible trajectory of any technological civilization, including our own.
The paper spread slowly through philosophy departments, then rapidly through the wider culture as the twenty-first century's sense of unreality metastasized. By 2016, Elon Musk was stating at a tech conference in California that the probability we are living in base reality was "one in billions," and the audience did not laugh. By 2022, David Chalmers — the philosopher who, in 1995, had coined the phrase "the hard problem of consciousness" — had published a six-hundred-page book called Reality+ defending the thesis that even if we live in a simulation, what we experience is still real. The simulation hypothesis had moved, within two decades, from the margins of speculative philosophy to a live option within mainstream analytic thought. The question it raises is no longer whether we should take the hypothesis seriously as a logical possibility. The question is what follows, practically and metaphysically, from the fact that we cannot rule it out.
Bostrom's argument begins with an assumption he calls substrate-independence: the thesis that consciousness arises from a certain pattern of information processing and does not depend on the specific physical substrate (carbon-based neurons versus silicon-based transistors) on which that pattern runs. This is a position that most functionalists in philosophy of mind already accept, and it is also the foundational assumption of mainstream artificial intelligence research. If substrate-independence is true, then a sufficiently detailed computational simulation of a human brain would produce genuine conscious experience — not a representation of consciousness, but consciousness itself.
From this assumption, Bostrom constructs a probability argument. Suppose that a technological civilization eventually reaches what he calls a post-human stage — a level of development at which its computational resources vastly exceed anything currently available. A post-human civilization with access to planetary-scale computing could run detailed simulations of entire historical periods of its own past, populated by simulated conscious beings. Bostrom calls these ancestor simulations. The computational cost of such a simulation, estimated on the basis of the information-processing capacity of a human brain and the number of humans that have ever lived, is enormous but not impossible — Bostrom's rough estimate is on the order of 10³³ to 10³⁶ operations for a full ancestor simulation of human history. A mature post-human civilization, harvesting energy at the scale of a star or a galaxy, could run such a simulation many times over.
Given this, Bostrom argues, at least one of the following three propositions must be true.
The first proposition: the fraction of civilizations that reach the post-human stage is close to zero. Almost all intelligent species go extinct, collapse, or otherwise fail to develop the technology required for ancestor simulations. This is the position that the Great Filter of the Fermi paradox lies ahead of us rather than behind us, and that any civilization like ours will be stopped before it achieves the computational power to simulate its past.
The second proposition: the fraction of post-human civilizations that actually run ancestor simulations is close to zero. Civilizations reach the relevant level of technology but, for ethical, resource-allocation, or motivational reasons, choose not to run such simulations in significant numbers. This requires that the decision not to simulate be effectively universal across all post-human civilizations — a convergence that is difficult to defend given the diversity of motivations that any large civilization would contain.
The third proposition: the fraction of all beings with human-type experiences that are living in a simulation is close to one. In other words, the overwhelming majority of beings who believe themselves to be humans are in fact simulated, and by straightforward statistical inference, you — the reader of this sentence — are almost certainly one of them.
The argument does not establish which of the three propositions is true. It establishes that the three propositions cannot all be false. If you reject the third proposition — if you believe you are almost certainly not in a simulation — you must defend one of the first two, and the defense is harder than it looks.
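The bookkeeping behind this can be written down compactly. In the notation of Bostrom's paper, lightly paraphrased here, let f_p be the fraction of human-level civilizations that survive to reach a post-human stage, N the average number of ancestor simulations run by a post-human civilization, and H the average number of individuals who live in a civilization before it reaches that stage. As a rough sketch, the fraction of all observers with human-type experiences who are simulated is then:

```latex
% Fraction of human-type observers who are simulated (after Bostrom, 2003):
%   f_p : fraction of human-level civilizations that reach a post-human stage
%   N   : average number of ancestor simulations run by a post-human civilization
%   H   : average number of individuals living before a civilization turns post-human
f_{\text{sim}} = \frac{f_p \, N \, H}{f_p \, N \, H + H} = \frac{f_p \, N}{f_p \, N + 1}
```

Unless f_p is close to zero (the first proposition) or N is close to zero (the second), the product f_p N is very large and f_sim is driven toward one, which is just the third proposition restated.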
The first proposition requires that some filter reliably prevents civilizations from reaching post-human computational capacity. But the capacity in question is not dramatically beyond what our own civilization appears to be approaching on its current trajectory. Moore's law has held for sixty years. Quantum computing, neuromorphic hardware, and planetary-scale data center construction are all extending the trend. The first proposition requires not that computational progress eventually slow — it already is slowing in some respects — but that it stop categorically before reaching the threshold for ancestor simulations, and that this stopping be so universal that essentially no civilization anywhere in the cosmos ever breaks through.
The second proposition requires that every post-human civilization — every one — converges on the decision not to run ancestor simulations. This is a strong claim. Human civilization already runs vast simulations of historical events, military scenarios, economic systems, and fictional worlds. The motivations for running ancestor simulations include scientific research, entertainment, historical memorialization, and the simple exercise of computational power. The second proposition requires that all of these motivations, across all post-human civilizations, be reliably suppressed. It is not impossible, but it is not easy to motivate either.
The third proposition — that we are in a simulation — is not inherently more probable than the other two. It is inherently harder to escape. If you think the first two are unlikely, the third is what remains.
The simulation hypothesis is often treated as a product of the computer age, a philosophical novelty made possible by the development of machines that simulate worlds. This framing is wrong. The core idea — that the reality we experience is a constructed appearance generated by some underlying process that we do not directly perceive — is one of the oldest ideas in human philosophy, and it appears in cultures that had no concept of computation at all.
Around the fourth century BC, the Chinese philosopher Zhuangzi recorded a dream in which he was a butterfly, fluttering happily without knowing he was Zhuangzi. When he woke, he wondered: is Zhuangzi the man who dreamed he was a butterfly, or is he a butterfly now dreaming he is Zhuangzi? The passage is short, but the problem it poses is exactly the modern problem: from inside an experience, there is no internal feature that reliably distinguishes it from a generated experience. The butterfly dream is a simulation of a kind the butterfly itself cannot detect.
Plato's allegory of the cave, from Book VII of the Republic (c. 380 BC), presents the same structure in a different vocabulary. Prisoners chained in a cave since birth perceive shadows on a wall cast by objects and figures they cannot see directly. The shadows are their entire reality. The objects casting the shadows are the true reality, inaccessible to the prisoners until one is freed and ascends out of the cave into the light of the sun. The philosophical work of the allegory is to establish that what appears to be reality may be a derived projection of a more fundamental reality that the observer inside the projection cannot, by their own resources, perceive. Plato's cave is a simulation. The shadow-casters are the simulators. The sun is the ground of being that the simulated observer must somehow come to understand.
Hindu philosophy developed an even more radical version. The concept of maya, elaborated extensively in the Upanishads and the later Vedantic tradition, holds that the entire phenomenal world is a kind of cosmic illusion generated by the divine consciousness of Brahman. Maya is not simply deception; it is the productive power by which unity appears as multiplicity, by which the singular ground of being presents itself as the bewildering variety of phenomenal experience. The doctrine holds that liberation — moksha — consists precisely in seeing through the illusion and recognizing one's identity with the underlying reality. The structural identity with the simulation hypothesis is exact: a generated world, experienced as real, whose illusory character can in principle be seen through by an observer who recognizes what is actually occurring.
The Gnostic tradition, emerging in the Mediterranean world in the first and second centuries AD, took the same insight and gave it a darker inflection. The material world, in the dominant Gnostic cosmology, was constructed by a lesser divinity — the demiurge — who is ignorant of, or hostile to, the true God. The material world is not merely illusory but deceptively constructed, designed to trap consciousness in false perception and prevent it from recognizing its origin in the divine pleroma beyond the material realm. This is the simulation hypothesis with a malicious programmer. It is also, more than two thousand years before Bostrom, the recognition that the question of who built the simulation and why is the deepest question the hypothesis raises.
And in 1641, three centuries before anyone used the word computer in its modern sense, René Descartes wrote the Meditations on First Philosophy and introduced the figure that would become the direct ancestor of the simulation hypothesis: the malin génie, the evil demon. Descartes imagined a being of great power and cunning who devotes his entire existence to deceiving Descartes about everything — feeding him false sensations, false memories, false inferences. The point of the thought experiment was to find what, if anything, survives total skeptical doubt. Descartes' answer was the cogito — the fact that even a totally deceived mind must exist in order to be deceived. But the setup of the thought experiment — a mind fed a consistent illusion by a deceiving intelligence — is formally identical to the simulation hypothesis, with the demon replaced by a computer and the metaphysics replaced by probability.
Bostrom's contribution was not to invent the idea. It was to move the idea from theology and metaphysics into probability theory, and to do so using the specific vocabulary of a civilization that had recently invented machines that did, in fact, run simulations of convincing worlds. The simulation hypothesis is an ancient idea translated into the language of a generation that has grown up inside video games.
The simulation hypothesis, as Bostrom originally formulated it, is not a scientific claim. It is a logical argument about probabilities, not a prediction about observable phenomena. But a second tradition, running in parallel to Bostrom's philosophical argument, has attempted to identify features of the physical universe that are consistent with — not proof of — a simulated substrate. The most rigorous version of this tradition goes by the name of digital physics, and it has a longer history than the simulation hypothesis itself.
The first systematic attempt to treat physical reality as fundamentally computational was made by the German engineer Konrad Zuse, who in 1969 published a short book called Rechnender Raum ("Calculating Space") proposing that the universe is a cellular automaton — a discrete grid of cells whose states update according to local rules, producing the appearance of continuous physics as an emergent approximation. Zuse's proposal was largely ignored at the time. It was revived and extended by Ed Fredkin at MIT in the 1980s, who developed a formal program of digital mechanics arguing that the universe is literally a computational system. John Wheeler, one of the most eminent physicists of the twentieth century, crystallized the view in his 1989 phrase "it from bit" — the claim that every physical entity derives its existence from binary information-theoretic operations. Gerard 't Hooft, the Dutch physicist who won the 1999 Nobel Prize in Physics for his work on the electroweak interaction, has subsequently argued for a cellular automaton interpretation of quantum mechanics, in which the apparent probabilistic behavior of quantum systems emerges from deterministic discrete operations at a deeper level. Stephen Wolfram, in A New Kind of Science (2002), pushed the program further, proposing that all physical laws derive from simple computational rules, and in 2020 launched the Wolfram Physics Project — an attempt to derive general relativity and quantum mechanics from a single computational framework based on hypergraph rewriting.
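Zuse's core idea is easy to make concrete. A cellular automaton is just a grid of cells updated in lockstep by a purely local rule, and even the simplest such rules generate strikingly rich global behavior. The sketch below is a generic one-dimensional elementary automaton (Wolfram's Rule 110 by default), offered only as an illustration of the kind of system Zuse and Fredkin had in mind, not as anyone's actual model of physics.

```python
# Minimal one-dimensional cellular automaton: a row of binary cells updated in
# parallel by a purely local rule. Illustrative sketch only, not Zuse's model.

def step(cells: list[int], rule: int = 110) -> list[int]:
    """Apply one synchronous update of an elementary cellular automaton.

    Each cell's next state depends only on itself and its two neighbors
    (periodic boundary), looked up in the 8-bit rule table.
    """
    n = len(cells)
    nxt = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # index 0..7
        nxt.append((rule >> neighborhood) & 1)              # corresponding bit of the rule
    return nxt

if __name__ == "__main__":
    cells = [0] * 79 + [1]            # a single "on" cell in an otherwise empty row
    for _ in range(40):               # watch structure emerge from the local rule
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)
```

The point of the toy is only this: the global pattern is never stored or computed anywhere as a whole; it emerges from repeated application of a rule that looks at three cells at a time.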
Whether or not digital physics is the correct theoretical framework for the fundamental laws, the features of our observed universe that digital physicists and proponents of the simulation hypothesis point to as suggestive are specific and worth enumerating carefully.
The Planck length, approximately 1.6 × 10⁻³⁵ meters, is the scale at which the classical concept of space itself breaks down. Below the Planck length, the notion of distance loses meaning in current physical theory. The Planck time, approximately 5.4 × 10⁻⁴⁴ seconds, plays an analogous role for time. These scales appear, from the perspective of the simulation hypothesis, to function as the pixel size and frame rate of the underlying substrate. A continuous universe with no fundamental unit of spatial or temporal resolution would require infinite information to describe any finite region. A discrete universe with finite Planck-scale resolution requires finite information, exactly as a simulated universe would.
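The information-theoretic point can be put in numbers. The figures below are back-of-envelope arithmetic meant only to illustrate the contrast between finite and infinite description length, not a claim about how a simulator would actually encode space.

```latex
% Bits needed to locate a point along a 1 m interval at Planck-length resolution:
\log_2\!\left(\frac{1\ \text{m}}{1.6 \times 10^{-35}\ \text{m}}\right)
  = \log_2\!\left(6.25 \times 10^{34}\right) \approx 116 \ \text{bits per coordinate},
\qquad \approx 348 \ \text{bits for a point in a cubic meter.}
% A genuinely continuous coordinate would require infinitely many bits.
```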
The speed of light, approximately 2.998 × 10⁸ meters per second, is the maximum rate at which information can propagate through space. Every causal interaction — every photon, every gravitational effect, every quantum-field correlation — is bounded by this speed. From the perspective of the simulation hypothesis, this looks like the bus speed of the underlying computer: a limit on how fast the simulator's engine can update the state of the system. A simulation with no such limit would require instantaneous synchronization of arbitrarily distant regions, which would be computationally prohibitive. A simulation with a fundamental speed limit can parallelize computation across distant regions without interference.
Quantum measurement is the feature most often cited as suggesting a simulated substrate, and the analogy is unusually tight. In quantum mechanics, a particle's physical properties — its position, its momentum, its spin — are not determined until they are measured. Before measurement, the particle exists in a superposition of possible states described by a wave function. Measurement causes the wave function to collapse to a single definite value. This behavior is strikingly similar to the rendering optimization used in modern video game engines: the engine does not compute the detailed state of objects that no player is looking at, because doing so would waste computational resources. Objects are rendered into definite form only when observation demands it. The double-slit experiment, in which a particle behaves as a wave when unobserved but as a particle when observed, is the canonical demonstration of quantum measurement — and it is also, from the simulation perspective, the canonical demonstration of on-demand rendering.
The unreasonable effectiveness of mathematics, as the physicist Eugene Wigner famously called it in a 1960 paper, is the observation that abstract mathematical structures developed without any reference to physical reality turn out, again and again, to describe the deep behavior of the physical universe with uncanny precision. Riemannian geometry, developed in the 1850s as an abstract extension of Euclidean geometry, turned out half a century later to be the mathematical framework of Einstein's general relativity. Group theory, developed in the nineteenth century to study algebraic symmetries, turned out to be the mathematical framework of the Standard Model of particle physics. The alignment of pure mathematics with physical reality is, on the standard view, a remarkable coincidence or a deep feature of mind. On the simulation view, it is exactly what we should expect: the physical universe is computed, and computation is mathematics, and the mathematical structures we discover in pure thought are the structures being executed on the underlying substrate.
The discovery of error-correcting codes in supersymmetric equations is a specific and recent piece of the suggestive evidence. In 2012, the theoretical physicist James Gates Jr., working on the equations of supersymmetric field theory, reported that certain mathematical structures in the equations correspond to what computer scientists call doubly-even self-dual linear binary error-correcting block codes — the specific kind of code used to correct errors in digital communication systems. Gates was careful not to claim that this proves we are in a simulation. But he did state publicly, including at the 2016 Isaac Asimov Memorial Debate at the American Museum of Natural History, that the equations describing fundamental physics "literally" contain error-correcting computer code. His presentation of the finding has been cited repeatedly in both scholarly and popular discussions of the simulation hypothesis as an example of how the deepest mathematical structures of physics appear to have computational character.
None of this is proof. All of it is consistent.
The simulation hypothesis is often dismissed as untestable in principle, and in its strongest form this dismissal is correct: any observation we could make inside the simulation would be, by construction, an observation the simulator permitted us to make, and therefore could not serve as evidence against the simulation. But a weaker version of the hypothesis — the claim that our universe is specifically a numerical simulation running on a discrete computational lattice — is, interestingly, testable. In 2012, a team of physicists at the University of Washington — Silas Beane, Zohreh Davoudi, and Martin Savage — published a paper titled "Constraints on the Universe as a Numerical Simulation." The paper's argument was technical but the structure was elegant. Simulations of quantum chromodynamics (QCD) — the theory of the strong nuclear force — are routinely performed on discrete space-time lattices as a standard research technique in theoretical physics. Beane and his colleagues asked: if our entire universe were such a lattice simulation, what signatures would be detectable from inside?
Their answer was specific. A cubic space-time lattice would introduce a preferred direction in space — the orientation of the lattice axes — and this preferred direction would affect the propagation of ultra-high-energy cosmic rays in ways that could, in principle, be measured. Specifically, cosmic rays traveling at the highest energies observed — at or near the Greisen-Zatsepin-Kuzmin (GZK) cutoff, around 5 × 10¹⁹ electron-volts — would show an anisotropy aligned with the lattice axes rather than being isotropically distributed across the sky. Existing observations of ultra-high-energy cosmic rays are limited in resolution, but the Beane-Davoudi-Savage paper proposed that with sufficient data collection, the signature could be confirmed or ruled out. This is the first, and still one of the few, empirically testable versions of the simulation hypothesis.
The paper's ultimate verdict was cautiously negative for the specific lattice model: no anisotropy has been detected to date at the resolution of current observations, though the constraint is not yet strong enough to rule out more refined lattice schemes. The paper's deeper contribution was methodological. It established, as a matter of principle, that certain specific implementations of a simulated universe — particular choices about how the simulation is structured — have observable consequences and can therefore be tested. The broad claim that "we are in a simulation" remains untestable. The narrow claim that "we are in this specific kind of simulation" sometimes is testable, and testing has begun.
In 2020, the astronomer David Kipping of Columbia University published a probabilistic analysis in the journal Universe titled "A Bayesian Approach to the Simulation Argument," which attempted to calculate the posterior probability that we are in a simulation given various assumptions about the distribution of simulators and simulated beings. Kipping's analysis, using the principle of indifference between the two hypotheses (base reality versus simulated reality) as a prior, concluded that the probability we are in a simulation is just under 50 percent — roughly a coin flip, with a small bias toward base reality. Kipping's calculation has been criticized on multiple grounds (the choice of prior is contested, the analysis assumes that simulators do not themselves run nested simulations, and the meaning of "probability" in a context where there may be only one universe is itself philosophically contested), but it represents the most rigorous probability-theoretic analysis of the simulation hypothesis yet attempted, and it arrives at a conclusion that neither confirms nor rejects the hypothesis but treats it as a live epistemic possibility.
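Kipping's model contains more structure than can be reproduced here, but the skeleton of any such calculation is ordinary Bayes' rule applied to the two hypotheses with an indifference prior. The sketch below shows only that skeleton, with unspecified likelihood terms; it should not be read as a reconstruction of Kipping's actual analysis.

```latex
% Skeleton of a Bayesian treatment of the question (not Kipping's specific model):
% indifference prior: P(sim) = P(base) = 1/2
P(\text{sim} \mid E)
  = \frac{P(E \mid \text{sim})\, P(\text{sim})}
         {P(E \mid \text{sim})\, P(\text{sim}) + P(E \mid \text{base})\, P(\text{base})}
```

The small tilt toward base reality in Kipping's result comes from the details of how observers are counted under the simulation hypothesis, not from the prior, which is set at an even coin flip.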
The most frequently raised scientific objection to the simulation hypothesis is the computational resource objection: to simulate a universe at the full resolution of its quantum mechanical behavior would require more matter and energy than the universe itself contains, because each simulated quantum state requires some finite amount of physical computation to represent, and the total number of quantum states in an observable universe is astronomically larger than the number of bits any physical computer could store. On this objection, a full simulation of our universe would require a computer larger than our universe, and therefore our universe cannot be a simulation running inside a larger universe with comparable physics.
The standard reply — the lazy-evaluation reply — has become the most discussed piece of technical argument in the simulation literature. The reply draws on a technique that is standard in actual computer simulations of complex systems: don't compute what isn't observed. In computer graphics this is called view-frustum culling, occlusion culling, and level-of-detail scaling. In software engineering generally it is called lazy evaluation. A well-designed simulator does not compute the detailed state of a system unless that state is about to affect an observer. The simulator maintains a coarse-grained description of the system adequate to preserve its statistical properties and generates detailed computations only when an observer's measurement demands them.
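The programming technique the reply appeals to is mundane. The sketch below is a toy illustration of lazy, observation-driven evaluation in ordinary software: a cheap coarse-grained description is held by default, and the expensive detailed state is computed and cached only when something actually queries it. It is an analogy for how the reply imagines a simulator economizing, not a model of physics.

```python
# Toy illustration of lazy ("on-demand") evaluation: the expensive detailed
# state of a region is computed only when an observer actually queries it.
import random

class Region:
    def __init__(self, seed: int):
        self.seed = seed              # cheap coarse-grained description
        self._detail = None           # detailed state, not yet computed

    def observe(self) -> list[float]:
        """Return the detailed state, generating and caching it on first access."""
        if self._detail is None:
            rng = random.Random(self.seed)                            # deterministic given the seed
            self._detail = [rng.random() for _ in range(1_000_000)]   # the expensive step
        return self._detail

universe = [Region(seed=i) for i in range(10_000)]    # cheap: no detail computed yet
sample = universe[42].observe()                       # only this region is "rendered"
rendered = sum(1 for r in universe if r._detail is not None)
print(len(sample), rendered)                          # -> 1000000 1
```

The point of the toy is the scaling: the cost grows with the number of observations actually made, not with the number of regions that exist.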
The lazy-evaluation reply claims that our universe shows exactly this signature. The quantum-mechanical wave function, which describes a particle's possible states probabilistically, is the coarse-grained description. Collapse upon measurement is the generation of a detailed state when one is needed. The speed of light, which limits how quickly an observer can be affected by distant events, ensures that the simulator has time to compute the detailed states of distant regions only when an observer is about to arrive at them. Every feature of our observed physics that corresponds to limits on what observers can know corresponds, on the simulation view, to limits on what the simulator has to compute. The universe might require infinite resources to simulate down to the Planck scale everywhere at all times. It requires vastly less to simulate the coarse-grained version and fill in the details on demand.
The reply is not conclusive. It rests on the claim that the simulator's computational resources scale with what observers inside the simulation require, rather than with what the simulation as a whole contains — and this claim may or may not hold depending on the specific architecture of the simulator. But the reply has been sufficient, in the serious simulation literature, to prevent the resource objection from settling the question.
The philosopher David Chalmers, whose work on consciousness has dominated the philosophy of mind for three decades, published in 2022 a six-hundred-page book called Reality+: Virtual Worlds and the Problems of Philosophy, offering the most extensive philosophical engagement with the simulation hypothesis yet produced by a major mainstream philosopher. Chalmers' central move is surprising. He does not argue that we are in a simulation, and he does not argue that we are not. He argues instead that the distinction between simulated reality and base reality is less metaphysically significant than it initially appears, and that if we are in a simulation, what we experience is still real in every sense that matters.
The argument proceeds in several stages. First, Chalmers argues that virtual objects — the objects that appear inside simulations — are not illusions. A tree in a virtual world is not a fake tree, he argues. It is a digital tree, an object made of information patterns rather than carbon, but an object nonetheless, with real effects on the other objects in its virtual environment. Simulated rain really does get simulated things wet, in the sense that the functional consequences of wetness occur. The category of virtual objects is a new category, not a failure of the category of real objects.
Second, Chalmers argues that if we live in a simulation, our minds are just as real as they would be in base reality. The substrate-independence thesis — the same thesis Bostrom uses to get the argument off the ground — implies that a simulated conscious being has genuine consciousness, not pretend consciousness. If our minds are real and our environment is real in the sense that it produces genuine effects on our minds, then we live in a real world. It is just a real world that happens to be implemented in a particular computational substrate rather than in a particular physical substrate.
Third, Chalmers argues that this conclusion dissolves most of the horror traditionally associated with the simulation hypothesis. The fear that nothing is real, that our lives are illusions, that we have been cheated out of a genuine existence — all of this rests, Chalmers argues, on the assumption that simulated reality is a lesser, derivative kind of reality. If instead we understand simulated reality as a different kind of reality, equally real but differently implemented, the horror evaporates. We would not be living in a lie. We would be living in a digital universe. The ontological status of our existence would shift, but the ontological status of our experience — the thing that matters to us — would not.
Chalmers' argument has been controversial among philosophers, partly because it seems to prove too much (everything becomes real, including the most obvious illusions) and partly because it seems to prove too little (even if simulated reality is real, the question of whether we are simulated still matters in terms of who has authority over our world and why it exists). But Reality+ has become the standard reference point for contemporary philosophical discussion of the simulation hypothesis, and its central move — the dissolution of the real/simulated distinction in favor of a pluralism of kinds of reality — has shaped the terms in which the question is now debated.
The most sustained critique of the simulation hypothesis from within the physics community has come from the theoretical physicist Sabine Hossenfelder, whose 2021 essay "The Simulation Hypothesis Is Pseudoscience" argues that the hypothesis, as commonly defended, fails to meet the criteria of a scientific claim. Hossenfelder's objection is threefold. First, the hypothesis as typically stated is unfalsifiable — no observation is specified that would decisively rule it out, and in the absence of such a specification the claim has no empirical content. Second, the "evidence" routinely cited in support of the hypothesis — the Planck scale, quantum measurement, the speed of light — is not genuine evidence but post-hoc pattern matching; these features of physics were known long before the simulation hypothesis was proposed, and their interpretation as simulation signatures requires exactly the framework the signatures are supposed to support. Third, the probability arguments typically associated with the hypothesis — including Bostrom's trilemma — rely on assumptions about the distribution of simulators and the self-sampling of observers that cannot be justified without exactly the knowledge that we are trying to obtain from the argument.
Hossenfelder's critique is technical and sharp. It does not refute the simulation hypothesis. It argues that the hypothesis, in its current form, is not the kind of thing that can be refuted, and therefore is not the kind of thing that can be supported either. This is a classical Popperian objection, and it is one the hypothesis's serious defenders have had to address. The standard response — that the simulation hypothesis is a philosophical claim rather than a scientific one, and that philosophical claims are not required to meet Popperian falsifiability criteria — concedes the core of Hossenfelder's point while denying that the concession is damaging. Whether this is sufficient is a matter of active debate.
A second family of critiques focuses on Bostrom's trilemma specifically. The philosopher Brian Eggleston and others have argued that the trilemma depends on an implicit fourth option Bostrom does not adequately address: that the relationship between simulators and simulated is non-standard in ways that break the probability calculation. If simulators deliberately simulate only one observer (us), or simulate many observers but weight their existence in specific ways, the indifference principle Bostrom uses to derive the probability of being simulated fails. The trilemma as usually stated assumes that every simulated observer is exchangeable with every other, and this assumption is exactly what a simulator might violate. A second technical objection is that the trilemma assumes consciousness is substrate-independent in the specific sense Bostrom requires — not merely that consciousness can arise from non-biological computation, but that it arises in the same way and at the same rate as it arises from biological computation, so that counting simulated observers makes sense. If substrate-independence is substantially weaker than Bostrom assumes, the statistical argument weakens correspondingly.
None of these critiques has persuaded a majority of those who engage seriously with the hypothesis. They have, however, moved the center of gravity of the debate away from the question of whether the hypothesis is true and toward the question of whether it is well-posed.
If we are in a simulation, someone built it. Something, somewhere, decided to run the computation whose output is our experience. The question of who built the simulation, and why, is the question the simulation hypothesis shares most directly with the oldest religious and metaphysical traditions. The word for the builder, in most of those traditions, is God. The word for the motivation is usually some combination of love, curiosity, play, suffering, moral education, and cosmic process. None of these motivations translates perfectly into the vocabulary of a post-human civilization running ancestor simulations — but the translation is not as bad as it might first appear.
A post-human civilization running a historical simulation of its own past is, in some sense, in the position of a creator-god with respect to the beings inside the simulation. The simulator has authority over the physical laws of the simulated universe, the initial conditions, the distribution of suffering, and the boundary conditions that determine what the simulated beings can learn. The simulator can choose whether to intervene and when. The simulator can choose whether to preserve the simulated beings after the simulation ends, or let them be deleted with the rest of the state. The theological categories developed over thousands of years to describe the relationship between the divine and the human map onto the technological categories required to describe the relationship between simulator and simulated with surprisingly little friction. The problem of evil becomes the problem of why the simulator includes suffering in the simulation. The problem of prayer becomes the problem of whether the simulator receives and responds to signals from inside the simulation. The problem of an afterlife becomes the problem of whether the simulator archives simulated minds after the simulation ends or deletes them.
These translations are not coincidences. The traditional religious frameworks and the simulation hypothesis are asking the same question in different vocabularies: what is the relationship between the reality we experience and the ground from which it emerges? Traditional religion answered with theology. Idealism answered with consciousness as fundamental. The simulation hypothesis answers with computation. The answers are formally distinct but structurally continuous. The hypothesis is the contemporary form of the oldest question, and the fact that it now arrives in the vocabulary of computer science rather than of scripture is a feature of the culture doing the asking, not of the question itself.
The simulation hypothesis is not a claim whose truth or falsity can be conclusively settled by any currently imaginable experiment. It is a framework within which certain ancient questions acquire new precision and certain contemporary discoveries acquire new significance. Its value lies less in its probability — which remains profoundly contested — than in the conceptual vocabulary it provides for thinking about the relationship between information, consciousness, and physical reality.
If the hypothesis is correct, then every question in philosophy of mind, in cosmology, and in theology must be reopened. If consciousness can be simulated, then the distinction between genuine minds and artificial minds begins to dissolve, with all the moral consequences that implies. If our universe is computational, then the unreasonable effectiveness of mathematics is no longer unreasonable — it is definitional. If we have a simulator, then the question of death changes: the termination of the simulation is different from the termination of the simulated beings, and the simulator's choices about what happens after the simulation ends become the central questions of eschatology. Every piece of what we thought we knew about ourselves and our world becomes provisional, subject to revision in light of the architecture of the system we are running inside of.
If the hypothesis is incorrect — if we are in base reality, the physical universe is all there is, and consciousness somehow arises from it without any simulator behind the scene — then the question of why the universe has the specific features that make it look like a simulation becomes pressing in its own right. Why is there a Planck length? Why is there a speed of light? Why does quantum mechanics look like lazy rendering? The digital physicists' answer is that the universe is literally computational, simulated or not. The standard physicist's answer is that these features are brute facts of how physics happens to work, and their resemblance to features of engineered simulations is coincidental. The simulation hypothesis is, in this weaker form, a hypothesis about the structure of reality regardless of whether there is a simulator behind it — the claim that reality is computational in character, not necessarily simulated in execution.
Either way, the hypothesis has reshaped the terms of the deepest questions. It takes the ancient intuition of the Gnostics and the Hindus and Plato — that the world we experience is not the world as it is — and places it in the most rigorous logical framework our civilization can offer. It allows us, for the first time, to discuss the question of whether reality is genuine or constructed without resorting to the vocabulary of religion, without committing to any specific metaphysics, and without leaving the domain of things that can be analyzed with the tools of probability theory and physics. The question of whether we live in a simulation is not settled, and may never be. The question of how to live, given that we cannot rule it out, is beginning to become one of the central questions of our time.