The Sorcerer's Apprentice

[Image: A broom carrying bags of money]
Die ich rief, die Geister, werd ich nun nicht los.
(The spirits that I summoned, I now cannot dismiss.)
J.W. Goethe, Der Zauberlehrling

There's a certain poetry to the history of life. In the beginning there was nothing, or, more accurately, no goal-directed behaviour. Eventually, by pure chance as far as we can tell, the first unit of replication appeared. It was ludicrously simple, almost definitionally the simplest thing that could copy itself. And as it replicated it sometimes replicated imperfectly, and the imperfect copies that replicated faster and better beat the imperfect copies that didn't. Life!

The central tenet of Dawkins' The Selfish Gene is that this unit of replication, what eventually became the gene, cannot by definition care about anything but itself. It is a replicator, so it replicates. Anything else that it does must be in the service of this replication, up to and including destroying the environment around it, including both the natural world and the body that the genes inhabit. Anything that isn't a gene is expendable in the name of replication.

But, and here's the poetic bit, at some point these genes made a terrible mistake: they created a new replicator. The meme is a unit of replication for ideas. An idea spreads from person to person through culture and imitation, mutates through miscommunication and deliberate alteration, and is selected on the basis of its interestingness and usefulness. Our ancestors' genes developed the capability to spread memes because they could help us survive, but the end result was that genes became mere hosts for these new units of replication, which could replicate and adapt much more quickly than genes and thus begin to dominate them.

Any time a person chooses to use contraception because it seems like a good idea, any time they choose to die for an ideological cause, any time culture or ideas reduce their ability to survive and reproduce, the memes have beaten the genes. There are open questions about why human brains are so large that they cause women to die in childbirth, require children to be born immature, and use an enormous amount of our precious bodily resources. Perhaps there was some early meme that led people with large brains to reproduce more, or ostracised or killed those with smaller brains. The selfish memes would do this not for our benefit, but for their own.

Susan Blackmore suggests that we may be on the brink of unleashing a third replicator, what she calls a teme, or technological meme. Memes are currently limited in how much they can replicate at our expense; much like any parasitic relationship, you don't want to kill the host. But if memes could host themselves somewhere other than our brains, they would no longer need us. Without even realising it, we may be building these new hosts in the form of increasingly powerful technology.

However, I think you don't need to look that far to see the third replicator; it's already been here for hundreds of years. What else makes copies of itself, splits and recombines, has an internal code that dictates its behaviour, and faces selection pressure from scarce resources and rivals? A corporation! The behaviour of a corporation is dictated by its mission, culture, standard practices, and explicit rules. It is essentially a vehicle for collecting and incubating this specific subset of ideas and turning them into money. It is selected by its ability to do this.

Since it's customary to invent names for these sorts of things, I will call the unit of economic replication an economeme. Economemes are propagated indirectly by the exchange of employees between companies, and directly by mergers, acquisitions and spin-offs. A corporation competes for economic resources, and the corporations with the best economemes grow large and produce subsidiaries which begin to develop independently, sometimes contributing economemes back to the parent company, and other times splitting off into entirely new corporations.

It should be little wonder that, in the cases where our needs as people conflict with the needs of corporations, people rarely win. Corporations are incredibly powerful economemetic engines capable of far greater impact than any individual person, with generation times measured in years rather than decades. With this much on offer, the best and brightest memes of our age are those that can make their way into the corporate world as economemes. In fact, it's not a stretch to say that this meme-optimised environment was the goal of making corporations in the first place.

Crucially, although corporations are currently implemented on top of people, they do not require any particular people; people are incidental to their operation. You can swap out all the people in a corporation one by one and it will keep doing what it does, such that no person can reasonably be said to control it. People are more like cells in the body than the DNA dictating its makeup. Even the need for people at all may be temporary; modern corporations rely heavily on automation, and it is only a matter of time until the technological meme runs headlong into the economeme in the form of a fully automated corporation.

When that happens, there will be some question as to whether people really have a seat at the replicator table at all. Faced with our most powerful memes collected in sophisticated corporate vehicles that can operate without us, I'm not sure how we could compete. Like the purely genetic creatures that came before us and are now mostly dead, domesticated or in zoos, we may end up left in the evolutionary dust.

Looking for depth in all the wrong places

[Image: Two kinds of difficulty]

I've often heard people described as shallow and, to be truthful, thought of people as shallow myself. Shallow in this case means something like lacking the desire to understand the complexity in things, dig beyond the easy surface of ideas or challenge themselves. However, on reflection, I'm not totally comfortable with the idea. While people will obviously have different levels of ability, I'm not convinced that they differ significantly in their desire for depth. It's just put to better or worse use in different cases.

To put it another way, I think that people seek out a certain level of challenge and complexity. If your entire life is just walking up and down a road, you're going to start examining every rock and blade of grass, each crack in the asphalt. You might even find yourself trying to walk exactly on the painted line, counting plants as you go, or skipping instead of stepping. On the other hand, if this walk is only one of a hundred things you're doing that day, maybe the road is just a road, and you walk quickly without looking. You make things more complex to meet your ideal level of mental load.

So I believe what appears to be a general shallowness is really a kind of depth-per-topic mismatch. You may be talking to someone who "doesn't get" science and when you try to talk to them about some interesting idea they get bored and start changing the topic to work drama or something. That seems shallow because it is shallow – in the domain of things you care about. However, work drama is actually amazingly complex and difficult to navigate, especially if you want it to be. Nominally simple things like interacting with family and friends, recreational sports or buying stuff can admit an enormous amount of additional depth if needed.

If that was that, we could just use this as evidence for the absolute truth of moral relativism and move on. However, it's important not to lose sight of the reality that some things are inherently more complex and difficult, and that is a different thing from creating additional depth in simple things. The most laboriously constructed social drama-fest still requires less mental firepower than one first-year course in quantum mechanics. There's no need to seek out more complexity in inherently hard problems because they're already complex enough.

Which speaks to a certain question of efficiency. You can probably find a world of depth and challenge in lawn decorations and local council meetings if you want, but it seems like a better use of time to let easy things stay easy and instead spend time on things that are inherently challenging. Conquering a difficult problem in an easy way is more useful than conquering an easy problem in a difficult way.

It's this idea that I now identify as the problem formerly known as shallowness: not a lack of depth, but a misapplication of complexity. I look at someone who has made their life harder than it needs to be, and I can't help but think: why would you do that when there are so many naturally hard problems, problems that stay hard even if you make them as easy as you can?

Going in

[Image: The galaxy on Orion's belt]

A year ago I wrote The inside-out universe, about a powerful way of thinking about systems as universes, and computers as universe-building machines. I believe that there is something amazing and unique about the way that, with a computer, you build the rules of the computer from inside the computer, and I'd like to expand a little more on that idea and where I think it leads us as a species.

Mythologically, there has always been a kind of hierarchy. Gods make people, people make tools, tools shape the land and tame the animals. But people don't make gods, tools don't make people, and people don't make themselves. All of those ideas (religion as myth, genetic engineering and self-determination respectively) have been considered controversial or even heretical, in part because they break the divine hierarchy. The only ones who are allowed to build up instead of down are gods themselves, and any attempt to do the same by lesser beings is blasphemy.

That said, increasingly, we can build up. The more we learn about the structure of the physical universe we find ourselves in, the more we shape it to work the way we want. Already, the worst excesses of the natural world have been mostly tamed: hunger, many diseases, and danger from predators are all solved problems (not that everyone has access to the solutions, but the solutions at least exist). As we probe further into biology, it is feasible that all diseases will someday be cured, up to and including death itself. We may even end up modifying our bodies and minds to transcend biological limits entirely.

But no matter how much we learn, no matter how powerful we get, the laws of the universe are yet more powerful. We can create flying machines, but we can't fly by merely willing ourselves through the air. We can travel at amazing speeds, but still never ever faster than the speed of light. There are parts of the universe that nobody will ever see, even if we could travel at the speed of light, because they are too far away and the universe is expanding too quickly. There is, of course, some tiny possibility that we will crack the laws of physics wide open and discover that there are no limits and we can do whatever we want, but that is significantly unlikely. We're probably stuck with the rules we've got.

So what are we to do? Accept our fate and the limits of our universe with humility and grace? Unlikely. Our physical universe has been created with certain rules and tradeoffs that we can't control, but a virtual universe could be designed however we want. Don't like the speed of light? Or gravity? Or time? Just change them. In our primitive virtual worlds we have already violated these rules and many more. If we made a virtual universe sophisticated enough to live in, with rules that were more permissive, we could escape these last remaining limits by moving there.

And if we designed this virtual universe in the right way, it wouldn't merely have a different set of tradeoffs, it could have any set of tradeoffs. The inside-out universe would have the tools to change its laws built into the structure of the universe itself. You could rewrite physics to suit you, bend time and space, conjure things from nothing... really, do almost anything you can imagine. By going into that universe, we would finally transcend the divine hierarchy and become – there's no other word for it – gods.

From that perspective, the question isn't why we would move into a virtual universe. The question is why we would stay in this one.

Deconditioning

[Image: Conditional probabilities of a coin flip]

Something I find really fascinating in statistics is the relationship between conditional and unconditional probability. Roughly: conditional probability is the chance that a thing happens assuming something else happens, while unconditional probability is the chance that the thing happens at all. So the unconditional probability of "it's going to rain" is much lower than the conditional probability of "it's going to rain given that it's cloudy and humid and I'm in Singapore in the afternoon". You can get much more specific predictions when you can assume a lot than when you have to assume nothing.
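The rain example can be sketched with a toy frequency table. The counts below are invented purely for illustration; the point is only how the condition shrinks the denominator:

```python
# Invented tallies of 100 afternoons, grouped by sky state and outcome.
days = {
    ("cloudy", "rain"): 20,
    ("cloudy", "dry"): 30,
    ("clear", "rain"): 5,
    ("clear", "dry"): 45,
}
total = sum(days.values())

# Unconditional probability of rain: rainy afternoons out of all afternoons.
p_rain = sum(n for (sky, outcome), n in days.items() if outcome == "rain") / total

# Conditional probability of rain given cloud: rainy afternoons out of
# cloudy afternoons only -- the condition restricts the denominator.
cloudy_days = sum(n for (sky, outcome), n in days.items() if sky == "cloudy")
p_rain_given_cloudy = days[("cloudy", "rain")] / cloudy_days

print(p_rain)               # 0.25
print(p_rain_given_cloudy)  # 0.4
```

As expected, assuming more (cloud) makes the prediction sharper: rain goes from a 1-in-4 chance overall to 2-in-5 among cloudy afternoons.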

But the obvious question is: how can you assume nothing? Surely you're always making some kind of assumptions. And that is exactly the thing I find fascinating: conditional probabilities work exactly the same as unconditional probabilities when you're inside the condition, so there's no way to tell if a probability is conditional or not. You just define some boundary and say "that's the unconditional probability". It's totally arbitrary, though; there could always be more conditions that aren't part of your model. Although you learn unconditional probabilities first, they're actually the weird special case. In real life, everything is conditional.
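The point that conditioning is invisible from the inside can be made concrete with a small sketch (the joint tallies and helper functions here are invented for illustration): restricting to a condition and renormalising yields an ordinary distribution that carries no trace of the boundary it was built inside.

```python
# An invented joint tally over (condition, outcome) pairs.
tallies = {
    ("humid", "rain"): 18,
    ("humid", "dry"): 12,
    ("arid", "rain"): 7,
    ("arid", "dry"): 63,
}

def condition_on(tallies, cond):
    """Restrict to one condition and renormalise into a distribution."""
    restricted = {out: n for (c, out), n in tallies.items() if c == cond}
    total = sum(restricted.values())
    return {out: n / total for out, n in restricted.items()}

def unconditional(tallies):
    """Treat the whole table as 'the universe' and normalise over all of it."""
    total = sum(tallies.values())
    dist = {}
    for (c, out), n in tallies.items():
        dist[out] = dist.get(out, 0) + n / total
    return dist

inside_humid = condition_on(tallies, "humid")  # {'rain': 0.6, 'dry': 0.4}
overall = unconditional(tallies)               # rain ~0.25, dry ~0.75

# Both are plain distributions summing to 1; nothing about the conditioned
# one reveals that it was carved out of a larger table.
```

Of course, the "unconditional" table here is itself just a slice of some larger universe of conditions; the boundary is wherever you stopped recording columns.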

Which leads to an interesting idea. There are lots of ways that we condition our understanding of the world but treat it as unconditional. Someone asks "what are you doing tomorrow?" and you say "probably just watching TV", and leave off the "...unless my TV catches fire or I'm abducted by aliens". Since those probabilities are so remote, it's easier to just condition on them not happening. That's pretty sensible, but there are also less sensible reasons you might condition on something. Often we keep assumptions even after the reasons to make them have changed. Worse still, sometimes we wilfully hold on to an assumption for ideological or egotistical reasons.

This matters because it's easy to end up stuck in some paradoxical situation where none of the options are acceptable but you have to choose one of them. Sometimes that's genuinely true, and quite distressing, but often it's just that there are options you've conditioned away. You could be having trouble finding a place you can afford because you've conditioned on living in a city, when the country would work just as well. Or unhappy in work or a relationship because you've conditioned on staying in it, when really it would be a better idea to go. It's not even that you examine those options and discard them, it's that you don't even consider them because you've conditioned yourself into a universe where they don't exist.

So I think an important exercise when faced with a difficult situation is deconditioning, the process of examining and challenging the conditions that restrict your decisions and understanding. That's easier said than done, but I've found that just thinking "what am I taking for granted about this decision?" can often be enough to dislodge at least the top layer of assumptions and allow you to find a solution in an otherwise impossible situation.

Of course, the more you decondition, the harder and more complex your decisions get, but also the more flexibility and breadth you have in your solutions. In theory, if you could get rid of every assumption, you would finally reach the true unconditional probability where you simultaneously consider every possible thing, no matter how remote or unlikely. If that's even possible, it would certainly require more processing than any human could muster.

Perhaps the best we can hope for then is to decondition a little bit, and do our best to make our conditional probabilities unbiased; rule out possibilities that are unlikely, not unpalatable.

Rep the truth

[Image: Truth]

I've been thinking recently about the moral foundation of telling the truth. Most people agree it's a good and moral thing to do to tell the truth, though they might disagree about where exactly that fits in the moral calculus compared to things like being nice or satisfying your own interests. Certainly, in the absence of other considerations, it's generally accepted as good to tell the truth rather than lie. But why is it good?

For many people, I think the foundation is based on character: a liar is a bad person, and you don't want to be a bad person. This frames things in a particularly negative way, where truth isn't so much a virtue as a lack of vice. That leads to strange situations, like avoiding someone who might ask you something you'd have to lie about, carefully phrasing things so that you are not lying but omitting information, or saying something ambiguous that leads to a mistaken understanding but is still not technically a lie.

Beyond these gotchas, I also feel like this basis for truth is very incomplete. For example, it says nothing about whether you should try to make sure the truths you express are understood by the person receiving them, whether you should correct mistakes or misunderstandings when you see them, or whether you should make an effort to express the truth without being prompted to. I think these are important questions, and a notion of truth that doesn't address them is half-baked.

So I've been working on an alternate foundation based on the idea of representing truth. The idea is that there is some abstract ultimate truth, the collection of all the things that are true in the universe. We each hold in our heads some approximation of that ideal, and the closer that is to the real truth the better. If you believe this, it makes sense to try to increase the amount of truth in the world any way you can. It doesn't matter that much whether you do it by lying less or telling the truth more.

Of course, that's not to say that truth is the only thing that matters. You may still decide that it makes sense in a particular situation to accept an untruth: because challenging it would sabotage your other goals, because it would be a bad use of resources, or because making too big a deal of it would just be annoying and counterproductive. But the point is to get away from the idea that the only consideration is your personal honesty and move towards a notion that can encompass moral decisions about truth generally.

The moral intuition, then, isn't "I don't want to be a liar", it's "I want to represent the truth". That means not deceiving people, but also means not giving the wrong impression or an incomplete understanding. It means not avoiding situations where you would need to tell an uncomfortable truth; in fact, it means embracing those situations as opportunities. Speaking up, correcting misunderstandings, and arguing for what you consider to be true matter just as much as simply being honest.

And, lastly, there doesn't need to be untruth for truth to be valuable. Even if there's no other reason, it's important to express things you think to be true just to express them – to represent the truth in whatever way you can.