In the city it's quite common to see new cafes and restaurants pop up, last six months to a year, and then disappear again. This isn't terribly surprising if you know anything about the industry: the margins are low, rent and wages are expensive, and for many owners it's a lifestyle business. It starts as someone's dream of the perfect little cafe, runs on savings and the owners' free labour, and eventually, when those run out, it closes down.
I find that process particularly interesting because it doesn't just affect that one business, but other businesses that compete with it. Let's say you're a well-run cafe that's been in business for years. You're competing not with other businesses but with this entire process by which a perpetual rotation of cafes appear and disappear. Each individual cafe may not be sustainable, but the metasystem of cafes sustains itself because each new cafe brings a new sucker with fresh capital. And in a weird way, consumers are actually better served by that metasystem because it can deliver cheaper coffee.
I've noticed a similar thing about representative democracy. In theory, a candidate who simply promised to change their mind in line with popular opinion would be the ultimate representative. In practice, candidates who change their minds are often dismissed as unprincipled "flip-floppers". Setting aside the questionable premise that changing your mind is bad, it may still be rational to reject flexible representatives: if you have enough candidates, you can pick one who has always reflected your views. Instead of expecting representativeness at the per-politician level, you let the political metasystem select for candidates who reflect current public opinion.
Evolution, too, seems to prefer this metasystemic level of operation. Evolutionarily stable strategies appear where, for example, 20% of the population will steal and 80% will not. The ratio is stable because nobody has an incentive to change strategy: add one more thief and thieving pays less than an honest living, remove one and crime starts to pay better again. Notice that it would be mathematically equivalent for every individual to act honestly 80% of the time and steal 20% of the time, but that doesn't seem to be what happens.
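To make that stability concrete, here's a toy sketch in Python. The payoff functions are entirely made up; they're just shaped so that stealing pays well when thieves are rare and badly when they're common, which is enough to produce a stable 20/80 split.

```python
# Toy model: payoffs as a function of p, the fraction of the population stealing.
# The exact numbers are invented purely for illustration.

def thief_payoff(p):
    return 10 - 44 * p   # stealing is lucrative while thieves are rare

def honest_payoff(p):
    return 2 - 4 * p     # honest work degrades more slowly as crime rises

# The stable ratio is where neither strategy beats the other:
# 10 - 44p = 2 - 4p  =>  p = 8/40 = 0.2
p_star = (10 - 2) / (44 - 4)
print(f"stable fraction of thieves: {p_star:.0%}")

# Deviations are self-correcting: with more thieves than p_star, stealing pays
# worse than honesty (so the fraction falls); with fewer, it pays better (it rises).
for p in (p_star - 0.05, p_star, p_star + 0.05):
    print(f"p={p:.2f}  thief - honest = {thief_payoff(p) - honest_payoff(p):+.1f}")
```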
And perhaps, though many transhumanists would never hear of it, there is a similar metasystem at play in society and the role of death. It is comparatively rare that a person will completely change their perspective or their ideas through the course of their lives. While it would be great if this wasn't the case, at the moment it doesn't matter much because society is a rolling metasystem much the same as cafes or democracy; the individuals aren't important, it's the general behaviour over time. Old people take their old ideas with them, while new people bring new ones in.
But, to agree with the transhumanists, we can't rely on this forever. I believe that we will eventually conquer death one way or another. And at that point the metasystem stops. If we haven't sorted out a way to bring that flexibility down into our own systems by then, perhaps we never will.
I recently wrote about a failure that was at least in part attributable to changing my goals on the fly. I've had flexible goals cause problems in the past too, and I've generally heard the advice that it's better to set clear goals in advance. But why is this the case? It seems like it would be good to be flexible, and indeed many situations do demand that you change your goals when they no longer make sense. However, there's definitely a limit to that flexibility, and nobody suggests changing your goals in real time.
I think the reason for this is that our brains are very good real-time optimisation machines. If you give your brain a rapidly changing output controlled by some set of inputs, it will optimise that output for you, no worries. And not just one value: it's often possible to optimise whole sets of outputs at once. The whole thing is really impressive. But, like any optimisation process, it can lead to weird results if you don't clearly specify the boundaries of the optimisation.
A perhaps apocryphal story I heard once involved a team using a computer optimisation process to design a particular circuit on an FPGA. The circuit the computer designed was smaller than the one designed by humans, but nobody could figure out how it worked. Anything they changed seemed to break it, and even moving it to a different FPGA chip stopped it from working. The running theory was that it was exploiting electromagnetic interference or physical quirks of that particular chip, because nobody had thought to tell it not to.
In a similar way, I think it's easy for us to optimise too much. That's part of the reason creative constraints are so useful: they stop us from trying to solve everything at once and make it easier to focus. But the other part is that you sometimes end up with a bad solution if you start going outside the bounds of the problem. The optimal solution with no constraints might not actually be useful.
When you allow yourself to change your goals, you're making moving the goalposts one of the available solutions. That's not automatically bad, but the easier it is to change your goals, the more your natural optimisation process can treat the goal itself as just another input it's free to adjust. In the worst case, if changing your goals is easier than taking the actions to achieve them, it will always be a clever optimisation to change the goal instead of the actions. That's not to say you'll always do that, but you'll be fighting against your own optimisation process when you don't.
I suspect there's an important principle in there for optimisers in general: don't let the output feed back into the input. Otherwise your optimiser is just optimising itself, and presumably that ends with doing nothing at all.
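As a toy sketch of what that looks like (entirely invented, just to show the shape of the problem): a greedy optimiser that can either move towards its goal, which takes effort, or move the goal itself, which is free.

```python
# A toy optimiser trying to close the gap between where it is and where it
# wants to be. Moving takes a step of real effort; redefining the goal is free.

def optimise(can_move_goal, steps=10):
    position, goal = 0.0, 10.0
    for _ in range(steps):
        # Score each option by how much gap would remain afterwards.
        options = {"move_towards_goal": abs(goal - (position + 1.0))}
        if can_move_goal:
            options["move_goal_here"] = 0.0  # instantly "achieves" the goal
        best = min(options, key=options.get)
        if best == "move_towards_goal":
            position += 1.0
        else:
            goal = position
    return position, goal

print(optimise(can_move_goal=False))  # (10.0, 10.0): it actually got there
print(optimise(can_move_goal=True))   # (0.0, 0.0): declared victory without moving
```

Once the goal is one of the things it can change, the cheapest "solution" is always to change it, and nothing ever actually gets done.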
A term I've heard a lot in software development is friction: the things that slow you down or make something more difficult but don't actually stop you. For example, having to sign up before you can add items to your cart is friction. Nothing's stopping you from signing up; it just makes everything harder. Similarly, slow page loads are friction, extra buttons to press are friction, and waiting for your online shopping to arrive is friction.
I think what makes friction such an important concept is how disproportionately it affects behaviour. Google found that adding 400ms of search latency decreased the number of searches people made by 0.6%, and that the effect persisted even after the latency was removed. The friction trained users not to search as often! Akamai ran studies on web behaviour showing that 47% of users expected a site to load in two seconds or less, and 40% would abandon a site that took more than three seconds.
And anecdotally, I've noticed my behaviour changes quite drastically depending on fairly minor incidental difficulties. Before purchasing an e-reader I was fairly skeptical that it would make much difference, and at the time I wasn't really reading many books. However, after I bought it I read about one book every week for years afterwards. Clearly the minor difficulty of going to a bookshop or library once in a while was enough to stop me from reading entirely. I've noticed that I also tend to be happy to pay substantially more when I'm buying something online if I can get it sooner.
I suspect all of this is explained fairly well by standard cognitive biases. Every bit of friction increases the time between wanting a thing and getting it, and perhaps increases the risk that you won't get it at all. It's well known that we massively discount the value of future rewards, so perhaps the time increase alone is enough to cause radically different behaviour. Alternatively, there may also be a component of risk aversion: people prefer the certainty of a process with fewer, easier steps.
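For a rough sense of scale, one common model is hyperbolic discounting, where a reward's perceived value falls off as V = A / (1 + kD) for a delay D. The discount rate below is invented, but the shape is the point: the curve is steepest right at the start, so the first little bit of delay costs a disproportionate amount of perceived value.

```python
# Hyperbolic discounting sketch; k is a made-up discount rate per day.
def perceived_value(amount, delay_days, k=0.1):
    return amount / (1 + k * delay_days)

for delay in (0, 1, 3, 7, 30):
    print(f"{delay:>2} days: {perceived_value(100, delay):5.1f}")
```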
Either way, it seems like there are basically free wins to be had in taking an existing thing people like and making it faster and easier by removing incidental difficulty. And, conversely, there are some interesting applications in taking things that are too easy and adding friction back in.
There's been a big push in security-conscious projects, particularly Debian, to have what's called reproducible builds. The problem they solve is that open source lets you verify that the source code does what you expect and there's nothing nefarious in it, but how can you get the same assurances for the pre-built binaries that most people download? Only by having a set of steps that deterministically produce an identical set of binaries, so if you trust the source, and you trust the build process, you can trust the binaries.
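In practice the check is as simple as rebuilding from the published source and comparing hashes. A minimal sketch (the file paths here are hypothetical):

```python
import hashlib

def sha256(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# What the project distributes vs. what you rebuilt yourself from the source.
official = sha256("downloads/app_1.2.3_amd64.deb")
rebuilt = sha256("rebuild/app_1.2.3_amd64.deb")

if official == rebuilt:
    print("bit-for-bit identical: this binary really came from that source")
else:
    print("binaries differ: the build isn't reproducible, or something was changed")
```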
Reproducible builds are mostly an open-source thing, but it occurs to me that they could also be relevant when the source is closed. Sometimes closed-source projects want to make their source available in a limited way, for example as part of a security audit. However, even if you trust the auditor, there's still a problem: nothing stops a malicious company from adding or changing things after the audit, before the code is built into the binaries that end users run.
And, although I haven't seen it done, it might be useful for a company to use a kind of deferred open-source license, effectively giving its code a shorter copyright term. The problem is guaranteeing that the release actually happens. The company would need to deposit its source regularly with a trusted third party, but you'd still have the same issue as with the auditors: how do you trust that the code they've been given is what you're getting when you download the application?
I think the answer to all of these is a kind of code escrow service. If your closed-source project uses reproducible builds, you provide access to the source code and the build chain to the code escrow. They audit the code, or just hold onto it, or whatever trusted source code thing you need done. Whenever you publish a new binary, the code escrow can certify that their version of the build generates the same binary.
For something like a security audit, that might mean that they only verify certain versions, or certain components, but for a deferred open source project it would mean that you can trust that the entire source code used to create that version will become available in the future.
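Mechanically, the escrow's certification could be as simple as rebuilding from the escrowed snapshot and publishing a signed statement that the result matches the vendor's binary. Here's a hypothetical sketch of what that record might contain; none of this is a real service or format, and the paths and dates are placeholders.

```python
import hashlib, json

def sha256(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# The escrow holds the source snapshot, rebuilds it with the vendor's build
# chain, and attests to whether the result matches the published binary.
attestation = {
    "source_snapshot": "app-1.2.3-source.tar.gz (held in escrow)",
    "published_binary_sha256": sha256("published/app_1.2.3_amd64.deb"),
    "escrow_rebuild_sha256": sha256("escrow-rebuild/app_1.2.3_amd64.deb"),
    "source_to_be_released": "2030-01-01",  # the deferred open-source promise
}
attestation["matches"] = (
    attestation["published_binary_sha256"] == attestation["escrow_rebuild_sha256"]
)

# In practice the escrow would sign this record with its own key before publishing.
print(json.dumps(attestation, indent=2))
```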
There are a lot of things we believe, but only some of them we believe completely and unflinchingly, like that when we drop something it falls to the ground. Others we don't really believe, or we only believe in a kind of consequence-free way. Beliefs like "everything is connected" are unfalsifiable and thus consequence-free, but you can also find other less obvious beliefs like "I'm going to get in shape" or "I'll backpack around Europe someday". They look like real beliefs, but if you don't actually act on them and they don't have any consequences then they're not real.
The inimitable Kurt Vonnegut used the idea that an entire religion could be built out of "harmless untruths": the kinds of unreal beliefs, like ghosts or the Loch Ness Monster, that don't really have any consequences. Now that you believe that ghosts are real, what are you going to do differently? Are you going to buy special ghostbusting equipment? Unlikely. Probably you'll go through your daily life exactly as before, occasionally saying "I believe in ghosts", and that's that.
In fact, you can tell when a belief isn't real, because attempts to make it consequential are amazingly uncomfortable. If you treat someone's belief in gravity as real, for example by challenging them to drop some stuff, or betting them money that an object will fall upwards, they'll happily take you up on it; who doesn't like free money? But ask someone who wants to get in shape to tell you their specific plan, or bet someone who wants to go to Europe a lot of money that they won't go by a certain date, and you'll quickly get a picture of whether their belief is real.
Some unreal beliefs are better off discarded, but there can also be a lot of benefit in reforming them into real beliefs. Maybe you really do want to backpack around Europe, and the fact that you haven't made any plans, researched places or drawn up a budget would be quickly rectified if you felt like it was really happening. In fact, that's kind of the definition of real: something that actually causes you to act or decide differently. If your goals and aspirations aren't real, if you don't feel like what you're doing is real, there's no reason for you to try to succeed.
Paul Graham did a great bit on why startups die, where he points out that while on the surface startups die from running out of money or a founder leaving, the root cause is usually that they've given up. They don't go out in a blaze of glory, they just sort of shrivel up and disappear. My reading is that, for the founders, the startup stops seeming real. They might still say the words, but they stop acting as if it's going to succeed and, like an imaginary best friend, it just eventually vanishes.
This, I believe, is the secret sauce behind the famous Steve Jobs Reality Distortion Field. It seems like being out of touch with reality would be seriously maladaptive, but evidently it worked well for him in business. I think the clue is in the name: when Steve believed in something, it was reality. He acted like it was true and he convinced other people to do the same. And that shared illusion was necessary for the ideas to succeed.
Though notice I say necessary and not sufficient. Bluster isn't belief, and you can't make something real just by believing in it. But it is necessary to believe in what you're doing. More than that, it's necessary that what you're doing is real: not just the kind of thing you talk about, but something you act on and rely on as unwaveringly as gravity.