One of my favourite improv games is called association and dissociation. You start by free-associating: walking around the room, saying things you see or think of, and using those to come up with more things that relate to them. Broom, janitor, Scrubs, Turk, turkey, Christmas, ham – that kind of thing. After you've been doing that for a while, you switch over to dissociating: the same, but thinking of things that have nothing to do with the previous thing. Pretzels, corn – wait, actually those are both salty snacks you get at events.
Dissociation was enormously more difficult and, I suspect, not even really possible. Much like with paper-scissors-rock, I'm sure a decent analysis could fairly easily predict us even when we're trying to be unpredictable. The exercise isn't actually to make an un-association, but to find less and less obvious associations, to stretch our associative system to the point where it can come up with something that seems unrelated. That's also what happens when some idea comes at you out of thin air, and I think it's a very important ability to cultivate for the sake of creativity.
It's previously been observed that there is a connection between creativity and unhappiness. One theory is that self-generated, spontaneous thoughts tend to lead to both neuroticism and creativity. Another is that unhappiness improves certain kinds of processing, particularly focus and attention to detail.
Those are both very interesting results, but I'd like to add my own speculation: perhaps there is a link between unhappiness, escapism, and the creative power of dissociation. Escapism is a common symptom of unhappiness, and losing interest in familiar things is a common symptom of depression. Perhaps, by making the familiar uncomfortable, unhappiness causes us to be more dissociative, and therefore more creative.
If that is true, it would be particularly good news for creativity. It would be a shame if being unhappy were a good strategy for improving creativity. However, if that dissociative mechanism can be learned separately, then happiness and creativity can go together just fine, and any time you would have spent practising unhappiness could be better spent practising dissociation instead.
Things have been pretty busy the last few weeks, with my new prototype push and other new things I'm working on. I slipped up a week or so ago and, rather than writing a failure post and cutting my losses, I figured I would just make it up the next day. That didn't happen, and I just got used to being a day behind. Eventually it got to the point where I missed another one.
Of course, I should have seen this coming. I've previously had a very similar failure, and I wrote about the general problem of ignoring minor failures after that. The point isn't that you need to take minor failures particularly seriously, but that failures of the system that corrects minor failures need to be treated far more seriously than the minor failures themselves. I had thought about this already, of course, but I made the mistake all the same.
I think part of the reason that happened was that I hadn't really figured out what to put in my failure post. I normally try to go to the effort of analysing my failures so I can improve on them, but the original failure was basically "I was busy and slipped". I wanted to have more to say than that. In retrospect, it wasn't really worth putting it off until I had something better to say, because I ended up not doing it at all.
To stop this from happening again, I'm going to commit to writing a failure post if I miss my deadline, no matter what. I think to some extent I still want to move that goalpost and be flexible with the deadline, but I have to fight that urge. Giving in just kicks the problem down the road and makes my life more complicated. Better to cut my losses as soon as it happens and move on.
I really enjoy reading books by or about scientists, not least the inimitable Richard Feynman. I think what is so appealing isn't necessarily the work they do, or any particular discovery or mannerism, but rather a kind of mindset that you don't see much outside of very good scientists: the testing mindset.
What I mean is that there are lots of times when you'll come across something unexpected. Many people won't even notice, because they're not interested or are paying attention to something else. Some people will notice, and become curious about the unexpected thing and how it works. An even smaller number will set out to try to learn about or understand the thing. But the rarest response of all is to figure out how to trap this unexpected thing in a web of experiments so it has no choice but to reveal itself. Those people are the ones who get to be great scientists.
A great example is a chapter in Feynman's book called The Amateur Scientist, where he mostly talks about ants. He was curious about how ants find food and know where to go. So he ran a series of simple experiments involving moving ants around on little paper ferries, setting up grids of glass slides and rearranging them, and graphing their trails on the ground with coloured pencils. He didn't sit around wondering about ants or ask an ant expert; he made specific tests he could run to figure out how they worked for himself. I suspect that if ant behaviour had not already been extensively studied, and if he wasn't otherwise occupied with physics, Feynman would have made some significant contributions to the ant field.
I often run into things I don't understand, from the behaviour of some obscure piece of software to my singing tea strainer. I notice, though, that although I'm pretty good at noticing unknown things in the first place, my first instinct is usually to try to learn about them by looking for information somewhere else. That usually works fine, but what about things nobody knows yet? Looking for answers only works when someone else has already done the work to find them.
It is, of course, way more efficient to learn from the experiments of others than to repeat everything yourself. But if you spend all your time relying on secondhand knowledge you might not build the skills necessary to make new knowledge. The testing mindset doesn't seem like something you turn on and off, but rather a way of looking at the world where you constantly want to poke and prod at the bits that feel funny. So perhaps it's best to do things the hard way sometimes and re-discover from scratch what you could easily learn from a book.
One of the hardest things for non-programmers to learn about programming is state. That is, the surrounding context that changes the meaning of what's in front of you. In the expression "x + 1", the meaning of 1 is obvious, but the x is state; you cannot know the value of x + 1 without finding out what x is. If the value of x is defined just before, it's not so bad, but what if x is defined a thousand lines of code away? And changes over time? And is defined in terms of other things that also change over time?
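To make that concrete, here's a minimal Python sketch (the names and numbers are mine, purely for illustration):

```python
x = 10  # imagine this definition is a thousand lines away

def do_some_work():
    global x
    x = x * 2  # ...and that x quietly changes over time

do_some_work()
print(x + 1)  # prints 21, not 11: you can't know what "x + 1"
              # means without tracing every change to x first
```

The expression never changes, but its meaning does.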
It might be more accurate to say that what is hard is simulating a computer. When a programmer reads through a program, they execute it in their own head. The more complex the program, the more difficult this is, but your internal computer also gets more sophisticated as you go along. Instead of reading one symbol at a time, you start to read whole blocks of code at a time, the same way proficient readers will scan whole sentences rather than individual letters or words. However, that only makes the immediate behaviour easier to read. The amount of state you can simulate is still limited by your own working memory, and it's very limited.
Perhaps a good analogy is how your operating system deals with your keyboard. Any time you press a key it gets sent to the current application, the one that is said to have "focus". So which key you pressed is the input, and the focus is the state. The same key in a different application does a totally different thing. Luckily, the focus state is visible; the active application is highlighted so you know where your keys will go. Most programming has invisible state, which is more like using your computer without looking at the screen. In theory you can figure out what will happen with every new key you press, but over time you're going to lose track of what's going on.
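As a toy model of that dispatch (entirely made up, just to illustrate the idea):

```python
# The same key press does different things depending on the focus state.
handlers = {
    "editor":  lambda key: f"type '{key}' into the document",
    "browser": lambda key: f"trigger the '{key}' keyboard shortcut",
}

focus = "editor"             # the state
print(handlers[focus]("s"))  # type 's' into the document

focus = "browser"            # same input, different state...
print(handlers[focus]("s"))  # ...completely different behaviour
```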
It's for this reason that you often try to avoid state in software development, as in the sketch below. However, it's not possible to avoid it completely. Even if you use a (comparatively rare) programming language with no state, state creeps in as soon as your application interacts with the real world. Is it writing data to a disk? Communicating over a network? Operating on a computer with other applications? State, state, state. It's inescapable. So we must learn to simulate state, something that, it would appear, does not come naturally to us at all.
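To show what avoiding state looks like (a contrived sketch of my own, not any particular project's idiom), compare two versions of the same logic:

```python
# Stateful: the result depends on a hidden variable that changes with
# every call, so you can't predict the output from the inputs alone.
counter = 0

def next_id():
    global counter
    counter += 1
    return counter

# Stateless: everything the function depends on is in its arguments,
# so the same input always produces the same output.
def next_id_pure(current):
    return current + 1
```

The stateless version pushes the bookkeeping onto the caller, which sounds like a loss, but it means any individual call can be understood – and tested – in isolation.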
Interestingly, people have emotional state that behaves a lot like state in programming. The same events, words or actions can have wildly different consequences depending on someone's emotional state. A lesson we must learn early on is to simulate that state in others so that we don't end up totally surprised by people's crazy actions. Fortunately, emotional state is fairly visible, and our brain is particularly specialised for mirroring the emotional state of those around us. That said, people still manage to make a hash of it fairly frequently. It's interesting to wonder how well we would do without those benefits.
One area where we don't seem to get as much help is with ourselves; our own future states are not terribly visible to us, and we don't seem to have the same optimisations for future states as we do for present ones. The results should be fairly obvious: we are very bad at predicting our stateful behaviour. Not only do we have trouble predicting our future state, we also mis-predict our actions even assuming a given state. No wonder planning for the future is hard!
I think that stateful thinking can be a real advantage here. Once you learn that you can't just naively assume "x + 1" will mean the same thing everywhere, you start paying a lot more attention to the x. But for stateful thinking to be useful you need two things. Firstly, you need to learn how to reason about and simulate state. Secondly, you need to actually accept that you have state, and that your future actions can't be predicted without it.
Oversight is an important quality in any system. Your original system is designed to achieve some particular outcome or perform some action, but it's not enough to merely trust the design. Firstly, there may be flaws in the design that only show up later on, and secondly the needs of the system can change over time. It can't be the job of the system to evaluate and correct itself, because that would lead to an over-complicated system with a fairly substantial conflict of interest. Normally, you build a second system with oversight over the first one.
This pattern appears in software fairly frequently, where you will run some base service that is expected to work all the time, and then a second monitoring service to make sure. You don't want to build monitoring into the base service, because that would significantly increase its complexity. And, more importantly, if the service isn't working properly, chances are it won't be able to monitor itself properly either. This is the software equivalent of a conflict of interest. You always build a separate monitoring system.
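As a sketch of the shape this takes (the URL and health endpoint here are hypothetical, and real monitoring tools are far more sophisticated), the monitor is just a second, separate process:

```python
# watchdog.py: runs as its own process, separate from the service it checks.
import time
import urllib.request

HEALTH_URL = "http://localhost:8000/health"  # hypothetical health endpoint

def service_is_up():
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as response:
            return response.status == 200
    except OSError:
        # Any failure to connect counts as the service being down.
        return False

while True:
    if not service_is_up():
        print("service is down!")  # stand-in for paging a human
    time.sleep(30)
```

Crucially, the watchdog shares nothing with the service: if the service wedges itself completely, the watchdog is unaffected and can still raise the alarm.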
In human systems, the same pattern appears in governance and management. You don't get employees to monitor their own performance, because then they would have to keep up with a lot of management-level knowledge and skills that would make their job a lot harder. Additionally, a worker who is performing badly might also be bad at evaluating their performance, or hide problems out of self-interest. So too in politics, where politicians are given oversight over the operation of society. One system to do things, another to oversee the first.
But, an important question: which system is more important? If you think the answer is management, or that they're equally important, I'd encourage you to consider the utility of a monitoring system with nothing to monitor, a government with no society to govern, or a manager with no employees to manage. The oversight system is important, yes, perhaps even the biggest contributor to the success of the system it monitors, but without that underlying system it is completely useless, just dead weight.
However, that's not how things end up working in practice. We consider management to be the most important and powerful part of a company, and politicians the most powerful part of the citizenry. They're not just an important support structure for the system, they are in charge of the system. The original system, the one doing the work, can even start to seem like a minor implementation detail of the management system. After all, when you want to change direction it's the management system you talk to, and the management system that tells you whether the underlying system is working properly. It's so, so easy to think that the management system is the system itself.
I call this the manager's coup, and I think it's essentially historical in origin. The first managers were actually owners, and the first governor was a king. We began with a divine hierarchy starting at the big G and going all the way down to serfs and slaves. That system wasn't very efficient or well-organised, of course, but it flowed neatly from the power structure of the day. Only much later did we start to believe in individual freedom and optimising for efficient delivery of outcomes rather than upholding the universal pecking order.
Even though we no longer believe in that hierarchical social order, we still seem to look to it for inspiration. In some way, we still instinctively feel that oversight is ownership, and that management is power. These ideas perpetuate themselves by mimicry and resistance to change. But there is an essential tension between the manager's coup and the reality as represented by outcomes. Sure, you can believe that the managers are the most important part of your organisation, but you're going to lose to companies that don't. In the end, oversight doesn't pay the bills.
It's harder to imagine systems where the people doing the work are in charge, and management takes a supporting role, but there are examples out there. One good one is the relationship between a writer, actor or musician and their agent. Instead of the agent hiring an actor and telling them what to do, the actor hires an agent to ask them what to do. The agent still exercises oversight, still makes decisions, and good ones are still very well compensated, but they serve the system they're overseeing, not the other way around.