Any time you make something, there's an important question: is it any good? Once it's out in the world you can usually tell from some combination of whether you like it, whether other people like it, and whether it works properly. But what about before you've released it? Some things don't have an easy objective measure of goodness, nobody else has had a chance to judge them yet, and your own opinions on the subject can be a bit clouded.
One of my favourite videos is of Ira Glass, host of This American Life, talking about The Gap. Not the American clothes retailer, but the distance between your taste and your ability. When you're first doing something that you like, you're often not able to produce things that live up to your own standards. I would say that even if you're quite skilled, you still don't usually have the ability to impress yourself: you've heard all your jokes before, you've seen the sausage being made, you know where all the rough edges are. It's hard to know what the reception will be until you try it for real.
But then you run the serious risk of releasing crap. I've previously written about professional responsibility, the right kind of perfection and crap that comes back to haunt you, so you could say I have some skin in the "don't release crap" game. And there's the rub: do you release something when you're not sure it's good and risk releasing crap, or do you wait and risk not releasing it ever?
I'd like to make the argument that responsibility comes with power: to the extent that you're able to determine if something is good or bad, you're responsible for not making it bad. What I mean is that as a novice programmer you shouldn't feel afraid to put out bad code, or as a novice writer you shouldn't be afraid to write complete dreck, because you haven't earned that responsibility yet. There's no excuse for making mistakes that you know are mistakes, but when you're starting out you don't have those instincts yet.
Similarly, even when you generally know what you're doing there will always be a frontier where you don't; every time you try something new there's some part of it that is risky, and the risk is that it will be crap in a way that you don't know enough to realise. And that's okay too. If you're not sure whether something's good or not, that's exactly the time to let go of that responsibility.
If you're in a situation where lives are on the line, or the risks are otherwise enormous, the right way to let go of that responsibility is to put it in the hands of someone who does know. But sometimes the risks aren't so high, and even so there might not be anyone else better to give that responsibility to. In which case, the only person remaining is the one who receives the thing after you're done with it: your audience.
If you make something and it's good to the best of your knowledge and ability, I believe you have discharged your responsibility to quality. Whatever remains is up to the people who use it to decide if it's good enough for them.
However, I don't feel too bad about it. It is a shame that I failed, but I think it would have been much worse if I hadn't committed to it and then also hadn't done it. In that sense I see commitment as having a double benefit: it helps your goals feel real, and it also prevents optimising the goalposts away from what you wanted in the first place. I want to build this thing, and I'm closer to it having failed than I would be if I hadn't tried.
But if I want to succeed, I need to go meta and learn from what went wrong this time. In this case, I didn't leave enough time for it early in the week, thinking I could make it up on the weekend. But my weekend got busy and I ended up trying to cram everything into one day. Of course, one day wasn't enough time to actually finish anything. And, worst of all, that outcome wasn't entirely surprising.
To make it surprising again, I'm going to put more effort into estimating the amount of time left to finish everything and compare it to the amount of time left in the week. In a sense, this is the burn-down type information I was thinking of with the scoping calendar. I normally don't do much time estimation for personal projects, but since I'm setting a time-based commitment, a time prediction makes a lot of sense.
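To make that concrete, here's a minimal sketch of the kind of check I have in mind (the tasks, estimates, and free-hours-per-day figure are all made-up examples):

```python
from datetime import date, timedelta

# Hypothetical remaining tasks, with rough estimates in hours.
estimates = {
    "design the commitment schema": 2,
    "build the submission flow": 4,
    "write Monday's post": 3,
}

def hours_free_before(deadline, hours_per_day=2):
    """Rough burn-down: assume a fixed number of free hours per day."""
    days_left = (deadline - date.today()).days
    return max(days_left, 0) * hours_per_day

deadline = date.today() + timedelta(days=7)  # say, next Monday's post
needed = sum(estimates.values())
available = hours_free_before(deadline)
status = "on track" if available >= needed else "cut scope or start earlier"
print(f"Need {needed}h, have {available}h before the deadline: {status}")
```

The point isn't the arithmetic, it's running the comparison early enough in the week that a bad answer is still fixable.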
So, with that extra level of failure insurance I will commit again: a commitment platform by next Monday's post!
I've never really liked the idea of mantras. At least as I've always seen them used, a mantra is something that you repeat to convince yourself of its truth. So you would get up each morning and say "I'm going to have a great day today". Of course, what happens if you don't have a great day is kind of undefined. Do you just keep saying it? I certainly see the value in repetition as a tool; if we work off associations, then repetition is a way to build a strong association. I just disagree with trying to use that mechanism to make yourself think something is true.
On the other hand, I can certainly see the value in a mantra as a tool for focusing attention. If you have a tendency to feel anxious in social situations, it could be useful to have a mantra like "what's the worst that could happen? and is that realistic?" to encourage you to pinpoint irrational fears. It's less about convincing yourself that something is true, and more about using the mantra as a tag for a thinking process you'd like to make a habit.
Another way that mantras could be useful is to avoid doing certain things without thinking about them. If you're trying to go for a run each morning, but you keep waking up and going on the internet instead, you could try saying "I'm going to go running this morning" when you wake up. It's a mantra, but you say it at about the time you'll be making the decision and it forces you to actually make the decision. So you might not go running, but you won't not go running by accident.
More generally, I think of these as reverse mantras: something you repeat to yourself, not to convince yourself that it's true, but to check whether it's true. "I'm going to have a great day today" can't be a reverse mantra because you don't know if it's true when you say it, but "I'm having a great day today" would be a fine one. The key is that you don't say it to trick yourself into thinking your day is great, but rather as a warning sign: if you say it and your day is actually mediocre, you'll feel the cognitive dissonance and take notice.
Whether or not you do anything about it is, of course, outside of the scope of the mantra.
I had an interesting idea today about baby names. Or, for that matter, any name selection process at all. I had a look at some of the existing options, and they seem to rely on some kind of name-sound index, name meaning lookups, or just pure random generation. All of these seem kind of clunky and unnecessary to me. A much better option would be to model name selection as a recommendation problem, the way Amazon or Netflix recommend products or movies.
You would enter any names you like, or pick from a semi-random list (selected for discriminative power). Then you rate how much you like each name. Every name you rate goes into a sorted list, and also feeds into a recommendation engine that selects new names to show you. Your ratings also tune the recommendation system for everyone else. When you're done rating, you have a ready-made list of names you like.
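To give a rough idea of what that engine could look like, here's a toy item-based collaborative filter over name ratings (the names, users, and cosine-similarity scoring are illustrative assumptions, not a worked-out design):

```python
from collections import defaultdict
import math

def cosine_similarity(a, b):
    """Cosine similarity between two sparse {user: rating} vectors."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[u] * b[u] for u in common)
    norm_a = math.sqrt(sum(r * r for r in a.values()))
    norm_b = math.sqrt(sum(r * r for r in b.values()))
    return dot / (norm_a * norm_b)

def recommend(my_ratings, all_ratings, top_n=5):
    """Score each unrated name by its similarity to names I've already
    rated, weighted by how much I liked them (item-based filtering)."""
    # Invert to {name: {user: rating}} so names can be compared directly.
    by_name = defaultdict(dict)
    for user, ratings in all_ratings.items():
        for name, rating in ratings.items():
            by_name[name][user] = rating

    scores = {}
    for candidate in by_name:
        if candidate in my_ratings:
            continue
        scores[candidate] = sum(
            rating * cosine_similarity(by_name[candidate], by_name.get(name, {}))
            for name, rating in my_ratings.items()
        )
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Toy data: everyone's ratings (1-5) tune the recommendations for everyone else.
all_ratings = {
    "user1": {"Arthur": 5, "Alice": 4, "Zelda": 1},
    "user2": {"Arthur": 4, "Edmund": 5, "Zelda": 2},
    "user3": {"Alice": 5, "Edmund": 4, "Ramona": 5},
}
print(recommend({"Arthur": 5, "Alice": 4}, all_ratings))
```

A real system would use something like matrix factorisation over many more ratings, but even the toy version shows the appeal: your preferences fall out of how you rate, rather than needing to be asked for up front.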
I think this would exploit a more reliable underlying similarity between names, and would avoid having to specify what type of name you're looking for. Boy name? Girl name? Dog name? Fictional character name? It doesn't really matter, because those preferences would show up in how you rate the options you're given. It's kind of neat that a recommendation model could make a name-choosing system that's not just better, but also simpler.
Another one of those areas where a really nice primitive can make a huge difference.
I've always found it difficult to think about anything superintelligence-related, or even moderate-increase-in-intelligence-related. It's not that I can't reason about those things – that's fairly easy – but it's really difficult to build an intuition for them. How would it feel to be a genetically engineered supergenius, or some kind of supercomputer-powered AI, or even just someone a lot smarter than I am? I have no idea.
One of the most interesting exercises is to go in the other direction and think about times you were temporarily impaired in some way. For example, when you're very tired you might recognise that you're tired from the physiological symptoms, but what psychological feedback do you have? The brain you'd use to reason about an impairment might not be up to the job, precisely because of the impairment itself. You see this most acutely in carbon monoxide poisoning, like in this story on Reddit where the author became totally useless but didn't realise anything was wrong.
I wouldn't be surprised to learn that there are much milder forms of this impaired awareness that happen all the time. Maybe you haven't been sleeping as well lately, or you're eating differently, or there are just natural changes in your cognitive function from time to time. How would you even know? Maybe you're half as clever as you were a year ago, and you just haven't realised because it doesn't feel any different. You just don't notice all the things you don't think of. Conversely, you might be more clever and not notice it if there aren't any obvious clues.
I suspect being the sudden recipient of superintelligence might feel a lot like sobering up after a big night and thinking "wow, I didn't feel that drunk at the time, but I had no idea what I was doing".