This is a transcript of a talk I gave at the Targetprocess conference on July 27. It meanders and jumps a lot before getting to the point in the title, but that was a good opportunity to discuss some rationality-related topics. (One of the working titles was “Thinking Better: Ten Weird Tricks They Don’t Tell You About”, but then I remembered that people love structure and narrative, and (sometimes) giving people what they (subconsciously) want is one of the topics here.)

So you punched someone in the face. Why did it happen?

An easy and boring answer would be “I was angry” (or even more boring stuff like sports). An interesting answer would include neurons, neurotransmitters, the amygdala, hormones, exposure to different chemicals since your childhood, and Homo sapiens evolution. We’ll focus on the last one today: it’s harder to get rid of.

Humans are not blank slates. You probably don’t expect anyone to choose how many hands they have, or even how fast they can multiply three-digit numbers. However, in a lot of areas there is an implicit expectation that humans are shaped only by culture, school, or parents. That’s just not true: genes are important, and we all have quite a lot of genes in common.

On the other hand, an unexpectedly common pitfall is to use evolution as a guide for moral intuitions. If you want to make a case for polyamory, there’s no need to search for it in our species’ history. Sure, jealousy has a lot of evolutionary reasons, as do other things relevant to this issue, but just because something isn’t natural for humans doesn’t make it wrong, and just because prehistoric humans did something doesn’t make it morally right.

Let’s talk about infanticide.

Gorillas are a tournament species. It means that their males have a strict hierarchy: the most powerful one gets almost all of the mating and fathers a lot of children, while the losers don’t get anything. However, you can’t be the strongest forever, so every several years a new leader emerges.

Here’s what happens quite often after such revolutions: a new dominant male comes in and kills all the baby gorillas. Why? Well, this gorilla doesn’t care that as a species they’re almost extinct. The only thing he cares about is his own genes, and one thing that he knows is that females won’t ovulate while nursing babies. Of course, females also want to spread their genes, so it’s not like they go along with this willingly. However, another feature of tournament species is high sexual dimorphism: males in these species are much bigger and stronger than females. So this male happily kills infants, waits until the females ovulate, and hopes that he won’t be overthrown too soon.

Now, as far as we know, early humans were not into infanticide, but that’s not the point. Suppose we learn that it was in fact a common practice 200,000 years ago. Does it make any difference? Of course not! Studying human evolution is useful for answering questions about the reasons for behaviour, but not for making a case about which behaviours we should have.

However, even if you understand human evolution perfectly (and we don’t), interactions with everything that happens after birth are pretty important. Something being heritable doesn’t mean that it will necessarily happen. It’s one thing to make a prediction based on early humans’ lifestyle, and another to check that it indeed works out in a modern environment.

Research

Studying complex systems is pretty hard. Your standard research ideally goes somewhat like this: you notice or theorize that whenever X grows, Y grows as well. To check this, you try to isolate some part of the system, split it into two identical subparts, change only X for one of them, and then compare how Y changes in each. If the changes in Y are noticeably different between the subparts, congrats, now you’ve got a way to manipulate Y. If not, that’s nice too, since now you know that X has nothing to do with Y.

We just made three big assumptions which are often hard to satisfy, especially in the social sciences. First, in a different part of the system (say, on a different continent) things can work differently. Second, you can’t really split people into identical subgroups. Finally, a lot of random stuff happens to people all the time, and Y is influenced not only by X but also by A, B, Q, and W, some of which may never happen again.

One way to deal with this is to run experiments multiple times. Splitting 10 people into identical groups is impossible, but across thousands of people the relevant individual differences get smoothed out. If you’re reading about a study with 30 subjects, you should be extremely careful. (Pro tip: don’t read outlets whose headlines start with “Scientists proved that”; also, be cautious about almost any popular science coverage.) It doesn’t mean that only enormous studies are useful: meta-analyses combining multiple small studies are extremely important.
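To make the sample-size point concrete, here is a minimal simulation sketch; the true effect, the noise level, and the group sizes are invented for illustration. Individual 30-subject studies of the same effect scatter all over the place, while pooling many of them lands close to the truth.

```python
import random

def run_study(n_per_group, true_effect=0.2, noise=1.0):
    """Simulate one study: compare Y between a control and a treatment group."""
    control = [random.gauss(0.0, noise) for _ in range(n_per_group)]
    treated = [random.gauss(true_effect, noise) for _ in range(n_per_group)]
    return sum(treated) / n_per_group - sum(control) / n_per_group

random.seed(0)

# Ten separate "30-subject" studies (15 per group): the observed effects scatter widely.
small_studies = [run_study(15) for _ in range(10)]
print([round(e, 2) for e in small_studies])

# Pooling 200 such studies (a crude stand-in for a meta-analysis) gets close to 0.2.
pooled = sum(run_study(15) for _ in range(200)) / 200
print(round(pooled, 2))
```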

Still, even with all these difficulties, we’ve managed to learn a lot about how humans work. (Also, we’ve managed to unlearn a lot recently.) We know that genes influence a lot of behaviors and outcomes, and so it makes total sense to discuss the legacy of early humans. Early humans are famous for two things: living in groups of up to 150 people and making type I errors, because it’s safer to mistake a stick for a snake than vice versa.

Signaling

Signaling is really simple for simpler species. A peacock wants to spread his genes, and he has to find some way to show that his genes are the best in this forest. Since peafowl geneticists are somewhat lacking, and they don’t have their 38andMe yet, they use a visible signal: an enormous, beautiful, and completely useless tail. Importantly, it is an honest signal: you can’t say “I have a great tail” without having it.

Humans signal as well, and in a lot of ways. Of course, signaling to potential mates is still important, but we also have status and loyalty and other social concepts. The standard example is conspicuous consumption: people don’t buy sports cars because they’re fast, people buy them because everyone around knows they’re expensive. Driving a Prius sends a signal that you’re concerned about the environment. Giving this talk also sends a lot of different signals.

Of course, signaling is not the only reason for actions. People genuinely like fast driving, care about the environment, and want to tell others something interesting. Moreover, there’s not much harm in buying cars. However, a lot of signaling happens in areas where we could really use better decision-making.

Like, say, in politics. Even outside elections, politics isn’t about policy. Politics is about signaling loyalty, and about what groups or individuals should rise and fall in status.

In fact, the more you look around, the more stuff looks like signaling. Want to look like a better Catholic? Don’t campaign against murders (everyone is against murders), campaign against abortions (they’re controversial, so it’s a much more honest loyalty signal). If higher education is about knowledge, why are students happy when a class is canceled? And what do people stop doing after getting married?

(But you know what is cool? Money is a really, really cool idea. The standard virtue-signaling stance is that it’s bad that we grant higher status to people with more money, but reworking social structures is very, very hard, and giving people money is relatively easy.)

An interlude on fixing oneself

At previous Targetprocess conferences I gave talks with optimistic titles like “It sucks to be a human,” so the next two sub-chapters offer some ways to fix some issues in oneself. We’ll get back to manipulating others, don’t worry.

All these built-in biases are implicit, so the best way to deal with them is to make your assumptions and reasoning explicit: with words, or, better yet, with numbers.

CBT

Cognitive behavioral therapy is in many ways applied rationality. The basic idea of CBT is that your feelings (e.g., being depressed or anxious) are caused not by events around you, but by your thoughts about these events, and oftentimes these thoughts have a lot of biases and distortions. You hear a single negative remark, immediately jump to “Oh my god, this guy hates me and I’m a failure at everything I do and everyone knows it”, and you don’t even notice how silly it is because in reality you don’t spell it out like this. CBT (this is not medical advice, and you should read Burns on this) wants you to stop and, for starters, go through a short checklist: Do I overgeneralize? Do I try to mind-read? Jump to conclusions? Take this too personally?

This pattern-matching against several distinct questions (Burns lists 11) is probably easier to use, but most of it boils down to inaccurate estimation: most of the events you notice are not as important as they seem, and you ignore a lot of other stuff. (Remember: you have much more information about yourself than about any other person.) Many CBT exercises fight these biases: you can count the number of times you did something useful during a day, or the number of times that guy said something good about your work, or at least trace how exactly a code review note saying “you should use an autoformatter” turned into suicidal thoughts.

(Jokes aside, some amount of CBT should be taught in high school, and even if you don’t have any depression or anxiety symptoms—really?—you should read the book.)

Bayesian thinking

Probabilities are subjective and are just a measure of your uncertainty. Three different people looking at the same problem can assign different probabilities to the same events (especially if they don’t all have all the relevant information; see also: Aumann’s agreement theorem). It may sound stranger than the frequentist approach, but it’s more useful in a world where you can’t repeat experiments (also known as the real world). It doesn’t mean that any probability is just as good: there are still important rules, and following them will make your probabilities better.

(You’re good at probabilities if, out of 10 events you forecast with 80% probability, about 8 really happen. A 30% probability of rain means that in the long run, it rains after about 30% of such forecasts, and that’s what really happens with short-term weather forecasts.)
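Checking your own calibration is just bookkeeping: group past forecasts by the probability you stated and compare with how often those events actually happened. A minimal sketch, with hypothetical forecast records:

```python
from collections import defaultdict

# (stated probability, whether the event actually happened); these records are hypothetical
forecasts = [
    (0.8, True), (0.8, True), (0.8, False), (0.8, True), (0.8, True),
    (0.3, False), (0.3, True), (0.3, False), (0.3, False), (0.3, False),
]

buckets = defaultdict(list)
for prob, happened in forecasts:
    buckets[prob].append(happened)

for prob in sorted(buckets):
    outcomes = buckets[prob]
    observed = sum(outcomes) / len(outcomes)
    print(f"said {prob:.0%}: happened {observed:.0%} of the time ({len(outcomes)} forecasts)")
```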

We will not go into the maths here. The gist of Bayes’ theorem is simple: you have two possible causes, A (“aliens”) and B (“not aliens”), and then you learn about an event that is more probable in the world of A (say, given A it happens 90% of the time, while given B only 5%). Now you should update the probabilities: A becomes more probable, B less. But be careful with the size of the update: if you noticed something really strange in the sky, it is reasonable to increase A’s probability; however, A’s prior probability is extremely low (we’ve been monitoring the sky for a long time, and no aliens have been seen), so you need really good evidence to move it even to, say, 10%. A spaceship landing in front of you is good evidence (especially if you’re not the only one seeing it); a bright line on a photograph is not.

In short, be sensitive to new information, but not too sensitive, and don’t use 0 and 1 as probabilities (since you can’t move them with any Bayesian update).
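The arithmetic behind that update fits in a few lines. A minimal sketch of the aliens example: the 90% and 5% likelihoods come from the text above, while the prior is an invented illustrative number.

```python
# P(aliens): an invented, deliberately tiny prior
prior_aliens = 1e-6
# Likelihoods from the example: P(strange sight | aliens) and P(strange sight | not aliens)
p_sight_given_aliens = 0.90
p_sight_given_not_aliens = 0.05

# Bayes' theorem: P(aliens | sight) = P(sight | aliens) * P(aliens) / P(sight)
p_sight = (p_sight_given_aliens * prior_aliens
           + p_sight_given_not_aliens * (1 - prior_aliens))
posterior_aliens = p_sight_given_aliens * prior_aliens / p_sight

print(f"{posterior_aliens:.6f}")  # ~0.000018: higher than the prior, nowhere near 10%
# Note: a prior of exactly 0 or 1 would never move, whatever the evidence.
```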

Another useful idea is splitting what happens into skill and luck (this is an obvious distinction in games like poker and Hearthstone, but life overall is quite similar: you can influence it in some ways, but you don’t know everything, and there is a lot of randomness). Skill is everything under your control; luck is the randomness outside it. Skill is taking in information, processing it in the best way possible, and doing the thing that has the highest probability of success.

Let’s say you’re adding a new feature to your application. You do your research about potential users, conduct a lot of UX interviews, create beautiful animations, code everything perfectly, and make sure that all 57 new microservices run smoothly and without any issues. However, instead of thousands of active users it gets four (on a good day), and you declare it a failure.

But was your decision-making a failure? Maybe not. Maybe your Slack integration failed because two days before its release everyone learned that Slack sends all those passwords in your DMs to Chinese and Russian hackers, or because everyone switched to the just-released Snapchat for Business. These reasons are luck. Interviewing the wrong users or developing a feature for five years is skill.

A good exercise would be to take two separate groups of people, tell one of them that your feature failed, tell the other that it was a complete success, and ask both for feedback on your decision-making.

It seems strange: on the one hand, you should be a good consequentialist (I had several paragraphs on this, but instead you should check out Scott Alexander’s FAQ), and on the other hand I’m telling you to ignore results. What gives? Well, of course you shouldn’t just ignore them: they should be a very useful input for your next decision. But if you want to evaluate your decision-making, do not fixate on what happened because of luck—at the very least it will help with your anxiety.

Doing good better

Effective altruism is a wild idea that you should care about what exactly your donations are doing. It may feel nice to help puppies in a local shelter, but is it the most good you could get for your money? If for the price of ten thousand dollars you can save one human, two humans, or ten puppies, what is your choice? You could make a case for puppies (you shouldn’t: humans are more important than cute animals), but two is always greater than one. (Outside of simplified examples you would be using metrics like QALYs—Quality-adjusted life years.)
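The comparison itself is just division: estimate each option’s benefit in a common unit and divide by its cost. A toy sketch with invented numbers (real cost-effectiveness estimates are what evaluators like GiveWell publish):

```python
# All figures below are invented placeholders, only to show the arithmetic.
options = {
    "local puppy shelter": {"cost_usd": 10_000, "qalys": 2},
    "charity A":           {"cost_usd": 10_000, "qalys": 35},
    "charity B":           {"cost_usd": 10_000, "qalys": 70},
}

ranked = sorted(options.items(), key=lambda kv: kv[1]["qalys"] / kv[1]["cost_usd"], reverse=True)
for name, o in ranked:
    print(f"{name}: {o['qalys'] / o['cost_usd'] * 1000:.1f} QALYs per $1,000")
```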

The easiest way to get these two-for-one deals is to earn money in a relatively rich country and spend it in a relatively poor one (most of the top charities listed by GiveWell operate in sub-Saharan Africa). Instead of working directly with less efficient local charities, you can spend those hours doing a well-paid job and give away all that money. GiveWell’s top charities are pretty fail-safe: if you spend 100 dollars on anti-mosquito nets, they will definitely work and protect a number of people. A more interesting question is the comparison between these straightforward causes and moonshots like trying to eradicate mosquitoes altogether. We may completely eliminate the malaria problem or just spend a lot of money without any effect. It’s hard to tell; it’s in the future.

So, yes, the future is hard to predict, but at least economic growth raises all boats. The richer our society gets, the more scientific and technical progress there is, and even if you don’t like industrial-scale state surveillance, at least medical drugs, clean water, and better harvests save a lot of lives. The main issue with economic growth is that it mostly helps people living in the far future.

Inspiration and signaling in tech

Okay, now we can finally get to the point.

It is hard to make people feel emotional about abstract ideas. Making a market a tad more efficient is useful and might make a huge difference in 400 years, but is hardly exciting. Helping 20 kids in Africa is objectively better than saving stray dogs, but doesn’t give the same warm glow. To push people to do good better, we not only have to convince them, we have to inspire.

Inspiration is all about status. You can write think pieces about how Shuri is an inspiration because she is given a high status in the movie. Not just literally in the movie (being a boring princess would’ve been enough for a high status in the fictional universe), but mostly by the way she is portrayed (including all those one-liners—yes, humor is also about signaling).

Such a character doesn’t even have to be high-status because of The Good Thing: if a cool character does something boring, it becomes cooler by association. It would be nice to have a better Elon Musk who would publicly support GiveWell and the Against Malaria Foundation instead of building a pretty useless submarine. More generally, the social norm of not announcing one’s charity contributions is yet another instance of harmful signaling.

I don’t know yet how to make economic growth (or, say, project management) more inspiring, but maybe we don’t need to (although even marginal improvements are useful, as always). Maybe you can indirectly improve the status of people working on this stuff. Maybe your “Boring Yet Useful” software development company should just use a higher-status programming language (ask any PHP developer whether there are status differences between programming languages).

Does this sound like something that any good engineer should immediately condemn? Shouldn’t we choose the best tools for the job regardless of what’s on the front page of Hacker News?

However, the tools that we use aren’t just languages and libraries: mostly we build software with people. And look, you already make engineering decisions based on human constraints: people can’t work for too long, aren’t experts in every useful technology, and have a hard time dealing with concurrency. If a more exciting and cool language is 85% suitable for the job, wouldn’t it be a good trade-off to have your employees more interested and interesting?

It’s also important for hiring. Saying “I work at NASA” has an effect not only because rockets are cool, but also because it sends the signal “I am smart enough to work at NASA”. Yes, to some extent, you can fight anything with financial incentives and just overpay people to do something mundane. Or you might think of ways to improve the signals people send when they say they work at your company. Maybe you already have some cool people as employees: “I work with X and Y” is pretty good if anyone outside the company knows who X and Y are. “I work on this Z library you probably used” is also nice. Have you tried open-sourcing more of what you’re doing?

I can’t really recommend any specific ideas (even X, Y, and Z). You just should not forget that you’re hiring humans, and humans are social animals. All those standard perks (yay, I don’t have to bring my own cookies! And someone remembers to pay for the gym!) are about convenience. Humans don’t dislike convenience, but they don’t really notice it. Status is another matter: you notice it, you love it, you crave it.

Still, here’s a catch: is status-chasing a zero-sum game? In a general sense, it’s not: if you’re a member of a lot of different groups, it’s easier to be someone important in at least one of them (that’s what hobbies with communities are for). Within the single world of tech companies, it is: companies can’t all be the most admired, and if everyone switches to Clojure, you won’t gain much by doing the same (please do send angry emails with long explanations of how Clojure makes you a fitter, happier, more productive developer). But if this rat race leads to a world with more cool open-source technologies, it’s totally worth it.