This year I actually attended talks at EA Global; a departure from my strategy last year of ‘ignore all programming; talk to as many people as possible until my voice runs hoarse’. Towards the end of the ‘Women and Nonbinary people in EA’ meetup when they were shuffling us out of the meeting space, I caught the eye of a young person from Israel.
They mentioned that they were new to EA, and very excited by the ideas and the goals of the movement. But then amid the fragmented conversation it came out:
‘I don’t feel very welcome here.’
This was less than 12 hours into the conference so I was surprised they had picked up that vibe so quickly. I asked what made them feel unwelcome. We both agreed it wasn’t people being overtly rude. Nor was it just the overwhelmingly technical focus of many of the attendees.
It was partly, though not entirely, the overall demographics – the heavily young, white, male, atheist, well-educated skew. We agreed, though, that it was more than that: something subtle was going on that made it hard for them to get into everything and feel like they could be a part of the community.
I wondered aloud with them how they would find the rest of the conference, and made them promise to report back.
I have a habit of wandering in and out of sessions at conferences that probably vaguely irritates the facilitators when it’s in small groups. In this case I had wandered into Julia Wise’s ‘Mental Health and Wellbeing’ discussion workshop late after cutting short an interesting conversation in the hallway. It took me a moment to even understand what the conversation was about, but Julia made the group rapidly feel at ease and the 30+ people were soon talking fruitfully about their fears and anxieties and how EA values clashed with other parts of their lives.
One woman spoke about feeling like she wasn’t good enough to do direct work; Julia responded by asking the group how many people had experienced impostor syndrome within EA.
Nearly every hand went up.
A few people mentioned strategies for keeping themselves on the straight and narrow, like remembering that they were ‘still doing more good than their non-EA friends’. I gave a reply that accidentally became a minor manifesto.
I said that I believed there was a certain type of person who was drawn to EA because they didn’t feel good enough, or felt like they had to earn their place in life somehow. I said I believed the principles behind EA and the psychology of the movement fuelled that. I also mentioned that too many people believed they weren’t allowed to be happy if they weren’t the perfect EA, or if they weren’t currently in the process of doing the most good they could. I argued that EA was beginning to have the same function as a religion in terms of providing purpose in (some) people’s lives, but that for a pseudo-religion it was doing a crappy job of providing people with the necessary social support to take on the difficult challenges it presented. At times it felt like the movement just attracted people and consumed all their excitement and enthusiasm without regenerating it, leaving people feeling burnt out and alienated.
I then wandered out in search of a different presentation and instantly regretted not staying.
From that point on, throughout the conference, people came up to me to talk about how much what I said in the workshop had resonated with them. That they felt like EA had a guilt problem, that they too had experienced it, and that they agreed that if EA were to thrive with the current demands it placed on people, it needed to become a community that gave people (and not just people in the right social networks) adequate social support. All of these people came out of the woodwork, as if by magic, to earnestly ask me how we could solve the very real problem that they had previously thought only they were struggling with.
I felt like a minor fairy godmother as I wandered through the venue, collecting whispered stories of people who felt lost, or who felt like they weren’t useful to a movement that they felt only wanted technical supergeniuses who could write AI papers or do research. The rumbles had started.
Later on, as I was speaking to a handful of friends and new acquaintances who had each been in or around the EA movement for a few years by this point, it occurred to us that not a single person in the group actually identified as an EA. EA-affiliated, maybe, but either something had stopped us from fully embracing it, or we had gotten disillusioned with the movement after being hardcore EAs for a while. We joked that so many people were having doubts that the only people at the conference who identified as EAs were the people who had just heard about it a few months ago and were in the honeymoon period. We agreed that there was something in the absolutism, in the black-and-whiteness of the dominant sales pitch, that made us uneasy, and that pushed us to half-identify, or to identify only as ‘part of the EA movement’ in some vaguer way. I knew that many leading EAs and EA orgs had tried to do something about this – to emphasise that not everybody needed to be hardcore – but it seemed in that moment that we were collectively some evidence that those efforts hadn’t worked, or hadn’t worked enough.
Effective Altruism is a psychologically demanding belief system. At its core are a few fundamental assumptions (maximising utility, egalitarianism, a moral duty to do the greatest good) inherited from utilitarianism, Peter Singer, and the elite-educated men and women who founded the movement. These assumptions, if you are under a lot of stress or predisposed to anxiety, depression or neuroticism, can feel oppressive. I remember, years ago, when I had decided that yes, I wanted EA principles to be the guiding principles in my life, feeling an overwhelming sense of dread when I realised that there was no way to resolve the tension between maximising my own utility and everyone else’s without being a martyr or a giant asshole. This ended up making me way less productive, and I only became able to engage with altruism again once I dropped the demanding belief system of EA.
It’s my thesis that the psychological issues that crop up within EA, while not necessarily the fault of the founders or leaders of the movement, a) do stem from really fundamental parts of the ideology, and b) are exacerbated by the public narratives around the leaders who are most highly visible. So my suggestions for improvement, if they are actually valuable, would require pretty extensive refactoring of the basics of EA, and ideally a shift in how the leadership represents EA both outwardly and to those within the movement. I’m aware this is a big undertaking, so I am currently at the level of sketching out ideas in the hope of starting a discussion, not proposing a panacea. I also want to note that I am explicitly not comparing EA to any other movement (which I’m aware also have their problems) – I am only comparing EA to itself.
Firstly, uh, the ‘child drowning in a pond’ argument. This has been an iconic and central part of EA for years now, and it has definitely contributed to the association between ‘we want to find the way to do the most good’ and ‘it would be immoral not to’. While in belief systems like veganism it is reasonably possible to meaningfully live up to the central ask, in EA, for most people, it is not. That is partially because ‘do the most good’ is an unbounded optimisation problem, and partially because people’s monkey brains cannot meaningfully distinguish ‘the best I can do is not the best anyone can do’ from ‘I am failing to do the best I can do’. The original goal is simple, but it can sometimes feel like a call only to those who already have a shot at doing ‘the most good’ in a competitive human sense. This leaves the poor, the disabled, the marginalised and the non-technical feeling like they have nothing to contribute (even when the community is crying out for empathy, great ops people and community builders). And it leaves everyone feeling like they aren’t doing enough, because ‘enough’ in this case is literally impossible.
There are two sub-components to this problem – a) everyone feels guilty for not doing enough, even when they are, and b) people feel like their lives aren’t purposeful or worthwhile if they don’t do the most good. I have a sneaking suspicion (that I mentioned in Julia’s workshop) that even though EA was started by people who wanted to be altruistic out of recognition of their extreme privilege, EA attracts the sort of people who tend to take on the world’s problems in order to feel less bad about themselves.
Whatever the solution to this part is, it has to involve a recognition that taking the weight of the world on your shoulders is dangerous, difficult, and absolutely not mandatory. Responsibility disproportionately larger than your sphere of control is a recipe for a bundle of stress and unhappiness, of the kind that makes you weaker, not stronger. Most people (particularly young idealistic people who are yet to finish the necessary personal growth work to fix whatever ickiness developed from their childhood) are generally not ready. There is a reason cultures have serious initiation ceremonies for adolescents, and that’s just for taking on the responsibility for your own adult life (and maybe that of your family or close community). A thriving EA culture would help those people develop the strength to take on whatever moral responsibility is appropriate for them to manage, along with wise mentors who can advise caution when they want to take on too much too fast.
Current EA culture lowkey says ‘Well, the world is suffering, and it’s your responsibility to fix it,’ and then the newbie closes the browser tab and has to endure their next five existential crises on their own.
One small part of this is the identity ‘Effective Altruist’. Think about it for a second – when you are identifying as an EA you are saying (with your words if not your intentions) that you are already highly proficient at the skill of doing good in the world, and you are already doing it.
Looking only at the psychological effects of identifying as an ‘Effective Altruist’, there is a small lie inherent in the label the minute you take on the moniker. Because most people who become EAs are not yet proficient at the skills of doing good. EA should be a term like ‘knight’ – awarded sparingly, and then only to outstanding individuals whose contribution over a long period of time is unparalleled. Not something that people label themselves as soon as they’ve read ‘Doing Good Better’ and made their first donation to AMF.
This is not for PR purposes. Calling yourself an EA when you feel neither effective nor wholly altruistic feels, to the intellectually honest believer, hollow and insincere. It’s like giving everyone participation trophies. With the shift in focus to collective do-gooding spearheaded by Will at last year’s conference, maybe the way out is to deemphasise the value of individual effectiveness (something that I think unfortunately happens as a byproduct of the fact that 80K’s career advice gets so much media coverage in comparison to other organisations’ work). We might instead emphasise the fact that we are building several machines, or a garden, that is itself improving at doing good.
This is difficult because the term altruism is very human-shaped and inherently has a subject who is altruistic – it doesn’t make sense in English to say that a machine or an institution is altruistic in the same way you would discuss a human. But it’s a direction that might be useful to aim at. Ideally new converts wouldn’t have to make any commitments or take on any moral responsibilities at all – at least until they were partway through a gradual process of strengthening themselves and understanding the landscape.
In my discussion of EA being a process of building a garden I hinted at what I think is another significant psychological hazard within EA – utilitarian utility-maximisation. This framework has the unfortunate property of being both incredibly obvious and fundamental to those who believe in it, and completely ridiculous and mystifying to those who don’t. When I rounded up my new Israeli friend again to check in on how their relationship to EA was evolving throughout the conference, this was one of the things that came up. It came up later, too, in a group conversation about how the shift in 80K’s top careers, based on new research and knowledge, made people feel crappy when their thoughtfully chosen earning-to-give job wasn’t such an effective career anymore.
A formal optimisation function is a tool, and it should be only one of several used to pursue a long-term, unstructured goal. One of the main things that alienated me from EA in the last couple of years has been feeling like I can’t pursue diverse strategies – like making myself healthy and powerful as a number one and not a secondary priority, or focusing on paradigmatic and ecosystem-level changes at the expense of ‘optimal’ ones – without feeling like a ‘bad EA’ within the community.
Maximising utility is great when the problem is well-defined, the terms are clear and agreed-upon by everyone, and the metric we’re optimising for is clear and unlikely to cause any complex secondary effects. With things like ‘minimising suffering’ literally none of these are the case. While yes, it is an important tool in the pursuit of doing good (as long as you understand that it isn’t a requirement for being a ‘good enough’ human, which is not an easy thing to remember within EA at times), it shouldn’t be up front and centre as the predominant strategy of the EA movement.
But ‘removing maximising utility as the movement’s frontman’ isn’t easy even if we’d like to change it (which the community may object to). We would need something even more psychologically, logically and emotionally powerful to sit higher in the belief system’s value hierarchy. This is maybe why religions have been so good at getting people to do good (as defined by the religion). They have a pretty damn powerful idea (God) that the whole altruistic motivation system tops out at. EA mostly tops out at atheist privilege, Enlightenment-era moral axioms, and abstract maths. I’m not sure how we solve this, but it’s interesting to consider what we might put in this highest position.
Maybe a social movement is not the right structure for a collection of humans who want to do the most good in the long run? Social movements are unstable; they rely on the constant frustration of their members, and ideally the good ones pick a well-defined problem and dissolve when their problem is solved. EA’s problem is not well-defined or clearly scoped (at least as it is now; when the goal was just ‘make philanthropy more effective’ a few years ago it might have been). Perhaps there are other institutional structures that would better serve the movement’s ability to meet its goals and allow its participants to flourish? I don’t know what they are (here I am, throwing out so many problems without answers) but it would be useful to think about. The default, an informal social movement, is prone to all the same diseases as other social movements and doesn’t offer a lot of support to its members in their challenging quests.
EA as a movement currently attracts people as if they were rocketships: it gives them a minimal amount of fuel and then kind of launches them into the sky. Some people inside EA orgs and living in EA hubs may not feel like this, but they are probably the exception to the rule. If we want this movement to succeed and avoid burnout, we need to work out how to turn it into an ecosystem that sustains the members who are taking on difficult tasks, and that takes advantage of the fact that it exists within a planetary civilisation of abundant resources, not on an isolated craft out in space.
Towards the end of the conference I ended up at a table with a few of the people who had mentioned their concerns to me after the wellbeing workshop. The atmosphere was vulnerable but caring; we all felt relieved that the problems we were experiencing were not isolated, frustrated at the way the movement was going up until this point, but also optimistic that a way for change was possible.
Quietly a few very established community members joined our table; they were, to put it bluntly, people who did not have similar faces or skin colours to the ones who had initiated the conversation. I half-feared that when they listened to our stories and our feelings of alienation they would dismiss our concerns. I had seen that happen within EA a few times before.
Instead, to my delight the three men listened compassionately and openly, and made the others feel heard without dominating or changing the conversation. It slipped out that they each worked for major EA orgs within the Bay Area, and as the conversation continued my respect for all those at the table grew.
The EA org men, for their openness, and their willingness to be led in solutions by the people experiencing the problem. And the people who had come to me originally; the new, the marginalised, the insecure – for seeing a movement that sometimes made them feel guilty and unwelcome and choosing to stay and create change themselves rather than giving up and running away.