
Ineffective Rationalism and Effective Alibis | Part 1


featured photo | Dorothe

“Every civilisation has had its irrational but reassuring myth.
Previous civilisations have used their culture to sing about it and tell stories about it.
Ours has used its mathematics to prove it.”
~ David Fleming, Lean Logic: A Dictionary for the Future and How to Survive It

Effective Altruism (EA) has become one of the most powerful ideologies influencing philanthropists and wealth holders in the United States and Europe over the last decade. EA’s ostensible mission is to deduce the most effective means of engaging in philanthropic and charitable activity. However, the extreme rationalism of EA and its methodologies (e.g. rankings, randomized controlled trials, cherry-picking solutions for which data exists, etc.) means that it is structurally designed not to address the root causes of social issues. As the old economic adage of Goodhart’s Law states, “When a measure becomes a target, it ceases to be a good measure.”

This article addresses both EA and its philosophical counterpart, Longtermism (LT). LT’s vision of utopia is based on a future where “digital beings” – ostensibly our offspring of some kind – are part of the future generations we need to care about, and it is to these beings, Longtermists claim, that we “owe the future”. LT uses the same base utilitarian logic and moral principles as EA and reaches the same conclusions: it abstracts projected future problems while ignoring current systemic issues.

We deconstruct EA and LT’s core arguments and shed light on the instrumentalist logic that often leads to short-sighted metrics and morally dubious tactics in the name of effectiveness.1 We highlight the danger of superimposing narrow measurements of scientific effectiveness onto complex social issues.

We use the logic of EA and LT as ‘breach points’ through which to deconstruct neoliberal and techno-utopian fantasies that reinforce materialist, mechanistic, utilitarian knowledge systems that give rise to the paradigm of perpetual economic growth and ongoing exploitation and extraction. EA is linked to the broader trend of philanthrocapitalism – the centering of the super-wealthy as the primary social change agents through their access to power, status, wealth, and goodwill. We are told that the wealthy are the best, indeed the only, possibility for making change, because the whole world is in any case determined by the ultra-wealthy.

Readers are invited into an alternative perspective: a systemic, historical lens for approaching the root causes of the problems that philanthropy aims to ‘effectively’ address. We provide a brief overview of what an alternative approach might look like, comparing key aspects of the EA/LT cosmology with a post capitalist / symbiotic approach. The final section, “What We Owe the Present”, is a call to presence, sobriety, and responsibility for those engaging in the complex and critical work of social change.

***

Greed is good…again

EA temporarily moved into public disrepute at the end of 2022 after one of its largest promoters and donors, Sam Bankman-Fried, filed for bankruptcy. Bankman-Fried was caught leveraging his clients’ money to cover his own trades in a massive cryptocurrency Ponzi scheme. Bankman-Fried himself once described Effective Altruism as a means to “get filthy rich, for charity’s sake.” This phrase reveals a deeply entrenched logic of late-stage capitalism; namely, the continued destruction and exploitation of the living world for the “greater good”. This cut-throat neoliberalism is then propped up by the alibi and illusion of effective philanthropy.

Part of EA’s appeal stems from its obvious first principles, which almost no one would refute. Namely, philanthropic practitioners want their actions to be “effective”. However, we see very little reflection on what actually constitutes effectiveness – and for whom – in the myriad crises of our times. Instead, we see statistical rigor conflated with genuine reflection on effectiveness, alongside the strongly held belief, and accompanying techno-utopian narrative, that the world is getting better by and large because of progress through industrial and technological capitalism. Although this may be true in very narrowly defined ways for certain segments of the global population, this position ignores the destruction that has also been created by the current trajectory of development (e.g. 200 species a day going extinct, the crossing of six of the nine planetary boundaries, the destruction of Indigenous cultures, peoples and languages, etc.). As we have written elsewhere, the current progress narrative is not founded on a strong evidence base. EA measures effectiveness within the narrow narrative of modernity, supporting and reproducing that context without addressing the root causes.

Photo | Riya Kumari

The second core principle of EA is that everyone, especially the wealthy, should give more of their wealth to the less fortunate, i.e. altruism is a virtue. William MacAskill, a moral philosopher at Oxford University and the cherubic face of EA, has publicly declared that rich people should give 99% of their wealth away. Amen. But what should they give their funds to? Who decides this? Who defines what is actually “effective” and from what epistemology is this determined? What is the worldview or ontology driving how philanthropists see and make such decisions? 

Let’s take a moment to unpack altruism and how it is defined and enacted for EA. The word altruism derives etymologically from the French word autrui, meaning “other people”. Altruism has been historically understood as a form of “care for other people” that includes some kind of “cost to the actor”; an act more akin to solidarity than simply charity. EA distorts this general relationality into something rather banal, reframing altruism to mean a vulgar form of charity. That is, EA is enacted as charity controlled and directed by “rationally-optimizing” individuals, institutions, and methodologies, exercising power through narrowly defined notions of philanthropic effectiveness. 

EA argues, uncontroversially, that individuals should do more good. However, it emphasizes units of impact and charities (including not-for-profits, non-governmental organizations, etc.) that deliver these units of impact “at scale” as being the most effective. This model of strategic philanthropy disregards the deeper, structural causes of what some have termed the “meta-crisis”. The meta-crisis refers to the cascading and interrelated nature of inequality, poverty, climate catastrophe, species extinction, spiking pandemics and other outcomes of neoliberal capitalism. Whether explicitly or implicitly, proponents of EA suggest that nothing fundamental in society can be changed. Therefore, it is up to the “sovereign individual” to do all the good they can to improve lives and make small reforms to the existing system. 

This is consistent with the notion of charity as practiced by the dominant culture of philanthropy: largely top-down acts of purported benevolence for selected beneficiaries. Charity is used as an instrument to alleviate symptoms of injustice – symptoms created by these same elites hoarding resources in the first place – while forging alibis and generating kudos for the ‘generous benefactor’, without ever addressing the underlying drivers of unjust outcomes.2

Moreover, the EA “movement” (as adherents to the ideology generously describe their small but powerful community of largely Silicon Valley techno-utopians and academics at elite universities) has contorted the word ‘altruism’ to suit the purposes of the existing paradigm of growth-based capitalism.3

Effective Altruism reduces social change to the dictates of a wealthy individual’s choice, and, in doing so, excludes the possibility of other avenues for social change, including systemic change (e.g. moving away from a growth-based economy), citizen-led collective action (e.g. social movement organizing) or even progressive reforms (e.g. creating a wealth tax). The underlying logic of EA enshrines a preference for individuals, not-for-profits delivering services, and philanthropic institutions as the loci of power, thus strengthening the neoliberal paradigm.

This is part of why EA is so appealing to wealthy power elites. EA informs us that the most effective way to be altruistic is to give surplus wealth to not-for-profit organizations, those deemed effective in their service delivery by narrowly defined and measured criteria, so as to mitigate some of the ills stemming from wealth creation in the first place. In other words, EA is a means to outsource the clean-up required by the destructive consequences of amassing wealth in a dysfunctional system.

While this may sound a bit far-fetched, proponents of EA explicitly espouse the principle of “earn to give”, which tells young people to seek out work on Wall Street or at Big Oil firms in order to earn as much money as they can, so that they can ostensibly give more money away. EA is not concerned with whether the production of that money causes environmental, social or economic damage to begin with. A more apt definition for the acronym EA is “effective alibis” or, to be more pointed, “extraction alibis”.

While these alibis are patently absurd, they are also deeply seductive. 

EA as Philanthrocapitalism

Within EA, philanthropy is practiced as an extension of the transactional, market-based logic of neoliberalism. As we’ve stated above, an explicit goal of EA practitioners is to accumulate more wealth through the existing system in order to give more to charity.4 This formulation tacitly assumes that if someone knows how to earn money, they must be able to demonstrate the same prowess in giving money away effectively. As such, EA can more accurately be described as a conspicuous branch of “philanthrocapitalism.”5

Two myths of neoliberal capitalism are most obviously reified and amplified through philanthrocapitalism and EA. The first is the idea that ordinary people do not know what is good for them and require the instruction of experts and well-thought-out algorithms of structured generosity created by the wealthy class, whose wealth is tied to their superior skill or intelligence. The second is the ahistorical exoneration of wealth creation including colonialism, imperialism, genocide, enslavement, monopoly creation, perpetual war, collusion, resource extraction, environmental degradation, governmental capture and other drivers of wealth that produce philanthropy’s capital.

We are conditioned to accept the innate benevolence and wisdom attributed to those who have accumulated wealth. Becoming rich, by any means necessary, is perceived as the main avenue by which the individual can share abundance with the rest of the world. The aim of philanthropy within the EA worldview is to increase the capacity of others to achieve similar economic opportunities within the market system. Indeed, poverty alleviation through economic growth is the uncontested, dominant narrative within philanthropy and international development. As we have argued in Post Capitalist Philanthropy: Healing Wealth in the Time of Collapse, much of philanthropy has shifted its purpose from a silencing salve for economic retribution to being a pro-active co-optation machine, summoning new converts into the church of neoliberal market fundamentalism.

The perverse consequences of philanthrocapitalism – especially its latest incarnation of EA – are endless. EA is a guise for billionaires to kick the can of consequence and concern down some future time horizon so that they can instead focus on the important work of becoming richer now, because, as we are told, doing so will benefit all of humanity disproportionately. This is the mutation of “trickle-down” economics into “trickle-down charity”.

The EA community already has a mind-boggling $46.1 billion in committed funding (growing at 37% per year since 2015). These funds often flow circuitously to other rich proponents of EA. For example, Open Philanthropy, one of EA’s arms, has created a $150M fund for other “effective funders” and their causes. Its beneficiaries include the Bill and Melinda Gates Foundation and USAID. 

Rationalism ad absurdum

EA advises wealthy philanthropists to channel their money according to rankings of “evidence-based” efficacy, centralized in repositories such as the Effective Altruism website, GiveWell (founded by two hedge-fund managers), Good Ventures (co-founded by Facebook co-founder Dustin Moskovitz and his wife, Cari Tuna), Open Philanthropy (a joint venture between GiveWell and Good Ventures), Oxford’s Future of Humanity Institute (FHI), Oxford’s Global Priorities Institute, Giving What We Can (created by William MacAskill and Toby Ord, both of whom work at Oxford and whom we will properly “meet” shortly), the Center for Effective Altruism (also co-founded by Ord and run by MacAskill), FTX Future Fund (now defunct after FTX filed for bankruptcy while MacAskill held the position of senior advisor), Longview Philanthropy, 80,000 Hours (also founded by Ord and MacAskill), and various other interchangeable EA arms – with a cast of shared characters moving through these institutional revolving doors.

This small group of organizations within the EA community largely relies on the results of “randomized controlled trials” (RCTs) to determine and define “effectiveness”. Such RCTs are problematically narrow, pseudo-scientific attempts to measure impact.6 As Emily Clough states in the Boston Review:

While they are good at measuring the proximate effects of a program on its immediate target subjects, RCTs are bad at detecting any unintended effects of a program, especially those effects that fall outside the population or timeframe that the organization or researchers had in mind. For example, an RCT might determine whether a mosquito bed net distribution program lowered the incidence of malaria among its target population. But it would be less likely to capture whether the program unintentionally demobilized political pressures on the government to build a more effective malaria eradication program, one that would ultimately affect more people. RCTs thus potentially miss broader insights and side effects of a program beyond its target population.

To make matters worse, compounding this overreliance on a limited methodology, the EA community often cherry-picks research that serves its biases and belief systems for how to address complex issues, thereby defining what is most effective. For example, GiveWell chose to elevate a single 2004 study on deworming, which resulted in deworming becoming one of GiveWell’s most well-supported causes, even in the face of a major backlash from the scientific community. Some researchers claimed to have debunked the study, while others attested to being unable to replicate its results. Thus, despite the uncertainties surrounding this intervention, GiveWell directed more than $12 million to deworming charities through its Maximum Impact Fund after publicizing the research via its media.

There is a deep hubris in EA that implicitly assumes we can understand the world’s problems through a positivist, mechanical, reductionist version of the scientific method. Indeed, EA attempts to give a false sense of statistical precision by imposing probability values on top of subjective beliefs.7 We’ve seen such positivist, reductionist worldviews expressed in and through philanthropy many times before (e.g. the industrialists advancing schooling for greater productivity in factories). Such epistemologies are so ubiquitous that their mechanized, metric-obsessed approaches blind us to the underlying causes and conditions of social and ecological ills.

In a critical review of MacAskill’s previous book, Doing Good Better, in the London Review of Books, the philosopher Amia Srinivasan states:

Effective altruism, so far at least, has been a conservative movement, calling us back to where we already are: the world as it is, our institutions as they are. MacAskill does not address the deep sources of global misery – international trade and finance, debt, nationalism, imperialism, racial and gender-based subordination, war, environmental degradation, corruption, exploitation of labour – or the forces that ensure its reproduction. Effective altruism doesn’t try to understand how power works, except to better align itself with it. In this sense it leaves everything just as it is.

Syrian refugee camp | Ahmed Akacha

Complex, messy, entangled, historically determined, structural drivers of inequality, poverty and ecological collapse cannot simply be reduced to the narrowly defined, often technical solutions we may currently have on hand, or those amenable to being tested via RCTs. We cannot algorithmically solve these deep-seated issues by ranking the few known, temporary solutions offered by not-for-profits – providing mosquito nets to tackle malaria, offering treatments for intestinal parasites, giving vaccines, creating more renewable energy sources – as if such interventions could alleviate environmental or structural issues. This latest chapter of reducing the complexity of inequity to units of impact while ignoring the perils of modernity is even more troubling, as EA has risen in parallel to an even more dangerous, pseudo-utilitarian, techno-consequentialist ideology called Longtermism.

Enter Longtermism

Longtermism (LT) is an ideology born out of the EA community, largely through Oxford’s Future of Humanity Institute (FHI) – partly funded by Elon Musk and founded by the transhumanist Nick Bostrom8 – and Oxford’s Global Priorities Institute (where William MacAskill works). LT promotes the belief that unlikely but existential dangers, like a world-destroying Artificial Intelligence (AI) or global biological warfare, are humanity’s most imminent threats. MacAskill has elevated Bostrom’s fringe and obscure “science” of existential risks into a pillar of the EA community.9 With the publication of his recent book, What We Owe the Future, MacAskill has also received widespread attention, ranging from puff pieces in the New Yorker to appearances on the Daily Show with Trevor Noah.

Émile P. Torres, a philosopher, journalist and former EA proponent, has argued that LT is the most influential political ideology in the world that, at the same time, most people have never heard of. Longtermists have directly influenced major reports from the secretary-general of the United Nations. A UN Dispatch article reports on how “the foreign policy community and the UN in particular are embracing EA philosophy.” Jason Gaverick Matheny, the current president and CEO of the RAND Corporation, pushes LT ideology. Elon Musk retweeted a link to MacAskill’s book, stating, “Worth reading. This is a close match for my philosophy.”

Both LT and EA argue that we should care about future generations as much as we care about those living today. Again, it is hard to disagree. However, such assertions are clothed in logical, temporal and value leaps that are not adequately explained by MacAskill and other Longtermists. Most people would agree that we should create structures and systems that actively benefit our descendants. However, this is not the type of future thinking LT offers. LT’s vision of utopia is based on a future where “digital beings” – ostensibly our artifactual offspring – instead of humans or the rest of the biodiverse species of our planet, are the ersatz future generations we need to care about and to whom we “owe the future”.

Image by Gerd Altmann

It’s worth quoting Torres at length here:

Longtermism is a quasi-religious worldview, influenced by transhumanism and utilitarian ethics, which asserts that there could be so many digital people living in vast computer simulations millions or billions of years in the future that one of our most important moral obligations today is to take actions that ensure as many of these digital people come into existence as possible. 

In practical terms, that means we must do whatever it takes to survive long enough to colonize space, convert planets into giant computer simulations and create unfathomable numbers of simulated beings. How many simulated beings could there be? According to Nick Bostrom — the Father of Longtermism and director of the Future of Humanity Institute — there could be at least 10^58 digital people in the future…. Others have put forward similar estimates, although as Bostrom wrote in 2003, “what matters … is not the exact numbers but the fact that they are huge.”

The logic of LT, with its obsession with existential risks and the fate of fictional future beings calculated without a transparent or logical methodology, can essentially rationalize any travesty or extreme policy outcome.10 Even the philosopher Peter Singer – arguably the grandfather of EA through his applied utilitarian approach, which has inspired MacAskill and many others in the community – has come out against LT, stating: “The dangers of treating extinction risk as humanity’s overriding concern should be obvious. Viewing current problems through the lens of existential risk to our species can shrink those problems to almost nothing, while justifying almost anything that increases our odds of surviving long enough to spread beyond Earth.”

If an issue does not pose an existential risk, then according to arbitrary LT calculus, it is a “mere ripple on the surface of life”, to quote Bostrom directly from one of the most cited essays in the LT canon. This includes “risks such as dangerous animals, hostile individuals or tribes, poisonous foods, automobile accidents, Chernobyl, Bhopal, volcano eruptions, earthquakes, droughts, World War I, World War II, epidemics of influenza, smallpox, black plague, and AIDS.” Why, you might ask? Because “[t]hey haven’t significantly affected the total amount of human suffering or happiness or determined the long-term fate of our species.”

Essentially, Longtermists are providing an ethical alibi for rich countries, corporations and the wealthy to continue their destruction and pillage of the living world by employing a fantasy logic of instrumentalization, further risking the lives of the majority of humanity living in the geopolitical global South, as well as countless other species. A recent report in one of the world’s leading medical journals, The Lancet, by the economic anthropologist Dr. Jason Hickel shows that global North countries are responsible for 92% of excess global climate emissions. How do historical injustices such as this factor into the EA/LT calculus?

Effective alibis are especially seductive in a culture that is willfully consequence-blind. MacAskill’s notion of “what we owe the future” is amnesiatic and never considers the possibility that there is much we “owe the past,” or for that matter, the present.

LT may appear fringe or somehow inconsequential within philanthropy. However, those who are leading the charge have ever-growing influence and consequence on the lives of the global majority. For instance, Nick Beckstead (who moved from Oxford’s Future of Humanity Institute to Open Philanthropy to becoming CEO of the FTX Foundation – a kind of EA/LT triathlon) wrote his PhD thesis on the basis of an LT tenet he proudly called “fanaticism”, suggesting that lives in rich countries are more valuable than those in the global South.11

Such conclusions are rooted in worldviews that see the world and humans through a mechanistic, monological, utilitarian lens. Through perversions of logic, Longtermists can imagine that people making more bots could actually be contributing to the universe. To make this leap, they use a narrowly defined set of methodologies upheld as the most rigorous, important, and consequential. The ideology of LT is in many ways the logical outcome of centuries of Enlightenment rationalism, positivism, dualism, separation from the living world, white supremacy, extreme individualism, unthinkable callousness, and deep, ongoing coloniality. It is also the legacy of the dominant culture of philanthropy. EA now has a powerful hold on the philanthropic sector, while LT is becoming one of the most powerful ideologies amongst scions, from Silicon Valley to elite universities.

It seems clear that EA’s moral philosophy of the ends justifying the means (e.g. “earn to give”), its limited and exaggerated notions of effectiveness, its importing of deep subjectivities and biases around what matters, its focus on a far-flung future of digital beings, and other massive lacunae in logic and common sense have landed the EA philosophy in a dialectical wasteland. The EA community, and its growing power, has become an indicator of how disconnected and contextually insensitive the ultra-rich, and the small group of intellectual elites they fund and listen to, have become.

At the same time, we are facing unprecedented, ever-growing crises in all directions. The response from philanthropy is woefully inadequate and irresponsible. As we have argued, the historical and philosophical underpinnings of EA and LT, and the unexamined assumptions within this particular worldview, lead to a justification for more (perceived) rigor, greater concentration of wealth and power, and a growth-based hyper-rationality that moves further and further away from ethical responses that actually engage with our current meta-crisis.

The logical endgame of EA and LT is to trade off “short-term and acceptable” human extinction and climate destruction for the sake of fictional, virtualized digital beings. It is a “necro-politics”, to borrow Achille Mbembe’s apt phrase denoting the politics of choosing who lives and dies. It is not only anti-human, it is anti-nature. It is the virtualization and abstraction of life into 0s and 1s for the private gain of techno-utopian elites.

Without a historical, structural understanding of consequence, EA and LT completely blindfold themselves, ignoring the consideration of emancipatory possibilities. In part two of this essay, we consider a worldview that sees through a more symbiotic, animistic, interconnected lens.      


Return to Kosmos Edition 24, Issue 3, Healing Wealth

 

NOTES

1  See Princen, Thomas. 2005. The Logic of Sufficiency. Cambridge, MA: MIT Press.

2  In contrast to philanthropy as charity, solidarity is a horizontal act based on principles of cooperation, mutual aid, justice, reciprocity, relationality, and interbeing. In combing through the EA literature, the word solidarity is almost never used. In the last section, we will explore solidarity alongside alternatives to EA and the practice of dominant philanthropy.

3 Unsurprisingly, the EA community stands at 71% male and 76% white, with the largest percentage living in the US and the UK, according to a 2020 survey by the Centre for Effective Altruism.

4 This is similar to the logic of most philanthropic institutions as they continue to grow their endowment to have more money to grant.

5 For a deeper dive into the contours of philanthrocapitalism, see Shiva, V. (ed), 2002. Philanthrocapitalism and the Erosion of Democracy: A Global Citizens Report on the Corporate Control of Technology, Health, and Agriculture. Synergetic Press, New Mexico.

6 We use the term “pseudo-scientific” here because of the clear attempt to apply the methodologies of the natural sciences to social sciences – assuming that econometrics, control groups, statistically significant sample sizes, etc. are the most rigorous approaches to analyzing complex social phenomena.

7 A good example of this subjectivity and inconsistency is investment in electoral politics. EA explicitly privileges very practical, reformist interventions and thus would never, based on its own internal logic, support social movements or electoral politics. Yet, Sam Bankman-Fried (of FTX infamy) previously established a political action committee (PAC) and funneled $11 million in support of Oregon’s 6th Congressional District candidate Carrick Flynn, without any clear metrics or outcomes. How investing in the murky world of electoral politics is deemed sufficiently evidence-based or effective or altruistic is difficult to recognize, although attempts to rationalize this strategy have been published on EA forums.

8 Bostrom supports a program that would prioritize breeding more “intelligent people” and is currently under investigation for publicly making racist comments against Black people.

9 It is important to note that neither the EA nor the LT community believes that climate change is an existential risk. For example, Toby Ord claims, based on a dubious methodology, that the chance of climate change causing an existential catastrophe is only 1 in 1,000, which is two orders of magnitude lower than the probability he assigns to superintelligent machines destroying humanity this century. This calculus is also founded on the belief that human beings would not be severely affected by a seven to nine degree rise in temperature, although the current scientific consensus clearly states the opposite. For example, the Intergovernmental Panel on Climate Change (IPCC) and others have firmly stated that such a temperature change would create an unprecedented dieback of both human and more-than-human populations, with feedback loops of destruction that no modeling can predict.

10 Bostrom has actually proposed that everyone should permanently wear Orwellian “freedom tags”: devices that would monitor all human activity and permanently guard against the possibility that any individual might become part of a plot to destroy humanity. See Bostrom, N. “The vulnerable world hypothesis,” Global Policy, vol. 10, issue 4, November 2019. Accessed here: https://onlinelibrary.wiley.com/doi/10.1111/1758-5899.12718

11 Nick Beckstead’s thesis states: “[S]aving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries. Why? Richer countries have substantially more innovation, and their workers are much more economically productive. By ordinary standards — at least by ordinary enlightened humanitarian standards — saving and improving lives in rich countries is about equally as important as saving and improving lives in poor countries, provided lives are improved by roughly comparable amounts. But it now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal.” Beckstead then received a job at Oxford’s Future of Humanity Institute, partly on the basis of this thesis.

About Alnoor Ladha

Alnoor Ladha is co-director of the Transition Resource Circle and co-author of the book Post Capitalist Philanthropy: Healing Wealth in the Time of Collapse.

Alnoor’s work focuses on the intersection of political organizing, systems thinking, structural change and narrative work. He was the co-founder and Executive Director of The Rules, a global network of activists, organizers, designers, coders, researchers, writers and others focused on changing the rules that create inequality, poverty and climate change.

 


About Lynn Murphy

Lynn Murphy is co-director of the Transition Resource Circle and co-author of the book Post Capitalist Philanthropy: Healing Wealth in the Time of Collapse.

Lynn is a strategic advisor for foundations and NGOs working in the geopolitical South. She was a senior fellow and program officer at the William and Flora Hewlett Foundation where she focused on international education and global development. She resigned as a “conscientious objector” to neocolonial philanthropy. She holds an MA and PhD in international comparative education from Stanford University. She is also a certified Laban/Bartenieff movement analyst.
