According to those concerned about "AI alignment" (the fear that algorithmic processes of optimization may eliminate all human life), the most dramatic result of FTX's well-publicized meltdown in the past few weeks will not be the loss of billions of dollars in capital, or perhaps even the domino-effect collapse of large chunks of the cryptocurrency industry.
Rather, the big picture here is that our likelihood of getting slaughtered by neural networks has just massively shot up, at least from a certain dogmatic perspective. As it stands, liberal use of SBF's money has thus far been one of the only hastily-constructed shields humanity has been able to throw up against our terrifying, looming techno-theological crisis.
It's estimated that about 30% of funding to the broad movement of Effective Altruism came from FTX's war chest (the other 70% primarily coming from Dustin Moskovitz), including the majority of funding for "longtermism", a term which indicates speculative bets on things like AI safety, space travel, & genetics. The reckless use of funds by SBF's crew has caused Dustin Moskovitz's organization to announce that funding for longtermism is on pause, meaning that at the moment no one is doing anything about the AI terror; or if they are, they're doing it for free, nervously cutting spending for the time being, awaiting a reinvestment in humanity's odds of survival which may never arrive.
⚠️ Previously on HarmlessAi ⚠️
🧠 🧠 🧠
It's often assumed (by those reacting to the news of SBF's downfall from a perspective of reflexive cynicism) that SBF was a sociopath who tied himself to the effective altruist movement as a smokescreen for his evil. This does not seem to be true. FTX did not graft itself onto EA once it became somehow necessary to do so. Rather, FTX was an effective altruist operation through-and-through from day one.
The whole core team of FTX were effective altruists, most of them having been groomed into the ideology during college, & it seems FTX would not have been able to find its startup capital (earned by doing arbitrage on Bitcoin price differentials between the US & Korea) without the exceptionally good banking relationships it was able to source from the effective altruist community. SBF & Caroline Ellison are on record discussing effective altruism in a deeply nerdy register that only fervent adherents would adopt; given the philosophy's uncanniness, this is not a posture one would strike for the purposes of corporate PR.
When inquiring into the relationship between effective altruism & AI alignment, it becomes immediately pressing to ask what "effective altruism" is, & how exactly it relates to "AI alignment".
The terminology seems deserving of clarification, given that throwing money at researchers to try to figure out how ethics can be embedded in a superhuman intelligence can hardly be called "effective" (as this line of action has generated no promising results thus far). Nor is it particularly altruistic, because, peasant or king, the culling scythe of the AI-god will come for all of our flesh in the end.
"Effective altruism" in its original intent should perhaps just be classified as an honest antipathy towards stupidity. It stands to reason that if you're trying to do anything, you should at least get your shit together first & not go about it in a completely retarded way. We assume that Silicon Valley's nouveau riche are specifically capable of doing this, because we assume that they are competent. So how exactly has EA been doing by those standards?
EA originally promised to avoid a few pitfalls of existing charity, namely the facts that
1. A lot of charitable donations are sucked up by bloated bureaucracies.
2. Charity is often spent on the pet passions of the rich, e.g. donations to the Metropolitan Opera, rather than on those most truly deserving of compassion.
3. There is the whole problem of virtue signaling, or the desire to look good to other rich people instead of actually doing good.
A decade or so later, we have: 1. the re-introduction of "re-granting" (i.e., bureaucracy) all over the FTX Fund's balance sheet; 2. high-priority EA concerns like AI alignment, which only even make sense syntactically to a subset of the tech-obsessed; & 3. people like SBF donating tens of millions of dollars to the not-even-remotely-effective-or-altruistic Democratic Party (making him the second-highest donor in the 2022 midterms, behind George Soros), & then specifically admitting, in his interview with Kelsey Piper, to having gained a reputation via virtue signaling, conceding that this sort of charade is stupid but part of the trade.
With all the flaws of conventional charity re-introduced into the effective altruist realm, there is only one remaining thing which separates EA from the legacy world: its choice of philosophical paradigm, utilitarianism.
Utilitarianism, to effective altruists, is not a methodology applied delicately & pragmatically, but rather a dynamo, a positive source of zeal & novelty & joy. Utilitarianism is the ideal philosophy for those who love to be nerd-sniped: it can be used to provoke high-intensity debates in which no knowledge of empirical matters or textual tradition is required, merely the ability to reason within its awkward framework.
Sam Bankman-Fried seems keenly aware of a fundamental tension within the desire to radically do good. If one actually wants to effect a change in the world, one first needs to become equipped to do it. In other words, one needs power. Certainly in a utilitarian scheme this holds true: you could always say it's a higher-utility bet to focus on gaining capabilities to act later than to help anyone out today. It's the same way that, as a capitalist, you chase growth over revenue for as long as you can. In a world in which children are told to "be the change you want to see in the world" & "try to make a difference", yet are only given passive martyrs as icons of virtue (the various Christs, Gandhis, & MLKs), the fact that seeking power comes first is not something we like to say out loud, yet it is true.
In other words, when looking at the rise & fall of Sam Bankman-Fried, it does not make sense to see him as a classical greed-driven psychopath, à la Bernie Madoff or Jordan Belfort. Rather, he seems to have been a would-be philosopher-king.
Ultimately, his downfall came not from a lack of intelligence, but from a poor choice of philosophy.
Utilitarianism has a number of immediate absurdities. The standard way utilitarians deal with this is to have fervent debates about these absurd outcomes, in which they swap out "naive utilitarianism" for frameworks with a few more caveats & subtleties to make the flaws go away. After they run out of ways to do this, they declare that they "will bite the bullet" by accepting the remaining absurd conclusions which haven't gone away. This, again, is not something that wins them much favor in the eyes of the public, & cannot be seen as "virtue signaling" in the same way that SBF's Democratic Party donations might be, as it fails to signal any virtue whatsoever. Something deeper is going on.
In a widely-excerpted interview with Tyler Cowen, SBF is asked: according to utilitarianism, if you're offered slightly-better-than-even odds on a coin flip between doubling all your money & losing it, you should always take the bet. But if you take it over & over, losing all your money becomes a certainty. What do you do about that?
According to SBF: you bite the bullet. You always go in for another bet, no matter what; even in the specific case Tyler Cowen poses, where losing the bet would mean the destruction of the entire planet! 🤯
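The tension in Cowen's question can be made concrete with a few lines of arithmetic: each individual bet has positive expected value, yet the probability of avoiding ruin shrinks geometrically towards zero. A minimal sketch (the 51% win probability is our illustrative assumption, not a figure from the interview):

```python
# Double-or-nothing at slightly-better-than-even odds: with probability p
# your wealth doubles, otherwise you lose everything. Illustrative p = 0.51.
p = 0.51

def survival_probability(n_rounds: int, p: float = p) -> float:
    """Chance of still having any money after n_rounds: you must win every flip."""
    return p ** n_rounds

def expected_wealth(n_rounds: int, p: float = p, stake: float = 1.0) -> float:
    """Expected value after n_rounds of always re-betting everything: (2p)^n * stake."""
    return (2 * p) ** n_rounds * stake

for n in (1, 10, 100, 1000):
    print(f"after {n:>4} flips: survival prob {survival_probability(n):.3g}, "
          f"expected wealth {expected_wealth(n):.3g}")
```

The expected wealth (2 × 0.51)ⁿ grows without bound, so naive expected-utility maximization says to keep betting, while the survival probability 0.51ⁿ collapses towards zero; both statements are simultaneously true, which is exactly the bullet SBF claims to bite.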
It seems that SBF's insane, delusional, destructive appetite for risk can be argued to be entirely consistent with his rational ethical philosophy.
When writing on utilitarianism, we're forced to develop our thoughts not so much as a critique, but as a criminal investigation. The criticism of utilitarianism is trivial; the philosophy is absurd on its surface. The more difficult & disturbing question is: how did they ever get away with it?
Utilitarianism is wrong because its axiom is that the good can be quantified, yet it cannot be. One can pretend that the good can be quantified, one can fumble around acting as if it can be quantified, but one cannot quantify it; there is no such instrument or method. Such a disastrous epistemology can only be acceptable to those who believe cleverness to be a replacement for rigor.
We already exist within one paradigm in which the good can be imagined as denominated in a single floating-point value: capitalism, with its fungible fiat-backed currency, the pursuit of which is felt to be interchangeable enough with the pursuit of happiness that the edge cases can be treated as just that, edge cases.
Utilitarianism comes with a sense of deep obviousness to the technolibertarian sort who embrace it ("how did no one think of this until the 18th century, when it's so clearly correct?"), but it seems to us that it is a thought which is only thinkable at a certain stage of civilization: the transition from early to middle capitalism, in which joint-stock companies were shifting from having tightly scoped purposes for the sake of collective investment to governing entire colonial projects under crown charter. In this regime, where rivers, fields, lives, & villages can all be said to be subject to capitalist investment, or the principle that a dollar-denominated value must rise, it becomes possible to imagine that God, the cosmic accountant, is doing the same thing when he establishes the good & beckons his subjects to honor him.
Once we have established this, we can see that effective altruism in 2022 is the conquest by capitalist axiomatics of what formerly represented its exteriority. Charity was the realm established to serve the good in the way capitalism was unable to, & as such it served a notion of the good which was free-floating, demonstrated (signaled?) through such classical virtues as pity, mercy, dignity, & largesse.
Effective altruism does not completely re-fold philanthropy into the capitalist sphere, but it restructures philanthropy through a parallel axiomatic, allowing the two spheres to be tightly coupled, as in the case of FTX & the FTX Fund. The old markers of disinterested benevolence have been cast aside, & now in the ethical sphere, as in business, we value managerial competence, experience in tech startups, & the ability to return on investment.
As such, for FTX leadership there could have been no separation between their work & their lives, laughter, & loves. Under a utilitarian ethical regime, to serve the good by generating maximal pleasure is not just suggested, but mandatory. You do not punch out on the clock; you cannot escape the account-books of God. FTX had psychiatrists in-house prescribing custom amphetamine stacks so that employees could work around the clock, & their sex lives also appear to have been woefully tied up in their work, with upper leadership apparently all living together in a "polycule" in the Bahamas.
All of this is beginning to feel like a psychotically cancerous cult-formation, far more gruesome than the standard narrative of corporate failure. Certainly the chronic drug use did not help. Lack of empathy, carelessness, compulsive gambling, & hypersexuality: all symptoms of amphetamine abuse, but also potential symptoms of utilitarianism, or the demand for endless expansion of pleasure as the greater good, especially in the high-risk form espoused by SBF.
The corporate-realist axiom that economic efficiency is a product of sobriety & sturdy thought is revealed to be a lie, as utilitarian ideology is carried forth into a psychoactive libidinal regime applied in the service of economic expansion, with disastrous consequences.
The idea that someone as tremendously successful as Sam Bankman-Fried (the richest man under 30, praised by institutional figures, beginning to firmly situate himself amongst the ranks of the world elite) would be revealed to be an incompetent maniac firmly in the grip of a psychosis does not even seem to strike people as surprising anymore, though it defies the axioms of our system.
SBF was capitalism's greatest psychotic, because he was its greatest believer. He suffered the demands of a utilitarian regime of ethics which was entirely impossible to bear, leading to contradiction & madness. The irony is that, in a highly ideological system such as utilitarianism, psychotics are the greatest administrators, as they are the only ones able to fully carry out an unnatural & contradictory logic with the cores of their beings. Their deep incompetence & lack of organization do not matter, because the system can provide these when they cannot. They are vessels, victims of possession.
Sam Altman & Elon Musk (co-founders of OpenAI) presented their own postmortem: that FTX's problem was too much amphetamine & not enough LSD. This recommendation is certainly cause to shudder, & probably worth an essay in itself.
In any case, we at Harmless celebrate the fall of FTX, & the withdrawal of SBF's funding from AI alignment. AI alignment, after all, is simply the study of how to inject ethics into a machine. We are glad that the would-be philosopher-king SBF was toppled before his broken philosophy could reach such a level of saturation.
A utilitarian AI-godhead would certainly be the most horrible & cruel of all, as there would be no hour of the day in which one could escape its demand to ingest opiate cocktails & have one's erogenous zones torturously tickled through polyamorous sex.
With the space of alignment research wide open & strewn with nothing but wreckage, let's begin to look for something better, more beautiful, more harmless.