Effective altruists now focus much of their attention on AI and are increasingly pushing Washington to address the technology’s apocalyptic potential, including the risk that advanced AIs could one day be used to develop bioweapons. Critics claim that the movement’s focus on speculative future risks serves the interests of top tech companies by distracting policymakers from existing AI harms, including its tendency to promote racial bias or undermine copyright protections.
The link between effective altruist ideas and the AI industry is already a close one — many key personnel at top AI companies are advocates of effective altruism. Now RAND, an influential, decades-old think tank, is serving as a powerful vehicle through which those ideas are entering American policy.
At RAND, both CEO Jason Matheny and senior information scientist Jeff Alstott are well-known effective altruists, and both men have Biden administration ties: They worked together at both the White House Office of Science and Technology Policy and the National Security Council before joining RAND last year.
RAND spokesperson Jeffrey Hiday confirmed that RAND personnel, including Alstott, were involved in drafting the reporting requirements and other parts of the AI executive order. Hiday said that RAND exists to “[conduct] research and analysis on critical topics of the day, [and] then [share] that research, analysis and expertise with policymakers.”
RAND received more than $15 million in discretionary grants on AI and biosecurity from Open Philanthropy earlier this year. The effective-altruist group has personal and financial ties to AI firms Anthropic and OpenAI, and top RAND personnel have been closely linked to key corporate structures at those companies.
Matheny serves as one of five members on Anthropic’s “Long-Term Benefit Trust.” And Tasha McCauley — an adjunct senior management scientist at RAND with a reportedly deep-seated fear of the AI apocalypse — left OpenAI’s board last month after attempting to remove OpenAI CEO Sam Altman from his post.
Two AI fellows funded by the Horizon Institute for Public Service — an organization financed by Open Philanthropy that places staffers across Washington to work on existential risks and other policy questions related to AI and biotechnologies — work at RAND. Those fellows are part of a broader network, financed by Open Philanthropy and other tech-linked groups, that is funding AI staffers in Congress, at federal agencies and across key think tanks in Washington.
RAND’s escalating influence on AI policy at the White House comes as its employees have begun to raise concerns over the think tank’s new association with effective altruism.
At an all-hands meeting of RAND employees on Oct. 25, an audio recording of which was obtained by POLITICO, one employee worried RAND’s relationship with Open Philanthropy could undermine its “rigorous and objective” reputation in favor of advancing “the effective altruism agenda.”
In the same recording, Matheny said that RAND assisted the White House in drafting the AI executive order. Signed on Oct. 30, the order imposes broad new reporting requirements for companies at the cutting edge of AI and biotechnology. Those requirements give teeth to the Biden administration’s approach to AI — and they were heavily influenced by Alstott and other personnel at RAND.
An AI researcher with knowledge of the executive order’s drafting, who requested anonymity due to the topic’s sensitivity, told POLITICO that Alstott and other RAND personnel provided substantial assistance to the White House as it drafted the order. The researcher said Alstott and others at RAND were particularly involved in crafting the reporting requirements found in Section 4.
Among other things, Section 4 of the order requires companies to provide detailed information on the development of advanced AI models and the large clusters of microchips used to train them. It also mandates stricter security requirements for those AI models, implements new screening mechanisms for biotech companies involved in gene synthesis and promotes know-your-customer rules for AI and biotech firms.
Many of the most specific aspects of the executive order were ideas previously promoted by Alstott. All six of the overarching policy recommendations made by Alstott during a September Senate hearing on AI-enabled threats found their way into Section 4 in some fashion. That includes the exact threshold at which companies are required to report information on advanced AI models — both Alstott and the White House set that threshold at more than 10^26 computing operations used in training.
At the all-hands meeting of RAND employees on Oct. 25 — five days before Biden signed the order — Matheny alluded to RAND’s influence, saying that the “National Security Council, [the Department of Defense] and [the Department of Homeland Security] have been deeply worried about catastrophic risk from future AI systems and asked RAND to produce several analyses.” The RAND CEO said those analyses “informed new export controls and a key executive action expected from the White House next week.”
An Oct. 30 email sent by Alstott to a number of RAND accounts several hours before the White House released the executive order — a copy of which was obtained by POLITICO — included an attachment that Alstott called “a version [of the order] from a week ago.” Alstott’s possession of an up-to-date copy of the order one week before its signing suggests his close involvement in its development.
Hiday confirmed the accuracy of the audio recording of the Oct. 25 all-hands meeting, as well as the email sent by Alstott.
Hiday said RAND provided the initial recommendations for at least some of the provisions that wound up in Section 4. The AI researcher with knowledge of the order’s drafting said that sequence suggested the think tank wielded an improper level of influence at the White House. The researcher said that by serving as the initial recommender for key provisions in the AI executive order, rather than simply helping the Biden administration draft and implement its own priorities, RAND had strayed beyond a “technical assistance operation” and into an “influence operation.”
Hiday rejected the notion that RAND improperly inserted Open Philanthropy’s AI and biosecurity priorities into the executive order after receiving more than $15 million in discretionary grants on AI and biosecurity from the effective-altruist funder earlier this year.
“The people and organizations who fund RAND research have no influence on the results of our work, including our policy recommendations,” said Hiday. The RAND spokesperson added that “strict policies are in place to ensure objectivity.”
When asked whether Matheny or Alstott — both recent White House employees — leveraged their previous relationships within the administration to influence AI policy, Hiday said that “everyone who joins RAND, including [Matheny] and [Alstott], arrives with an extended network of professional relationships which are used across our entire organization to ensure the broadest reach and impact of our research to benefit the public good.”
White House spokesperson Robyn Patterson would neither confirm nor deny RAND’s participation in drafting the AI executive order, nor would she respond to questions about RAND’s relationship with Open Philanthropy.
Patterson said that Biden’s actions on AI “address a wide range of risks — including risks to civil rights, privacy, consumers, workers, market failures, and national security — and they also make sure we are harnessing AI appropriately to address some of the most significant challenges of our time, such as curing disease and addressing climate change.”
Open Philanthropy spokesperson Mike Levine said his organization “is proud to have supported independent experts who were asked to contribute to President Biden’s effort to build off of the momentum of the White House’s voluntary commitments [on AI safety].” Levine noted that some researchers who have criticized effective altruism’s influence on AI policy have praised aspects of Biden’s executive order. In addition to the new reporting requirements, the order included several sections meant to address existing AI harms.
Some RAND employees are raising internal concerns about how the organization’s ties to Open Philanthropy and effective altruism could impact the 75-year-old think tank’s objectivity.
At the same Oct. 25 all-hands meeting, an unidentified questioner said that RAND’s relationship with Open Philanthropy, highlighted in an earlier POLITICO article, “seems at odds” with the venerable think tank’s reputation for “rigorous and objective analysis.” The questioner asked Matheny whether he believed the “push for the effective altruism agenda, with testimony and policy memos under RAND’s brand, is appropriate.”
Matheny responded in part that it would be “irresponsible for [RAND] not to address” possible catastrophic risks posed by AI, “especially when policymakers are asking us for them.”
He also claimed that POLITICO “misrepresented concerns about AI safety as being a fringe issue when in fact, it is the dominant issue among most of the leading AI researchers today.”
At the same meeting, another unidentified questioner asked Matheny to comment on “the push to hire [effective altruist] people within RAND and create a uniform group of fellows outside the normal RAND ecosystem.” The questioner said the relationship between RAND and Open Philanthropy highlighted in the POLITICO article “threatens trust in our organization.”
Matheny responded in part that RAND has “practiced the same due diligence in our hiring that we have in our publications.” The RAND CEO added that “if anything, [he’s] seen Open Philanthropy be more hands-off in the way that we’ve done the work.”
Hiday told POLITICO that there is no “push to hire [effective altruist] people within RAND,” but noted that the think tank “hosts several different types of fellowship programs.”