Rob & Luisa chat kids, the fertility crash, and how the ‘50s invented parenting that makes us miserable https://80000hours.org/podcast/episodes/rob-luisa-parenting-chat/ Tue, 25 Nov 2025 17:00:24 +0000

Eileen Yam on how we’re completely out of touch with what the public thinks about AI https://80000hours.org/podcast/episodes/eileen-yam-experts-public-artificial-intelligence-survey/ Thu, 20 Nov 2025 17:00:07 +0000

The US AI policy landscape: where to have the biggest impact https://80000hours.org/articles/the-us-ai-policy-landscape-where-to-have-the-biggest-impact/ Mon, 17 Nov 2025 12:01:14 +0000

The US government may be the single most important actor for shaping how AI develops. If you want to improve the trajectory of AI and reduce catastrophic risks, you could have an outsized impact by working on US policy.

But the US policy ecosystem is huge and confusing. And the policies shaping AI are made by specific people in specific places — so where you work matters enormously.

This guide aims to help you think about where specifically to work in US AI policy so you can actually make a large impact.

In Part 1, we cover five heuristics for finding the most impactful places to work. In Part 2, we cover five policy institutions that we’d guess are the most impactful for AI and name specific places in each that are especially promising.

If you want to work in US policy, we also recommend the expert-vetted guides at Emerging Tech Policy Careers for practical advice on pathways into government and detailed profiles of key institutions.

Jump to our top recommended institutions

Part 1: How to find the most impactful places to work on AI policy

It’s hard to predict precisely when and where key AI policy decisions will happen, but you can position yourself for greater impact. The following five heuristics can help you judge where you could have the best shot at positively shaping the trajectory of advanced AI.

Prioritise building career capital

Early in your policy career, avoid tunnel vision on AI policy roles. Many entry-level positions worth considering won’t focus on AI. What usually matters more is building career capital — knowledge, context, networks, and credibility that let you navigate the policy world.1

For example, you won’t get to specialise in AI risks as an intern in Congress. (You’ll probably spend much of your time answering phones.) But you’ll gain tacit knowledge, networks, and credentials that may accelerate your career more than an AI-focused opportunity lacking these benefits.

Here are some questions to help figure out how much career capital a role might give you:2

  • Who will I meet? Policy is highly network-driven, so who you’ll meet in a role is incredibly important for your future career. It’s rarely obvious which relationships will ultimately matter most, so finding roles where you can build broad networks — ‘casting a wide net’ — often pays off.
  • What will I learn? Consider how much you’d learn about (1) how people in DC actually talk and think, (2) how policymaking happens, and (3) the substance of your issue area.
  • What skills will I develop? Look for places where you build (and get feedback on) broadly valued policy skills like clear writing, people and project management, and research and analysis.
  • How strong a credential is this? Weigh factors like the institution’s reputation, how competitive the opportunity is, and the work outputs you’d gain (like contributing to publications or drafting policy memos).3

It can be hard to answer these questions by research alone — when possible, talk to people who’ve worked in or near the places you’re considering.

In short: Get your foot in the door, build relationships, and learn how policy works. Then cash out that career capital to move into more targeted, directly impactful roles.

Work backwards from the most important issues

AI policy is a huge and complex field — here are some ways to break it down:

Inputs to AI development

  • Data
  • Compute
  • Talent
  • Investment
  • Algorithms

AI applications

  • Military
  • Science & innovation (e.g. biology, robotics)
  • Cyber operations
  • Supply chains
  • Labor automation
  • … and many more

Policy levers

  • Testing and evaluation
  • Industrial policy
  • Research and development (R&D) funding
  • Regulations
  • Export controls
  • … and many more

You can mix and match across these columns and end up working on very different things. For example, you might work on R&D funding (investment) for military applications, or on export controls for AI chips. Your impact will depend on which issues you choose to work on and which levers you use.

You might start by asking: Which issues seem most important?4 Then, work backwards to the policy tools that you think might address them most effectively. For example, if you’re most concerned about:

In practice, issues often overlap, and many policy roles let you pull on several levers at once — or one lever that mitigates several risks. Still, prioritising which AI issues matter most can help you zero in on the levers, and then the roles, best placed to address them.

Find levers of influence

Some policy institutions are far better equipped than others for work on your AI policy priorities, depending on what formal or informal powers they hold.

Formal powers are legal authorities5, like deciding what research and development priorities to fund, regulating individuals or companies, or setting interest rates.

For example:

Informal powers — like coordination, research and argumentation, and agenda-setting — aren’t enforceable, but they can be just as important as formal powers, or even more so. Many policy organisations have little budget or regulatory authority but can sway others that do.

For instance, White House offices can’t create new laws, but they can steer agencies toward their priorities and broker compromises among them.7 Likewise, most think tanks and advocacy organisations don’t have formal powers, but they can influence Congress or the White House if they’re trusted advisors.

Both kinds of power matter. The most impactful institutions usually have one, or both, in abundance.

When you’re looking for impactful places, ask: Does this place control money or relevant rules directly? Or can it reliably influence those that do?8

Prepare for ‘policy windows’

Timing matters a lot in policy. Sometimes a single crisis, scientific discovery, or article can catapult an issue onto the policy agenda overnight. Other times, it takes years of building evidence that something is needed before action finally breaks through.

These breakthrough moments are called ‘policy windows.’ It’s hard to predict when they’ll open. A few examples:

  • 1.5 months after the 9/11 attacks, Congress passed the PATRIOT Act to vastly expand government surveillance and law enforcement powers.
  • Four months after Upton Sinclair published The Jungle (1906), Congress passed new meat safety laws.
  • Nine years after the Surgeon General linked smoking to lung cancer, Congress restricted cigarette advertising.

Your potential for policy impact can spike when one of these windows of opportunity opens.

For your career, this means:

  • Build expertise and connections early. By the time an issue is ‘hot,’ you want to have the networks, credibility, and expertise to act quickly.9 COVID-19, for example, turned long-time biosecurity specialists into go-to advisors trusted with major decisions almost overnight. If you only switch to working on an issue once a window of opportunity opens, you may be too late to make a big impact.
  • Stay flexible. It’s hard to know which places will matter most in five years. Instead of working towards a narrow career goal, build broad career capital that you can ‘cash out’ in many directions.
  • Think about your political affiliation.10 Your party alignment (or lack thereof) can shape when windows open and close for you. This varies depending on where you work:
    • In Congress, you almost always pick a party and stick with it, and switching later is rare.
    • In the executive branch, civil servants are nonpartisan11 and stay through presidential transitions. Political appointees are partisan and usually rotate out when power shifts.12
    • Think tanks can be partisan or nonpartisan. These affiliations can constrain or boost your policy career depending on who’s in power.

In AI, there’s also the consideration of how quickly the technology is developing. Many think the most effective time to act is before AI systems get very powerful, which may be quite soon. If you think the most important AI policy decisions will be made in the next 3–5 years, you probably want to prioritise paths that focus on AI earlier while still developing career capital. That might mean:13

Consider personal fit

Policy impact depends heavily on your personal fit. If you’re especially well-suited for a particular policy role, you can often achieve vastly greater impact, and poor fit often leads to burnout.

Some traits matter for almost all policy work, like professionalism, humility, initiative, and being able to work with people who hold different views and values. But beyond that, different roles reward very different strengths. For example:

  • Congress or White House roles might suit you if you could thrive in fast-paced, social, and politicised environments, work long hours, and cover a broad portfolio of issues.
  • Think tank roles can involve working on long-term research projects and distilling complex technical topics into policy recommendations.
  • Federal agency roles often reward technical expertise and the ability to navigate bureaucracy.
  • Advocacy roles tend to value networking, coalition-building, and persuasion skills to mobilise support for specific causes.

The stereotype of a suit-wearing, cocktail-reception-attending staffer captures only a slice of the policy world. Your most impactful role could be in any of the places we discuss below (or beyond), depending on your skills, interests, and preferred work style.

Part 2: Our best guess at the most impactful places (right now)

Below, we cover five policy institutions and give our best guesses for the most impactful places to work in each.15

1. Executive Office of the President

The Executive Office of the President (EOP; aka the White House) is small but mighty. Its ~2,000 staff help implement the president’s agenda and oversee the ~three-million-person executive branch. Spread across more than 20 offices, the EOP influences everything from the federal budget to national security to science and technology priorities. The leaders of these offices are often the president’s closest advisors, and their guidance — shaped by their staff — can sway decisions at the highest level.

The White House matters for AI policy by:

  • Setting agendas: The president sets priorities that agencies, Congress, and the public respond to.16 Semiconductor policy is one case study:
    • During President Trump’s first term, the US mainly blacklisted specific Chinese firms like Huawei. President Biden expanded those measures into sweeping, category-wide restrictions on advanced chips and AI hardware and coordinated allies to follow suit. In 2025, President Trump rolled back the broad controls and shifted focus to enabling domestic AI progress.
  • Moving quickly: Congress often moves slowly, but in some cases, presidents can make sweeping changes overnight (often via executive orders). In a fast-moving AI crisis, the White House is one of the only places in government that could respond in mere hours.
    • Just 12 days after 9/11, President Bush signed an executive order creating sanctions to freeze assets of designated terror organisations.
  • Proposing the budget: Almost every new government program, office, or regulation needs money. Each year, the president sends a budget request to Congress, proposing how to spend roughly $6 trillion. Congress decides the final numbers, but the proposal sets a starting marker and signals priorities that lawmakers (especially in the president’s party) may hesitate to oppose.17
  • Leading US foreign policy: The president is especially empowered in foreign affairs, with the ability to recognise foreign governments, negotiate international agreements, and command military operations.
    • When the Cuban Missile Crisis erupted in 1962, President Kennedy imposed a naval quarantine and negotiated directly with USSR leader Khrushchev over 13 days. Congress wasn’t convened to debate the response.
  • Gaining power: In recent decades, the White House has increasingly stretched the limits of its power, often through novel interpretations of existing laws and regulations, selectively enforcing or declining to enforce laws, or declaring national emergencies.18 While not always upheld in court, actions like these shift expectations of what the presidency can get away with and gradually expand the White House’s reach.
    • President Trump used executive orders (EOs) to impose blanket tariffs, claim authority to end birthright citizenship, and pause a law banning TikTok.
    • President Biden tried to cancel up to $400 billion in student loans and mandate COVID-19 vaccines or weekly testing for employees of large companies without new legislation.
    • President Obama used an EO to shield millions of undocumented immigrants from deportation after Congress rejected his immigration bill.

The White House also has some key institutional constraints. Being so far upstream, the White House doesn’t get much ‘ground-level’ visibility into how policies are developed and carried out. Compared to the whole executive branch, White House offices have very small staffs and budgets, and most largely rely on soft powers to achieve their policy goals.

Here are some key career considerations for working in the White House:

  • Security clearances: Most White House roles require a clearance, which can take months to more than a year and could be harder to get if, for example, you’ve used illegal drugs or have concerning foreign ties.19
  • Partisan considerations: Roughly 10% of White House staff are political appointees, whom the president nominates to serve in leadership roles.20 The rest are nonpartisan career civil servants, generally hired through public openings.21 All staff are expected to advance the administration’s policies, regardless of their personal positions.
  • Strong credentials help: Elite law, policy, or technical graduate degrees can be highly valued — under Biden, 41% of mid- or senior-level staffers had Ivy League degrees. Few people start their policy careers in the White House; most arrive with years of prior experience in Congress, federal agencies, or think tanks. Strong networks and proven expertise can sometimes substitute for formal credentials.22
  • There isn’t much stability: Most White House political appointments last at most until the end of an administration (but they’re often much shorter).
  • Intensity: White House jobs are notoriously demanding, with long hours, high stakes, and little control over your schedule. Staff constantly react to headlines and crises, which can make it hard to focus on long-term priorities. In former White House Adviser Dean Ball’s words: “The pace and character of your workday can change at a moment’s notice — from ‘wow-this-is-a-lot’ to ‘unbelievably-no-seriously-you-cannot-fathom-the-pressure’ levels of intense.”23
  • Exit opportunities: White House credentials are highly prestigious in DC. Alumni often move into senior agency jobs, think tank positions, or return later in more senior political roles.

In short, White House roles can be exceptionally impactful — you’re close to the president, shaping government-wide agendas, and often in the room for time-sensitive, pivotal decisions. But they’re also typically short-lived and intense, tied to political cycles, and sometimes only as effective as your ability to rally the much larger machinery of government behind you.

Based on how much they have historically influenced technology policy, their overall levels of soft and hard power, and their potential for building career capital, we’d guess that the following offices would be especially impactful choices for AI policy:

2. Federal departments and agencies

Federal departments and agencies implement policy: they administer social programs, guard nuclear stockpiles, break up monopolies, approve new drug trials, launch satellites, and train the military, among thousands of other things. Most people and money in the US government sit in these departments.

Departments are massive and specialised, with tens or hundreds of thousands of employees spread across dozens of sub-agencies. Fifteen secretaries (the Cabinet) lead the 15 departments.24

Federal departments and agencies can matter for AI policy by:

  • Shaping how policies get implemented: Agencies carry out the work that laws, executive orders, and other policy directives set in motion. Much of a policy’s impact depends on how it is interpreted, put into practice, and enforced. For example, agencies:
    • Set export controls for critical technologies and materials.
      • The Bureau of Industry and Security (BIS), inside the Department of Commerce, can restrict or block US companies from selling to foreign entities when doing so could threaten national security. BIS spells out which items get controlled and where they can (or can’t) go.
    • Fund and manage research and development (R&D) — roughly $200 billion annually — that steers innovation toward national priorities.
      • One R&D agency called DARPA operates like a defense venture capital firm, spending $4 billion a year to “make pivotal investments in breakthrough technologies for national security.” Its projects drove the early foundations of the internet, GPS technology, and self-driving cars.
    • Set technical standards and develop evaluations for systems that may pose national security risks.
      • The Center for AI Standards and Innovation (CAISI) leads evaluations of the capabilities of US and adversary AI systems, coordinating government and industry efforts to test and evaluate advanced AI models. It is building methods to measure model capabilities, safety, and reliability, including red-teaming for potential misuse or loss-of-control scenarios.
    • Write and enforce rules that put laws into action.
      • The Federal Trade Commission (FTC) enforces consumer protection and competition laws that increasingly apply to AI. For example, the FTC has investigated companies for making deceptive claims about ‘AI-powered’ products and for using algorithms trained on illegally obtained or biased data.
    • Decide how the military will use AI.
      • The Department of Defense tests and integrates AI tools for logistics, intelligence analysis, and battlefield decision making — for example, using algorithms to spot patterns in satellite imagery, plan supply routes, or make vehicles or aircraft autonomous.
    • Protect critical infrastructure from AI-enabled threats.
      • The Cybersecurity and Infrastructure Security Agency (CISA) and the Department of Energy assess how AI could introduce new vulnerabilities to power grids, communications networks, and other critical systems and test tools to defend against these risks.
  • Spending enormous budgets: Federal agencies collectively spend trillions each year. Most of that is locked into programs like Social Security and Medicare, but the flexible remainder is still massive. In 2024, for example, departments spent about $11 billion on autonomous military applications, $1 billion on AI-enabled supercomputing, and $69 million on national AI research institutes.25

With their huge scope comes important limitations. Agencies answer to both Congress and the president: Congress sets their missions and budgets through laws, and the White House directs their day-to-day operations and high-level priorities. And as enormous, specialised bureaucracies, departments tend to develop entrenched procedures and risk-averse cultures that can make change slow.

Here are some key career considerations for working in federal agencies:

  • Diverse role options: Agency staffers’ work is incredibly diverse. Some design new R&D programs, some run lab experiments as government scientists, others manage multimillion-dollar defense contracts. So it’s hard to determine your personal fit for working in federal departments generally — you’ll need to research specific roles and offices.
  • Career vs political appointments: The vast majority of agency staff are career civil servants who stay through changes in administration. The most senior roles are typically filled by political appointees — people nominated by the president to serve for the duration of that administration.
  • Security clearances: Many roles — especially those touching defense, intelligence, or foreign policy — require clearances.26
  • Byzantine hiring: Agency hiring is infamously slow, opaque, and bureaucratic. Formatting requirements are strict, criteria can be idiosyncratic, and it’s not unusual to wait months before hearing back. Fellowships can help you bypass many standard hiring hurdles.

Our best guess at the five most impactful federal departments for AI policy:

3. Congress

Congress formally holds some of the most important levers in government: setting the federal budget and making laws. This means most big, lasting policy changes need buy-in from Congress.

Congress matters for AI policy by:

  • Setting the federal budget: While the president proposes a budget, Congress holds the power of appropriation and sets the US government’s ~$6 trillion annual budget.
  • Writing laws: Only Congress can pass binding national laws. And many important actions can only happen through laws: for example, creating or abolishing federal agencies, raising taxes, or setting immigration law.28 Executive orders by the president can’t override laws.
  • Overseeing the executive branch: Once laws are on the books, Congress makes sure agencies carry them out as intended. It uses a host of tools: public hearings, letters from members, reporting requirements, and subpoenas. It can also override presidential vetoes.
  • Shaping the public agenda: Congress makes news. Hearings, press conferences, votes, and public statements can draw attention to neglected issues, push companies to change behaviour, or draw fringe ideas into mainstream debate.
Rep. Don Beyer, discussing AI risks at a hearing: “As we move forward, I get ever more scared about AI. The deeper we get into it, the more we realize that it’s also possible that the race to be the first in AI is the race to be the first to lose control.”

On the flip side, Congress isn’t exactly known for its efficiency.

Political cartoon on congressional dysfunction: a teacher stands at a blackboard reading “Today’s lesson: How Congress works” and tells the class, “It doesn’t.”

There’s good reason for this scepticism:

  • Slowness: Even bills with majority support can stall for months or years if they don’t fit leadership’s priorities or the legislative agenda.
  • Low yield: Stand-alone bills rarely pass, meaning most policy changes need to hitch a ride on ‘must-pass’ bills like the annual budget bill or the National Defense Authorization Act. This creates lots of veto points and often means that good ideas are diluted, delayed, or dropped altogether.
  • Politics and showmanship: Congress members’ top priority is usually getting reelected.29 This can make them prioritise short-term wins and benefits to their district over long-term national priorities. Political dynamics can also make bipartisan cooperation costly, and some offices focus more on messaging or ‘just-for-show’ bills than on substantive legislating.30
  • Executive ‘power creep’: US policy influence has steadily shifted to the executive branch in recent decades. Congress still controls the budget and can pass durable laws, but in practice, more policy action now happens in the executive branch.

But Congress is easy to underestimate. The more polarised and theatrical something is, the more coverage it tends to get. This means bipartisan policymaking is often underrepresented in the news. (You probably never heard about Congress funding $175 billion to upgrade public water systems in 2020 or raising the tobacco purchasing age from 18 to 21 in 2021).

You’ll need to consider three major structural dynamics when finding roles in Congress:

  • Senate vs House office: Senators usually carry more weight than representatives. There are only 100 senators compared to 435 House members, and each senator represents an entire state rather than a single district. They serve longer terms, sit on more committees, and have larger, more specialised staff. Senate rules also give individual senators unusual power: single senators can more easily stall legislation, demand concessions, or tip the balance on close votes.
  • Committee vs personal office: Most staffers work in personal offices, where they support a single member of Congress. These offices are sometimes described as 535 small businesses, each with its own priorities, culture, and way of doing things. Personal office staff tend to juggle many different tasks and subject matter areas. Committees, on the other hand, do most of the heavy policy lifting. Committees ‘mark up’ bills and decide whether to move them forward (or kill them). Because every bill must pass through a committee, staff working on them often have more direct sway over policymaking.31
  • Majority vs minority party: The majority party sets the agenda: it controls committee chairs, decides which bills come up for votes, and generally has an easier time moving legislation forward. You can still build valuable career capital in the minority party, but your direct impact may be more limited.

Impact rules of thumb: All else equal, Senate offices usually matter more than House offices, committees more than personal offices, and the majority more than the minority.32

But an office’s culture and your specific role in it matter greatly for your work experience and impact. For instance, some offices are highly hierarchical and top-down; others give junior staff more autonomy in writing legislation, leading meetings, or managing issue portfolios. These dynamics are hard to research, so prioritise talking with current or former staff who can give you a fuller picture.

Many people who thrive elsewhere in government find the Hill uniquely chaotic and political. This means you should think carefully about your fit — but also means that if you are a good fit, you may have an unusual comparative advantage.

Here are some career considerations for working in Congress:

  • Relationship-driven: The ‘Hill’ runs on networks. Success depends on trust and coalition-building with other offices, committees, lobbyists, and advocates, so your reputation and relationships often matter more than your title.33
  • Partisan: You’ll almost always work for one party, which could constrain or boost your later career moves. Navigating political dynamics is part of day-to-day policy work.
  • Few entry barriers and fast progression: Most offices have clear pecking orders, but strong-performing staffers can advance rapidly (potentially moving from intern to an influential policy role in 2–3 years) and most roles require few formal credentials.
  • Unpredictability: When Congress is in session, 60+ hour weeks are common, and work can stretch late into the night. Workdays are fast-paced and unpredictable — you may juggle several urgent issues at once with limited guidance. Long-term job security is rare given electoral cycles.
  • Lower pay: Entry-level congressional staff are notoriously underpaid (senior staff salaries are usually more reasonable).34
  • Big and broad portfolios: Many congressional staff own substantive, ‘mile-wide’ portfolios early in their careers, which means you’ll learn rapidly across wide-ranging issues but may find it hard to focus exclusively on what you care about most.
  • Strong career capital: Hill experience is prized in DC. Three years as a congressional staffer often signals deeper, more applied policy and political know-how than three years at a think tank. The relationships you build in Congress often uniquely open doors across the policy world.

Our best guess at the five most impactful Senate and House committees for AI policy:35

Senate committees

House committees

4. State governments

State legislatures and executive agencies don’t command headlines as much as Congress, but they often move much faster. Many state legislatures are dominated by a single party, which means fewer veto points and less gridlock. They’re also closer to the communities and industries they govern. And because state offices usually have smaller staffs with thinner technical expertise, one capable hire can have an outsized influence.

This agility and leverage make states important players in AI policy.36 For example, in September 2025, California Governor Newsom signed SB 53 into law; it introduces whistleblower protections and safety incident reporting for frontier AI labs and requires large developers to publish their plans for mitigating catastrophic risks. In June 2025, the New York State Assembly passed the RAISE Act, which would also introduce transparency-focused rules.37

States shape AI outcomes on two fronts: locally, within their borders, and nationally, by influencing federal policy and industry behavior.

  • Local powers: States directly control huge areas of daily life — from education and health to infrastructure and business regulation — and they implement many federal programs with wide discretion. This means that in states that host frontier AI companies or data centers, policy choices can directly affect how advanced AI is developed and deployed.38
  • National effects: States can set precedents that are picked up by other states or the federal government — for example, Massachusetts pioneered vaccination mandates, Florida broke ground on computer crime laws, and Minnesota shaped data privacy rules for the entire country. Big companies often treat rules imposed by major states as de facto national standards (‘the California Effect’).39

As with federal policy, those working at the state level will have to choose between policy institutions, such as state legislatures, government agencies or executive offices (like the governor’s office), or state-focused think tanks or advocacy organisations. The tradeoffs between these options often mirror those at the federal level, but each state also has its own quirks that can change the calculus. For instance, it matters whether a state legislature has unified or divided party control, meets year-round or only part of the year, and whether members have their own staff or share them with leadership.

State AI policy also faces a major vulnerability: Congress can often override it. When federal and state laws clash, federal law typically wins, and Congress can sometimes go further by barring states from regulating in an area altogether.40

This risk is ever-present: In June 2025, Congress considered a 10-year ban on certain state AI laws.41

Map of US state AI governance legislation: California, Colorado, Utah, and Texas have signed laws; New York and Vermont have passed legislation; many states have inactive bills or no comprehensive bills introduced, and a few have bills at various legislative stages.
State AI legislation tracker, updated October 6, 2025.

Here are some key career considerations for working in state-level AI policy:

  • Launchpad for federal roles: State experience offers great skill-building for federal work. You see firsthand how federal programs get implemented and hone skills that transfer to DC. (But your state-specific network and knowledge may not matter much outside your state.)
  • Access: State jobs are generally less competitive than federal ones, which means you can get in the door faster and often take on more responsibility earlier. They’re also available in every state, which is great if you’re tied to a specific region, or just not looking to move to DC.
  • Intensity in bursts: Many state legislatures are only in session for a few months of the year, or every other year. During sessions, state legislative staff often face long, brutal hours — especially in states with biennial legislatures like Texas, where most policymaking is crammed into five months every two years. Agencies and advocacy groups usually have smaller staffs than their federal cousins, so expect to juggle many hats.
  • Job security: In state legislatures, some positions are ‘session-only’ and even ‘safe’ legislative jobs can disappear after redistricting, retirements, or intraparty drama.42 Like in Congress, staff job security often rises and falls with their boss’s electoral fortunes.

All else equal, federal policy usually has a higher ceiling for impact. But state roles are often more accessible, easier to land, and — particularly in influential states — could bypass gridlock at the federal level to shape AI trajectories nationally.

Our best guess for the five most impactful states for AI policy:

  • California
  • New York
  • Texas
  • Florida
  • Washington

Within states, we think the highest-impact roles are usually in the legislature, the Governor’s office, or in agencies that implement relevant AI policies.43

5. Think tanks and advocacy organisations

Policymakers have little time to think deeply about the range of issues they have to cover. Think tanks can do it for them: they conceive, analyse, and push for ideas, serving as ‘idea factories.’ Advocacy organisations play a similar role, but usually with a sharper ideological edge or a specific mission. The lines between think tanks and advocacy organisations can be blurry in practice, and some policy-focused nonprofits don’t clearly fall in either category.44

Think tanks influence policy through several routes:

  • Informing policymakers: Think tanks’ core business is to generate and communicate ideas. The most effective think tanks don’t just publish 50-page reports and hope someone reads them: they build trust with staff and officials, which lets them put ideas and research in front of the right person at the right moment.
  • Talent pipelines: New administrations often pull talent from ideologically aligned think tanks to fill political roles, especially during the transition period when thousands of political appointees are selected. This revolving door lets think tanks exert influence indirectly, as their alumni carry their ideas and priorities into government.
  • Shaping narratives: Public writing can shift policy windows, changing what policy proposals are politically viable. Even if a proposal isn’t adopted, putting it into debate can raise its profile and nudge policymakers to treat it more seriously. Especially for policies with broad and diffuse benefits, persistent advocacy from a dedicated actor can be highly impactful in moving the needle.

Advocacy organisations may also use these channels, but generally focus more on lobbying — for instance, meeting with policymakers to push their agenda or mobilising constituents to call their senators about an issue.

The biggest drawback of think tank and advocacy work is distance from actual decision makers. This makes their impact especially ‘lumpy’ — sometimes very high, but generally sporadic and hard to predict. In one think tank staffer’s words:

If we judge [think tanks] by whether they are successful in getting policy implemented, most would probably fail most of the time.

Andrew Selee, former executive vice president of the Woodrow Wilson Center

If you’re just starting out, you might have some direct policy impact in a think tank, but the bigger payoff is usually career capital. Think tanks let you test whether you enjoy policy work, build skills valued in the policy world, and grow your network. Many junior congressional staffers come from think tanks, where they’ve built early credibility and relationships. And sometimes, junior researchers ‘ride the coattails’ of a senior staffer into a new administration, landing entry-level political roles when their boss gets appointed.

Some think tanks and advocacy organisations that we think could be impactful for AI policy are:45

Think tanks

Advocacy orgs

Conclusion

The tl;dr on how to have a big impact in US AI policy: build career capital, work backward from the most important issues, prioritise institutions with meaningful power, stay ready for policy windows with AI ‘timelines’ in mind, and choose roles that fit your strengths.

We think especially promising options include the White House, federal departments like Commerce and Defense, Congress (especially on relevant committees), major state governments like California, and well-connected think tanks or advocacy organisations. But your personal fit really matters: the ‘best’ place to work in the abstract may not be the best place for you.

Want one-on-one advice on pursuing this path?

If you think this path might be a great option for you, but you need help deciding or thinking about what to do next, our team might be able to help.

We can help you compare options, make connections, and possibly even help you find jobs or funding opportunities.

APPLY TO SPEAK WITH OUR TEAM

Learn more about how and why to pursue a career in US AI policy

Top recommendations

Further reading

Resources from 80,000 Hours

Resources from others

Our job board features opportunities in AI safety and policy:

    View all opportunities

    OpenAI: The nonprofit refuses to die (with Tyler Whitmer) https://80000hours.org/podcast/episodes/tyler-whitmer-openai-saved-attorneys-general/ Tue, 11 Nov 2025 16:41:53 +0000

    Helen Toner on the geopolitics of AI in China and the Middle East https://80000hours.org/podcast/episodes/helen-toner-ai-policy-washington-dc/ Wed, 05 Nov 2025 15:58:30 +0000

    Our top tips for successful networking https://80000hours.org/2025/11/our-top-tips-for-successful-networking/ Wed, 05 Nov 2025 12:21:39 +0000

    Everyone talks about the importance of networking for a successful career. And they’re right — the people you connect with will shape your habits, the ideas you’re exposed to, and your job opportunities.

    But how do you actually network well?

    I asked this question to our career advisors, who have helped thousands of people break into high-impact roles. Here’s what they recommended.

    How to network

    The basics are simple: find people who can help you learn or move forward with your career, or who you can help. Increase your opportunities to connect with them, and try to build genuine relationships with the people you meet.

    Find the right people

    • Attend conferences, courses, and social events in your professional community, or one you’d like to be a part of. There are many opportunities to meet like-minded people in person and virtually — we have lots of recommendations to get you started on our community page.
    • Get involved in online discourse. People often use social media, forums, and other online platforms to connect with others and discuss ideas. Whatever your interests are, there’s probably an online community out there for you! Our advisors sometimes recommend being active on X/Twitter, especially if you’re interested in AI policy.
    • Try visiting a hub. There are some locations (like the San Francisco Bay Area, Washington DC, London, and Oxford) with a large number of people interested in existential risks and effective altruism. Travelling isn’t practical for everyone, but if you have the flexibility, it can be a great way to meet people in these communities — especially if you plan your visit around a local event.
    • Try talking to peers. You don’t always need to target the leading experts in your field of interest. People who are just one or two steps ahead of you in their career can be well-placed to help you work out your next steps, and are often easier to book in with.
    • Look for people you can help. These might be peers who want book recommendations, organisation leaders looking for referrals for a role, or people earlier in their careers who need guidance you can provide. Not only is this a great way to build relationships that could be mutually beneficial, it’s a great way to pay forward the help you get in your career!

    Multiply your opportunities to network

    Or, in other words, “increase your surface area for luck”.

    • Reach out widely (and don’t be too discouraged by rejections!). The more people you try to talk to, the greater your chances are of making a great connection. Note that people are often very busy, so don’t take it personally if they don’t respond.
    • Ask for introductions. At the end of a conversation, or if someone declines your request to have a conversation with them, try: ‘Is there anyone else you think it would be useful for me to talk to?’
    • Host your own events. This takes more effort than other options, but you can start small. Invite a few people you already know well and a few people you’d like to get to know better — and encourage them to bring guests!
    • Send cold emails. You can also try reaching out to people you haven’t met before — we give some specific advice on this below.

    Have great conversations and build relationships

    So you’ve decided who you want to speak to, and you’ve put yourself in a good position to do it. Now what?

    Here’s how to make your conversations go as well as possible:

    • Keep your requests specific and concise. When reaching out to someone, clearly state what you’d like to discuss and why you’re contacting them in particular. But bear in mind: busy people are more likely to engage with a brief email.
    • Don’t make it a sales pitch. Treat conversations as an opportunity to learn from or help each other, rather than a transaction. You’ll build more valuable connections by showing genuine interest than by selling yourself or asking for a job outright.
    • Be open-minded. The most helpful and honest advice will sometimes challenge your opinions. It’s important to be open to different perspectives, to change your mind when presented with solid new evidence, and to appreciate that people have shared their views even when you don’t agree.
    • Invest in your interpersonal skills. These are trainable, and can make a big difference when you’re trying to make connections. We’ve got some tips on this in our career guide.

    Reaching out to people you don’t know

    Your chances of getting a response are higher if you’ve been introduced or have met the person before, but you can also simply reach out to people you don’t know.

    Maybe you’ve read a fascinating paper recently, or heard about a project that makes you think ‘I wish I could have done that’. If so, you can try getting in touch with the people involved to learn more and get pointers — many people are happy to chat about their work.

    This can feel daunting, and you won’t always get a response. But it’s actually common to send ‘cold’ emails like this — and some people say it’s really helped their career progression.

    Remember:

    • It’s up to the person receiving your message to decide if talking to you is worth their time. You’re not imposing just by reaching out.
    • As long as you’re respectful, the worst thing that can happen is that you don’t get a response.
    • If you don’t get a response, it’s usually fine to follow up once, in case they’ve missed or forgotten about your message — but don’t keep chasing them after that.

    More resources

    Our career guide includes lots more relevant advice, including why you should consider joining a professional community and more tips for building connections.

    If you’re looking for tips on networking in the AI policy space, our advisors recommend this resource from Horizon.

    Not sure where to start with a cold message? We have a few example email scripts that can help you. Or, if you’re messaging someone at a conference, we like this advice from Neel Nanda and Jemima Jones.

    And finally, to improve your people skills generally, we recommend reading Never Eat Alone or How to Win Friends and Influence People — the latter is old, but it holds up.

    Open positions to grow our podcast team https://80000hours.org/2025/10/open-positions-to-grow-our-podcast-team/ Fri, 31 Oct 2025 17:20:06 +0000

    These positions have now closed. Please keep an eye on our Work with us page for future postings.

    We’re looking to hire two people to join the 80,000 Hours podcast team to help us make extremely high-quality episodes that shape how the people building and governing transformative AI think about doing it safely.

    We’ve outlined three possible roles (Podcast Growth Specialist, Research Specialist or Assistant, and Podcast Producer / Content Strategist), but we’re really looking for the right people — we’ll figure out exactly how to structure responsibilities based on who we hire.

    Location: London, UK is preferred for some roles. San Francisco/Bay Area or Washington, DC may also work well. All locations considered.

    Salary: Varies depending on skills, fit, and experience, but ranges from approximately £47,000 to £85,000 per year (there may be flexibility at the upper end for especially experienced candidates).

    To apply: Please complete this application form by EOD PST 30 November.

    Why this matters

    The 80,000 Hours Podcast aims to help the world safely navigate the transition to a flourishing future with artificial general intelligence. We focus on influencing the decisions that matter most — helping people identify high-leverage career moves and important actions they can take in their current positions.

    Our episodes reach tens or hundreds of thousands of people, and we regularly hear from employees in government and at leading AI companies who find them useful. We’re also growing fast: in 2025, our listenership was around 60% higher than in the same period last year. To further improve the usefulness and reach of our episodes, we need to increase the team’s capacity.

    Right now, in addition to recording important conversations, our hosts handle much of the crucial work that surrounds each episode:

    • Identifying which conversations would actually change important decisions
    • Finding guests who can deliver those insights
    • Shaping episodes to land the key arguments effectively
    • Ensuring the right decision-makers encounter the content at the right moment

    We’re looking to hire two people to the team who can take on some of this work, allowing Rob and Luisa to focus more of their time on making great episodes.

    The window is short. If we’re right about AI timelines, the decisions that shape humanity’s long-run trajectory are happening now, not in 20 years. We’re a small team trying to do something unreasonably ambitious and we’re looking for the right people to help us scale up.

    Who we’re looking for

    We think the team’s responsibilities could be structured in a number of ways, so we’re open to hiring people with a range of profiles into a range of roles.

    We’re hoping to make two hires, both of whom:

    • Are ambitious about impact — you want your work to meaningfully shape important conversations, not just contribute at the margins. You’re excited to take on significant challenges.
    • Have strong judgement about content — you understand what makes people click, watch, and share, particularly among professional and policy audiences.
    • Are clear communicators — you can articulate complex ideas plainly and concisely, making information easy for others to understand and act on. You’re comfortable discussing uncertainties in decision-making and naturally anticipate questions others might have.
    • Consume a lot of audio or video podcasts — or secondarily, Substacks, AI/effective altruism Twitter, or video clips.
    • Actively engage with AI/AGI developments — you follow the field, take the implications seriously, and want to contribute to better outcomes.
    • Are willing to take an experimental approach, changing plans in response to what seems to resonate with the target audience.

    Depending on your experience level and the responsibilities you take on, you’ll report to either the Director of Podcast, Michelle Hutchinson, or the Chief of Staff for the podcast team, Eve McCormick. In either case, you’ll work closely with our hosts, Rob Wiblin and Luisa Rodriguez, and receive coaching from them.

    Below are particular profiles of people and role specifications we might be excited to hire. We expect many strong candidates won’t fit neatly into a single category and might be suited for multiple roles — or for none of them exactly as described here.

    If you fit the above description and don’t see yourself in any of these particular roles, please still apply!

    Podcast Growth Specialist

    In this role, you would oversee the packaging, promotion, and distribution of our content, figuring out how to reach the right people and how to measure what’s working.

    Responsibilities for this role might include:

    • Packaging episodes for impact: Crafting titles, descriptions, thumbnails, and promotional clips that accurately represent content while maximising engagement with priority audiences.
    • Multi-platform strategy: Developing and executing plans to promote episodes across YouTube, Twitter/X, LinkedIn, podcasting platforms, and potentially TikTok/Instagram and Substack.
    • Audience research: Understanding both qualitatively and quantitatively who engages with our content, what they look for in content, and what gaps exist in their knowledge.
    • Launch coordination: Managing the logistics of episode releases, including scheduling, platform uploads, and promotional campaigns.
    • Performance analysis: Tracking metrics across platforms, identifying what’s working, and translating insights into actionable recommendations.

    You might be suited to this role if you have:

    • Experience with content marketing, social media growth, or digital advertising
    • Data-driven mindset — you’re comfortable with analytics tools and using data to guide decisions
    • Understanding of key audiences — you understand the interests and information needs of people working across the AI policy, safety, and research landscape, or are excited to develop that understanding
    • Excellent communication — you write compelling copy and can articulate strategic thinking clearly
    • Organisational capability — you can manage multiple campaigns and initiatives simultaneously without dropping the ball

    Research Specialist or Assistant

    In this role, you would specialise in identifying what topics to cover, who to talk to, and what questions to ask them.

    Responsibilities for this role might include:

    • Strategic guest selection and content planning: Tracking developments in the AGI space to identify what topics deserve coverage, determining which perspectives and expertise would be most valuable for our audience, and which guests we should ask to speak to those topics.
    • Guest research and outreach: Researching potential guests’ work to identify what makes them valuable to interview, how their expertise fits into our content strategy, and how to best approach them so they want to come on the show.
    • Interview preparation: Drafting potential questions, anticipating key considerations and cruxes, and helping hosts prepare for substantive conversations that surface the most important ideas.

    You might be suited to this role if you have:

    • Background in AI safety, AI policy, computer science, philosophy, or related fields
    • Strong analytical and research skills — you can quickly get up to speed on complex topics and identify key considerations
    • Intellectual curiosity — you enjoy diving into technical details while keeping sight of the big picture
    • Clear written communication — you can synthesise research and communicate findings concisely
    • Good judgement — you can assess the credibility of sources and the strength of different arguments

    Podcast Producer / Content Strategist

    This role would be for the “editorial generalist” — someone who can own episodes end-to-end, from helping select topics and guests through to launching the final episodes and assessing their impact.

    Responsibilities for this role might include:

    • Guest discovery and vetting: Identifying potential guests who could speak authoritatively on priority topics, conducting preliminary research and outreach, assessing fit for the show.
    • Content-level editing: Reviewing recorded interviews and shaping the narrative flow — suggesting cuts, reordering segments, identifying sections that need clarification, and ensuring episodes are clear and compelling.
    • Launch strategy and execution: Selecting engaging episode titles and thumbnails, crafting compelling opening clips and promotional materials to attract and retain audience attention, writing episode descriptions and social media copy.
    • Audience learning: Understanding what the audience needs from us, gathering and channeling feedback into content improvements, tracking what resonates and why.

    You might be suited to this role if you:

    • Have experience with at least one of:
      • Content creation and editing, whether text, video or audio
      • Content marketing
      • Social media
    • Have organisational capability — you can independently manage podcast projects, preparing content and structuring episodes without much oversight
    • Enjoy working across different levels — from strategy to execution
    • Understand our key audiences — you understand the interests and information needs of people working across the AI policy, safety, and research landscape, or are excited to develop that understanding
    • Have strong analytical and research skills — you can quickly get up to speed on complex topics and identify key considerations
    • Have clear written communication — you can synthesise research and communicate findings concisely

    Do you have several relevant skills but none of the above roles feel quite right? We’d love to hear from you anyway. We expect to design the exact roles around the most promising candidates.

    What we offer

    • We’re open to a wide range of levels of experience for these roles. The salary will depend on your skills and experience, but to give a rough sense of the range:
      • The starting salary for more junior versions of the roles for someone with 1 year of experience would be ~£47,000 per year
      • The starting salary for more senior versions of the roles for someone with 10 years of experience would be ~£85,000
      • There may be some flexibility at the upper end of this range for the most experienced candidates.
    • Staff can work flexible hours. We encourage staff to work a schedule (consistent with full-time status) which will allow them to be personally effective, while also facilitating collaboration with the rest of the team.
      • In particular, it would be very beneficial to have at least two hours of overlap in your working day with our hosts, who are currently based in London, UK.
    • Location
      • We prefer people either to work in the San Francisco Bay Area, in order to be connected with other organisations working on AGI issues, or to work part-time or full-time in our London office, getting the benefits of being in person with most of the team.
      • For some versions of these roles, we would have a preference for people working from our London office.
      • A third-best option is to locate in Washington, DC, which will be a centre of AI policy and governance.
      • We are also open to remote work in some cases. For remote or US-based candidates, we would be interested in you travelling to London 2-4 times per year to work with the team in person.
      • We may be able to sponsor UK visas. For US-based candidates, visa sponsorship may be possible through our Employer of Record.
    • The start date of the role is flexible, but we would expect you to start during the first half of 2026.

    Our benefits

    • 25 days of paid holiday, plus public holidays in line with your location (at least eight per year)
    • Up to 10 days of paid sick leave per year, in addition to holiday
    • Private medical insurance with substantial coverage for your dependents (including your partner)
    • Long-term disability insurance
    • Pension scheme / retirement plan with employer contributions
    • Up to 14 weeks of fully paid parental leave and childcare allowance for children under five
    • Business travel insurance
    • £5,000/$6,000 annual mental health support allowance
    • £5,000/$6,000 annual self-development budget
    • The option to use 10% of your time for self-development
    • Gym, shower facilities, and unlimited free food provided at our London office
    • Up to £8,000 relocation stipend if you need to move due to your role at 80k

    Application process

    To apply, please fill in this application form by EOD PST 30 November 2025.

    If you have any problems submitting the form, please send your CV to eve.mccormick@80000hours.org

    The application process will vary a bit depending on the candidate, but is likely to include written work and interview components, and a multi-day in-person trial. We offer payment for work samples and trials, conditional on your location and right to work in the UK.

    If you’re unsure whether you meet our criteria, we strongly encourage you to apply anyway.

    Holden Karnofsky on dozens of amazing opportunities to make AI safer — and all his AGI takes https://80000hours.org/podcast/episodes/holden-karnofsky-concrete-ai-safety-frontier-ai-companies/ Thu, 30 Oct 2025 16:06:36 +0000

    Extreme power concentration https://80000hours.org/problem-profiles/extreme-power-concentration/ Thu, 24 Apr 2025 10:53:40 +0000

    Why might AI-enabled power concentration be a pressing problem?

    The main reasons we think AI-enabled power concentration is an especially pressing problem are:

    1. Historically unprecedented levels of automation could concentrate the power to get stuff done, by reducing the value of human labour, empowering small groups with big AI workforces, and potentially giving one AI developer a huge capabilities advantage (if automating AI development leads to runaway AI progress).
    2. This could lead to unprecedented concentration of political power. A small number of people could use a huge AI workforce to seize power over existing institutions, or render them obsolete by amassing enormous wealth.
    3. AI-enabled power concentration could cause enormous and lasting harm, by disempowering most people politically, and enabling large-scale abuses of power.
    4. There are ways to reduce this risk, but very few people are working on them.

    In this section we’ll go through each of these points in turn, but first we’ll give an illustrative scenario where power becomes extremely concentrated because of advanced AI. The scenario is very stylised and there are loads of other ways things could go, but it gives a more concrete sense of the kind of thing we’re worried about.

    An AI-enabled power concentration scenario

    Note that this scenario, and the companies and institutions in it, are made up. We’re trying to illustrate a hypothetical, and don’t have particular real-world actors in mind.

    In 2029, a US AI company called Apex AI achieves a critical breakthrough: their AI can now conduct AI research as well as human scientists can. This leads to an intelligence explosion, where AI improving AI improving AI leads to very rapid capability gains. But their competitors — including in China — are close on their heels, and begin their own intelligence explosions within months. Fearing that China will soon be in a position to leverage its industrial base to overtake the US, the US government creates Project Fortress — consolidating all US AI development under a classified Oversight Council of government officials and lab executives. Apex leverages their early lead to secure three of nine board seats and provides the council’s core infrastructure: security systems, data analytics, and AI advisors.

    By 2032, AI companies generate the majority of federal tax revenue as AI systems automate traditional jobs. Unemployment rises. The Oversight Council now directs hundreds of millions of AI workers, controls most of the tax base, and makes the most important decisions about military AI procurement, infrastructure investment, and income redistribution. Only those with direct connections to the council or major AI companies have access to the most advanced AI tools, while most citizens interact with limited consumer versions. When the president proposes blocking Apex’s merger with Paradox AI (which would create a combined entity controlling 60% of the compute used to train and run US AI systems), council-generated economic models warn that China would overtake the US and the economy would collapse if the merger were blocked. The proposal dies quietly. The council’s AI systems — all running on Apex architecture — are quietly furthering Apex’s interests, but the technical traces are too subtle for less advanced models to detect. Besides, most people are bought into beating China, and when they ask their personal AI advisors (usually less advanced versions of either Paradox or Apex models) about the merger, the advisors argue persuasively that it serves the national interest.

    By 2035, the US economy has tripled while other nations have stagnated. Project Fortress’ decisions now shape global markets — which technologies get developed, which resources get allocated, which countries receive AI assistance. Apex and Paradox executives gradually cement their influence: their AI systems draft most proposals, their models evaluate the options, their security protocols determine what information reaches other council members. With all major information channels — from AI advisors to news analysis to government briefings — filtered through systems they control, it becomes nearly impossible for anyone to get an unbiased picture of the concentration of power taking place. Everything people read on social media or hear on the news seems to support the idea that there is nothing much to worry about.

    The executives are powerful enough to unilaterally seize control of the council and dictate terms to other nations, but they don’t need to. Through thousands of subtle nudges — a risk assessment here, a strategic recommendation there — their AI systems ensure every major decision aligns with their vision for humanity’s future.

    Automation could concentrate the power to get stuff done

    We’ve always used technology to automate bits of human labour: water-powered mills replaced hand milling, the printing press replaced scribes, and the spinning jenny replaced hand spinning. This automation has had impacts on the distribution of power, some of them significant — the printing press helped shift power from the church towards city merchants; and factory machines shifted power from landowners to capitalists and towards industrialising countries.

    The thing that’s different with AI is that it has the potential to automate many kinds of human labour at once. Top AI researchers think that there’s a 50% chance that AI can automate all human tasks by 2047 — though many people think this could happen much sooner (several AI company CEOs expect AGI in the next few years) or much later. Even if full automation of human labour takes a long time or never happens, it’s clear that AI could automate a large fraction of human labour — and given how fast capabilities are currently progressing, this might start happening soon.1

    This could have big implications for how power is distributed:

    • By default, less money will go to workers, and more money will go to the owners of capital. Automation could reduce the value of people’s labour, in extreme scenarios causing wages to collapse to very low levels indefinitely.2 This would increase how much of the pie goes to capital compared to labour, and those with capital could become even more disproportionately powerful than they are now.
    • Small groups will be able to do more. Right now, large undertakings require big human workforces. At its peak, the Manhattan Project employed 130,000 people. It takes 1.5 million people just to run Amazon. As AI becomes more capable, it’ll become possible to get big stuff done without large human teams — and the attendant need to convince them that what you’re doing is good, or at least OK — by using AI workforces instead.
      • This would already empower small groups to do more. But the effect will be even stronger because using AI to get stuff done won’t empower everyone equally: it’ll especially empower those with access to the best AI systems. Companies already deploy some models without releasing them to the public, and if capabilities get more dangerous or the market becomes less competitive, access to the very best capabilities could become very limited indeed.
    • Runaway progress from automated AI development could give one developer a big capabilities advantage. The first project to automate AI R&D might trigger an intelligence explosion, where AI systems improving AI systems which improve AI systems leads to a positive feedback loop, meaning their AI capabilities can rapidly pull ahead of everyone else’s. Competitors might follow on with intelligence explosions of their own, but if they are far enough behind the leader to begin with or the leader’s initial boost in capabilities is sufficiently huge, one company might be able to entrench a lasting advantage (the toy model below illustrates how even a modest head start can compound).
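
    To see why a head start might entrench rather than erode, here’s a deliberately crude sketch in Python. The model and all its numbers are assumptions for illustration only: it supposes that once AI R&D is automated, each developer’s rate of progress is proportional to its current capability, so both undergo ‘explosions’, but the leader’s absolute advantage compounds anyway.

    ```python
    # Toy model: two AI developers after AI R&D is automated.
    # Assumption (illustrative only): progress is proportional to current
    # capability, i.e. dC/dt = k * C, giving exponential growth for both.

    k = 0.5                      # assumed monthly growth constant, same for both
    leader, chaser = 1.10, 1.00  # leader starts with a 10% capability edge

    for month in range(24):
        leader *= 1 + k
        chaser *= 1 + k

    # The ratio never changes, since both grow at the same rate...
    print(f"ratio after 24 months: {leader / chaser:.2f}")  # still 1.10
    # ...but the absolute gap has grown by (1 + k)**24, roughly 16,800x.
    print(f"absolute gap: {leader - chaser:,.0f} capability units")
    ```

    Whether real AI progress would look anything like this is an open question (see the counter-considerations later in this section), but it shows how, in a self-reinforcing regime, a follower can keep pace in relative terms while the leader’s absolute advantage becomes enormous.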

    If these dynamics are strong enough, we could end up with most of the power to earn money and get stuff done in the hands of the few organisations (either AI companies3 or governments4) which have access to the best AI systems — and hence to huge amounts of intelligent labour which they can use for any purpose.

    Furthermore, within these organisations, more and more employees may get replaced by AI systems, such that a very small number of people wield huge amounts of power.5

    [Graphic: three stages of white collar automation]

    It’s plausible that entry-level white collar jobs will be automated first. Organisations could become more top-heavy, with an expanded class of managers overseeing many AI agents.

    There are many other ways this could go, and it’s not a foregone conclusion that AI will lead to this kind of power concentration. Perhaps we’ll see a stronger shift from expensive pre-training to more accessible inference scaling, and there will be a boom in the number of frontier companies, putting equally-powerful AI in more hands. There might be no intelligence explosion, or it might fizzle quickly, allowing laggards to catch up. If commercial competition remains high, consumers will have access to smarter and smarter models, which could even out differences in capabilities between humans and push towards greater egalitarianism. AI might allow for much more direct democracy by making it easier to aggregate preferences, and for greater transparency. And so on (more on this below).

    So there are forces pushing against power concentration, as well as forces pushing towards it. It’s certainly possible that society naturally adjusts to these changes and successfully defends against AI-enabled power concentration. But given the speed that AI progress might reach, there’s a real risk that we don’t have enough time to adapt.

    This could lead to unprecedented concentration of political power

    So we could end up in a situation where most of the power to earn money and get stuff done is in the hands of the few.

    This power might be kept appropriately limited by existing institutions and laws, such that influence over important decisions about the future remains distributed. But it’s not hard to imagine that huge capabilities advantages for some actors and the erosion of the value of most human labour could undermine our current checks and balances, which were designed for much more even levels of capabilities in a world which runs on human labour.

    But how would this actually happen? People who are powerful today will fight tooth and nail to retain their power, and just having really good AI doesn’t automatically put you in charge of key institutions.

    We think that power could become extremely concentrated through some combination of:

    • AI-enabled power grabs, where actors use AI to seize control over existing institutions
    • Economic forces, which might make some actors so wealthy that they can easily influence or bypass existing institutions
    • Epistemic interference, where the ability of most people to understand what’s happening and coordinate in their own interests gets eroded

    Experts we’ve talked to disagree about which of these dynamics is most important. While it might be possible for just one of these dynamics to lead all the way to AI-enabled power concentration, we’re especially worried about the dynamics in combination, as they could be mutually reinforcing:

    • Power grabs over leading companies or governments would make it easier to amass wealth and control information flows.
    • The more that wealth becomes concentrated, the easier it becomes for the richest to gain political influence and set themselves up for a power grab.
    • The more people’s ability to understand and coordinate in their own interests is compromised, the easier it becomes for powerful actors to amass wealth and grab power over institutions.

    Below, we go into more detail on how each of these factors — power grabs, economic forces, and epistemic interference — could lead to AI-enabled power concentration, where a small number of people make all of the important decisions about the future.

    AI-enabled power grabs

    There are already contexts today where actors can use money, force, or other advantages to seize control of institutions — as demonstrated by periodic military coups and corporate takeovers worldwide. That said, there are limits to this: democracies sometimes backslide all the way to dictatorship, but it’s rare;6 and there are almost never coups in mature democracies.

    Advanced AI could make power grabs possible even over very powerful and democratic institutions, by putting huge AI workforces in the hands of the few. This would fundamentally change the dynamic of power grabs: instead of needing large numbers of people to support and help orchestrate a power grab, it could become possible for a small group to seize power over a government or other powerful institution without any human assistance, using just AI workforces.

    What would this actually look like, though?

    One pathway to an AI-enabled power grab over an entire government is an automated military coup, where an actor uses control over military AI systems to seize power over a country. There are several different ways an actor could end up with control over enough military AI systems to stage a coup:

    • Flawed command structure. Military AI systems might be explicitly trained to be loyal to a head of state or government official instead of to the rule of law. If systems were trained in this way, then the official who controlled them could use them however they wanted to, including to stage a coup.7
    • Secret loyalties. As AI capabilities advance, it may become possible to make AI systems secretly loyal to a person or small group.8 Like human spies, these systems would appear to behave as intended, but secretly further other ends. Especially if one company has much more sophisticated AI than everyone else, and only a few actors have access to it, these secret loyalties might be very hard for external people to detect.9 So subsequent generations of AIs deployed in government and the military might also be secretly loyal, and could be used to stage a coup — either by AI company leaders or foreign adversaries, or by parts of the government or military.
    • Hacking. If one company or country has a strong advantage in cyber offence, they could hack into many military AI systems at once, and either disable them or use them to actively stage a coup.

    [Diagram: AI systems propagating secret loyalties into future generations]

    AI systems could propagate secret loyalties forwards into future generations of systems until secretly loyal AI systems are deployed in powerful institutions like the military.

    These scenarios may sound far-fetched. Militaries will hopefully be cautious about deploying autonomous military systems, and require appropriate safeguards to prevent these kinds of misuse. But competition or great power conflict might drive rushed deployment,10 and secret loyalties could be hard to detect even with rigorous testing. And it might only take a small force to successfully stage a coup, especially if they have AI to help them (there are several historical examples of a few battalions successfully seizing power even without a technological advantage, by persuading other forces not to intervene).11

    Outside military coups, another potential route to an AI-enabled power grab is overwhelming cognitive advantage, where an actor has such a huge advantage in skilled AI labour that it can directly overpower a country or even the rest of the world. With a very large cognitive advantage, it might be possible to seize power by using superhuman strategy and persuasion to convince others to cede power, or by rapidly building up a secret military force. This is even more sci-fi, but some people think it could happen if there’s a big enough intelligence explosion.

    An AI-enabled power grab — whether via an automated military coup or via overwhelming cognitive advantage — wouldn’t automatically constitute AI-enabled power concentration as we’ve defined it. There’s no single institution today which makes all of the important decisions — not even the most powerful government in the world. So there might still be a long path between ‘successful power grab over one institution’ and ‘making all of the important decisions about what happens in the future’. But a power grab could be a very important incremental step on the way to a small number of people ending up with the power to make all of the important decisions about the future12 — or if power had already become very concentrated, a power grab could be the final step.

    Economic forces

    There are several different ways that a small group could become wealthy enough to effectively concentrate power, in extreme cases making existing institutions irrelevant:

    • Eroding the incentives for governments to represent their people, by making the electorate economically irrelevant. Of course, the mission of governments in democracies is to represent and serve the interests of their citizens. But currently, governments also have direct economic incentives to do so: happier and healthier people make more productive workers, and pay more taxes (plus they’re less likely to rebel). If this link were broken by automation, and AI companies provided the vast majority of government revenues, governments would no longer have this self-interested reason to promote the interests of their people.
      • There might still be elections in democracies, but very fast rates of progress could make election cycles so slow that they don’t have much influence, and misinformation and lobbying could further distort voting. In scenarios like this, there might still be governments, but they’d no longer serve the functions that they currently do, and instead would mostly cater to the interests of huge AI companies.13
    • Outgrowing the world, where a country or company becomes much richer than the rest of the world combined. An intelligence explosion of the kind discussed above could grant the leading AI developer a (maybe temporary) monopoly on AI, which could allow them to make trillions of dollars a year,14 and design and build powerful new technologies. Naively, if that actor could maintain its monopoly and grow at a faster rate than the rest of the world for long enough, it would end up with >99% of resources (the sketch after this list gives a feel for how quickly this compounds). There are lots of complications here which make outgrowing the world less likely,15 but it still seems possible that an actor could do this with a very concerted and well-coordinated effort if they had privileged access to the most powerful technology in the world. Today’s institutions might continue to exist, but it’s not clear that they would be able to enact important decisions that the company or country didn’t like.
    • First mover advantages in outer space, where the leader in AI leverages their advantage to claim control over space resources. If AI enables rapid technological progress, the leader in AI might be the first actor to develop advanced space capabilities. They could potentially claim vast resources beyond Earth — and if space resources turn out to be defensible, they could maintain control indefinitely. It’s not clear that such first mover advantages actually exist,16 but if they do, the first mover in space would be able to make unilateral decisions about humanity’s expansion into the universe — decisions that could matter enormously for our long-term future.
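
    To give a feel for the arithmetic behind ‘outgrowing the world’, here’s a minimal sketch. The starting shares and growth rates are made-up assumptions for illustration, not forecasts:

    ```python
    # Minimal sketch of how a sustained growth differential compounds into an
    # overwhelming share of world resources. All figures are illustrative.

    def years_to_share(leader, world, leader_growth, world_growth, target=0.99):
        """Years of compounding until the leader holds `target` of total output."""
        years = 0
        while leader / (leader + world) < target:
            leader *= 1 + leader_growth
            world *= 1 + world_growth
            years += 1
        return years

    # An actor starting with 1% of world output, growing at 30%/year while the
    # rest of the world grows at 3%/year, passes a 99% share in about 40 years:
    print(years_to_share(leader=1, world=99, leader_growth=0.30, world_growth=0.03))
    ```

    The exact numbers don’t matter; the point is that any persistent growth differential, compounded for long enough, concentrates almost all resources. The complications mentioned above are largely about whether such a differential could actually be sustained.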

    All of these routes are quite speculative, but if we don’t take steps to prevent them, it does seem plausible that economic forces could lead to one country or company having much more political power than everyone else combined. If that actor were very centralised already (like an autocratic government or a company where most employees had been automated), or if there were later a power grab that consolidated power in the hands of a small group, this could lead to all important decisions about the future being made by a handful of individuals.

    Epistemic interference

    Power grabs and economic forces that undermine existing institutions would be bad for most people, so it would be in their interests to coordinate to stop these dynamics. But the flip side of this is that it’s in the interests of those trying to amass power to interfere with people’s ability to understand what’s happening and coordinate to stop further power concentration.17

    This is the least well-studied of the three dynamics we’ve pointed to, but we think it could be very important. Tentatively, here are a few different factors that could erode the epistemic environment, some of which involve deliberate interference and some of which are emergent dynamics which favour the few:

    • Lack of transparency. Powerful actors in AI companies and governments will have incentives to obfuscate their activities, particularly if they are seeking power for themselves. It might also prove technically difficult to share information on AI capabilities and how they are being used, without leaking sensitive information. The more AI development is happening in secret, the harder it is for most people to oppose steps that would lead to further power concentration.
    • Speed of AI progress. Things might be shifting so quickly that it’s hard for any humans to keep up. This would advantage people who have access to the best AI systems and the largest amounts of compute: they might be the only ones who are able to leverage AI to understand the situation and act to promote their own interests.
    • Biased AI advisors. As AI advice improves and the pace of change accelerates, people may become more and more dependent on AI systems for making sense of the world. But these systems might give advice which is subtly biased in favour of the companies that built them — either because they’ve been deliberately trained to, or because no one thought carefully about how the systems’ training environments could skew them in this direction. If AI systems end up favouring company interests, this could systematically bias people’s beliefs and actions towards things which help with further power concentration.
    • Persuasion and manipulation campaigns. Those with access to superior AI capabilities and compute could deliberately interfere with other people’s ability to limit their power, by conducting AI-powered lobbying campaigns or manipulating individual decision makers. For example, AI could make unprecedentedly intensive and personalised efforts to influence each individual congressperson to gain their support on some policy issue, including offers of money and superhuman AI assistance for their reelection campaigns. It’s not yet clear how powerful these techniques will be (maybe humans’ epistemic defences are already quite good and AI won’t advance much on what humans can already do), but if we’re unlucky this could severely impair society’s ability to notice and respond to power-seeking.

    That list of factors might be missing important things, and might include some that never turn out to be real problems — again, the area is understudied. But we’re including it to give a more concrete sense of how AI might erode (or be used to erode) the epistemic environment, making it harder for people to realise what’s happening and resist further power concentration. Epistemic interference in isolation probably won’t lead to extreme AI-enabled power concentration, but it could be a contributing factor.

    AI-enabled power concentration could cause enormous and lasting harm

    In a commonsense way, handing the keys of the future to a handful of people seems clearly wrong, and it’s something that most people would be strongly opposed to. We put a fair bit of weight on this intuitive case.

    We also put some weight on specific arguments for ways in which AI-enabled power concentration would be extremely harmful, though the reasoning here feels more brittle:

    • It could lead to tyranny. Democracy usually stops small groups of extremists from taking the reins of government and using them to commit mass atrocities against the people they govern, by requiring that a large chunk of the population supports the general direction of the government. If power became extremely concentrated, a small group could commit atrocities that most people would be appalled by. Many of the worst atrocities in human history were perpetrated by a small number of people who had unchecked power over their people (think of the Khmer Rouge murdering a quarter of all Cambodians between 1975 and 1979). We can think of two main ways that AI-enabled power concentration could lead to tyranny:
      • Malevolent — or just extremely selfish — humans could end up in power. Particularly for scenarios where power gets concentrated through AI-enabled power grabs, it seems quite likely that the sorts of humans who are willing to seize power will have other bad traits. They might actively want to cause harm.
      • Power corrupts. Even if those in power start out with good intentions, they’d have no incentive to continue to promote the interests of most people if their power were secure. Whenever other people’s interests became inconvenient, there would be a strong temptation to backtrack, and no repercussions for doing so.
    • It could lead us to miss out on really good futures. AI-enabled power concentration might not lead to tyranny in the most egregious sense: we might somehow end up with a benevolent dictator or an enlightened caste of powerful actors who keep an eye out for the rest of us. But even in this case, the future might be much less good than it could have been, because there’d be:
      • Injustice and disempowerment. AI-enabled power concentration would disempower the vast majority of people politically. From some philosophical perspectives,18 justice and political empowerment are intrinsically valuable, so this would make the future much less good.
      • Less diversity of values and ways of life. A narrower set of people in power means a narrower set of values and preferences get represented into the future. Again, from many perspectives this kind of diversity is intrinsically valuable.
      • Less moral reflection (maybe). Making good decisions about the future might require thinking deeply about what we value and what we owe to others. If power over the future is distributed, there’s a good chance that at least some people choose to reflect in this way — and there will be more disagreement and experimentation, which could prompt others to reflect too. But if power is extremely concentrated, those in charge might simply impose their current worldview without ever questioning it. This could lead to irreversible mistakes: imagine if the Victorians’ or the Romans’ moral blindspots had become permanent policy. If those in power happen to care about figuring out what’s right, power concentration could also lead to more moral reflection than would happen in a default world — but it would be limited to a narrow set of experiences and perspectives, and might miss important insights that emerge from broader human dialogue.

    Extreme AI-enabled power concentration would also probably be hard to reverse, making any harms very long-lasting. As is already the case, the powerful will try to hold onto their power. But AI could make it possible to do this in an extremely long-lasting way that hasn’t been possible historically:

    • Even if most people opposed an AI-powered regime, they might have even less power than historically disenfranchised groups have had to overturn it. If all economic and military activity is automated, humans won’t have valuable labour to withhold or compelling force to exert, so strikes and uprisings won’t have any bite.
    • Human dictators die, but a government run by AI systems could potentially preserve the values of a dictator or other human leader permanently into the future.
    • If power becomes so concentrated that there’s just one global hegemon, there won’t be any external threats to the regime.19

    These harms need to be weighed against the potential benefits from AI-enabled power concentration, like reducing competitive dynamics. We’re not certain how all of this will go down, but both our intuitions and the analysis above suggest that AI-enabled power concentration poses serious risks to human flourishing that we should work to avoid.

    There are ways to reduce this risk, but very few people are working on them

    Many people are working to prevent more moderate forms of power concentration. Considered broadly, a lot of the work that happens in governments, the legal system, and many parts of academia and civil society contributes to this.

    But very few are focused on the risk of extreme power concentration driven by AI — even though, if the above arguments are right, this is a very serious risk. We’re aware of a few dozen people at a handful of organisations who are working on reducing this risk, and even fewer who work on this full time. As of September 2025, the only public grantmaking round we know of on AI-enabled power concentration is a $4 million grant programme (though there’s more funding available privately).

    This is in spite of the fact that there are concrete things we could do now to reduce the risk. For example, we could:

    • Work on technical solutions to prevent people misusing massive AI workforces, like:
      • Training AI to follow the law
      • Red-teaming model specs (documents that AI systems are trained to follow which specify how they should behave) to make sure AIs are trained not to help with power grabs
      • Auditing models to check for secret loyalties
      • Increasing lab infosecurity to prevent tampering with the development process and unauthorised access, which would make it harder to insert secret loyalties or misuse AI systems20
    • Develop and advocate for policies which distribute power over AI, like:
      • Designing the terms of contracts between labs and governments to make sure no one actor has too much influence
      • Sharing access to the best AI capabilities widely whenever this is safe, and with multiple trusted actors like Congress and auditors when it isn’t, so that no actor has much more powerful capabilities than everyone else
      • Building datacentres in non-US democracies, to distribute the power to run AI systems amongst more actors
      • Mandating transparency into AI capabilities, how they are being used, model specs, safeguards and risk assessments, so it’s easier to spot concerning behaviour
      • Introducing more robust whistleblower protections to make it harder for insiders to conspire or for company executives to suppress the concerns of their workforces
      • All of the technical solutions above
    • Build and deploy AI tools that improve people’s ability to reason and coordinate, so they can resist epistemic interference

    To be clear, thinking about how to prevent AI-enabled power concentration is still at a very early stage. Not everyone currently working on this would support all of the interventions in that list, and it’s not clear how much of the problem would be solved even if we implemented the whole list. It might be that the structural forces pushing towards AI-enabled power concentration are too strong to stop.

    But it certainly doesn’t seem inevitable that power will become extremely concentrated:

    • It’s in almost everyone’s interests to prevent AI-enabled power concentration — including the interests of most powerful people today, since they have a lot to lose if they get out-competed.
    • It’s promising that we can already list some concrete, plausibly achievable interventions even though thinking about how to solve the problem is so early stage.

    There’s a lot more work to be done here than there are people doing the work.

    What are the top arguments against working on this problem?

    We’ve touched on these arguments in other places in this article, but we’ve brought them all together here so it’s easier to see what the weakest points are in the argument for prioritising AI-enabled concentration of power, and to go into a bit more depth.

    AI-enabled power concentration could reduce other risks from AI

    Some forms of power concentration could reduce various other risks from AI:

    • If there were no competition in AI development, the sole AI developer wouldn’t have competitive pressures to skimp on safety, which might reduce the risk of AI takeover. These competitive pressures are a major reason to worry that AI companies will race ahead without taking adequate AI safety precautions.
    • The risk of great power war would fall away if power became entirely concentrated in one country.
    • The risk of catastrophic misuse of bioweapons and other dangerous technologies would be much lower if only one actor had access to dangerous capabilities. The fact that AI could democratise access to extremely dangerous technology like bioweapons is one of the major reasons for concern about misuse.

    That said:

    • There are other ways to manage those risks. It’s not a choice between a benevolent dictatorship on the one hand and existential catastrophe from other AI risks on the other. Some combination of domestic regulation, international coordination, technical progress on alignment and control, and AI tools for epistemic security could allow us to navigate all of these risks.
    • The prospect of AI-enabled power concentration could also exacerbate other risks from AI. It’s one thing to imagine a world where power is already extremely concentrated. But the process of getting to that world might drastically increase the stakes of competition, and make powerful actors more willing to make risky bets and take adversarial actions, to avoid losing out.
    • Many interventions to reduce AI-enabled power concentration also help reduce other risks. There isn’t always a trade-off in practice. For example, alignment audits help reduce the risk of both power concentration and AI takeover, by making it harder for both humans and AIs to tamper with AI systems’ objectives. And sharing capabilities more widely could both reduce power differentials and allow society to deploy AI defensively: if we can safeguard AI models sufficiently, this needn’t increase risks from catastrophic misuse.

    Weighing up these risks is complicated, and we’re not claiming there aren’t tradeoffs here. We currently think it isn’t clear whether the effects of AI-enabled power concentration net out as helpful or harmful for other AI risks. Given that power concentration is an important and neglected problem in its own right, we think it’s still very worth working on. (But we would encourage people working on AI-enabled concentration of power to keep in mind that their actions might influence these other issues, and try to avoid making them worse.)

    The future might still be all right, even if there’s AI-enabled power concentration

    For the reasons we went into above, we think extremely concentrated power is likely to be bad. But even if you agree, there are some reasons to think a future with AI-enabled power concentration could still turn out all right on some metrics:

    • Material abundance: AI might generate such enormous wealth that most people live in material conditions that are far better than those of the very richest today. In a world with AI-enabled power concentration, people would be politically disempowered, but if the powerful chose to allow it, they could still be materially well-off.
    • Reduced incentives for repression and brutality: part of why autocracies repress their peoples is that their leaders are trying to shore up their own power. If power became so concentrated that leaders were guaranteed to remain in power forever, there’d no longer be rational incentives to do things like restrict freedom of speech or torture dissidents (but there’d still be irrational ones, like spite or fanatical ideologies).
    • Selection effects: while perhaps not likely, it’s possible that the people who end up in power would genuinely want to improve the world. Maybe getting into such a powerful position selects for people who are unusually competent, and maybe they assumed power reluctantly because people were racing to develop unsafe AI, and power concentration seemed like the lesser of two evils.

    Again, we don’t find these arguments particularly compelling, but believe they’re plausible enough to be worth considering and weighing.

    Efforts to reduce AI-enabled power concentration could backfire

    AI-enabled power concentration is a spicy topic, and efforts to prevent it could easily backfire. The more salient the risk of AI-enabled power concentration is, the more salient it is to power-seeking actors. Working to reduce AI-enabled power concentration could:

    • Galvanise opposition to interventions by those who stand to gain from power concentration.
    • Directly give power-seeking actors ideas, by generating and publicising information on how small groups could end up with large amounts of power.
    • Trigger a scramble for power. If everyone thinks that everyone else is trying to consolidate their power, they might be more likely to try to seize power for themselves to preempt this.

    Some interventions might also reduce the probability that one actor ends up with too much power, but by increasing the probability that another actor does. For example, increasing government oversight over AI companies might make company power grabs harder, but simultaneously make it easier for government officials to orchestrate a power grab.

    We do think that preventing AI-enabled power concentration is a bit of a minefield, and that’s part of why we think that for now, most people should be bearing the risk in mind rather than working on it directly. But there are ways of making this work less likely to backfire, like:

    • Being thoughtful and aware of backfire risks. If you don’t think you have good judgement on this sort of thing (or wouldn’t have anyone with good judgement to give you feedback), it’s probably best to work on something else.
    • Using frames and language which are less adversarial. For example, ‘power grabs’ seems spicier than ‘power concentration’ as a framing.
    • Focusing on kinds of work that are hard for power-seeking actors to misuse. For example, developing and implementing mitigations like transparency measures or alignment audits is harder for a power-seeking actor to make use of than detailed threat-modelling.

    Power might remain distributed by default

    Above, we argue that power could become extremely concentrated. But this isn’t inevitable, and the arguments may turn out to be wrong. For example:

    • AI capabilities might just not get that powerful. Maybe the ceiling on important capabilities like persuasion or AI R&D is quite low, so the effects of AI are less transformative across the board.
      • A particularly important variant of this is that maybe self-reinforcing dynamics from automating AI R&D will be weak, in which case there might be no intelligence explosion or only a small one. This would mean that no single AI developer would be able to get and maintain a big capabilities lead over other developers.
    • The default regulatory response (and the institutional setup in places like the US) might be enough to redistribute gains from automation and prevent misuse of big AI workforces. People with power today — which in democracies includes the electorate, civil society, and the media — will try very hard to maintain their own power against newcomers if they are able to tell what’s going on, and most people stand to lose from AI-enabled power concentration.
    • If people are worried that AI is misaligned, meaning that it doesn’t reliably pursue the goals that its users or makers want it to, this could both reduce the economic impacts of AI (because there’d be less deployment), and make power-seeking individuals less willing to use AI to attempt power grabs (because the AI might turn on them).

    We think that the probability that power becomes extremely concentrated is high enough to be very concerning. But we agree that it’s far from guaranteed.

    It might be too hard to stop AI-enabled power concentration

    On the flip side, it might turn out that AI-enabled power concentration is not worth working on because it is too difficult to stop:

    • The structural forces pushing towards AI-enabled power concentration could be very strong. For example, if there’s an enormous intelligence explosion which grants one AI developer exclusive access to godlike AI capabilities, then what happens next would arguably be at their sole discretion.
    • Most actors who could stand to gain from AI-enabled power concentration are already very powerful. They might oppose efforts to mitigate the risk, obfuscate what’s going on, and interfere with other people’s ability to coordinate against power concentration.

    That said, we don’t think that we should give up yet:

    • We don’t know yet how the structural dynamics will play out. We might be in a world where it is very possible to limit power concentration.
    • It’s in almost everyone’s interests to prevent AI-enabled power concentration — including the interests of most powerful people today, since most of them stand to lose out if one small group gains control of most important decisions. It might be possible to coordinate to prevent power concentration and make defecting very costly.
    • There are already some interventions to prevent AI-enabled power concentration that look promising (see above). If this area receives more attention, we may well find more.

    What can you do to help?

    Because so little dedicated work has been done on preventing extreme AI-enabled power concentration to date, there aren’t yet interventions that we feel confident about directing lots of people towards. And there certainly aren’t many jobs working directly on this issue!

    For now, our main advice for most people is to:

    • Bear the risk of AI-enabled power concentration in mind. We’re more likely to avoid AI-enabled power concentration if reasonable people are aware of this risk and want to prevent it. This is especially relevant if you work at an AI company or in AI governance and safety: policies or new technologies will often have knock-on effects on power concentration, and by being aware of this you might be able to avoid inadvertently increasing the risk.
    • Be sensitive to the fact that efforts to reduce this risk could backfire or increase other risks.

    There are also some promising early-stage agendas, and we think that some people could start doing good work on them already; we’d be really excited to see more people doing so.

    For more ideas, you can look at the mitigations sections of these papers on AI-enabled coups, gradual disempowerment, and the intelligence curse; as well as these lists of projects on gradual disempowerment. The field is still very early stage, so a key thing to do might just be to follow the organisations and researchers doing work in the area,22 and look out for ways to get involved.

    Learn more

    The problem of AI-enabled power concentration:

    How bad AI-enabled power concentration could be:

    Some mitigations for AI-enabled power concentration:

    Daniel Kokotajlo on what a hyperspeed robot economy might look like https://80000hours.org/podcast/episodes/daniel-kokotajlo-ai-2027-updates-china-robot-economy/ Mon, 20 Oct 2025 14:10:21 +0000