Commit
Situational Awareness
bmann committed Jun 27, 2024
1 parent 84f6291 commit 87c3778
Showing 2 changed files with 48 additions and 0 deletions.
2 changes: 2 additions & 0 deletions _notes/Open Source Beyond Licensing - The Evolution Ahead.md
@@ -48,6 +48,8 @@ The presentation below is exported from Keynote and the links are clickable, but
Not included in the presentation, but [[Open Source is a restaurant]] is a useful related read on how to think about paying for open source.

The concept of [[open source as a job]] is predicated on more global participation in software.

In discussion, I mentioned the [[Situational Awareness]] set of essays as American propaganda, but important to understand for future directions in AI.
### Presentation

<iframe src="/assets/2024/06/26/open-source-beyond-licensing/" width="100%" height="650px">
46 changes: 46 additions & 0 deletions _notes/Situational Awareness.md
@@ -1,3 +1,49 @@
---
link: https://situational-awareness.ai/
tags:
- article
published: 2024-06-01
author:
- Leopold Aschenbrenner
---
I consider this article to be a form of American propaganda. There are a number of things in here I agree with, or believe may happen, but don't necessarily _want_ to have happen.

One of the questions to ask yourself is whether you want corporations, the US military, or other actors to have this sort of power.

You can download the full contents as a single [PDF](https://situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf).
## Introduction [this page]

History is live in San Francisco.
## I. [From GPT-4 to AGI: Counting the OOMs](https://situational-awareness.ai/from-gpt-4-to-agi/)

AGI by 2027 is strikingly plausible. GPT-2 to GPT-4 took us from ~preschooler to ~smart high-schooler abilities in 4 years. Tracing trendlines in compute (~0.5 orders of magnitude or OOMs/year), algorithmic efficiencies (~0.5 OOMs/year), and “unhobbling” gains (from chatbot to agent), we should expect another preschooler-to-high-schooler-sized qualitative jump by 2027. 

## II. [From AGI to Superintelligence: the Intelligence Explosion](https://situational-awareness.ai/from-agi-to-superintelligence/ "From AGI to Superintelligence: the Intelligence Explosion")

AI progress won’t stop at human-level. Hundreds of millions of AGIs could automate AI research, compressing a decade of algorithmic progress (5+ OOMs) into ≤1 year. We would rapidly go from human-level to _vastly_ superhuman AI systems. The power—and the peril—of superintelligence would be dramatic.

## III. The Challenges

### IIIa. [Racing to the Trillion-Dollar Cluster](https://situational-awareness.ai/racing-to-the-trillion-dollar-cluster/ "Racing to the Trillion-Dollar Cluster")

The most extraordinary techno-capital acceleration has been set in motion. As AI revenue grows rapidly, many trillions of dollars will go into GPU, datacenter, and power buildout before the end of the decade. The industrial mobilization, including growing US electricity production by 10s of percent, will be intense. 

### IIIb. [Lock Down the Labs: Security for AGI](https://situational-awareness.ai/lock-down-the-labs/ "Lock Down the Labs: Security for AGI")

The nation’s leading AI labs treat security as an afterthought. Currently, they’re basically handing the key secrets for AGI to the CCP on a silver platter. Securing the AGI secrets and weights against the state-actor threat will be an immense effort, and we’re not on track. 

### IIIc. [Superalignment](https://situational-awareness.ai/superalignment/ "Superalignment")

Reliably controlling AI systems much smarter than we are is an unsolved technical problem. And while it is a solvable problem, things could easily go off the rails during a rapid intelligence explosion. Managing this will be extremely tense; failure could easily be catastrophic.

### IIId. [The Free World Must Prevail](https://situational-awareness.ai/the-free-world-must-prevail/ "The Free World Must Prevail")

Superintelligence will give a decisive economic and military advantage. China isn’t at all out of the game yet. In the race to AGI, the free world’s very survival will be at stake. Can we maintain our preeminence over the authoritarian powers? And will we manage to avoid self-destruction along the way?

## IV. [The Project](https://situational-awareness.ai/the-project/)

As the race to AGI intensifies, the national security state will get involved. The USG will wake from its slumber, and by 27/28 we’ll get some form of government AGI project. No startup can handle superintelligence. Somewhere in a SCIF, the endgame will be on. 

## V. [Parting Thoughts](https://situational-awareness.ai/parting-thoughts/ "Parting Thoughts")

What if we’re right?
