
Commit

ai
bmann committed Jun 30, 2024
1 parent 9603975 commit 49683ca
Showing 3 changed files with 48 additions and 0 deletions.
20 changes: 20 additions & 0 deletions _notes/AI Interfaces.md
---
link: https://origamiparty.substack.com/p/ai-interfaces
tags:
- article
- AI
published: 2024-06-29
author:
- Silicon Jungle
---
## Deconstructed Media

> In order to be able to do faithful transformations across modalities, we need software that has an understanding of the media. Not just the pixel values or words used, but an understanding of what the objects, ideas, and composition are. We need world models.
## Creative Control

> One of the big gripes with AI tooling is that for most of it, we go from some high-level idea like “I want a picture of an old man with a cane” to a fully finished piece of media...This takes away all the autonomy from those doing the creative work and feels incredibly reductive.
>
> ...what we really need is tools that allow us to think about media at a higher level, to fill in the gaps and simplify tasks, without reducing our ability to express ourselves and get our hands dirty.
>
> We need tools that give us an intuitive sense of what it will output & the ability to steer it. Text will only get us so far.
9 changes: 9 additions & 0 deletions _notes/Silicon Jungle.md
---
title: James A
tags:
- person
- AI
- Vancouver
twitter: https://x.com/junglesilicon
---
Working on [[AI Interfaces]]
19 changes: 19 additions & 0 deletions _notes/Situational Blindness.md
---
link: https://situational-blindness.ai/
tags:
- AI
---
A response to [[Leopold Aschenbrenner]]'s [[Situational Awareness]]

[Both a PDF of the full essay and an audio version are available for your convenience.](https://situational-blindness.ai/#section10) If you like my work, please feel free to follow me on [@IridiumEagle](https://x.com/IridiumEagle)

> Leopold Aschenbrenner is a young man. He writes with the barely contained breathless enthusiasm of the true believer who is stretching out his hands to a crowd of onlookers, ready to pull them into giddy flights of intellect that he has trailed in the morning sky. He lets you know, right there, at the beginning, that you are soon to be an initiate to secrets only the elect few have reckoned with. As well put together as his multi-chapter writing is, its most interesting aspect is the insight it seems to lend into his psychology and that of his fellow aspirants. To reframe a line from the essay: if these are the attitudes of the people in charge of developing the world’s most advanced technology, we’re in for a wild ride.
> ...The domain he’s working in is in its infancy. His own experience is limited. He has trendlines but no context or real precedents; the precedent he chooses is flawed. He ignores wide swathes of crucial social, economic and political theory. His geopolitical sections are jingoistic caricatures that nonetheless read as self-assured as his technical sections. Despite writing chapters of text, he rushes to his conclusions.
- In [Part 2](https://situational-blindness.ai/#section2), we’ll examine how Leopold’s projections fail to consider any of the obvious social implications of the timelines he proposes, which will confound the projections themselves
- In [Part 3](https://situational-blindness.ai/#section3), we’ll look at how he sets up potential obstacles to his proposed timeline as straw men that he can blow over with mere intuition and builds a scary historical analogy based on a misapprehension of the way knowledge diffuses in his own field
- In [Part 4](https://situational-blindness.ai/#section4), we’ll review how his proposal for the government to subsidize the infrastructure of the US’ biggest and most profitable tech companies in the name of democracy would actually lead to a democratic collapse at home and a destabilization of democracies abroad
- In [Part 5](https://situational-blindness.ai/#section5), we’ll review how his proposed military-grade secrecy around both AGI and AI safety would greatly diminish global security in relation to AGI hacking to no purpose (as the US is an irredeemably soft target for nation state hackers), and how his favored foreign policy would unite the world against the US
- In [Part 6](https://situational-blindness.ai/#section6), we’ll use a lens of fragility to show how Leopold’s policy suggestions are more likely than any other policies to cause the very catastrophes he fears
- And in [Part 7](https://situational-blindness.ai/#section7), we’ll propose an alternative to his reductive and antidemocratic approach, one that has some chance of being successful
