Important, neglected AI topics
Lukas Finnveden discusses some important but neglected AI topics that don’t fit easily within the usual conception of alignment:
- The potential moral value of AI.
- The potential importance of making AI behave cooperatively towards humans, other AIs, or other civilizations (whether it ends up intent-aligned or not).
- Questions about how human governance institutions will keep up if AI leads to explosive growth.
- Ways in which AI could cause human deliberation to get derailed, e.g. through powerful persuasion abilities.
- Positive visions of how we could get onto a good path toward becoming a society that makes wise and kind decisions about what to do with the resources available to us, including how AI could help with this.
I’m currently trying to figure out useful research projects on AI that speak to my comparative advantages (whatever those may be), so I’m interested in exploring suggestions like these.
There’s already some good work on the potential moral value of AI (e.g. by Bostrom, and also this report on consciousness in AI). I’m not sure how much I have to add to this, though it’s certainly an area I’d like to keep up with.
I’ve been thinking a bit about cooperation as a useful framing for AI alignment and safety, particularly in the context of cultural evolution, though I haven’t made much progress so far. I also wonder how valuable this work is if we don’t get intent alignment (I’m intuitively skeptical, though perhaps that’s wrong), and I’m not sure I’m ready to take the plunge on wacky multiverse-wide cooperation stuff.
The question of how governance institutions will keep up with explosive AI growth is probably better suited for someone with more of a social science background.
I haven’t spent much time thinking about the dangers of AI persuasion; that’s probably something I should read up on.
I’m definitely excited about creating positive visions of what an AI future could look like. This is something to pursue further, though I’m unsure where to start. Holden Karnofsky has some suggestions here.