alignment
In late March 2023, the NYT released a series of explainer articles about AI. The first article in the series (also available on Archive.org if you don’t have a subscription) characterizes the recent history of AI as a progression of new technological ideas appearing over time. Of course that’s partially true, but it gets the order wrong and misses important non-technical events that are key to understanding our current position.
Read more

These are some thoughts I’ve had while listening to a Lex Fridman interview with Edward Frenkel, a mathematician at UC Berkeley working on mathematical quantum physics.
In the information age, we like to see everything as computation. But what do we mean when we say that something is computation? We mean that a physical system with predictable interactions produces a meaningful result. If we somehow learned that the universe was computational in nature, the only thing that would add is that the universe’s state is somehow meaningful.
Read more

These are my notes from a conversation between Yoshua Bengio and Kate Crawford held at Mila on 2023-03-20, announcing the release of a new book created as a joint report between Mila and UNESCO called Missing links in AI governance (link). There was news coverage in French (Le Devoir), but unfortunately less in English (Datanami).
Bengio: What motivates you to do what you do? This topic has turned from an academic question to a societal one really quickly, and it’s scary.
Read more

The examples in this book make it clear that there is no easy line we can draw between autonomous and non-autonomous weapons (and by extension, autonomous AI agents). There is a smooth gradient of autonomy, which makes the question of allowing autonomous weapons much more nuanced. Higher-level alignment likely becomes important in proportion to a system’s autonomy and intelligence.
He analyzes the Patriot fratricides (in a military context, the word fratricide means the killing of someone on the same side of a conflict).
Read more

I presented this paper in Bang Liu’s research group meeting in two installments on 2023-02-20 and 2023-02-27, and also in Irina Rish’s scaling and alignment course (IFT6760A) on 2023-03-07. You can view the slides I used here. (The thumbnail for this post was generated with Stable Diffusion! See the alt text for details.)
Behind each vision for ethically-aligned AI sits a deeper question. How are we to decide which principles or objectives to encode in AI—and who has the right to make these decisions—given that we live in a pluralistic world that is full of competing conceptions of value?
Read more