ethics

Army of none: autonomous weapons and the future of war

The examples in this book make it clear that there is no easy line to draw between autonomous and non-autonomous weapons (and, by extension, autonomous AI agents). There is a smooth gradient of autonomy, which makes the question of whether to allow autonomous weapons much more nuanced. Higher-level alignment probably becomes important in proportion to a system's level of autonomy and intelligence. The author analyzes the Patriot fratricides (in a military context, fratricide means the killing of someone on the same side of a conflict) and ends up blaming the individuals involved, attributing the failures to automation bias. I would say that these humans were set up to fail by their training and by the functioning of the system: they were expected to judge whether the computer was right, with only seconds to do so. He acknowledges this later when he discusses Aegis.
Read more

Artificial intelligence, values, and alignment

I presented this paper in Bang Liu's research group meeting in two installments, on 2023-02-20 and 2023-02-27, and also in Irina Rish's scaling and alignment course (IFT6760A) on 2023-03-07. You can view the slides I used here. (The thumbnail for this post was generated with Stable Diffusion! See the alt text for details.) Behind each vision for ethically-aligned AI sits a deeper question. How are we to decide which principles or objectives to encode in AI—and who has the right to make these decisions—given that we live in a pluralistic world that is full of competing conceptions of value? Is there a way to think about AI value alignment that avoids a situation in which some people simply impose their views on others?
Read more

Unsolved problems in ML safety

This was a paper we presented in Irina Rish's neural scaling laws course (IFT6760A) in winter 2023. You can view the slides we used here, and the recording here (or my backup here).

ethics drift within bubbles

Here are some snippets from a Lex Fridman interview with John Abramson, an outspoken critic of Big Pharma.

Lex: Are people corrupt? Are people malevolent? Are people ignorant that work at the low level and at the high level, at Pfizer for example? How is this possible? I believe that most people are good, and I actually believe if you join Big Pharma your life trajectory often involves dreaming, wanting, and enjoying helping people. And then we look at the outcomes that you're describing, and that's why the narrative takes hold that Pfizer CEO Albert Bourla is malevolent. The sense is that these companies are evil. So if the different parts are people that are good and they want to do good, how are we getting these outcomes?
Read more

Tools and weapons: the promise and peril of the digital age

I started taking notes later in the book. There were lots of good insights in the first half. Sorry!

broadband access

Getting the internet to rural communities is a big deal for the rural economy. Just like electricity, it's something that needs government support, because there isn't the economic incentive for ISPs to reach some of these locations.

ethical AI

The focus on AI now is not just a fad, but a convergence of several trends that have made AI the next logical step: increased computational resources, flexible access to compute through the cloud, etc.
Read more