For a socially beneficial and responsible development of AI


These are my notes from a conversation between Yoshua Bengio and Kate Crawford, held at Mila on 2023-03-20, announcing the release of Missing links in AI governance (link), a new book published as a joint report between Mila and UNESCO. There were news articles in French (Le Devoir), but unfortunately not as many in English (Datanami).

Bengio: What motivates you to do what you do? This topic has turned from an academic question into a societal one really quickly, and it’s scary. We need to figure out what to do.

Crawford: In AI we often think about data/algorithms/infrastructure without looking at the material impacts of these systems. AI is profoundly extractive by nature: data laborers, lithium mines, the environmental impacts of planetary-scale computation.

Crawford: Quote from the book: “The myth of data collection as a benevolent practice in computer science has obscured its operations of power, protecting those who profit most while avoiding responsibility for its consequences.” See also 21 lessons for the 21st century by Yuval Harari.

Bengio: Steering private unaligned companies through governance has been good, but not enough. These systems are becoming more powerful. We may be hitting a wall. AI may well become powerful enough one day that it could be as destructive as nuclear bombs. Whose hands are these tools going to be in? It’s a big danger for democracy.

Crawford: What we have, instead of the international inspection system of nuclear systems, is a few core companies doing the research and building the systems. It’s really hard to regulate these systems because you can’t inspect them as easily.

Bengio: We need international cooperation both on regulation and on development. We need a “CERN” for AI research. Then we can build these massive neural nets that will be owned by all of us.

Crawford: What keeps me up at night is how fast this is moving, compared to how fast governance is used to moving. Regulators move slowly by design, in order to listen to constituents and establish consensus. The EU is about to pass an AI act. Canada might move too. The US is radically depressing in comparison.

Bengio: Canada is moving on this a bit. One of the benefits of what Canada is trying to do: principles-based law, with adaptive specific regulation.

Crawford: If the only solution is to use market dynamics, that would suck.

Bengio: Write regulation in ways that incentivize companies to think about the bad things that could happen.

Crawford: We need social scientists and humanists thinking about what kinds of regulatory structures would provide reliable forms of accountability. So little investment has gone into these humanistic questions, compared to the massive amounts going to the technical research.

Bengio: We need to go even further. We need to rethink our societies, both nationally and globally. This misalignment of capitalism becomes more and more of a problem as these tools become more powerful. What we have now is a race to the bottom in terms of ethics. Are there other political or economic models that are alternatives to what we have now? If we don’t do this, humanity and civilization won’t be here in 50 years. And that’s not even thinking about pandemics or climate change, which our system is not capable of effectively dealing with. We should invest in serious research about what our options are for building a different world.

Valerie (moderator): We need to think about this stuff at Mila too.

Bengio: We need to value democracy, and to do that we have to value truth.

Q&A

Q: Is OpenAI’s approach (that of no longer publishing anything) a valid approach to “innovating on regulation”?

Crawford: It’s anti-democratic and opacity will make it hard for regulators.

Q: Facebook is a counterexample to how fast regulation can work. What can we do?

Crawford: Lobbying is slowing down regulatory will. If we’re going to see any change, it’ll come from the fact that AI is so general-purpose and will affect you whether you choose to participate or not. Still a very real problem thanks to lobbying.

Q: What about the people working at these companies?

Bengio: Most of these people want to do good. We need to nourish that culture and focus on social impact in academia in order to have an influence. That’s not enough, obviously. It’s the responsibility of government to put up the right guardrails.

Crawford: Unionization and employee advocacy about weapons work are signals that employees are thinking about this.

Q: How do we address the crises that are happening now?

Crawford: The system card itself acknowledges that chatbots can concretize ideas and ideologies. All of these harms are the direction AI will go unless we put in place very clear ways of stopping it.

Q: What about third party regulation or inspection?

Bengio: Third-party inspectors can do good work, but they’re paid by the very companies they inspect.

Q: What about open-source community-driven efforts trying to match the private efforts? Could regulation make these efforts harder to do?

Bengio: There are community efforts but they don’t have enough capital to bring things to the scale we’re seeing at OpenAI. Governments need to understand that we need to collectively invest in alternatives to decentralize power and make it work for everyone. We have to be careful, for the same reasons OpenAI says it’s hesitant to release its model details.

Q: Lots of mention of the extractive nature of AI development. Do you see AI having the potential to be restorative?

Bengio: This is really important. Specifically target innovations that will distribute power. AI that democratizes education, if done right, has a huge potential.

Q: These systems and companies (big tech) are providing services that are traditionally being provided by governments. How can we provide a check on that? Also, how can we build on intergovernmental efforts?

Crawford: This small group of tech companies is doing this intentionally. Publicly owned services are being “enclosed” upon by the private sector: “These are public goods, but we’re going to capitalize on them.” The public needs to decide whether some things should be nationalized. The problem with the Global Partnership on AI and similar bodies is that they lack enforcement ability. The UN and other organizations are losing strength and respect.