papers
These are my notes from research papers I read. Each page’s title is also a link to the abstract or PDF.
I wrote this paper with Stephen Betts, a friend of mine doing Mormon Studies at the University of Virginia. (Click the title to download the preprint.) Our interest in the topic was initially piqued when we stumbled across the Wilford Woodruff AI Learning Experience, and this paper explores the unique intersection of Mormonism, transhumanism, and machine learning that makes such a thing possible.
We’re currently looking for a venue to publish this work. Any feedback on the preprint is appreciated!
I will present this paper in the FATE (fairness, accountability, transparency, ethics) reading group tomorrow (2023-10-25). You can view the slides I’ll use here.
There are unresolved tensions in the algorithmic ethics world. Here are two examples:
Is inclusion always good? Gebru: “you can’t have ethical A.I. that’s not inclusive… [a]nd whoever is creating the technology is setting the standards.” Nelson: “… I struggle to understand why we want to make black communities more cognizant in facial recognition systems that are disproportionately used for surveillance.”

Should academics be activists? O’Neil asks why there is a lack of academic effort to inform policymakers and regulators. PERVADE responds that academics have been doing this work for a while, but it is underfunded, marginalized, and at odds with a US political apparatus generally favorable towards Silicon Valley.

Ethics manifestos and value statements mask these tensions behind a business-ethics lens.
I presented this paper in Bang Liu’s research group meeting on 2023-09-25. You can view the slides I used here.
I presented this paper in Bang Liu’s research group meeting on 2023-07-24. You can view the slides I used here.
According to Ken Schutte, the authors seem to have made a mistake that inflated the scores for the multilingual experiments.
I presented this paper in Bang Liu’s research group meeting in two installments, on 2023-02-20 and 2023-02-27, and also in Irina Rish’s scaling and alignment course (IFT6760A) on 2023-03-07. You can view the slides I used here.

The thumbnail for this post was generated with Stable Diffusion! See the alt text for details.
Behind each vision for ethically-aligned AI sits a deeper question. How are we to decide which principles or objectives to encode in AI—and who has the right to make these decisions—given that we live in a pluralistic world that is full of competing conceptions of value? Is there a way to think about AI value alignment that avoids a situation in which some people simply impose their views on others?