SCS Special Seminar

Tuesday, April 6, 2021 – 12:00pm to 2:00pm

Location:

Virtual Presentation (Eastern Time) – Remote Access via Zoom

Speaker:

Vincent Conitzer, Kimberly J. Jenkins Distinguished University Professor of New Technologies, Duke University (https://users.cs.duke.edu/~conitzer/)

New Design Decisions for Modern AI Agents

Consider an intelligent virtual assistant such as Siri, Cortana, or Alexa, or perhaps a more capable future version.  Should we think of all of Siri as one big agent?  Or is there a separate agent on every device, each with its own objectives and/or beliefs?  And what should those objectives and beliefs be?  Such questions reveal that the traditional, somewhat anthropomorphic model of an AI agent — with clear boundaries, centralized belief formation and decision making, and a clear given objective — falls short for thinking about today’s AI systems.

We need better methods for specifying the objectives that these agents should pursue in the real world, especially when their actions have ethical implications.  I will discuss some methods that we have been developing for this purpose, drawing on techniques from preference elicitation and computational social choice.  But we need to specify more than objectives.  When agents are distributed, systematically forget what they knew before (say, for privacy reasons), and/or potentially face copies of themselves, it is no longer obvious what the correct way is even to do probabilistic reasoning, let alone to make optimal decisions.  I will explain why this is so and discuss our work on doing these things well.
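To make the computational social choice angle concrete, here is a minimal, purely illustrative sketch (not the methods described in the talk): stakeholders rank candidate objectives for an assistant, and a standard voting rule, the Borda count, aggregates the rankings into a single objective. The stakeholder names and candidate objectives are hypothetical.

```python
# Illustrative sketch: aggregating stakeholder rankings over candidate
# objectives with the Borda count, a standard rule from computational
# social choice. The specific objectives and rankings are made up.

from collections import defaultdict

def borda_winner(rankings):
    """Each ranking lists alternatives from most to least preferred.
    With m alternatives, the i-th ranked alternative (0-indexed) gets m - 1 - i points."""
    scores = defaultdict(int)
    for ranking in rankings:
        m = len(ranking)
        for position, alternative in enumerate(ranking):
            scores[alternative] += m - 1 - position
    winner = max(scores, key=scores.get)
    return winner, dict(scores)

# Hypothetical stakeholder rankings over candidate objectives for an assistant.
stakeholder_rankings = [
    ["privacy", "helpfulness", "engagement"],   # stakeholder A
    ["helpfulness", "privacy", "engagement"],   # stakeholder B
    ["helpfulness", "engagement", "privacy"],   # stakeholder C
]

winner, scores = borda_winner(stakeholder_rankings)
print(winner, scores)  # helpfulness wins: {'privacy': 3, 'helpfulness': 5, 'engagement': 1}
```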

Vincent Conitzer is the Kimberly J. Jenkins Distinguished University Professor of New Technologies and Professor of Computer Science, Professor of Economics, and Professor of Philosophy at Duke University. He is also (part-time in summers and for a fixed term) Head of Technical AI Engagement at the Institute for Ethics in AI, and Professor of Computer Science and Philosophy, at the University of Oxford.  Conitzer works on artificial intelligence (AI). Much of his work has focused on AI and game theory, for example designing algorithms for the optimal strategic placement of defensive resources. More recently, he has started to work on AI and ethics: how should we determine the objectives that AI systems pursue, when these objectives have complex effects on various stakeholders?

Conitzer has received the 2021 ACM/SIGAI Autonomous Agents Research Award, the Social Choice and Welfare Prize, a Presidential Early Career Award for Scientists and Engineers (PECASE), the IJCAI Computers and Thought Award, an honorable mention for the ACM dissertation award, and several awards for papers and service at the AAAI and AAMAS conferences. He has also been named a Guggenheim Fellow, a Sloan Fellow, an ACM Fellow, a AAAI Fellow, and one of AI's Ten to Watch. He has served as program and/or general chair of the AAAI, AAMAS, AIES, COMSOC, and EC conferences. Conitzer and Preston McAfee were the founding Editors-in-Chief of the ACM Transactions on Economics and Computation (TEAC).

Faculty Host:  Nihar Shah, Computer Science Department

Zoom Participation. See announcement.

Keywords:

Special Seminar