The Alignment Problem - Livestream

The real risk with AI is not that it will "turn on us"; the danger has always been of the "be careful what you wish for" variety. The real risk is the Midas Curse: that we build a powerful system able to learn by example, but are somehow imprecise or inexact in what we ask it to do. The AI community is rife with cautionary tales of this kind, and as these systems become more powerful and more omnipresent, the stakes only go up. The field has a term for it: the alignment problem. Until fairly recently, concerns about the alignment problem were considered fringe. What brought the fear of it to the fore?

If you could get across one main idea about AI to politicians and policymakers, what would it be? AI will affect every single one of us, whether we are judges, doctors, teachers, lawmakers, activists, or artists. Part of what I set out to do was offer a nontechnical curriculum of contemporary machine learning, its open problems, and its active areas of research. In short order it will become increasingly, if uncannily, normal to make verbal requests of our machines, and a new genre of computer bug will emerge that is less like an error and more like a misunderstanding. It will be, of course, the alignment problem. And, for better or worse, we will soon be accustomed to encountering it firsthand.

Jennifer Pahlka, founder of Code for America and former deputy CTO of the United States, says: “The Alignment Problem should be required reading for anyone influencing policy where algorithms are in play - which is everywhere. But unlike much required reading, the book is a delight to read, a playful romp through personalities and relatable snippets of science history that put the choices of our present moment into context.”

In this AI engineers meetup, I will discuss:

• Nature of expertise - Why an eighty-year history of comparing statistical models to human experts suggests we have no choice but to use the models, and what it teaches us about the true nature of human expertise.

• Machine learning models - Why the bleeding edge of machine learning is not about making models more complex than ever before, but rather about making models simpler than ever before, and how this might make AI's "black box" problem a moot point.

• Inference - Why an open frontier in AI (and, some believe, the crux of making AI safe) is systems that can infer the things we want even when those things are difficult or impossible for us to state directly. In fact, they can figure out what we want even when we have no idea how to demonstrate it; all we need is to be able to know it when we see it (a short illustrative sketch follows this list).

• Decision making with uncertainty - One of the most safety-critical capacities for any decision maker - human or machine - is uncertainty: the ability to know when you don't know. Early AI systems had a well-deserved reputation for brittleness, confident in their outputs even when those outputs were essentially random. And, tragically, real people have lost their lives to self-driving cars that, faced with wild uncertainty about what they were seeing, failed to slow down. The next generation of AI systems, from medical diagnosis to robotics to cars, involves systems that know when they don't know - and that won't take an irrevocable action unless they're sure (a second sketch after this list illustrates the idea).
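
To make the inference point concrete, here is a minimal illustrative sketch - my own, not code from the talk or the book - of learning what someone wants purely from pairwise preferences: the teacher never states the objective, they only pick which of two options looks better. The names and numbers (learn_reward, the linear reward, the toy features) are assumptions made for the example; a Bradley-Terry style model stands in for the more sophisticated methods the talk covers.

```python
# Illustrative only: infer a hidden objective from "I know it when I see it"
# judgments. The teacher never writes down a reward function; they only pick
# the preferred option in each pair. We fit a linear reward with a
# Bradley-Terry model via gradient ascent. (Names and values are assumptions.)
import numpy as np

rng = np.random.default_rng(0)

def learn_reward(pairs, dim, steps=2000, lr=0.1):
    """pairs: (features_a, features_b, prefer_a) triples, prefer_a in {0, 1}."""
    w = np.zeros(dim)
    for _ in range(steps):
        for fa, fb, prefer_a in pairs:
            # P(a preferred over b) under the current reward estimate w . x
            p_a = 1.0 / (1.0 + np.exp(-(w @ fa - w @ fb)))
            # gradient of the log-likelihood of the observed preference
            w += lr * (prefer_a - p_a) * (fa - fb)
    return w

# A "true" preference the teacher could never articulate directly.
true_w = np.array([2.0, -1.0, 0.5])
options = [rng.normal(size=3) for _ in range(40)]
pairs = [(a, b, float(true_w @ a > true_w @ b))
         for a, b in zip(options[::2], options[1::2])]

w_hat = learn_reward(pairs, dim=3)
print("recovered preference direction:", w_hat / np.linalg.norm(w_hat))
print("true preference direction:     ", true_w / np.linalg.norm(true_w))
```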

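And a rough sketch - again my own illustration, with made-up data and thresholds - of the "know when you don't know" capacity from the last bullet: a small ensemble of models is trained on resampled data, and the system acts only when the ensemble is collectively confident; otherwise it abstains rather than take an irrevocable action.

```python
# Illustrative only: an agent that "knows when it doesn't know". A small
# ensemble of logistic models is trained on bootstrap resamples; the system
# acts only when the ensemble's averaged confidence is high, and otherwise
# abstains (defers, slows down, asks a human). Data and thresholds are made up.
import numpy as np

rng = np.random.default_rng(1)

def fit_logistic(X, y, steps=500, lr=0.5):
    Xb = np.hstack([X, np.ones((len(X), 1))])     # add a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)         # gradient ascent on log-likelihood
    return w

def prob(w, x):
    return 1.0 / (1.0 + np.exp(-(w @ np.append(x, 1.0))))

# Toy data: the label is determined by the sign of the first coordinate.
X = rng.normal(size=(200, 2)) + np.where(rng.random(200) < 0.5, 2.0, -2.0)[:, None]
y = (X[:, 0] > 0).astype(float)

ensemble = []
for _ in range(10):
    idx = rng.integers(0, len(X), len(X))         # bootstrap resample
    ensemble.append(fit_logistic(X[idx], y[idx]))

def decide(x, confidence=0.9):
    p = np.mean([prob(w, x) for w in ensemble])   # ensemble-averaged belief
    if p >= confidence:
        return "act: class 1"
    if p <= 1 - confidence:
        return "act: class 0"
    return "abstain: not sure enough to take an irrevocable action"

print(decide(np.array([3.0, 0.5])))   # deep inside one cluster: confident
print(decide(np.array([0.0, 0.0])))   # right on the boundary: should abstain
```
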
Speaker: Brian Christian, UC Berkeley

Register at weblink to obtain connection information

Monday, 11/23/20

Cost: Free

SF Bay Association for Computing Machinery

