
Limitations of AI Systems on Explainability, Causality, and Ethics - Livestream

Alok Aggarwal

Today, there are more than fifty domains in which AI systems perform at least as well as humans, and hundreds more in which they help humans make better decisions. However, current AI systems suffer from several debilitating limitations. For example, even well-trained deep learning networks are often not robust: they can classify random images containing perturbed patterns as familiar objects (such as a king penguin or a starfish) with over 99% confidence. Medical doctors make mistakes too, but by and large we trust them. Unfortunately, trust mainly develops over time, although transparency and predictability can help.
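The fragility described above can be illustrated with a toy sketch (my own, not from the talk): even a simple linear classifier can be pushed from high confidence in one class to high confidence in the other by a per-pixel change of just 2%, in the spirit of the fast gradient sign method. The weights and inputs here are hypothetical stand-ins for a trained network and an image.

```python
import numpy as np

# Toy sketch (illustrative, not from the talk): a linear classifier's
# confidence can be flipped by a tiny per-pixel perturbation -- the same
# fragility that lets deep networks mislabel perturbed images.

rng = np.random.default_rng(0)
d = 1000                           # e.g., a flattened 1000-pixel image
w = rng.standard_normal(d)
w -= w.mean()                      # hypothetical trained weights, zero-sum

def confidence(x):
    """Sigmoid confidence that x belongs to the positive class."""
    return 1.0 / (1.0 + np.exp(-w @ x))

# A gray image nudged slightly toward the positive class:
x = 0.5 + 0.005 * np.sign(w)       # pixels remain within [0, 1]

# FGSM-style attack: shift each pixel by at most eps against the weights.
eps = 0.02
x_adv = x - eps * np.sign(w)

print(confidence(x))               # confidently positive
print(confidence(x_adv))           # near zero: flipped by a 2% pixel change
```

The attack succeeds because the per-coordinate changes, each tiny on its own, all align with the weight vector and so accumulate across every pixel; deep networks exhibit the same effect in high-dimensional input spaces.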

Whereas predictability usually requires that these systems do not falter under small perturbations, transparency requires openness, proper explanations, and useful interpretations. The latter requirements are particularly daunting since most AI algorithms act as “black boxes” and provide no explanations. This lack of “explainability” can be a stumbling block to adopting these systems, especially in health care, law and criminal justice, security, defense and military, product liability, and financial services.

In this talk, we first discuss the need for explainable AI (XAI) and the application domains that generally require it. Since explainability may not be easy to come by for current AI systems, we then discuss various research efforts toward achieving interpretable, causal, fair, and ethical AI systems.

Speaker: Alok Aggarwal, Scry Analytics

Register at weblink to receive connection information

Wednesday, 08/17/22






SF Bay Association of Computing Machinery, CA