Advances combining artificial intelligence techniques with computational neuroscience have shown that time-averaged neural responses in the primate visual and auditory systems can be modeled with reasonable accuracy by task-optimized deep neural networks. I'll discuss our lab's recent work to broaden and deepen these results, using convolutional recurrent networks to model the rodent somatosensory system and capture neural dynamics in the visual system. I'll also talk about attempts to plug the biggest hole in the task-optimized theory --- moving beyond unrealistic labeled supervision by creating self-supervised interactive agents that create powerful sensory representations --- and discuss the connection between these ideas and development. Moving beyond sensory systems, I'll describe models bridging to decision-making and memory, in the context of modular continual learning. Finally, I'll discuss how, taken together, these directions constitute one possible roadmap for the future of artificial intelligence in computational neuroscience.
Speaker: Dan Yamins, Stanford
Stanford, CA 94305