Learning Boolean Semantics from Questions and Answers, and Information Theory Driven Regularization in Neural Networks

"Learning Boolean Semantics from Questions and Answers" (Jason Freeman)
Learning word meanings from examples of language use is an important paradigm for both human learning and machine learning. My project explores how well a system can learn the meanings of logical boolean connectives from example utterances in a variety of contexts. I establish that it is straightforward to learn boolean semantics from purely descriptive sentences, as well as from answers to questions. I will then probe the extent to which these semantics are learnable from questions alone, and discuss results suggesting that this is not possible.
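
To make the learning setup concrete, here is a toy sketch of one way such learning could work (my illustration, not the project's actual system): treat each of the 16 possible binary boolean connectives as a candidate truth table, and eliminate every candidate that is inconsistent with observed truth-value judgments of utterances in context.

    from itertools import product

    # All 16 binary boolean connectives, each a truth table over (p, q).
    INPUTS = list(product([False, True], repeat=2))
    CANDIDATES = {bits: dict(zip(INPUTS, map(bool, bits)))
                  for bits in product([0, 1], repeat=4)}

    def learn(examples):
        """examples: ((p, q), truth) pairs observed for utterances 'p CONN q'."""
        surviving = dict(CANDIDATES)
        for (p, q), truth in examples:
            # Keep only the candidate connectives consistent with this example.
            surviving = {bits: table for bits, table in surviving.items()
                         if table[(p, q)] == truth}
        return surviving

    # Observations that single out conjunction ("and").
    obs = [((True, True), True), ((True, False), False),
           ((False, True), False), ((False, False), False)]
    print(learn(obs))  # exactly one table survives: logical AND

With enough fully descriptive examples, elimination converges to a unique connective; the interesting question the talk raises is what happens when the evidence is indirect, as with questions.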

"Information Theory Driven Regularization in Neural Networks" (Allen Nie)
Neural networks have been among the most effective point-estimate probabilistic models to date. Information theory, developed by Claude Shannon, analyzes probabilistic systems as black boxes through entropy and mutual information. Recent work by Shwartz-Ziv and Tishby has demonstrated that the training of a neural network can be understood as an information bottleneck procedure: learning a minimal sufficient statistic T(X) of the input X while remaining maximally informative about the target Y. This work hopes to use their theoretical observations to derive regularization methods for training neural networks on real-world tasks.
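
As a concrete illustration of what an information-theoretic regularizer can look like, here is a minimal sketch of my own, in the spirit of the variational information bottleneck (Alemi et al., 2016) rather than the speaker's actual method: a stochastic representation T is penalized by its KL divergence from a standard normal prior, a tractable upper bound on I(X; T), while the usual task loss keeps T informative about Y.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class VIBClassifier(nn.Module):
        def __init__(self, in_dim=784, bottleneck=32, n_classes=10):
            super().__init__()
            self.encoder = nn.Linear(in_dim, 2 * bottleneck)  # mean and log-variance of T
            self.decoder = nn.Linear(bottleneck, n_classes)

        def forward(self, x):
            mu, logvar = self.encoder(x).chunk(2, dim=-1)
            # Reparameterized sample of the stochastic representation T.
            t = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
            return self.decoder(t), mu, logvar

    def ib_loss(logits, y, mu, logvar, beta=1e-3):
        ce = F.cross_entropy(logits, y)  # keeps T predictive of Y
        # KL(N(mu, sigma^2) || N(0, I)): an upper bound on I(X; T).
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(-1).mean()
        return ce + beta * kl

    model = VIBClassifier()
    x, y = torch.randn(8, 784), torch.randint(0, 10, (8,))
    logits, mu, logvar = model(x)
    loss = ib_loss(logits, y, mu, logvar)
    loss.backward()

The coefficient beta trades off compression of X against predictiveness of Y, which is exactly the tension the information bottleneck analysis of training makes explicit.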

Monday, 05/15/17

Cost:

Free

Stanford Symbolic Systems Forum

Margaret Jacks Hall
460-126
Stanford, CA 94305
