Reliable Multimodal Artificial Intelligence
On Monday, March 29, at 6 pm, Alexander von Humboldt Professor Marcus Rohrbach will give a talk on "Reliable Multimodal Artificial Intelligence".
The talk will take place online.
Abstract: The recent success of the AI research community has been driven by deep learning and has led to a step-change improvement in average performance on standard benchmarks. However, existing models often lag behind on cases that are underrepresented in the training data. Specifically, models can be biased, have difficulty transferring to rare examples, and frequently forget quickly what they have previously learned. Typically, they are not well calibrated and do not know when they are wrong and when they are right. Marcus Rohrbach's ongoing, long-term research goal is to build reliable multimodal models: on the one hand, to improve model performance on these hard cases and, on the other, to give models a form of self-awareness so that they know when they may fail. This means, for example, that reliable models are able to understand rare and novel situations through explicitly built-in compositionality. When learning new concepts, reliable models do not forget their past experiences: they learn continually without changing the important decision paths. Importantly, models should not only make predictions but also communicate whether they are certain of their answer; for example, implicit reasoning in standard deep models can be verified with explicit symbolic reasoning over facts drawn from large-scale knowledge bases.
In this talk, Marcus Rohrbach will discuss his current progress in building such reliable models and open a discussion on how this work can be connected to how the brain handles such scenarios.
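To make the calibration point from the abstract concrete, here is a minimal sketch (an illustration of the general concept, not material from the talk) of the expected calibration error, a standard way to quantify whether a model's confidence matches its actual accuracy:

```python
# Minimal sketch (not from the talk): what "well calibrated" means in practice.
# A model is calibrated if, among predictions made with confidence ~p,
# a fraction ~p is actually correct. Expected Calibration Error (ECE)
# measures the gap between confidence and accuracy, averaged over bins.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """confidences: max softmax probability per prediction (shape [N]);
    correct: 1 if the prediction was right, else 0 (shape [N])."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight each bin by its share of samples
    return ece

# Example: a model that reports ~95% confidence but is right only ~60% of the
# time has a large ECE, i.e. it "doesn't know when it is wrong".
rng = np.random.default_rng(0)
conf = rng.uniform(0.9, 1.0, size=1000)
corr = rng.random(1000) < 0.6
print(f"ECE: {expected_calibration_error(conf, corr):.3f}")  # roughly 0.35
```

A reliable model in the sense of the abstract would keep this gap small, so that its stated confidence can be trusted as a signal of when it may fail.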
