My research focusses on human visual perception of materials and objects. Which visual cues does the brain use to identify the physical properties of the things and stuff in our surroundings? Which learning processes shape these representations? How does spatiotemporal context alter our perception of surfaces and objects? How do we reason about and visually predict physical events, like bouncing objects or oozing liquids? How do our percepts evolve over time when viewing complex dynamical processes, and how do gain control and prediction contribute? How do we plan and execute effective interactions with objects and materials? To answer questions like these, my lab uses a combination of visual psychophysics, motion tracking, computer graphics, image analysis, and computational modelling, including deep learning approaches.