Computational Models of Affordance for Robotics


Organizers: Philipp Zech, Barry Ridge and Emre Ugur

Website: https://afford.gitlab.io/rss-workshop/

Gibson's theory of affordance, with its adherence to bottom-up direct perception, is antithetical to the top-down inferential models often proposed by modern robotics research that purports to tackle it. Such research takes internal representation to be sacrosanct, but given current developments, to what extent can this assumption now be reexamined? The recently proposed sensorimotor contingency theory furthers the theoretical argument that internal representation is unnecessary, and its proof-of-concept application in robotics, together with the subsequent explosion in deep learning methodology, sheds new light on the possibility of equipping robots with the capacity to directly perceive their environments by exploiting correlated changes in their sensory inputs triggered by executing specific motor programs. This reexamination of direct perception is only one of several issues warranting scrutiny in current robotic affordance research.

The aim of this workshop is therefore twofold. First, we will provide an overview of the state of the art in affordance research and dissect the open research challenges it raises; our speakers will further be encouraged to ground these issues in human perceptual development and to elaborate on their importance for robotics. Second, we will encourage our speakers to debate whether computational models of affordance can be advanced by adopting approaches more congruent with Gibson's original conception of direct perception. The question of whether the deep hierarchical models of the current zeitgeist constitute a bridge between direct perception and internal representation opens a new vista on this interpretation of visual perception, one which we aim to explore thoroughly.