Dr Peter Scarfe
The Vision and Haptics Lab is based in the School of Psychology and Clinical Language Sciences at the University of Reading. Work in the lab investigates human multi-sensory perception using techniques that straddle the boundary between Psychology and Engineering.
Current work is focused on three areas:
(1) Understanding how sensory data is used to construct internal models of the physical laws governing the environment and how these models shape our perception of the world.
(2) Determining how information from multiple sensory modalities is integrated for perception and visuomotor control (particularly vision and haptics).
(3) Elucidating the sensory information and internal models people use when actively moving and navigating within their environment.
Techniques we use to investigate these areas include: behavioural experiments and psychophysics, stereoscopic presentation, 3D immersive virtual reality, haptic robotics, machine learning and Bayesian modelling.
First PhD Student graduates: Congratulations to Dr. Mark Adams
A very proud moment as my first PhD student passes his viva. Mark was co-supervised by Prof. Andrew Glennerster and Prof. William Harwin. Mark's research focused on how vision and touch are integrated when localising objects in the environment. The research combined immersive virtual reality with spatially co-aligned haptic robotics. Mark is now a post-doc in Dundee.
RAIN Grant Accepted
The lab, together with Prof. William Harwin's group in the School of Biological Sciences, has had a grant accepted to become part of the Robotics and AI in Nuclear (RAIN) consortium. The grant will fund a post-doctoral position in the lab to investigate ways to optimise bi-manual, multi-finger haptic robotic and VR telepresence systems. On the grant we will be working closely with Generic Robotics to integrate our hardware with the TOIA software system for rendering haptics in Unreal Engine.
Latest Paper from the lab
Scarfe, P. and Glennerster, A. (2019, in press). The science behind virtual reality displays. Annual Review of Vision Science. link
Work in the lab would not be possible without generous funding from