This project will develop a multisensory search engine. It will first design and fabricate a robotic interface integrating visual, haptic and audio feedback. It will then use a predictive coding approach to analyse, interpret and generate visual, tactile and audio cues, giving users a virtual experience of various objects. Finally, it will run systematic experiments to evaluate and refine the system's ability to increase the speed and accuracy of object recognition.
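To make the predictive coding step concrete, the following is a minimal illustrative sketch rather than the project's actual model. It assumes a shared latent object representation and hypothetical linear generative maps (the names `W`, `infer_latent` and `generate`, the dimensions, and the linear model are all assumptions for illustration): the latent is inferred by gradient descent on summed multimodal prediction errors, and the same model then generates cues for an unobserved modality.

```python
# Minimal predictive-coding sketch (illustrative; all names, dimensions and
# the linear generative model are hypothetical, not the project's design).
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 8
DIMS = {"visual": 16, "haptic": 6, "audio": 12}

# Hypothetical generative model: each modality m predicts its observations
# as W[m] @ z from a shared latent object representation z.
W = {m: rng.standard_normal((d, LATENT_DIM)) for m, d in DIMS.items()}

def infer_latent(observations, n_steps=1000, lr=0.01):
    """Infer z by gradient descent on the summed squared prediction
    errors over whichever modalities are actually observed."""
    z = np.zeros(LATENT_DIM)
    for _ in range(n_steps):
        grad = np.zeros(LATENT_DIM)
        for m, x in observations.items():
            error = x - W[m] @ z      # bottom-up prediction error for modality m
            grad += W[m].T @ error    # error propagated back to the latent
        z += lr * grad                # reduce the total prediction error
    return z

def generate(z, modality):
    """Top-down generation: predict the cue for a (possibly unobserved) modality."""
    return W[modality] @ z

# Usage: infer the latent from visual and haptic input, then synthesise
# the audio cue the interface would render for that object.
z_true = rng.standard_normal(LATENT_DIM)
obs = {m: W[m] @ z_true for m in ("visual", "haptic")}
z_hat = infer_latent(obs)
audio_cue = generate(z_hat, "audio")
print("audio prediction error:", np.linalg.norm(audio_cue - W["audio"] @ z_true))
```

In this toy setting the cross-modal prediction error shrinks as inference converges; the project's experiments would instead measure how such generated cues affect users' object-recognition speed and accuracy.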