2D refreshable tactile displays for automatic audio-tactile graphics



Translating graphical information to a tactile display is a difficult process that requires extensive expert knowledge, owing to the differences between visual and tactile perception and to the limitations of current devices. This project will first focus on developing guidelines and rules for mapping different types of graphical content to the tactile domain, drawing on insights into tactile processing. State-of-the-art image-processing techniques will then be used to analyse and extract the most relevant information from graphics, including the automatic classification of selected graph types with machine-learning methods.
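To make the classification step concrete, the sketch below shows a deliberately simplified, purely illustrative pipeline: a toy nearest-centroid classifier that separates bar charts from line charts using two handcrafted image features. The feature choices, centroid values, and class names are all hypothetical; the actual project would rely on deep-learning models trained on real graphics.

```python
# Illustrative sketch only: a toy nearest-centroid chart-type classifier.
# All features, centroids, and labels are hypothetical placeholders for the
# machine-learning models the project would actually use.
import math

def extract_features(img):
    """Compute two simple features from a binary image (list of 0/1 rows):
    overall ink density, and the variance of per-column ink density.
    Bar charts tend to concentrate ink in a few columns, giving a high
    column-density variance."""
    h, w = len(img), len(img[0])
    density = sum(sum(row) for row in img) / (h * w)
    col_density = [sum(img[r][c] for r in range(h)) / h for c in range(w)]
    mean = sum(col_density) / w
    variance = sum((d - mean) ** 2 for d in col_density) / w
    return (density, variance)

def nearest_centroid(feats, centroids):
    """Assign the label whose feature centroid is closest (Euclidean)."""
    return min(centroids, key=lambda label: math.dist(feats, centroids[label]))

# Hypothetical centroids, as if learned from labelled training examples.
CENTROIDS = {"bar_chart": (0.30, 0.15), "line_chart": (0.08, 0.005)}

# A crude 6x6 "bar chart": two solid vertical bars in columns 1 and 4.
bar_img = [[1 if c in (1, 4) else 0 for c in range(6)] for _ in range(6)]
print(nearest_centroid(extract_features(bar_img), CENTROIDS))  # bar_chart
```

In the full pipeline, the predicted chart type would then select which extraction and audio-tactile mapping rules to apply, following the guidelines developed in the first phase of the project.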

Expected Results

Automatic generation of audio-tactile graphics with state-of-the-art machine learning techniques.


Host institution: Karlsruhe Institute of Technology

Enrolments (in Doctoral degree): Karlsruhe Institute of Technology


Rainer Stiefelhagen, Klaus-Peter Hars

Presentation of ESR13

My name is Omar Moured. I received my high-honour B.Eng. and M.Sc. degrees in Electrical and Electronics Engineering from Middle East Technical University. During my M.Sc., I specialized in deep learning, more specifically in multiple-object tracking, while also working as an AI researcher for three years. Within the INTUITIVE project, I am currently working on document analysis (natural language processing) and on segmentation and detection (computer vision).