Description
Impairments in visual perception lead to major challenges in everyday mobility. Conventional mobility aids such as white canes often do not provide sufficient support in unfamiliar or unstructured environments, so assistance from sighted persons is needed. To enable greater autonomy nonetheless, this dissertation investigates machine learning methods for assistance. In this approach, data from a 3D camera are semantically segmented using convolutional neural networks to map pathways and obstacles in low-structured environments. Complementary sensor modalities for sensing the environment are also investigated. The acquired information can be transformed into environment maps and used for navigation by computing a safe path. Finally, navigation instructions can be conveyed intuitively to the user via acoustic and vibrotactile interfaces. The methods are evaluated in laboratory and field tests using a demonstrator in the form of a backpack. User-friendliness and intuitiveness are examined in subject studies.
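The description does not publish the dissertation's code. As a minimal sketch of the "compute a safe path on an environment map" step only, the following assumes the map has been reduced to a hypothetical binary occupancy grid (0 = walkable, 1 = obstacle) and uses plain breadth-first search; the actual work may use a different map representation and planner.

```python
from collections import deque

def safe_path(grid, start, goal):
    """Breadth-first search over a 2D occupancy grid.

    grid[r][c] == 0 means free space, 1 means an obstacle.
    Returns a list of (row, col) cells from start to goal,
    or None if no obstacle-free path exists.
    """
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    parents = {start: None}  # also serves as the visited set
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Reconstruct the path by walking back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        # Expand the four axis-aligned neighbours.
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in parents):
                parents[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

# Hypothetical 4x4 map derived from segmented camera data.
grid = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]
path = safe_path(grid, (0, 0), (3, 3))
```

In a real system the resulting cell sequence would then be translated into the acoustic or vibrotactile cues mentioned above, rather than returned as coordinates.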