Learning multi-modal localization

Contact: Dümbgen, Frederike

Synopsis: Development of a learning-based framework for indoor localization using different modalities.

Level: MS


Thanks to their ability to model complex dependencies, neural networks are promising candidates for indoor localization. Omnipresent phenomena such as multi-path signal propagation, shadowing, and device noise introduce non-linear effects into the data and can cause conventional geometry-based methods to fail even in simple environments.

We would like to create a neural network that is flexible enough to incorporate any subset of the available measurement modalities (hence the name “multi-modal”) and that outputs a reliable estimate of the user’s position in a room or corridor. Commonly used signals include Wi-Fi (time of arrival and signal strength), Bluetooth (signal strength and angle of arrival), images, and audio, to name just a few.
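One way such flexibility could be achieved is with a separate encoder per modality and a fusion step that works for any subset of inputs. The sketch below, in PyTorch, is a minimal illustration of this idea; all modality names, input dimensions, and layer sizes are assumptions made for the example, not part of the project specification.

```python
import torch
import torch.nn as nn

class MultiModalLocalizer(nn.Module):
    """Encode each available modality separately, then fuse the
    embeddings to regress a 2-D position. Modality names and input
    dimensions are illustrative assumptions."""

    def __init__(self, modality_dims, embed_dim=32):
        super().__init__()
        # One small encoder per modality, keyed by modality name.
        self.encoders = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(dim, 64), nn.ReLU(),
                                nn.Linear(64, embed_dim))
            for name, dim in modality_dims.items()
        })
        # Regression head mapping the fused embedding to (x, y).
        self.head = nn.Sequential(nn.Linear(embed_dim, 64), nn.ReLU(),
                                  nn.Linear(64, 2))

    def forward(self, inputs):
        # `inputs` maps modality name -> batch tensor; missing
        # modalities are simply omitted from the dict.
        embeddings = [self.encoders[name](x) for name, x in inputs.items()]
        # Averaging the embeddings lets any subset of modalities
        # be present at inference time.
        fused = torch.stack(embeddings).mean(dim=0)
        return self.head(fused)

# Example: a network expecting Wi-Fi signal strengths from 8 access
# points and Bluetooth angle-of-arrival features from 4 beacons.
model = MultiModalLocalizer({"wifi_rss": 8, "ble_aoa": 4})
positions = model({"wifi_rss": torch.randn(16, 8)})  # only Wi-Fi available
```

Averaging is just one of many possible fusion strategies; attention-based weighting or concatenation with masking would be natural alternatives to explore in the project.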

The scope of this semester or master’s project is to survey proposed machine-learning solutions for indoor localization, to develop a machine-learning framework for indoor localization from scratch using PyTorch or TensorFlow, and to validate the proposed framework on real and simulated data.
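For the simulated-data part, a common starting point is a log-distance path-loss model with log-normal shadowing, which reproduces the non-linear effects mentioned above. The snippet below is a sketch under that assumption; all numeric parameters (transmit power, path-loss exponent, shadowing variance) are illustrative placeholders.

```python
import numpy as np

def simulate_rss(positions, anchors, tx_power=-40.0, path_loss_exp=2.5,
                 shadowing_std=3.0, rng=None):
    """Return an (N, M) matrix of received-signal-strength values in dBm
    for N user positions and M anchor (access-point) positions, using a
    log-distance path-loss model with log-normal shadowing."""
    rng = np.random.default_rng() if rng is None else rng
    # Pairwise distances between every position and every anchor.
    d = np.linalg.norm(positions[:, None, :] - anchors[None, :, :], axis=-1)
    d = np.maximum(d, 0.1)  # avoid log(0) exactly at an anchor
    rss = tx_power - 10.0 * path_loss_exp * np.log10(d)
    # Log-normal shadowing: Gaussian noise on the dB scale.
    rss += rng.normal(0.0, shadowing_std, size=rss.shape)
    return rss

# Example: 100 random positions in a 10 m x 10 m room, 3 anchors.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
positions = np.random.default_rng(0).uniform(0, 10, size=(100, 2))
rss = simulate_rss(positions, anchors)  # shape (100, 3)
```

Such synthetic data makes it easy to sanity-check the learning pipeline before moving to real measurements, where multi-path effects add structure that this simple model does not capture.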

Deliverables: Well-documented and modular code and a short project report including experimental results.

Prerequisites: Solid machine learning basics, good programming skills. Curiosity and willingness to learn new skills.

Type of Work: 20% theory, 50% coding, 30% experimental validation.