Visual Encoding Method for Semantic Mapping with Federated Learning Concept

Title: Visual Encoding Method for Semantic Mapping with Federated Learning Concept
Publication Type: Conference Proceedings
Year of Publication: 2025
Authors: Sobczak Ł, Biernacki P, Domańska J
Conference Name: MobiHoc '25: Proceedings of the Twenty-sixth International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing
Pagination: 428-435
Publisher: ACM
Abstract

We present a visual encoding method for semantic mapping in indoor environments, designed to minimize redundancy in image data and support federated learning across a fleet of service robots. Our pipeline combines 2D LiDAR-based segmentation with RGB image filtering based on geometric orientation, distance, visibility, and uniqueness. The result is a compact set of representative visual samples suitable for downstream semantic tasks such as object recognition or language grounding. We evaluate our method in a Gazebo simulation using a TurtleBot platform and compare it against a naive odometry-based sampling strategy. Our approach achieves up to 57.5% reduction in collected images while preserving scene coverage. Additionally, we demonstrate how multiple robots can collaboratively improve the visual map in a federated setup, reducing collection time and enabling model generalization across diverse environments. The proposed method offers an efficient and scalable solution for semantic mapping under bandwidth and computation constraints.

URL: https://doi.org/10.1145/3704413.3765513
DOI: 10.1145/3704413.3765513

Change History

Last updated: 24/10/2025 - 17:00; modified by: Łukasz Sobczak (lsobczak@iitis.pl)