Testing SLAM Simulation in Quest2
- caps
- Oct 11, 2024
- 1 min read
Updated: Feb 4
Since I'm working on testing progressive SLAM iterations on resource-constrained devices, I ported this simulator to the Quest 2. It interfaces directly with OpenXR and runs smoothly on the Meta Quest 2. Quite a cool experience, actually.
I know you might be asking: what the heck are these points? Well, I wrote a simulator that loads 3D meshes and projects the mesh vertices to screen space, and I use those projections as image features. I call them vertex features, for more info:
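The idea can be sketched in a few lines. This is a minimal, hypothetical illustration of projecting known mesh vertices into the image with a pinhole camera model; the function and parameter names (`project_vertices`, `fx`, `cx`, etc.) are my own, not from the actual simulator.

```python
import numpy as np

def project_vertices(vertices_world, R, t, fx, fy, cx, cy):
    """Project Nx3 world-space vertices to pixel coordinates.

    R, t: world-to-camera rotation (3x3) and translation (3,).
    Returns (pixels, visible) where pixels is Nx2 and visible
    flags vertices in front of the camera (z > 0).
    """
    pts_cam = vertices_world @ R.T + t          # world -> camera frame
    visible = pts_cam[:, 2] > 1e-6              # keep points in front of the camera
    z = np.where(visible, pts_cam[:, 2], 1.0)   # avoid divide-by-zero for culled points
    u = fx * pts_cam[:, 0] / z + cx             # pinhole projection
    v = fy * pts_cam[:, 1] / z + cy
    return np.stack([u, v], axis=1), visible

# Identity camera pose, two vertices in front of the camera
verts = np.array([[0.0, 0.0, 2.0], [0.5, -0.5, 4.0]])
px, vis = project_vertices(verts, np.eye(3), np.zeros(3),
                           fx=500.0, fy=500.0, cx=320.0, cy=240.0)
# A point on the optical axis lands exactly at the principal point (320, 240).
```

Because the projected points come straight from mesh geometry, every "feature" has a perfectly known 3D position, which is exactly what makes this setup so clean for prototyping.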
Sure, this avoids many of the real problems of performing SLAM on real images from a camera sensor. The thing is, it works beautifully for prototyping: it avoids association errors, reduces noise, and runs very fast. So if you want to understand classic feature-based SLAM, I recommend trying something like this. All parameters become independent of image-processing issues, and you can focus on understanding the tracking, keyframe-updating, mapping, and optimization parameters much better instead. I will post this soon on my GitHub.
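To make the "avoids association errors" point concrete, here is a hypothetical sketch (my own names, not from the original project): since each simulated observation carries its ground-truth vertex ID, matching features between frames is an exact dictionary lookup, with no descriptors, no outliers, and no RANSAC.

```python
def associate(obs_a, obs_b):
    """Match observations {vertex_id: (u, v)} between two frames.

    Returns a list of (id, pixel_a, pixel_b) tuples for vertices
    seen in both frames -- data association is exact by construction.
    """
    return [(vid, obs_a[vid], obs_b[vid]) for vid in obs_a if vid in obs_b]

# Two synthetic frames; vertex 9 left the field of view in frame_b
frame_a = {7: (100.0, 120.0), 9: (300.0, 200.0), 12: (50.0, 60.0)}
frame_b = {7: (104.0, 118.0), 12: (55.0, 58.0)}
matches = associate(frame_a, frame_b)  # only vertices 7 and 12 match
```

With correspondences guaranteed correct, any tracking or mapping error you observe comes from the SLAM back end itself, which is what makes tuning those parameters so much easier.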
I consider this done, and I will now move on to the next challenge: working with camera images and reconstruction... so stay tuned... :)