Hardware and operating system agnostic

The core codebase can target most processor architectures and has no reliance on operating system functionality. Multiple processor classes can be utilised, ranging from low-power general-purpose CPUs to highly customised DSPs. A wide variety of hardware sensors are supported, from monocular and stereo cameras up to visual-inertial depth cameras.

Not targeting a particular use-case

Our SLAM is designed to be as general purpose as possible. It performs equally well in a variety of situations, from mobile positional tracking through to autonomous driving.

Fully configurable and easy to integrate

Every aspect of the system is highly configurable and exposed via a simple-to-use API, allowing easy tuning for the target hardware and use case.

And Much More

Additional information on KudanSLAM's versatility can be found here.



Fast

Everything from algorithm design through to implementation targets speed. This allows low-latency tracking at high frame rates while minimising CPU utilisation to save power, so lower-powered processor classes can easily be used.


Accurate

A unique approach to tracking and mapping provides highly accurate positional and point-cloud data while keeping drift under control. High-precision tracking keeps jitter low.


Robust

The real world isn’t always as well-behaved as benchmark datasets, so we make sure to test in difficult conditions such as poor lighting and fast camera motion.

And Much More

Additional information on KudanSLAM's performance can be found here.


High level SLAM

Different approaches to tracking and mapping are available, as well as modules such as loop detection and closure, relocalisation and bundle adjustment.
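As a generic illustration of how a loop-detection module of this kind typically works (this is a hypothetical sketch, not the KudanSLAM API — the class and method names are invented), a keyframe database can be queried with a global image descriptor and a loop hypothesised when the similarity to an older keyframe exceeds a threshold:

```python
import numpy as np

class KeyframeDatabase:
    """Toy keyframe store: one global descriptor per keyframe id."""

    def __init__(self):
        self.ids = []
        self.descriptors = []  # L2-normalised descriptor vectors

    def add(self, kf_id, descriptor):
        d = np.asarray(descriptor, dtype=float)
        self.ids.append(kf_id)
        self.descriptors.append(d / np.linalg.norm(d))

    def detect_loop(self, descriptor, threshold=0.9, exclude_recent=2):
        """Return the id of the best-matching older keyframe, or None.

        The most recent keyframes are excluded so a frame cannot
        'close a loop' against its immediate neighbours.
        """
        if len(self.ids) <= exclude_recent:
            return None
        q = np.asarray(descriptor, dtype=float)
        q = q / np.linalg.norm(q)
        candidates = np.stack(self.descriptors[:-exclude_recent])
        scores = candidates @ q  # cosine similarity against each keyframe
        best = int(np.argmax(scores))
        return self.ids[best] if scores[best] >= threshold else None
```

A detected loop would then be passed to the closure and bundle-adjustment stages to correct accumulated drift; real systems use far richer descriptors (e.g. bag-of-words over local features) than this dense-vector stand-in.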

Mid-level computer vision

Efficient implementations of various point-matching mechanisms, stereo matchers and pose estimation. All are highly configurable and optimised, using proprietary algorithms not available in the public domain.
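To make the pose-estimation building block concrete, here is a standard textbook technique for recovering a rigid pose from matched 3D points — the Kabsch algorithm. This is a minimal NumPy sketch of the general method, not Kudan's proprietary implementation:

```python
import numpy as np

def estimate_rigid_pose(src, dst):
    """Estimate rotation R and translation t such that dst ≈ R @ src + t.

    src, dst: (N, 3) arrays of matched 3D points (Kabsch algorithm).
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```

In a real SLAM front end this least-squares step would sit inside a robust estimator such as RANSAC, since point matches are rarely outlier-free.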

Low-level image processing and maths

Highly optimised versions of common vision-processing building blocks such as various blurs, interpolations and image warps. These are typically SIMD-optimised and provide far superior performance compared to OpenCV. We also provide our own linear algebra library.
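For concreteness, one such building block — a box blur — is separable: a k×k filter can be applied as one horizontal and one vertical 1D pass, cutting the per-pixel cost from O(k²) to O(2k). This is a plain NumPy sketch of the idea; a production version would replace these passes with hand-written SIMD kernels:

```python
import numpy as np

def box_blur(image, k=3):
    """Separable k-by-k box blur: a horizontal pass, then a vertical pass.

    Separability is what makes blurs like this a favourite target for
    SIMD optimisation: each pass is a simple 1D sliding average.
    """
    kernel = np.ones(k) / k
    # Horizontal pass (along each row), then vertical pass (along each column).
    blurred = np.apply_along_axis(np.convolve, 1, image, kernel, mode="same")
    blurred = np.apply_along_axis(np.convolve, 0, blurred, kernel, mode="same")
    return blurred
```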

Companion modules

While our focus is on SLAM itself, it’s often useful to have different modules to help with integration. We provide a GUI library designed for cross-platform computer vision debugging, as well as modules for working with the generated SLAM maps.