2017 has arrived, and promises to be a year full of projects for the Kudan team. Kudan’s CTO John Williams replies to some questions about their work on 2D tracking, and the exciting new developments in the area of 3D tracking and SLAM.
3D Tracking and SLAM

Q: Is your SLAM based on PTAM or ORB-SLAM?
No, it is developed completely from scratch using our own algorithms.

Q: Why did you decide to develop from scratch while others build on top of existing open source projects?
The biggest reason is that the open source solutions didn't meet our requirements. The academic solutions typically exist to demonstrate a particular technique, meaning other parts are neglected. They are often tested in very limited, inflexible conditions that don't exist in the real world. Not every environment is well-lit and full of features, and not every camera has low amounts of noise and blur. Kudan's SDK is designed for production in a large variety of industries, all of which have different constraints. As such, we needed complete flexibility.
We didn't want to inherit the limitations of the academic solutions.

Q: Does Kudan SLAM have similar limitations to PTAM and ORB-SLAM?
Another reason we developed the SLAM system from scratch was to avoid inheriting any limitations of the academic solutions. One of the biggest problems with both PTAM and ORB-SLAM is that they're heavy on processor resources. We've been able to create a scalable solution that's suitable for a wide variety of hardware, and to ensure that every part is production quality.

Q: You offer both markerless tracking and SLAM. What are the differences?
Both markerless tracking and SLAM can be used to track the environment. With SLAM, you get a 3D map of your environment, which generally results in more accurate tracking over time. Markerless tracking, however, is much more lightweight, and just as suitable as SLAM in a number of use-cases. We see these technologies as complementary, and they can be used together.

Q: Do you require depth sensors or other special devices?
Our only requirement is some kind of camera. If more advanced peripherals, such as stereo cameras and depth sensors, are present, then we can fully utilise them to improve performance. One of our main goals was to be hardware agnostic, both in terms of inputs and outputs and in terms of the underlying platform. This means we can bring our solutions to any combination of operating system, processor architecture and peripheral set, allowing us to cover a diverse range of use-cases including mobile, IoT, embedded devices, AR/VR wearables, robotics and even the automotive industry.

Q: How is it different from Project Tango?
Project Tango is a combination of hardware and software, with the hardware designed to support the software as much as possible. It has a very specific use-case in mind: small-space AR. Kudan develops a software solution that can run on a variety of hardware and isn't tied to a particular platform (Android). We see augmented reality as just one use-case for our technology and don't want to limit ourselves to it in any way.

Q: Are you more focused on mobile or non-mobile?
Non-mobile is more interesting and has more exotic use-cases. However, Kudan comes from mobile roots, and mobile will always be important to us. By supporting mobile, we avoid situations where we require an absurd amount of computing power or really expensive peripherals. There is significant overlap between mobile and other embedded devices, such as IoT and robotics, which all share the same low-power requirements.

Q: As CTO, what do you think makes Kudan different from others?
Kudan isn't afraid to challenge the industry standards. We didn't see PTAM and get discouraged from developing our own SLAM system. We didn't see Vuforia and get discouraged from making a mobile image tracking SDK. Competition is good for the industry, and we're good at innovating. Kudan is run in such a way that we don't chase business opportunities: we develop technology we think is interesting, without constraints, and later find use-cases for it.
2D Tracking

Q: Do you still work on image tracking?
We constantly improve our tracking algorithms and optimise our existing codebase, as well as developing new functionality. We aim for monthly releases and make sure that critical bugs get fixed immediately.

Q: Who do you see as competitors, and what makes you different from them?
I honestly don't think we have competitors, simply because we are so different. We are the only company that provides a collection of production-quality tracking algorithms that are ready to use in multiple situations and across a broad spectrum of platforms. The closest was probably Metaio before Apple removed them from the market, but even back in 2015 we were confident in our strengths over them. Almost two years later, we're stronger and more flexible than ever.