New feature spotlight! Multi-sensor labeling

We have news for you! Our latest feature, Multi-sensor labeling, is now available to all our customers. And as far as we can tell, it’s already helping teams annotate more efficiently and streamline their processes.

What’s this feature for? In a nutshell, Multi-sensor labeling lets you annotate multiple sensors at the same time, saving time for annotators and reducing the cost of producing data. Let’s dive deeper into what our new feature can offer you and your technical teams. 👇

From an idea to a solution

As you probably know, a test vehicle can be equipped with as many as six LiDAR sensors, and each of those sensors sees the world at a slightly different time. To train their models, companies want, and need, one label per LiDAR, per object.

Let’s look at a practical example. If a truck is seen by two different sensors, you would want to annotate it in each LiDAR’s point cloud, ending up with two separate sets of labels for the same truck. Repeating the same annotation process for every LiDAR is not the most fun activity in the world. That’s why we asked ourselves whether there was a way to optimize this process: how could we produce an annotation for each sensor as efficiently as possible and help customers label faster? That’s when we had a light bulb moment: we had to develop a multi-sensor labeling option. And after several months of full-time work, we can finally (and proudly) present our feature.

How does it work?

It’s simple to use, yet it took a lot of math and engineering to build. When you place a labeling box around a vehicle and annotate it against the points from one particular LiDAR sensor, you automatically get a labeling box for every other sensor that can see the object 🔥. In other words, every sensor that sees the object gets its label at once.
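To make the core idea concrete, here is a minimal sketch in Python of how a single annotated box can be expressed in every sensor’s frame, assuming each sensor’s pose in a common world frame is known from calibration. The names below are hypothetical illustrations, not Kognic’s actual API:

import numpy as np

def propagate_box(T_ref_from_box, T_world_from_ref, T_world_from_sensor_by_name):
    # Hypothetical sketch: take a box annotated in one reference sensor's
    # frame and express it in the frame of every other calibrated sensor.
    T_world_from_box = T_world_from_ref @ T_ref_from_box
    return {
        name: np.linalg.inv(T_world_from_sensor) @ T_world_from_box
        for name, T_world_from_sensor in T_world_from_sensor_by_name.items()
    }

All poses here are 4x4 homogeneous transforms; chaining them like this is the standard way to move a rigid box between coordinate frames.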

Our Multi-sensor labeling allows teams to annotate as many cameras and point cloud sensors as needed. This is all thanks to velocity information and interpolation, which give us a better estimate of where the object actually was at the moment each image was taken, leading to even more accurate projections into the 2D image.
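As an illustration of that idea (a sketch under a constant-velocity assumption, not our actual implementation), here is how an object’s position could be carried to a camera’s capture time and projected into the image with a standard pinhole model; all names and parameters are hypothetical:

import numpy as np

def project_center(center_world_t0, velocity_world, t0, t_cam, K, T_cam_from_world):
    # Constant-velocity estimate of the object's center at the camera timestamp.
    center_at_t_cam = center_world_t0 + velocity_world * (t_cam - t0)
    # Move the point into the camera frame (homogeneous coordinates).
    p_cam = T_cam_from_world @ np.append(center_at_t_cam, 1.0)
    # Standard pinhole projection to pixel coordinates.
    u, v, w = K @ p_cam[:3]
    return np.array([u / w, v / w])

Here K is the 3x3 camera intrinsic matrix and T_cam_from_world the 4x4 extrinsic transform; the time offset t_cam - t0 is what compensates for the sensors seeing the world at different moments.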

The team involved

Talking to the team who developed Multi-sensor labeling, their pride in this accomplishment is palpable. And it’s more than fair: it’s their baby! “Seeing that everything we did during those months became an actual product the customer is super happy with was one of the greatest parts for me, personally”, explains Henrik Arnelid, Machine Learning Engineer and Product Developer. “It’s rewarding to know that the targeted user found the final product both user-friendly and visually beautiful, especially since user experience was considered throughout the project”, adds Jessica Karlsson, UX/UI Designer and Product Developer.

It was a challenge, we must admit. We were trying to solve a need and a problem that no one else seemed to have solved before. One of the main issues the team faced was making all of this intuitive, so that annotators, or anyone viewing the data, could understand what is happening and how it is represented. “The mathematics behind it were not that difficult”, explains Henrik. “But making all these on-demand predictions and calculating where the object should be, that’s one part of it. Then there’s the user design, how you interact with everything, and for us at Kognic it is key that everything we do is accessible, intuitive and easy to use”, he continues.

Jessica explains this further from her perspective as a Product Designer: “It was a bit challenging not being able to create a sketch or prototype because of how complex the concepts were, so we had to use other methods. As Henrik said, it was paramount to us all that whatever we produced would be intuitive to use, so receiving that good feedback from the final customer made all the hard work worth it”, says Jessica. “At the same time, this has been one of my favorite projects to work on because of the cross-functional teamwork. All experts within the team were needed and you could feel it throughout the process. Even though we don’t share the same technical background or knowledge, there’s no prestige when developing. Everyone found it just as important to understand how the user would interact with the final product as to understand the technical challenges”, she concludes.

Book a demo!

Any update or feature we develop for our platform aims to enhance workflows and make your life easier. So we’re very glad to hear that Multi-sensor labeling is already helping teams save time!

Are you as excited as we are about how this feature could boost your work? Would you like to save time on your processes and learn more? Reach out to any of our account executives, or book a demo with our team by clicking the “Schedule a demo” button in the top menu.