Kognic Blog

Kognic CEO Discusses the Future of Human Feedback in Autonomous Systems

Written by Björn Ingmansson | Mar 15, 2024 11:57:46 AM

In a recent interview with AITechPark, Kognic's CEO, Daniel Langkilde, explored how human feedback is accelerating machine learning for autonomous systems and why the industry is shifting from traditional annotation to more sophisticated forms of human judgment.

From Code to Data: Rethinking ML Development

When asked about implementing an iterative mindset in AI-ML product development, Daniel emphasized a fundamental shift in how teams need to think about building autonomous systems:

"...remap software organisations to think about 'programming with data' versus 'programming with code'. For this, the skill sets of product developers, engineers and other technical staff need to be adept and comfortable with exploring, shaping and explaining their datasets. Stop trying to address machine learning as a finite process, but rather an ongoing cycle of annotation, insights and refinement against performance criteria."

This reflects Kognic's core belief that machines learn faster with human feedback—and that the most productive teams are those who embrace continuous iteration rather than treating annotation as a one-time task.

Aligning AI with Human Expectations

Daniel also addressed one of the most critical challenges in autonomous systems: ensuring that AI behavior aligns with human expectations. This challenge is especially complex in the automotive and mobility sectors, where context, culture, and individual judgment all play a role.

"The answer can actually vary significantly, depending on where you are in the world, the topography of the area you are in and what kind of driving habits you lean towards. For these factors and much more, aligning and agreeing on what is a road is far easier said than done. So then, how can an AI product or autonomous vehicle make not only the correct decision but one that aligns with human expectations? To solve this, our platform allows for human feedback to be efficiently captured and used to train the dataset used by the AI model. Doing so is no easy task, there's huge amounts of complex data an autonomous vehicle is dealing with, from multi-sensor inputs from a camera, LiDAR, and radar data in large-scale sequences, highlighting not only the importance of alignment but the challenge it poses when dealing with data."

This is why Kognic focuses specifically on multi-modal, real-world autonomy data. Our platform is designed to handle the complexity of sensor-fusion annotation—where camera, LiDAR, and radar data must be processed together to create a comprehensive understanding of the environment. By combining automation with expert human judgment, Kognic helps teams get the most annotated data for their budget while maintaining the quality and alignment necessary for safe, reliable autonomous systems.
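To make the sensor-fusion idea concrete, here is a minimal sketch of how a fused multi-sensor frame with human feedback might be modeled. This is a hypothetical illustration only, not Kognic's actual platform or data model; the names `MultiSensorFrame`, `Annotation`, and `apply_human_feedback` are assumptions made for the example.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# Hypothetical data model: one timestamp groups camera, LiDAR, and radar
# readings so that annotations can reference all modalities together.

@dataclass
class Annotation:
    label: str          # e.g. "vehicle", "pedestrian", "road"
    source: str         # which annotator or model produced the label
    confidence: float   # model confidence; 1.0 for human-verified labels

@dataclass
class MultiSensorFrame:
    timestamp_us: int
    camera_images: Dict[str, bytes] = field(default_factory=dict)  # camera id -> encoded image
    lidar_points: List[Tuple[float, float, float, float]] = field(default_factory=list)  # (x, y, z, intensity)
    radar_returns: List[Tuple[float, float, float]] = field(default_factory=list)  # (range, azimuth, velocity)
    annotations: List[Annotation] = field(default_factory=list)

def apply_human_feedback(frame: MultiSensorFrame, label: str, reviewer: str) -> None:
    """Record an expert judgment on a frame; human labels carry confidence 1.0."""
    frame.annotations.append(Annotation(label=label, source=reviewer, confidence=1.0))

# Usage: one frame in a large-scale sequence receives a human-verified label.
frame = MultiSensorFrame(timestamp_us=1_710_000_000)
apply_human_feedback(frame, "vehicle", "annotator-42")
```

The point of grouping modalities by timestamp is that a human reviewer's judgment applies to the scene as a whole, not to any single sensor stream, which is what makes iterative refinement against performance criteria tractable.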

To read the full interview with Daniel, visit AITechPark.