Kognic Blog

Annotating smarter with pre-annotations

Written by Björn Ingmansson | Jan 29, 2024 2:14:38 PM

A Conversation on Pre-Annotations with Tommy Johansson.

Tommy has worked in the automotive industry since 2006, specializing in ADAS/AD (advanced driver-assistance systems and autonomous driving). With a background spanning OEMs and Tier 1 and Tier 2 suppliers, Tommy brings extensive experience in requirements, testing, and verification of autonomous systems.

We recently sat down with Tommy to discuss how pre-annotations and human feedback work together to maximize annotation productivity. At Kognic, we believe machines learn faster with human feedback—and pre-annotations are a powerful tool for making that feedback more efficient and cost-effective.

Background

While offline ML models can generate predictions, human feedback remains essential for high-quality ground-truth data. The key is integrating automation and human expertise to get the most annotated data for your budget.

The Evolution of Pre-Annotation

Tommy emphasized that unless you can trust your pre-annotations 100%, their main value lies in guiding where annotators should focus their attention. Pre-annotations can help indicate the presence of an object, though they may not always provide accurate information about size or class. The goal is to strike the right balance—enabling annotators to efficiently refine pre-annotations without spending excessive time on unnecessary adjustments. This human-in-the-loop approach is what makes annotation both faster and more reliable.

When Annotations From Scratch Win

Tommy shared an interesting observation: in many experiments, annotating from scratch in Kognic's efficient platform still yields better results than refining pre-annotations. While pre-annotations provide guidance, micro-adjusting imprecise ones can take more time than simply creating annotations from scratch. This reinforces why platform productivity matters—the right tools can make human annotators incredibly efficient, even without automation.

Human Feedback Still Matters

Tommy was clear that human feedback will always be essential in the annotation process. While automation quality improves over time, human expertise remains invaluable for correcting annotations and making decisions in complex, ambiguous situations. As autonomous systems advance, human feedback shifts toward higher-level judgment—providing guidance on critical safety decisions and edge cases where machines need human wisdom most.

The Real Value: Cost-Efficient Human Feedback at Scale

The value of pre-annotations lies in enabling more productive human feedback. By combining smart automation with efficient annotation tools and expert human judgment, we help customers get the most annotated autonomy data for their budget. Pre-annotations reduce repetitive work, allowing annotators to focus on what matters most—providing the high-value feedback that makes autonomous systems safe and reliable.

How Kognic Customers Get Started

For our customers, getting started with pre-annotations means integrating their existing models into our platform. If they don't have a model yet, our team provides support and guidance. Data is imported seamlessly through our APIs, and for custom integrations with a customer's cloud or data center, our Solutions Engineering team helps optimize the infrastructure and maximize the value of the dataset. Our platform combines automation, efficient tools, and scalable human expertise to ensure the annotation process delivers maximum ROI.
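To make the workflow concrete, here is a minimal, purely illustrative Python sketch of how a model's offline predictions could be attached to an annotation task as pre-annotations, with low-confidence objects flagged for closer human review. All class and field names here are hypothetical, not Kognic's actual API; consult the platform's API documentation for the real client and data formats.

```python
from dataclasses import dataclass, field

@dataclass
class PreAnnotation:
    """One model prediction offered to the annotator as a starting point."""
    object_id: str
    label: str                       # model's best guess at the class
    bbox: tuple                      # (x, y, width, height) in pixels
    confidence: float                # model score in [0, 1]

@dataclass
class AnnotationTask:
    scene_id: str
    pre_annotations: list = field(default_factory=list)

    def add_pre_annotation(self, pa: PreAnnotation) -> None:
        self.pre_annotations.append(pa)

    def review_queue(self, threshold: float = 0.5) -> list:
        """Objects below the confidence threshold get extra human
        attention; the rest only need a quick verification pass."""
        return [pa for pa in self.pre_annotations if pa.confidence < threshold]

# Offline model predictions become pre-annotations on the task:
task = AnnotationTask(scene_id="scene-001")
task.add_pre_annotation(PreAnnotation("obj-1", "car", (120, 80, 60, 40), 0.92))
task.add_pre_annotation(PreAnnotation("obj-2", "pedestrian", (300, 90, 20, 50), 0.31))

needs_review = task.review_queue()   # only the low-confidence pedestrian
```

The design choice mirrors the point above: pre-annotations are not trusted blindly, but used to direct annotator attention where the model is least certain.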

Looking Ahead

As pre-annotations and human-in-the-loop workflows continue to evolve, Tommy remains excited about the possibilities. By combining the productivity of our annotation platform with smart automation and expert human feedback, we deliver better results faster. At Kognic, we're committed to pushing the frontier of annotation—helping customers train autonomous systems that are safe, reliable, and aligned with human expectations.

Stay tuned for more updates on how we're making machines learn faster with human feedback!