Human Feedback: The Foundation of Safe Autonomy

The past decade has witnessed explosive growth in autonomous systems across automotive and robotics. As these systems advance, there's a common misconception: that AI will soon operate entirely without human involvement. However, this overlooks a fundamental truth—the most capable autonomous systems are built on continuous human feedback.

At Kognic, we believe that machines learn faster with human feedback. While automation plays a critical role in scaling annotation, human judgment remains the scarce, essential input that physics and simulation cannot provide. As models learn more from themselves, the scarce input is no longer pixels, but judgment. Human feedback will always be needed, because physics does not tell you when to yield, and simulations do not decide what counts as safe enough.

The challenge isn't just about labeling data—it's about turning scarce human judgment into high-quality, auditable signals that accelerate machine learning. This requires not only advanced platform capabilities and efficient processes, but also fairly treated, skilled annotators who can deliver the expert judgment that autonomous systems depend on.

Recently, concerns have been raised about working conditions in the data annotation industry. As the price leader in autonomy data annotation, we recognize that fair working conditions aren't just an ethical imperative—they're a foundational requirement for building a sustainable industry where high-quality human feedback can thrive at scale.

This post explains the current state of the data labeling sector and describes how Kognic approaches this challenge. Our goal is to demonstrate that achieving cost-efficiency and fair working conditions are not mutually exclusive—they're interdependent. By sharing our approach to ethical annotation practices and the right platform tools, we hope to inspire the industry toward higher standards.

What does the data labeling ecosystem look like?

In recent years, there has been significant growth in annotation firms providing data labeling services. Many of these private firms, also referred to as offshore business process outsourcing (BPO) companies, employ staff for data annotation work. However, the working conditions in this sector vary widely, and quality human feedback requires sustainable working environments.

Common challenges in the industry include: annotators labeling large volumes of data in single shifts with strict quality requirements and tight deadlines; atypical work schedules including night shifts to serve international clients; limited vacation time and benefits; and compensation that often barely meets minimum wage requirements despite demanding performance expectations. These conditions create stress and can compromise the quality of the expert judgment that autonomous systems require.

This situation exists globally, though many annotation companies operate in large cities of emerging economies. While these companies may claim fair working conditions through self-declarations, third-party ethical compliance certification remains rare in the industry.

Our take on this issue

At Kognic, we recognize that annotators are essential to advancing autonomous systems. Their expert judgment—evaluating intent, ranking trajectories, and validating machine decisions—cannot be automated away. We believe that delivering the most annotated autonomy data for your budget requires fairly compensated, well-supported annotators working with the right platform and processes.

Fair working conditions and economic security for annotators aren't optional—they're fundamental to our ability to be the price leader in autonomy data annotation while maintaining the quality standards that safety-critical applications demand. Our unique combination of Platform, People, and Processes only works when all three elements are optimized together.

While global regulation of borderless data labeling services presents challenges, these are not insurmountable. Progress requires both regulatory efforts and cooperation among industry leaders. Until comprehensive standards emerge, companies must take responsibility for ensuring their annotation workforce operates under fair conditions.

How Kognic creates fair working standards for annotators

We believe that carefully evaluating ethical compliance requirements when selecting annotation partners is essential for creating fair working conditions. Our approach integrates three key elements—Platform, People, and Processes—to deliver cost-efficient, high-quality human feedback at scale. Here's how we approach each step:

BPO selection

Our first opportunity to support fair working conditions comes when selecting BPO partners. We apply rigorous selection procedures to identify partners who prioritize people over short-term profits. We recommend being highly selective, gathering comprehensive information, and asking detailed questions to confirm that your partners provide fair working conditions for their annotation teams.

Providing decent working conditions

There are many steps companies can take to provide good working conditions that support high-quality annotation work:

  • Limiting working hours to sustainable levels
  • Allocating adequate time for breaks
  • Providing appropriate technical and ergonomic equipment (stable internet, reliable computers, proper displays, and comfortable office space)
  • Ensuring a safe working environment (building safety, fire safety, accident prevention, first aid availability, proper cleaning, heating, lighting, and access to recreational spaces)
  • Working with partners who offer long-term employment contracts

Visiting the BPO location and meeting the annotation team

To accurately understand the culture and working conditions at our BPO partners, we conduct site visits. Meeting partners and annotators in person allows us to gain firsthand experience of daily operations and understand the real working environment.

Onsite visits also help build trust and collaborative relationships. Face-to-face communication with BPO leadership enhances dialogue and partnership. Meeting annotators directly helps us understand challenges they face and identify solutions that improve both working conditions and annotation quality.

Getting feedback from annotators

Beyond information gathered onsite, regular anonymous surveys provide valuable insight into what's working well and what needs improvement. This feedback loop is essential for maintaining quality human feedback at scale.

When formulating survey questions, be thoughtful and include visual resources where appropriate. Importantly, demonstrate that feedback is valued by updating your workforce on improvements and changes implemented based on their input. Without closing this feedback loop, you won't meaningfully improve conditions or annotation quality.

Fair salary levels

To ensure that partners pay fair wages, investigate local minimum salary levels for each BPO location and compare them to the actual wages paid. Compensation should meet or exceed the regulated minimum wage. We recommend hourly wages rather than per-unit payment, giving annotators predictable income.

Kognic's minimum requirement is that employers pay at least the statutory minimum wage, the prevailing industry wage, or wages negotiated in collective agreements—whichever is highest. Compensation reflects experience, qualifications, and performance, with employees receiving written specifications of wage calculations. We also ensure that wages are paid regularly and on time, and that contracts are clear and provided in the local language.

Reasonable working hours

Our agreements stipulate that all annotators have freedom to take breaks (which we actively encourage) and flexibility to decide their schedules within working expectations (active annotation time averages about 30 hours per week). BPO companies working with Kognic cannot schedule more than 45 hours of working time per week. Weekends remain free for annotators—everyone deserves time to rest and maintain work-life balance.

As an employer, plan work for weekdays so annotators receive the time off they deserve. Communicate these expectations clearly to collaborators. On the rare occasions that require overtime or weekend work, ensure fair compensation.
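The scheduling rules above (a 45-hour weekly cap and weekends kept free) can be expressed as a short validation pass over a weekly roster. This is an illustrative sketch, assuming a simple day-to-hours mapping; it is not an actual Kognic system:

```python
# Sketch: flag schedules that violate the weekly-hours cap or weekend-free rule.
MAX_SCHEDULED_HOURS = 45
WEEKDAYS = {"Mon", "Tue", "Wed", "Thu", "Fri"}

def check_schedule(hours_by_day: dict[str, float]) -> list[str]:
    """Return a list of policy violations for one annotator's weekly schedule."""
    issues = []
    weekend = [d for d, h in hours_by_day.items()
               if d not in WEEKDAYS and h > 0]
    if weekend:
        issues.append("weekend work scheduled: " + ", ".join(weekend))
    total = sum(hours_by_day.values())
    if total > MAX_SCHEDULED_HOURS:
        issues.append(f"{total}h scheduled exceeds the {MAX_SCHEDULED_HOURS}h weekly cap")
    return issues

# Example: 47 scheduled hours including a Saturday shift triggers both rules.
roster = {"Mon": 9, "Tue": 9, "Wed": 9, "Thu": 9, "Fri": 9, "Sat": 2}
for issue in check_schedule(roster):
    print(issue)
```

Running a check like this before each week's roster is confirmed turns the policy from a contractual clause into something that is verified routinely.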

Providing the right tools & processes

Facilitating good working conditions includes providing appropriate equipment and, critically for annotation work, the right platform tools. At Kognic, we focus on making human feedback as productive as possible for autonomy data—the hardest, richest, and most safety-critical domain. Our job is not to maximize the number of labels produced, but to minimize the time it takes to turn scarce human judgment into high-quality, auditable signals.

We continuously improve our annotation platform to ensure it's user-friendly and minimizes manual effort required from annotators. We integrate automation wherever possible, optimize user experiences to reduce correction time, and route attention to the data slices where human judgment matters most. We've also developed processes that reduce unnecessary stress, including structured onboarding processes that allow annotators to learn our platform and project guidelines in a stress-free environment.
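Routing attention to the slices where human judgment matters most can be as simple as sorting model predictions by confidence and sending the least certain ones to review first. The sketch below illustrates the idea under assumed names and data shapes (a list of dicts with `id` and `confidence` keys); it is not Kognic's actual pipeline:

```python
# Sketch: confidence-based routing of model predictions.
# High-confidence predictions are auto-accepted; the rest go to human
# review, least confident first, where expert judgment adds the most value.

def route_for_review(predictions: list[dict], threshold: float = 0.8):
    """Split predictions into auto-accepted IDs and a review queue
    ordered from least to most confident."""
    accept = [p["id"] for p in predictions if p["confidence"] >= threshold]
    review = [p["id"] for p in sorted(
        (p for p in predictions if p["confidence"] < threshold),
        key=lambda p: p["confidence"])]
    return accept, review

preds = [{"id": "a", "confidence": 0.95},
         {"id": "b", "confidence": 0.40},
         {"id": "c", "confidence": 0.70}]
accept, review = route_for_review(preds)
print(accept)  # ['a']
print(review)  # ['b', 'c']
```

In practice the threshold would be tuned per task and quality requirement, but the principle is the same: spend scarce annotator time where the model is least sure.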

Legally binding agreements

Legally binding agreements help ensure annotators receive fair conditions and healthy work-life balance. At Kognic, we maintain a comprehensive Code of Conduct that we share with partners and include in legal contracts, detailing our expectations across multiple domains.

While it's impossible to anticipate every scenario, establishing clear expectations and confirming that partners share your values regarding fair working conditions is essential for building sustainable, high-quality annotation operations.

Building the future of autonomy annotation together

As annotation evolves beyond drawing bounding boxes toward more complex tasks—judging behavioral intent, ranking trajectories, resolving social negotiations in traffic, and validating machine decisions against human expectations—the need for skilled, fairly treated annotators becomes even more critical.

At Kognic, we're pushing the frontier of annotation for autonomy while maintaining ethical standards. We believe that fair working conditions and cost-efficiency are not trade-offs—they're complementary requirements for sustainable, high-quality human feedback at scale. This is how we deliver on our promise: customers get the most annotated autonomy data for their budget, through our unique combination of Platform, People, and Processes.

We encourage all companies in the autonomy industry to view the current lack of universal standards not as an excuse, but as an opportunity to establish new standards that advance both technology and human dignity. The question every company should ask: How are you ensuring that your annotators have the right working conditions to deliver the expert judgment your autonomous systems depend on?

We continuously work to improve our processes and the environment we provide for our workforce. We know there's always room for improvement, and we welcome dialogue with others committed to advancing both annotation quality and ethical standards.

Let's build the future of autonomy annotation together—with humans leading the way. 🚀