The Highest Order “bit”
We believe AI is the most important technology we can work on in our time, and its impact will be comparable to that of the industrial and scientific revolutions. With the exponential increase in available computational power and data, rapid and continuing progress in AI is well underway, but the results often miss the mark: machines make poor judgments that lead to bad outcomes.
And Embodied AI is the next frontier
Embodied AI is AI that controls a physical system, like a robot arm or an autonomous vehicle. It moves through the world and affects a physical environment with its actions, much as a person does. In contrast, most language models live on a cloud server, far removed from the physical world and from the complex sensor data our customers must manage.
We need AI aligned with human values and intent
For builders of AI to deliver safe, high-performing products aligned with human intentions, machines must understand our messy and unstructured world. Human feedback is central to this fine-tuning process, but how do we decide what an AI system learns, and how do we manage the complex data behind it all? This is hard to get right.
It starts with capturing expectations from feedback
Understanding expectations is the first challenge on the path toward accelerating alignment. Imagine playing darts without agreeing on what the dartboard looks like or what scores points. If the objective is unclear, the output will be fuzzy and useless, making it harder to score (and not much fun).
Embracing a new approach to software
The evolution from programming by code to programming with data, driven by machine learning, now places the evolving dataset at the center. Gaining fast and accurate visibility into this data is the critical objective that accelerates alignment and improves model performance in embodied AI. Bounding boxes now form the binary code of embodied AI, enabling automation at the lowest levels, so the race to create compelling products at defensible margins will come down to your ability to fine-tune models on unique, proprietary knowledge.
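To make the "bounding boxes as binary code" idea concrete, here is a minimal sketch of a bounding-box annotation record plus intersection-over-union (IoU), a common metric for agreement between two annotations of the same object. The field names and structure are illustrative assumptions, not the Kognic Platform's actual schema.

```python
from dataclasses import dataclass

@dataclass
class BoundingBox:
    """Illustrative annotation record; fields are assumptions, not a real schema."""
    label: str    # object class, e.g. "pedestrian"
    x: float      # top-left corner, pixels
    y: float
    width: float
    height: float

    def area(self) -> float:
        return self.width * self.height

def iou(a: BoundingBox, b: BoundingBox) -> float:
    """Intersection-over-union: how much two boxes agree (0.0 to 1.0)."""
    ix = max(0.0, min(a.x + a.width, b.x + b.width) - max(a.x, b.x))
    iy = max(0.0, min(a.y + a.height, b.y + b.height) - max(a.y, b.y))
    inter = ix * iy
    union = a.area() + b.area() - inter
    return inter / union if union else 0.0
```

Metrics like IoU are one simple way to measure whether two annotators (or a model and a human) share the same expectations about an object in a scene.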
What tomorrow (and today, for Kognic customers) looks like
Embodied AI will depend on datasets more than code. Being a great developer in this future will require the ability to align stakeholders and express preferences iteratively. This new product development paradigm requires expressing and aligning expectations, fine-tuning training data, and validating models. That is our dedication.
How will we do this? The Kognic Platform, that's how.
What we're reading now
To stay up to date in a constantly changing paradigm, you need to listen to the buzz.
NeRFs and Embeddings to the rescue
Unlike existing indoor 3D detection methods that struggle to model scene geometry, this paper outlines a new method that makes novel use of NeRFs in an end-to-end manner to explicitly estimate 3D geometry, improving 3D detection performance.
Read more here.
Grokking - the new, new
Do Machine Learning Models Memorize or Generalize? This is a wonderful article diving into memorization and generalization in ML training. Grokking, where generalization seems to happen abruptly and long after the model fits the training data, is a real phenomenon worth investigating.
Read more here.
Alignment Research from OpenAI
A broader view of AI alignment, through OpenAI's LLM lens, is important to understand. In their words: "LLMs are particularly well-suited for automating alignment research because they come 'preloaded' with a lot of knowledge and information about human values from reading the internet."
Read more here.