The Segment Anything Model – SAM

We’ve seen some wonderful advancements in machine learning enabled by foundation models, and this is a biggie. Announced and released by Meta AI, SAM – together with its openly released SA-1B dataset – is a big step forward for open, accessible image segmentation. Now that it is integrated into Kognic’s Data Platform, our users can further accelerate the evolution of automated and autonomous driving.

The quick story

SAM (and its accompanying 1-billion-mask dataset, SA-1B) is a foundation model for image segmentation trained on a vast amount of diverse data. To quote Meta, “…with Segment Anything, we set out to simultaneously develop a general, promptable segmentation model and use it to create a segmentation dataset of unprecedented scale.”

Since SAM has learned a broad understanding of what objects are and can generate masks for them within images – including objects and image types it never encountered during training – there are multiple upsides to integrating SAM into the Kognic workflow.
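
To make that concrete: with Meta’s open-source segment-anything package, SAM can propose masks for everything in an arbitrary image with no prompts at all. Here is a minimal sketch – the checkpoint name comes from Meta’s release, and the image path is purely illustrative:

```python
import cv2
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

# Load a released SAM checkpoint ("vit_h" is the largest published variant).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

# Any RGB image works, including domains SAM never saw during training.
image = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # one dict per proposed region

# Each proposal carries a binary mask plus model-estimated quality scores.
for m in sorted(masks, key=lambda d: d["area"], reverse=True)[:5]:
    print(m["bbox"], m["predicted_iou"], m["stability_score"])
```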

A simple and clear example: when a common object appears in an image or sequence, a user simply marks a single point on the object, or draws a quick box as a coarse bound, and clicks. Boom. SAM returns a mask for the object, ready for confirmation and use within the Kognic App.
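
In terms of Meta’s open-source API, those two prompt types look roughly like this. This is a sketch rather than our integration code, and the coordinates and file names are purely illustrative:

```python
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# The frame the annotator is working on (HxWx3, RGB, uint8).
image = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# Prompt type 1: a single foreground click on the object.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[640, 360]]),  # (x, y) pixel the user clicked
    point_labels=np.array([1]),           # 1 = foreground, 0 = background
    multimask_output=True,                # candidates for ambiguous clicks
)
best_mask = masks[int(np.argmax(scores))]

# Prompt type 2: a quick, coarse box around the object (XYXY format).
masks, _, _ = predictor.predict(
    box=np.array([480, 220, 800, 520]),
    multimask_output=False,
)
```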

[Screen recording: SAM integration in the Kognic App]

Thanks to our platform’s flexibility in quickly integrating models that provide new and unique value, our engineering and product teams jumped on the opportunity to plug in SAM and build a feature-level integration within our core Kognic App experience. The App UI simply sees SAM as another source, and with smart pre-compute processes to address dataset sizing concerns, round-trip SAM calls average 200 ms. The predictions feel nearly instant.
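
While the details of our production service aren’t covered here, the pre-compute idea maps naturally onto how SAM itself is structured: a heavy, prompt-independent image encoder plus a lightweight prompt encoder and mask decoder. A minimal sketch of the pattern, assuming the open-source package and a hypothetical per-frame cache:

```python
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictors: dict[str, SamPredictor] = {}  # hypothetical per-frame cache

def precompute(frame_id: str, image: np.ndarray) -> None:
    # Heavy, prompt-independent step: run the ViT image encoder once per
    # frame, before the user even opens it; the embedding stays cached
    # inside the predictor.
    p = SamPredictor(sam)
    p.set_image(image)
    predictors[frame_id] = p

def mask_from_click(frame_id: str, x: int, y: int) -> np.ndarray:
    # Cheap, per-prompt step: only the prompt encoder and mask decoder
    # run here, which is why sub-second round trips are feasible.
    masks, scores, _ = predictors[frame_id].predict(
        point_coords=np.array([[x, y]]),
        point_labels=np.array([1]),
        multimask_output=True,
    )
    return masks[int(np.argmax(scores))]
```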

The SAM integration joins our set of Machine Assisted Tools, such as our feature for automatically generating accurate 3D cuboids from simple user prompts (a favorite across the Annotator community). With SAM covering the 2D side, Kognic now provides a comprehensive set of options driven by a promptable framework.

So, what does this mean?

Significant efficiency gains. SAM is both fast and accurate, even when dealing with complex or highly detailed objects. For all our customers developing perception systems, the largest single contributor to cost (and consequent time to market) is the time it takes to annotate. Labeling repeated, common objects from scratch makes no sense, yet many teams are still forced to do so.

Best of both worlds. As SAM provides quick segmentation of the common, repeated, larger objects within an image, the time saved can be directed at the smaller, more difficult objects that become a challenge in SAM’s downsampled latent space. This is where the Kognic toolset – including the Small Object Polygon tool – is wonderfully adept, and where workforce assignments can focus on the objects that need more time to define. And by spending more time getting the tough objects right, we can impact model performance more quickly.

To sum up, our platform retains the full editing capabilities needed to reach the performance-critical quality level demanded in automotive. And the best news: Kognic’s SAM integration is now available to all teams that use our platform.

Would you like to see SAM in action within our tool? Click the “Schedule a Demo” button at the top of our webpage. We would be very happy to show you!