The Color Point Cloud - greater clarity for your dataset.

Our product team is excited to introduce a new feature in our core toolset: Color Point Clouds from images. This Color Point Cloud mode transforms your point cloud into a colored view based on the selected camera image. Using the sensor calibrations associated with the selected sensors, together with additional ego vehicle data, any camera image can now be projected onto the point cloud in full color.

Central to Kognic’s platform mission of accelerating engineering velocity for both internal and external development teams, we’ve prioritized this update for a few key reasons:

  • It allows users to better understand a 3D scene using 2D information in a new way. This highlights our platform’s leading support for multi-sensor fusion and its value in enabling our platform users to “see” a truer picture of real-world dynamics.
  • It accelerates the detection and understanding of calibration issues – such as time-sync issues – for dataset engineers and annotators using our platform.

How does it work? Powered by live computation from the source 2D image, the new mode picks colors from the selected active camera. And should you switch the selected camera view, the coloring is recomputed for that camera on the fly.
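The underlying idea can be sketched with a standard pinhole-camera projection: each lidar point is transformed into the camera frame using the calibration extrinsics, projected through the intrinsics, and then colored by the pixel it lands on. This is a minimal illustrative sketch, not Kognic’s implementation; the function name, the simple gray fallback for out-of-view points, and the undistorted pinhole model are all assumptions.

```python
import numpy as np

def colorize_point_cloud(points, image, K, R, t):
    """Color 3D points by projecting them into a camera image (illustrative sketch).

    points: (N, 3) lidar points in the ego/world frame
    image:  (H, W, 3) RGB camera image
    K:      (3, 3) camera intrinsic matrix
    R, t:   extrinsics mapping points into the camera frame
    Returns an (N, 3) uint8 color array; points outside the
    camera's view fall back to gray (an assumed convention).
    """
    h, w = image.shape[:2]
    cam = points @ R.T + t                 # transform into the camera frame
    in_front = cam[:, 2] > 0               # keep only points ahead of the camera
    z = np.clip(cam[:, 2:3], 1e-6, None)   # avoid division by zero
    uv = (cam @ K.T)[:, :2] / z            # project and perspective-divide
    u = uv[:, 0].astype(int)
    v = uv[:, 1].astype(int)
    visible = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = np.full(points.shape, 128, dtype=np.uint8)  # gray fallback
    colors[visible] = image[v[visible], u[visible]]      # sample pixel colors
    return colors
```

Switching the active camera simply means re-running this projection with that camera's calibration, which is why the coloring can be recomputed on the fly.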


Our platform’s live computation can color many millions of points in an aggregated view, tremendously sharpening the point cloud image and revealing further detail that may be of interest. And since working with 3D point clouds can prove difficult at first, the Color Point Cloud mode allows for faster recognition of the objects being mapped – valuable when orienting and expanding your annotation workforce.

One use case that benefits from this new color mode is lane markings. In rainy scenes the reflectivity of lane markings can be greatly diminished, making intensity-based views less valuable. With this new mode enabled, however, color allows better discernment of the lane markings within the overall image. Getting features like this to flawlessly compute across many sources in real time (in some cases encompassing nine camera feeds) is a testament to Kognic’s efficient development and some very smart programming.

Learn more about this and our wider capabilities in the Kognic toolset.
Contact our team today to learn more.