- Beamr Imaging (NASDAQ: BMR) launches a GPU-accelerated video compression solution for autonomous vehicles, reducing video storage by up to 50% while maintaining quality.
- The new technology addresses critical storage and cost challenges in autonomous vehicle development, as a single vehicle generates terabytes of data daily.
- Beamr's solution is designed to work with real-world ADAS recordings and synthetic video generated by platforms like NVIDIA Omniverse.
Beamr Imaging Ltd. (NASDAQ: BMR), an industry leader in video optimization technologies, has announced the launch of its GPU-accelerated video compression solution for the autonomous vehicle market. The solution will be showcased at NVIDIA GTC Paris during the Viva Technology 2025 event, scheduled for June 10-12, 2025.
The company's proprietary Content-Adaptive Bitrate (CABR) technology, powered by NVIDIA accelerated computing, reduces video storage requirements by up to 50% without compromising visual quality or the details essential for training autonomous driving models. This addresses one of the most pressing infrastructure challenges facing autonomous vehicle developers: managing the terabytes of video data a single vehicle can generate each day.
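To put the claimed reduction in perspective, the back-of-the-envelope sketch below estimates the storage saved by halving retained footage. The fleet size, per-vehicle data volume, retention window, and storage price are illustrative assumptions, not figures from the announcement; only the "terabytes daily" per vehicle and "up to 50%" reduction come from the source.

```python
# Illustrative storage-savings estimate; all inputs below except the ~50%
# reduction and the "terabytes per day per vehicle" order of magnitude are
# assumptions made for this example.

TB_PER_VEHICLE_PER_DAY = 4        # assumed: "terabytes daily" per vehicle
FLEET_SIZE = 50                   # assumed test-fleet size
RETENTION_DAYS = 365              # assumed retention window
COST_PER_TB_MONTH_USD = 20.0      # assumed object-storage price
COMPRESSION_SAVINGS = 0.50        # "up to 50%" reduction cited for CABR

raw_tb = TB_PER_VEHICLE_PER_DAY * FLEET_SIZE * RETENTION_DAYS
compressed_tb = raw_tb * (1 - COMPRESSION_SAVINGS)

def monthly_cost(tb: float) -> float:
    """Monthly storage cost for a given volume at the assumed $/TB-month rate."""
    return tb * COST_PER_TB_MONTH_USD

print(f"Raw footage retained:       {raw_tb:,.0f} TB")
print(f"After ~50% compression:     {compressed_tb:,.0f} TB")
print(f"Monthly storage cost saved: ${monthly_cost(raw_tb - compressed_tb):,.0f}")
```

Under these assumptions the fleet retains roughly 73,000 TB of raw footage per year, so a 50% reduction removes about 36,500 TB from storage, with proportional savings in transfer bandwidth.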
The solution has demonstrated its effectiveness in industry benchmark tests, maintaining high machine learning performance while delivering significant storage savings. It supports both real-world Advanced Driver Assistance Systems (ADAS) recordings and synthetic video generated by NVIDIA platforms such as Omniverse and Cosmos, underscoring its versatility and ease of integration into existing autonomous vehicle pipelines.
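The announcement does not describe Beamr's benchmark methodology, but a team evaluating any recompressed ADAS footage could run a simple fidelity gate before feeding it to training. The sketch below is a hypothetical check, assuming an original clip and its recompressed counterpart share resolution and frame count; the file names and PSNR threshold are placeholders.

```python
# Hypothetical per-frame fidelity check between an original ADAS clip and its
# recompressed counterpart, using OpenCV. Not Beamr's benchmark procedure.
import cv2

ORIGINAL = "adas_clip_original.mp4"   # hypothetical source recording
COMPRESSED = "adas_clip_cabr.mp4"     # hypothetical recompressed output
PSNR_FLOOR_DB = 40.0                  # assumed acceptance threshold

src, out = cv2.VideoCapture(ORIGINAL), cv2.VideoCapture(COMPRESSED)
frames, below_floor = 0, 0
while True:
    ok_a, frame_a = src.read()
    ok_b, frame_b = out.read()
    if not (ok_a and ok_b):
        break
    frames += 1
    # Per-frame PSNR between the original and recompressed frame.
    if cv2.PSNR(frame_a, frame_b) < PSNR_FLOOR_DB:
        below_floor += 1

src.release()
out.release()
print(f"Checked {frames} frame pairs; {below_floor} fell below {PSNR_FLOOR_DB} dB PSNR")
```

A more complete evaluation would also compare downstream model metrics (for example, detection accuracy on original versus compressed frames), which is the kind of machine-learning performance the benchmark results refer to.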
Sharon Carmel, CEO of Beamr, emphasized the technology's role in reducing operational costs and meeting the growing video storage demands of the autonomous vehicle industry. By leveraging GPU acceleration that fits within existing computing infrastructure, Beamr's solution delivers substantial cost reductions in storage, compute, and bandwidth, easing key constraints in AI development pipelines.