Jaws dropped at the autonomous driving advances unveiled at the Consumer Electronics Show, with Daimler in particular raising eyebrows with a $570 million investment that it says will get it close to full automation within a decade.
But things are not as rosy as they sound, as it is starting to dawn on the OEMs that building a fully autonomous vehicle is too big a task for one company. Fully autonomous vehicles require complex engineering in mechanical design and artificial intelligence, in addition to the neural networks and computer vision technologies that process data, recognize objects, and adapt to road systems that don’t always follow easy-to-program rules.
There are noticeable chinks in the armor: several players in the industry have already announced delays in their autonomous driving programs, in part because development has proven more expensive than estimated and is not something that can be done entirely in-house.
Ronny Cohen, CEO and co-founder of Israel-based perception systems startup VAYAVISION, contended that these companies are in a delicate position: keen to partner with specialized outside firms, while also looking to maintain their position as technology owners and integrators.
“This is where VAYAVISION comes in, as this is close to our business model. We are in a position where we can provide the best environmental model in the market, and thus this is a trend that is positive from our point of view,” said Cohen.
At CES, VAYAVISION unveiled its VAYADrive 2.0, autonomous vehicle perception software based on raw data fusion. “We are doing fusion and perception. On the fusion side, we cover LIDARs and RADARs, and we fuse them to get a perfectly calibrated and synchronized uniform representation of the world called the RGB-D image,” said Cohen. “We are also capable of taking the 3D samples of LIDAR and RADAR, which are usually low resolution, and upsampling them to the resolution of the camera.”
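To make the upsampling idea concrete, here is a minimal sketch of the general technique: project sparse LIDAR points into the camera frame, then densify the resulting depth samples to full image resolution. The pinhole intrinsics and nearest-neighbor interpolation here are illustrative assumptions, not VAYAVISION's actual algorithm.

```python
# Illustrative sketch of LIDAR-to-camera fusion: project sparse 3D points into
# the image plane, then upsample the sparse depth to full camera resolution.
# Pinhole intrinsics and nearest-neighbor densification are assumptions made
# for illustration; they are not VAYAVISION's actual pipeline.
import numpy as np
from scipy.interpolate import griddata

def project_points(points_xyz, K):
    """Project Nx3 LIDAR points (camera frame, z forward) to pixel coords."""
    valid = points_xyz[:, 2] > 0.1        # keep points in front of the camera
    p = points_xyz[valid]
    uv = (K @ p.T).T                      # apply pinhole intrinsics
    uv = uv[:, :2] / uv[:, 2:3]           # perspective divide -> (u, v)
    return uv, p[:, 2]                    # pixel coords and their depths

def densify_depth(uv, depths, height, width):
    """Upsample sparse depth samples to a dense per-pixel depth map."""
    grid_v, grid_u = np.mgrid[0:height, 0:width]
    return griddata(uv[:, ::-1], depths, (grid_v, grid_u), method="nearest")

# Example: fuse a synthetic LIDAR sweep with a 480x640 camera frame
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
points = np.random.uniform([-5, -2, 2], [5, 2, 40], size=(2000, 3))
uv, depths = project_points(points, K)
depth_map = densify_depth(uv, depths, 480, 640)   # the "D" in RGB-D
```

Pairing the dense depth map with the camera's color channels yields the kind of per-pixel RGB-D representation Cohen describes.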
Though detecting and classifying objects on the road is the primary job of a perception engine, VAYADrive 2.0 also tracks and monitors them, especially objects classified as critical, such as cars, pedestrians, trucks, and motorcycles. This lets the engine build a full environmental model that is fed to the system in real time.
“With VAYADrive 2.0, we have progressed in two aspects – performance and the level of integration. There is improved performance, accuracy, and tracking. Every object we track gets an ID, and we can monitor each one separately, and know exactly where each one is going by re-detecting every single time. This is one of the ways through which we get to higher levels of detection, frame after frame,” said Cohen.
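A minimal tracking-by-detection sketch can illustrate what per-object IDs with re-detection every frame might look like. The greedy nearest-centroid association below is an assumption made for illustration, not VAYAVISION's association method.

```python
# Minimal tracking-by-detection sketch: every confirmed object keeps a stable
# ID while being re-detected each frame. Greedy nearest-centroid matching is
# an illustrative assumption, not VAYAVISION's actual method.
from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    centroid: tuple           # (x, y) position in the environmental model
    label: str                # e.g. "car", "pedestrian", "truck", "motorcycle"

class Tracker:
    def __init__(self, max_dist=2.0):
        self.tracks = []
        self.next_id = 0
        self.max_dist = max_dist

    def update(self, detections):
        """detections: list of ((x, y), label) from the current frame."""
        unmatched = list(detections)
        for track in self.tracks:
            # Re-detect: find the closest same-label detection in this frame
            best, best_d = None, self.max_dist
            for det in unmatched:
                (x, y), label = det
                d = ((x - track.centroid[0])**2 + (y - track.centroid[1])**2) ** 0.5
                if label == track.label and d < best_d:
                    best, best_d = det, d
            if best is not None:
                track.centroid = best[0]     # object keeps its ID across frames
                unmatched.remove(best)
        for centroid, label in unmatched:    # new objects get fresh IDs
            self.tracks.append(Track(self.next_id, centroid, label))
            self.next_id += 1
        return self.tracks                   # stale-track pruning omitted
```

Because each track carries a persistent ID, downstream logic can reason about where each individual object is heading, frame after frame.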
Integration levels have been steadily improving too: the environmental model is more comprehensive and functionally safer. VAYADrive 2.0 now detects everything in parallel across two different sets of algorithms, one using AI and deep learning neural networks, where detection is done through classification, and the other using computer vision and 3D analytics.
Though neural networks are good at classification, they cannot be expected to detect something that was not part of their training data. This is where computer vision becomes critical, as it can identify any physical object on the road, even unexpected obstacles like cargo sticking out of a truck or a bird flying out of the blue towards the vehicle. Fusing the data from both streams bolsters the final environmental model and improves its reliability.
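One simple way to picture the fusion of the two streams: take the classifier's labeled detections, then keep any geometric detection that the classifier missed as an unlabeled obstacle. The overlap test and threshold below are illustrative assumptions, not VAYAVISION's fusion logic.

```python
# Sketch of fusing two detection streams: a neural-network classifier that
# only knows trained classes, and a geometric 3D detector that flags any
# physical obstacle. The union/overlap logic is an illustrative assumption.
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def fuse_detections(nn_dets, geo_dets, thresh=0.5):
    """nn_dets: [(box, class_label)]; geo_dets: [box] from 3D analytics."""
    fused = list(nn_dets)
    for box in geo_dets:
        # Keep geometric detections the classifier missed: unexpected
        # obstacles still enter the environmental model, just unlabeled.
        if all(iou(box, b) < thresh for b, _ in nn_dets):
            fused.append((box, "unclassified_obstacle"))
    return fused

# Example: the net misses cargo sticking out of a truck; geometry catches it
nn = [((10, 10, 50, 40), "truck")]
geo = [(10, 10, 50, 40), (48, 15, 70, 30)]   # second box: protruding cargo
print(fuse_detections(nn, geo))
```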
In essence, VAYADrive 2.0 provides autonomous vehicles with crucial information on an object’s size and shape, and on how obstacles move on the road, so the vehicle can steer clear of them. This increases detection accuracy, Cohen remarked, and reduces the high rate of false-positive alarms the industry is grappling with today.