
Giving self-driving cars the gift of sight


Chip companies are trying to make sensors and software that see the world as humans do — only better

This is how a sensor commonly used in self-driving cars saw visitors to Ford's booth at CES 2017. (Matthew Braga/CBC News)

When humans drive, they typically do it with two eyes. When cars drive themselves, they currently rely on several.

But while self-driving cars may be able to see more of the road than humans, the trick is teaching them to interpret all of that data as well as — and eventually, better than — humans can.

At CES, the annual consumer technology show held in Las Vegas last week, a handful of companies were attempting to bring us closer to this reality. Some make the sensors and chips that underpin many of the devices we use each day; others are not yet household names.

There are companies such as Velodyne and Quanergy, which have been developing LIDAR (Light Detection and Ranging) sensors that use lasers to measure the distance, shape, and size of objects near and far.
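
At its core, that distance measurement is a time-of-flight calculation: the sensor fires a laser pulse, times how long the reflection takes to return, and halves the round trip. As a rough sketch of the arithmetic only (the function name here is invented for illustration, not any vendor's API):

    # Time-of-flight: a laser pulse travels to the object and back, so
    # the one-way distance is half the round trip at the speed of light.
    SPEED_OF_LIGHT_MPS = 299_792_458.0

    def distance_metres(round_trip_seconds: float) -> float:
        return SPEED_OF_LIGHT_MPS * round_trip_seconds / 2.0

    # A reflection arriving about 200 nanoseconds after the pulse fired
    # puts the object roughly 30 metres away.
    print(distance_metres(200e-9))  # ~29.98

Commercial sensors repeat this measurement hundreds of thousands of times a second across arrays of spinning lasers, which is how they build up full 3D views of a scene.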

Others, such as Mobileye, make cameras that have traditionally been used for rear- and side-view camera systems, but are increasingly used for object detection and 3D mapping as well. (Its sensors are perhaps best known as the eyes behind electric automaker Tesla's Autopilot hands-free driving feature.)

The Mobileye booth at CES, the annual consumer electronics show in Las Vegas. The company makes sensors commonly used for rear-view camera systems, but has also lent its expertise to Intel and BMW's still-in-development self-driving car. (Matthew Braga/CBC News)

And then there are the chip companies — Intel, NVIDIA, and even BlackBerry — that are partnering with more traditional car companies to provide the processing power and next-generation internet connectivity required to make self-driving cars a reality.

James Kuffner, the chief technology officer of Toyota's artificial intelligence and self-driving vehicle research institute, said in an interview at CES that there's room for current sensors to improve in accuracy, range and cost.

"And I think we're going to see a dramatic shift in the next ten years as this technology starts to mature."

Cameras, maps, and semantic cues

In the near term, both Mobileye and Toyota have plans to leverage data from existing car cameras in an attempt to build more up-to-date 3D maps of the world.

By harvesting images from cars that are already on the road, or soon to be on the road, the two companies hope that they can more cheaply and regularly crowdsource the data required for self-driving vehicles to safely navigate ever-changing urban areas.

Erez Dagan, Mobileye's senior vice president of advanced development and strategy, says the company's discussions are in "very advanced stages with multiple car manufacturers" to not only harvest data from their cars' cameras, but share that data with other automakers globally.

But longer term, cameras could be used for more nuanced types of sensing. Another area that still requires work is the understanding of semantic cues — all of the human behaviours that are easily recognizable to drivers, but that computers don't fully understand.

A sensor made by LIDAR company Velodyne is being used in Ford's fleet of self-driving test vehicles. (Matthew Braga/CBC News)

Today, cues such as the posture of a pedestrian, a cyclist's gaze or the direction a parked car's wheels are pointing are difficult for self-driving cars to discern and understand the way humans do.

"Car sensor systems are not yet sensitive enough to be able to interpret body language. But we do," explained Melissa Cefkin, Nissan's in-house design anthropologist and principal researcher. "We can tell that somebody standing like this" — here Cefkin mimes a pedestrian looking down at her phone — "is not about to run across the street."

"So part of what we would like to do is continue to bring these insights," she added. "[But] we're kind of ahead of what the technology is capable of in terms of what we would try to teach it."

Long-range lasers

While camera sensors are good for some tasks, most companies opt for LIDAR when it comes to detecting size, shape, depth and speed — everything from a car's proximity to nearby cyclists to the velocity of a wayward soccer ball.

What companies want is "additional range, they want additional resolution, they want additional quality within the data," said Michael Jellen, president and chief operating officer of Velodyne, perhaps the best-known LIDAR company. Its sensors have been used by companies ranging from Google to Ford.

"A human can see all the things some of the time, or some of the things all the time, but it's never going to see all the things, all the time. And that's what our perception is going to enable you to do," said Jellen.

Inside the trunk of a self-driving Ford Fusion hybrid, which is packed full with computer equipment and high-end graphics chips from NVIDIA. (Matthew Braga/CBC News)

Quanergy uses a type of LIDAR sensor that the company says can be re-focused on-the-fly to identify objects with greater accuracy and detail. "You can say, 'Okay, this is floating like a plastic bag, so I'm going to drive through it,'" explained Louay Eldada, the company's co-founder and CEO, or "'This is a moose, and I'm not going to mess with that.'"
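
In software terms, Eldada is describing a classify-then-decide step: once the perception system has labelled an object, a simple rule determines whether it is safe to ignore. A toy sketch of that logic, with labels and policy invented purely for illustration:

    # Invented labels and rules, only to illustrate the decision Eldada
    # describes; no real perception stack is this simple.
    HARMLESS = {"plastic bag", "leaves", "newspaper"}

    def plan(obstacle_label: str) -> str:
        if obstacle_label in HARMLESS:
            return "drive through"
        return "brake and avoid"  # a moose: not worth messing with

    print(plan("plastic bag"))  # drive through
    print(plan("moose"))        # brake and avoid

The hard part, of course, is everything upstream of that rule: producing a label, with high confidence, from a cloud of laser returns.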

The company says that it is working with Mercedes, Hyundai-Kia and Renault-Nissan, in addition to other manufacturers it declined to name.

But one perennial challenge is weather.

"When we think about what sensors and software it would take to drive in snow, or heavy rain, we're not there yet. And no one really is," says Kuffner of Toyota. "But I think there's a lot of sensing modalities you can explore that might be able to enable that."

Driving data

Another important piece of the self-driving puzzle is what to do with all of the data these sensors produce. 

Chipmaker Intel's current focus is on building next-generation computer systems that can ingest "enormous amounts of data from a variety of sensors," explained Bridget Karlin, managing director of the company's Internet of Things Group. "And not just vision sensors, not just cameras. We are looking at motion, we're looking at temperature, LIDAR, radar."
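
Combining those streams typically means stamping every reading against a shared clock, so downstream software can reason about one coherent snapshot of the world. A minimal, hypothetical sketch of what such a fused frame might carry (the field names and units are assumptions, not Intel's design):

    from dataclasses import dataclass, field

    # Hypothetical fused snapshot of the sensor types Karlin lists.
    @dataclass
    class SensorFrame:
        timestamp_s: float                  # shared clock, in seconds
        camera_image: bytes = b""           # vision
        lidar_points: list = field(default_factory=list)  # (x, y, z), metres
        radar_tracks: list = field(default_factory=list)  # (range m, speed m/s)
        imu_accel: tuple = (0.0, 0.0, 0.0)  # motion, m/s^2
        outside_temp_c: float = 0.0         # temperature

    frame = SensorFrame(timestamp_s=12.5, outside_temp_c=-5.0)
    print(frame)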

Intel, in partnership with BMW and Mobileye, says it plans to have 40 autonomous vehicles on the road — with a human watching behind the wheel — in the second half of 2017, and a fully driverless car on the road by 2021.

Hyundai unveiled a version of its IONIQ vehicle at CES with less-expensive self-driving technology aimed at consumers, but no release date. (Matthew Braga/CBC News)

Competing chipmaker NVIDIA announced a similar platform at the show, and a partnership with Audi to have "advanced AI cars on the road starting in 2020," according to a press release.

And BlackBerry, once one of the world's most well-known smartphone makers, has been trying to re-position itself into a similar role with its automotive operating system, QNX, which the company says can be found in more than 60 million vehicles worldwide.

"When you think of the sensors, they're like your eyes and ears. Software will be the brains, powered by very high performance computer platforms," said Grant Courville, a senior director at QNX. "We want to provide the software platform, not only for those autonomous driving safety systems, but in fact, for the whole car."

ABOUT THE AUTHOR

Matthew Braga

Senior Technology Reporter

Matthew Braga is the senior technology reporter for CBC News, where he covers stories about how data is collected, used, and shared. You can contact him via email at matthew.braga@cbc.ca. For particularly sensitive messages or documents, consider using Secure Drop, an anonymous, confidential system for sharing encrypted information with CBC News.