Nervous humans are GM’s secret weapon for self-driving cars
Popular Science
I slide behind the wheel of an all-electric Cadillac Lyriq and fasten my seat belt. Blue skies and fluffy clouds surround me. Ironically, I can see a sign for a BP gas station nearby, advertising per-gallon prices that are temptingly low, but since I’m driving an EV, it’s a moot point. It’s all a mirage, anyway. The Lyriq I’m “driving” is actually a vehicle buck, a physical representation of the car, in GM’s research lab in Warren, Michigan. The road, the clouds, even the BP sign are all digitally rendered, beamed down from seven projectors onto a curved screen. The seat belt and controls, though, are real.
Sensors are affixed to my head and hands, and a pulse oximeter is clipped to one of my fingers; each sends data about my reactions. My eye movements, heartbeat, and even perspiration signal how I’m feeling during the driving simulation; analysts then use AI to break down the results and improve GM’s autonomous driving technology.
Luckily, I don’t get motion sick very easily. That is a consideration for some test subjects, as the virtual reality environment can throw off sensitive stomachs.
Every day, GM runs millions of simulations that roughly mimic tens of thousands of human driving days. Engineers constantly tweak the system, throwing new (virtual) wrenches into the works to keep Super Cruise as up to date as possible with the oddest and unlikeliest scenarios. Because truth on the road is often stranger than fiction.
Virtual wind tunnels, AI, and more cost-effective EV batteries
At GM headquarters, where the massive company represents about 4 percent of Michigan’s entire GDP, there’s more technology than anyone might expect from this legacy automaker. GM was founded in 1908, but the company’s technology and research are futuristic and led by visionary people. Case in point: Linda Cadwell Stancin is the executive director and head of GM’s research and development, leading technology advancement in autonomous driving. Her background is mostly in aerospace engineering and technology, including more than 20 years between Boeing and Lockheed Martin.
In another building, GM is building LMR (lithium manganese-rich) batteries, which have lower energy density but cost less and use fewer rare minerals, such as cobalt. Across campus, Cadillac tests F1 technology. Technicians poke and prod every component at test benches attached to custom boards with thousands of nodes. Others study atomistic simulations that let them model changes in battery chemistry and almost instantly understand the impact.
Of course, artificial intelligence is infused into everything GM does on the research and development side. For example, the company developed an AI-powered virtual wind tunnel that offers instantaneous aerodynamic drag predictions. The reason this is important, says Alok Warey, a lab group manager for R&D, is that aerodynamic drag consumes a lot of battery energy in EVs—up to 50 percent, he says.
“To predict the drag, you usually have to run very computationally intensive simulations,” Warey says. “Once the design is complete, computational fluid dynamics engineers run simulations that could take weeks to return feedback to the designer. Our AI-powered virtual wind tunnel allows us to get real-time feedback on aerodynamic drag.”
Detecting human emotions and stress
GM plans to debut its hands-off, eyes-off driving in 2028 with the all-electric Cadillac Escalade IQ. During my test run, senior researcher Akilesh Rajan sat in the Lyriq with me to walk me through the process. When it was time to test the eyes-off section, Rajan pointed me to a tablet in the center console with the game Candy Crush already cued up. I didn’t know how to play Candy Crush, but I learned on the fly in the name of science. The goal, Rajan told me, was to find out how best to transition the driver from eyes-off back to eyes-on. It takes a mental shift, and GM wants all the data it can gather about the best ways to enact that shift.
After my session in the virtual Lyriq, I joined the rest of the team in the control room. Various charts tracked my eye movements as I scanned my field of view, along with how much steering input I used, whether I activated the turn signals, and how hard I pressed the brake. Plus, the team could see the biosensing data gathered by GM’s wearable tech.
As Omer Tsimhoni, a GM technical fellow for information display and optics research, dissects the data for me, he throws out terms like “pupillometry” (a non-invasive, objective technique for measuring pupil size, reactivity, and velocity) and “galvanics” (essentially, electric current). It’s so advanced that even pupil dilation is measured by a machine learning algorithm. GM is also using Emotional AI, which predicts, analyzes, and interprets human emotions using data from facial expressions and voice tone.
Basically, Tsimhoni says, the testing acts as a lie detector; in fact, the sensor setup is very similar to what a polygraph would include. If a test subject fills out the survey and says, “Everything was great and I liked everything,” but the data shows they were stressed, unhappy, or confused, the team will know how they truly felt. The data doesn’t lie.