Research trio advocates more work on AI security
What if someone hacked a traffic sign with a few well-placed dots, so your self-driving car did something dangerous, such as going straight when it should have turned right?
Don't think it's unlikely (it's already happened), and an Okanagan College professor and his colleagues from France are among those saying that researchers need to invest more effort in system design and security to deal with such attacks.
A research paper, co-authored by Okanagan College Computer Science Professor Dr. Youry Khmelevsky and presented recently at an international conference held by the Institute of Electrical and Electronics Engineers (the world's largest technical professional society), summarizes the research that has already been done into the threats and dangers associated with the machine-learning processes that underpin autonomous systems, such as self-driving cars.
Their paper also points to the need to take research and tool development for "deep learning" to a new level. (Deep learning, or DL, is what makes facial recognition, voice recognition and self-driving cars possible. DL systems mimic neural networks, like those in your brain, taking in data and processing it based on information-processing and communication patterns.)
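To make the "well-placed dots" idea concrete, here is a minimal sketch of how such an attack works in principle, using the fast gradient sign method on a toy linear classifier. The weights, inputs and the "sign classifier" itself are illustrative assumptions, not anything from the trio's paper; real attacks target deep networks and physical objects, but the principle is the same: a small, targeted nudge to the input flips the model's decision.

```python
import numpy as np

# Hypothetical toy "sign classifier": score > 0 means turn right.
w = np.array([0.9, -0.4, 0.3])   # illustrative learned weights
b = -0.2

def predict(x):
    return "turn right" if x @ w + b > 0 else "go straight"

x = np.array([0.5, 0.2, 0.4])    # a clean reading of the sign
print(predict(x))                # -> turn right

# For a linear model, the gradient of the score with respect to x is just w,
# so stepping against sign(w) lowers the score as fast as possible per unit
# of perturbation: the fast gradient sign method in miniature.
epsilon = 0.2                    # small perturbation budget
x_adv = x - epsilon * np.sign(w)
print(predict(x_adv))            # -> go straight: the classifier is fooled
```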
The paper was authored by Dr. Gaétan Hains and Arvid Jakobsson (of the Huawei Parallel and Distributed Algorithms Lab at the Huawei Paris Research Centre) and Khmelevsky. "Safety of DL systems is a serious requirement for real-life systems and the research community is addressing this need with mathematically-sound but low-level methods of high computational complexity," notes the trio's paper. They point to the significant work still to be done on security, software and verification to ensure that systems relying on deep learning are as safe as they can be.
"It sounds very abstract," says Khmelevsky, "but it isn't. It's here today, whether it's in your car or in a device that recognizes your voice and commands."
"Deep Learning-based artificial intelligence has had immense success in applications like image recognition and is already implemented in consumer products,ā notes Jakobsson. āBut the power of these techniques comes at an important cost compared to āclassic algorithmsā: it is harder to understand why they work, and harder to verify that they work correctly. Before deploying DL based AI in safety critical domains, we need better tools for understanding and exhaustively exploring the behaviour of these systems, and this paper is a work in this direction."
Do Hains, Jakobsson and Khmelevsky have the answer to preventing hacks that could send your car straight ahead when it should go left? Not yet, but they are developing research proposals that could help ensure that your car and its artificial-intelligence systems don't get fooled.
"Safe AI is an important research topic attracting more and more attention worldwide," says Hains. "Dr. Khmelevsky brings software engineering expertise to complement my team's know-how in software correctness techniques. We expect to produce new knowledge and basic techniques to support this new trend in the industry."
Tags: Computer Science