"Using Quantum Computing to Protect AI From Attack"

Despite their successes and growing adoption, Machine Learning (ML)-based frameworks remain highly vulnerable to adversarial attacks, in which maliciously tampered data causes them to fail in unexpected ways. A study by researchers at the University of Melbourne suggests that quantum ML models may be more resistant to adversarial attacks launched from classical computers. Adversarial attacks work by identifying and exploiting the features an ML model relies on. However, the features used by generic quantum ML models are inaccessible to classical computers and are therefore hidden from an adversary equipped with only classical computing resources. According to the researchers, the same ideas could also be used to detect adversarial attacks by running inputs through both classical and quantum networks and comparing their outputs. This article continues to discuss the potential protection of Artificial Intelligence (AI) from attacks using quantum computing.
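As a concrete illustration of the attack mechanism the study refers to, the sketch below shows the Fast Gradient Sign Method (FGSM), one common classical adversarial attack: it uses the gradient of the loss with respect to the input to find the features the model relies on, then perturbs them. This is a minimal sketch assuming a PyTorch classifier; the article does not name a specific framework or attack.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    """Perturb input x so the model's loss on the true label increases."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that most increases the loss, bounded by epsilon.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep inputs in a valid range
```

The detection idea mentioned above could, under the same assumptions, look like the following: run an input through both a classical network and a quantum ML model and flag cases where they disagree, on the premise that a classically crafted perturbation transfers poorly to the quantum model. Here `quantum_model` is a hypothetical stand-in; the article does not specify how the quantum side is implemented.

```python
def flag_adversarial(classical_model, quantum_model, x, threshold=0.5):
    """Flag inputs on which the two models' predictions diverge."""
    p_classical = torch.softmax(classical_model(x), dim=-1)
    p_quantum = torch.softmax(quantum_model(x), dim=-1)
    # Large divergence between the two predictive distributions is treated
    # as evidence of classical adversarial tampering.
    divergence = (p_classical - p_quantum).abs().sum(dim=-1)
    return divergence > threshold
```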

The University of Melbourne reports "Using Quantum Computing to Protect AI From Attack"