
Nana Liu's team publishes new research in Physical Review A: Vulnerability of quantum classification to adversarial perturbations

On 22 June 2020, Physical Review A published new research on adversarial quantum learning by Prof. Nana Liu of the Institute of Natural Sciences, Shanghai Jiao Tong University, and her collaborator. The paper, titled 'Vulnerability of quantum classification to adversarial perturbations', was selected as an Editor's Suggestion and highlighted on PRA's official website.

Quantum algorithms running on quantum computers can be designed for classification problems, such as helping machines learn the difference between pictures of ants and cicadas from training examples. They are even more effective for quantum data, such as quantum states created in the laboratory. Machine learning on quantum computers also has important applications in security, such as helping machines learn whether a credit-card transaction is fraudulent or legitimate. We might think we are safe with these devices, but what happens when an adversary attacks the quantum computer so that the quantum learner makes a wrong prediction? How vulnerable are quantum computers to attacks on their machine-learning algorithms?

This work, led by Prof. Nana Liu, takes one of the first steps in the new area of adversarial quantum learning. It proves a general theoretical bound showing how vulnerable quantum machine-learning devices are to attackers in classification problems. The paper demonstrates that if we know nothing about the data we want to learn from, then the quantum computer becomes more and more vulnerable to attack as the size of the data grows. This vulnerability can compromise any speedup advantage that quantum devices can offer. However, if we are given more information about the data we want to classify, then this dependence on the size of the data greatly diminishes. This means we can protect ourselves by doing some of our learning before inputting the data into a quantum computer.
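The intuition that classifiers become easier to fool as the data dimension grows can be illustrated with a minimal numerical sketch. This is not the paper's proof, and the linear classifier below is a hypothetical stand-in for a quantum one: for points drawn uniformly on a high-dimensional sphere, the typical distance to a fixed decision boundary shrinks as the dimension grows, so an ever-smaller perturbation suffices to flip the predicted label.

```python
import numpy as np

rng = np.random.default_rng(0)

def typical_margin(d, n_samples=2000):
    """Median distance of random unit vectors to the hyperplane x . w = 0.

    A linear classifier sign(x . w) serves as a toy decision boundary;
    the margin is the perturbation size an adversary needs to flip the label.
    """
    w = np.zeros(d)
    w[0] = 1.0                                      # fixed unit normal of the boundary
    x = rng.normal(size=(n_samples, d))
    x /= np.linalg.norm(x, axis=1, keepdims=True)   # uniform samples on the unit sphere
    return np.median(np.abs(x @ w))

# The typical margin shrinks roughly like 1/sqrt(d) as the dimension grows.
for d in [4, 64, 1024]:
    print(d, typical_margin(d))
```

The shrinking margin reflects concentration of measure on the sphere: in high dimensions almost all points lie close to any fixed decision boundary, which is the kind of dimension dependence the paper's bound makes precise for quantum classifiers.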

Paper link: https://journals.aps.org/pra/abstract/10.1103/PhysRevA.101.062331