This article is cross-posted from UC Berkeley's College of Engineering.

For your eyes only

August 10 | UC Berkeley's College of Engineering

 

Berkeley researchers have devised a practical way to keep data secure while training neural networks. (Photo by iStock)

Keeping sensitive data safe has sometimes come at the expense of speed when training machines to perform automated tasks like biometric authentication and financial fraud detection. Now, Berkeley researchers have solved this issue by devising a practical way to keep data secure while training neural networks.

In a study presented at the 2022 USENIX Security Symposium, Raluca Ada Popa, associate professor of electrical engineering and computer sciences, and her Ph.D. student, Jean-Luc Watson, described their innovative privacy-preserving approach to machine learning. They introduced a new platform, dubbed Piranha, that harnesses the speed of graphics processing units (GPUs) to train a realistic neural network on encrypted data for the first time.

Neural networks make good machine-learning models because they act like the neurons in our brains, recognizing patterns in data. But neural networks are also very complex, and until now, training them on encrypted data has required an infeasible amount of computing power.

EECS professor Raluca Ada Popa poses for a portrait in her office at Soda Hall in Berkeley, Calif., on Tuesday, Jan. 21, 2020. (Photo by Adam Lau/Berkeley Engineering)

“Even though people have wanted to do this for at least 20 years, training a realistic neural network model while keeping the data encrypted has not been practical,” said Popa. “The key was to make GPUs work with encrypted computation.”

GPUs can process large amounts of data simultaneously, making them ideal for high-performance computing and deep-learning applications. While they can be used to quickly train neural networks on plaintext data, they do not work with encrypted data: encrypted computation operates on integers rather than the floating-point numbers GPUs are designed for, and it accesses memory in non-standard ways.
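
To give a rough sense of that integer requirement, here is a minimal sketch, not drawn from Piranha's code, of how secure-computation systems commonly encode floating-point values as fixed-point integers before operating on them. The 16-bit fractional scale and the function names are assumptions made for this example.

```python
# Illustrative sketch (not Piranha's actual code): encoding floats as
# fixed-point integers, the representation secure-computation protocols
# typically operate on. The 16-bit scale is an arbitrary example choice.
import numpy as np

SCALE_BITS = 16          # number of fractional bits (assumed for illustration)
SCALE = 1 << SCALE_BITS  # 2^16

def encode(x: np.ndarray) -> np.ndarray:
    """Map floats to 64-bit fixed-point integers."""
    return np.round(x * SCALE).astype(np.int64)

def decode(x: np.ndarray) -> np.ndarray:
    """Map fixed-point integers back to floats."""
    return x.astype(np.float64) / SCALE

def fixed_point_mul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Multiply two fixed-point values, rescaling to keep one SCALE factor."""
    return (a * b) >> SCALE_BITS

weights = encode(np.array([0.5, -1.25]))
inputs  = encode(np.array([2.0,  4.0]))
print(decode(fixed_point_mul(weights, inputs)))  # [ 1. -5.]
```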

Piranha addresses these issues with a three-layer architecture that allows applications to interoperate with any cryptographic protocol.
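
A minimal sketch of what such a layered design could look like appears below. It is not Piranha's actual API; every class and method name is hypothetical, and the protocol logic is reduced to local operations so the structure stays visible.

```python
# Hypothetical sketch of a layered design in the spirit the researchers
# describe: a device layer of integer GPU kernels, a protocol layer that
# expresses a secret-sharing scheme in terms of those kernels, and an
# application layer that trains a model against the protocol interface.
from abc import ABC, abstractmethod

class DeviceLayer:
    """Integer tensor operations that, in a real system, would run as GPU kernels."""
    def add(self, a, b):
        return a + b
    def matmul(self, a, b):
        return a @ b

class ProtocolLayer(ABC):
    """A secret-sharing protocol expressed through the device layer's integer ops."""
    def __init__(self, device: DeviceLayer):
        self.device = device
    @abstractmethod
    def secret_add(self, x_share, y_share): ...
    @abstractmethod
    def secret_matmul(self, x_share, y_share): ...

class AdditiveSharing(ProtocolLayer):
    """Toy stand-in for a two-party additive-sharing protocol."""
    def secret_add(self, x_share, y_share):
        return self.device.add(x_share, y_share)
    def secret_matmul(self, x_share, y_share):
        # A real protocol needs an interactive step between parties here;
        # it is omitted so only the layering is shown.
        return self.device.matmul(x_share, y_share)

def linear_forward(protocol: ProtocolLayer, x_share, w_share, b_share):
    """Application layer: a linear layer written only against ProtocolLayer,
    so it never names a specific protocol or a specific GPU kernel."""
    return protocol.secret_add(protocol.secret_matmul(x_share, w_share), b_share)
```

The point of the layering is that the model code never mentions a particular protocol, and the protocol code never mentions a particular kernel implementation, which is what lets the same GPU acceleration serve different cryptographic schemes.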

Jean-Luc Watson (Photo courtesy Jean-Luc Watson)

The researchers showed that they could train a realistic neural network, end to end, on encrypted data in a little over a day, a significant performance gain over previous approaches. They estimated that accomplishing the same task on Falcon, a state-of-the-art predecessor to Piranha, would have required 14 days, making it prohibitively expensive and impractical.

“With Piranha, we not only trained a realistic network for the first time with encrypted data, but we also improved performance by 16 to 48 times,” said Popa.

According to Watson, lead author of the study, Piranha delivers another advantage in addition to speed: Users do not need GPU expertise.

“All you have to do is bring your cryptographic protocols to Piranha and then program on top of it. That’s why we say it’s really a platform,” said Watson. “In this study, we implemented three protocols on top of Piranha to show that it works, but you could implement any protocols you wanted on top of it and still get the same acceleration benefits.”
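
Continuing the hypothetical sketch above (it reuses the DeviceLayer, ProtocolLayer, AdditiveSharing and linear_forward definitions from that block), a second protocol can be dropped in without touching the application code, which is the interchangeability Watson describes:

```python
import numpy as np

class ReplicatedSharing(ProtocolLayer):
    """Toy stand-in for a three-party replicated-sharing protocol."""
    def secret_add(self, x_share, y_share):
        return self.device.add(x_share, y_share)
    def secret_matmul(self, x_share, y_share):
        # The cross-party communication a real protocol needs is again omitted.
        return self.device.matmul(x_share, y_share)

device = DeviceLayer()
x = np.ones((2, 3), dtype=np.int64)   # secret-shared activations (toy values)
w = np.ones((3, 4), dtype=np.int64)   # secret-shared weights (toy values)
b = np.zeros((1, 4), dtype=np.int64)  # secret-shared bias (toy values)

# Same application-layer call, different protocol underneath.
for protocol in (AdditiveSharing(device), ReplicatedSharing(device)):
    print(type(protocol).__name__, linear_forward(protocol, x, w, b))
```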

For Popa and her team, which also included co-author and postdoctoral researcher Sameer Wagh, Piranha is the first step toward making secure computation accessible to a host of applications that involve training machines on sensitive data, from healthcare to cryptocurrency.

“Our contribution goes even beyond training the first realistic network more than an order of magnitude faster,” said Popa. “We showed how encrypted computation in general can take advantage of GPUs, which unlock faster encrypted computation for many other use cases.”

Article Source
UC Berkeley's College of Engineering