
Session: Privacy & Anonymity

Session chair: Zekeriya Erkin, TU Delft (Netherlands)

Wednesday, December 8 (11:10 AM – 12:30 PM, UTC+1)

Attend online with Zoom

Link: https://umontpellier-fr.zoom.us/j/99517561563 – Meeting ID: 995 1756 1563
Compare Before You Buy: Privacy-Preserving Selection of Threat Intelligence Providers (11:10 AM – 11:30 AM)
Jelle Vos (Delft University of Technology), Zekeriya Erkin (Delft University of Technology) and Christian Doerr (Hasso Plattner Institute, University of Potsdam) – On-site presentation
In their pursuit to maximize their return on investment, cybercriminals will likely reuse as much as possible between their campaigns. Not only will the same phishing mail be sent to tens of thousands of targets, but reusing tools and infrastructure across attempts also lowers their cost of doing business. This reuse, however, creates an effective angle for mitigation, as defenders can recognize domain names, attachments, tools, or systems used in a previous compromise attempt, significantly increasing the cost to the adversary, who would need to recreate the attack infrastructure each time. However, generating such cyber threat intelligence (CTI) is resource-intensive, so organizations often turn to CTI providers that commercially sell feeds of such indicators. As providers have different sources and methods for obtaining their data, the coverage and relevance of feeds vary from provider to provider. To cover the multitude of threats an organization faces, it is best served by obtaining feeds from multiple providers. However, these feeds may overlap, causing an organization to pay for indicators it already obtained through another provider. This paper presents a privacy-preserving protocol that allows an organization to query the databases of multiple data providers to obtain an estimate of their total coverage without revealing the data they store. In this way, a customer can make a more informed decision in their choice of CTI providers. We implement this protocol in Rust to validate its performance experimentally: when performed between three CTI providers who collectively hold 20,000 unique indicators, our protocol takes less than 6 seconds to execute. The code for our experiments is freely available.
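The abstract does not spell out the cryptographic machinery, but the underlying estimation task, the size of the union of several providers' indicator sets, can be sketched without the privacy layer. Below is a hypothetical MinHash-style union-cardinality estimate in Python; the feed names, sizes, and parameters are illustrative, and the paper's actual protocol adds privacy protection that this sketch deliberately omits.

```python
import hashlib

def minhash_signature(indicators, num_hashes=128):
    """Per hash slot, keep the minimum hash value seen over all indicators."""
    sig = [1.0] * num_hashes
    for ind in indicators:
        for i in range(num_hashes):
            digest = hashlib.sha256(f"{i}:{ind}".encode()).digest()
            h = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
            sig[i] = min(sig[i], h)
    return sig

def estimate_union_size(signatures):
    """Slot-wise minimum across signatures estimates the size of the union."""
    merged = [min(slot) for slot in zip(*signatures)]
    mean_min = sum(merged) / len(merged)
    return round(1 / mean_min - 1)  # E[min of n uniform draws] = 1 / (n + 1)

# Three hypothetical providers whose feeds partially overlap
feed_a = {f"evil-{i}.example" for i in range(0, 1000)}
feed_b = {f"evil-{i}.example" for i in range(500, 1500)}
feed_c = {f"evil-{i}.example" for i in range(1200, 2000)}

sigs = [minhash_signature(feed) for feed in (feed_a, feed_b, feed_c)]
print("estimated total coverage:", estimate_union_size(sigs))  # roughly 2,000
```

Each provider only exchanges a fixed-size signature rather than raw indicators, which conveys the intuition; making the exchange provably leak nothing beyond the estimate is what the paper's protocol contributes.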
On the Recognition Performance of BioHashing on state-of-the-art Face Recognition models (11:30 AM – 11:50 AM)
Hatef Otroshi Shahreza (Idiap Research Institute), Vedrana Krivokuca (Idiap Research Institute) and Sébastien Marcel (Idiap Research Institute) – Virtual presentation
Face recognition has become a popular authentication tool in recent years. Modern state-of-the-art (SOTA) face recognition methods rely on deep neural networks, which extract discriminative features from face images. Although these methods achieve high recognition performance, the extracted features contain privacy-sensitive information. Hence, the users’ privacy would be jeopardized if the features stored in the face recognition system were compromised. Accordingly, protecting the extracted face features (templates) is an essential task in face recognition systems. In this paper, we use BioHashing for face template protection and aim to establish the minimum BioHash length required to maintain the recognition accuracy achieved by the corresponding unprotected system. We consider two hypotheses and experimentally show that the performance depends on the absolute value of the BioHash length (as opposed to the ratio of the BioHash length to the dimension of the original features). To eliminate bias in our experiments, we use several SOTA face recognition models with different network structures, loss functions, and training datasets, and we evaluate these models on two different datasets (LFW and MOBIO). We provide an open-source implementation of all the experiments presented in this paper so that other researchers can verify our findings and build upon our work.
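BioHashing itself is a standard construction: the face embedding is projected onto a random orthonormal basis seeded by a user-specific token and then binarized, and the number of output bits is the BioHash length the paper studies. A minimal textbook sketch in NumPy (the embedding dimension, seeds, and bit length are illustrative, not the paper's experimental settings):

```python
import numpy as np

def biohash(features, user_seed, bitlen):
    """BioHash: project features onto a token-seeded random orthonormal
    basis, then binarize. `bitlen` is the BioHash length."""
    rng = np.random.default_rng(user_seed)
    r = rng.standard_normal((features.size, bitlen))
    q, _ = np.linalg.qr(r)            # orthonormalize the random projection
    return (features @ q > 0).astype(np.uint8)

# Hypothetical 512-d face embedding and a slightly perturbed probe of it
emb_enroll = np.random.default_rng(0).standard_normal(512)
emb_probe = emb_enroll + 0.1 * np.random.default_rng(1).standard_normal(512)

code_a = biohash(emb_enroll, user_seed=42, bitlen=256)
code_b = biohash(emb_probe, user_seed=42, bitlen=256)
print("Hamming distance:", np.count_nonzero(code_a ^ code_b))  # small => match
```

Matching then reduces to a Hamming-distance threshold on the protected codes, and the stored template can be revoked by issuing a new seed.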
Secure Collaborative Editing Using Secret Sharing (11:50 AM – 12:10 PM)
Shashank Arora (University at Albany, SUNY) and Pradeep K. Atrey (University at Albany, SUNY) – Virtual presentation
With the advent of cloud-based collaborative editing, there have been security and privacy concerns about user data, since users are no longer the sole owners of the data stored in the cloud. Most secure collaborative editing solutions therefore employ AES to protect user content. In this work, we explore the use of secret sharing to maintain the confidentiality of user data in a collaborative document. We establish that secret sharing provides an average performance increase of 56.01% over AES with a single set of coefficients, and an average performance increase of 30.37% with multiple sets of coefficients, while not requiring the maintenance and distribution of symmetric keys that AES entails. We also discuss the incorporation of keyword-based search into the proposed framework and present an operability and security analysis.
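The abstract's "sets of coefficients" suggests polynomial-based sharing; a textbook Shamir (t, n) scheme over a prime field illustrates how a text chunk can be protected without any symmetric key. The field size, share counts, and chunk encoding below are illustrative assumptions, not the paper's parameters:

```python
import secrets

PRIME = 2**127 - 1  # Mersenne prime field, large enough for small chunks

def split(secret, n_shares, threshold):
    """Shamir split: random polynomial of degree threshold-1 with f(0)=secret."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    shares = []
    for x in range(1, n_shares + 1):
        y = sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x=0 recovers the secret from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

# Share an edit (a text chunk encoded as an integer) among three servers
chunk = int.from_bytes("edit: hello".encode(), "big")
shares = split(chunk, n_shares=3, threshold=2)
recovered = reconstruct(shares[:2])  # any 2 of the 3 shares suffice
print(recovered.to_bytes((recovered.bit_length() + 7) // 8, "big").decode())
```

No key needs to be stored or distributed: confidentiality comes from keeping fewer than t shares in any one place, which is the property the paper contrasts against AES key management.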
Differentially Private Generative Adversarial Networks with Model Inversion (12:10 PM – 12:30 PM)
Dongjie Chen (University of California, Davis), Sen-ching S. Cheung (University of Kentucky), Chen-Nee Chuah (University of California, Davis) and Sally Ozonoff (University of California, Davis) – Virtual presentation
To protect sensitive data when training a Generative Adversarial Network (GAN), the standard approach is the differentially private (DP) stochastic gradient descent method, in which controlled noise is added to the gradients. The quality of the output synthetic samples can be adversely affected, and the training of the network may not even converge, in the presence of this noise. We propose the Differentially Private Model Inversion (DPMI) method, in which the private data is first mapped to the latent space via a public generator, followed by a lower-dimensional DP-GAN with better convergence properties. Experimental results on the standard datasets CIFAR-10 and SVHN, as well as on a facial landmark dataset for autism screening, show that our approach outperforms the standard DP-GAN method on Inception Score, Fréchet Inception Distance, and classification accuracy under the same privacy guarantee.
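The DP-SGD baseline the paper improves on clips each per-sample gradient and adds calibrated Gaussian noise before the parameter update. A minimal NumPy sketch of that aggregation step (the clipping norm, noise multiplier, and batch below are illustrative values, not the paper's settings):

```python
import numpy as np

def dp_sgd_aggregate(per_sample_grads, clip_norm=1.0, noise_mult=1.1, rng=None):
    """Clip each per-sample gradient to `clip_norm`, sum, add Gaussian noise
    scaled to the clipping bound, then average over the batch."""
    rng = rng or np.random.default_rng()
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_sample_grads]
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        0.0, noise_mult * clip_norm, size=clipped[0].shape)
    return noisy_sum / len(per_sample_grads)

# A hypothetical batch of 32 per-sample discriminator gradients, 10 parameters
rng = np.random.default_rng(0)
grads = rng.standard_normal((32, 10))
print(dp_sgd_aggregate(grads, rng=rng).round(3))
```

Because the injected noise scales with the clipping bound but the useful signal shrinks with gradient dimension, a high-dimensional GAN suffers badly, which motivates DPMI's move to a lower-dimensional latent space before applying DP training.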