Biblio

Found 2348 results

Filters: Keyword is privacy
2021-01-28
Javed, M. U., Jamal, A., Javaid, N., Haider, N., Imran, M.  2020.  Conditional Anonymity enabled Blockchain-based Ad Dissemination in Vehicular Ad-hoc Network. 2020 International Wireless Communications and Mobile Computing (IWCMC). :2149–2153.

Advertisement sharing in vehicular networks through vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication is an appealing in-vehicle service for advertisers and users for multiple reasons. It enables advertisers to promote their products or services in the region of their interest, and users receive more relevant ads. Users tend to contribute to the dissemination of ads if their privacy is preserved and some incentive is provided. Recent research has enabled both for users by developing a fair incentive mechanism that preserves privacy using Zero-Knowledge Proof of Knowledge (ZKPoK) (Ming et al., 2019). However, the anonymity provided by ZKPoK can introduce internal-attacker scenarios in which authenticated users disseminate fake ads in the network without payment. Moreover, because the existing scheme uses certificate-less cryptography, malicious users cannot be removed from the network. To resolve these challenges, we employ conditional anonymity and introduce a Monitoring Authority (MA) into the system. In our proposed scheme, pseudonyms are assigned to the vehicles while their real identities are stored by the Certification Authority (CA) in encrypted form. The pseudonyms are updated after a pre-defined time threshold to prevent behavioral privacy leakage. We performed security and performance analyses to show the efficiency of our proposed system.

Wang, N., Song, H., Luo, T., Sun, J., Li, J.  2020.  Enhanced p-Sensitive k-Anonymity Models for Achieving Better Privacy. 2020 IEEE/CIC International Conference on Communications in China (ICCC). :148–153.

To the best of our knowledge, the p-sensitive k-anonymity model is a sophisticated model for resisting linking attacks and homogeneity attacks in data publishing. However, if the distribution of sensitive values is skewed, the model struggles to defend against skewness attacks and may even be vulnerable to sensitivity attacks. In practice, the privacy requirements of different sensitive values are not always identical, and a "one size fits all" unified privacy protection level may cause unnecessary information loss. To address these problems, this paper quantifies privacy requirements with the concept of IDF and pays closer attention to sensitive groups. Two enhanced anonymity models with personalized protection, the (p, αisg)-sensitive k-anonymity model and the (pi, αisg)-sensitive k-anonymity model, are then proposed to resist skewness attacks and sensitivity attacks. Furthermore, two clustering algorithms, one with global search and one with local search, are designed to implement the models. Experimental results show that the two enhanced models achieve better privacy at the expense of only a little data utility.
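As background for the enhanced models above, here is a minimal Python sketch (with hypothetical field names and toy data) of the base property they strengthen: a table is p-sensitive k-anonymous when every equivalence class has at least k records and at least p distinct sensitive values.

```python
from collections import defaultdict

def is_p_sensitive_k_anonymous(records, quasi_ids, sensitive, k, p):
    """A table satisfies p-sensitive k-anonymity when every equivalence
    class (records sharing all quasi-identifier values) contains at
    least k records and at least p distinct sensitive values."""
    classes = defaultdict(list)
    for row in records:
        classes[tuple(row[q] for q in quasi_ids)].append(row[sensitive])
    return all(len(v) >= k and len(set(v)) >= p for v in classes.values())

# Toy table: the second equivalence class has only one sensitive value,
# so it is vulnerable to a homogeneity attack and the check fails.
table = [
    {"age_range": "20-30", "zip_prefix": "100", "disease": "flu"},
    {"age_range": "20-30", "zip_prefix": "100", "disease": "asthma"},
    {"age_range": "30-40", "zip_prefix": "101", "disease": "flu"},
    {"age_range": "30-40", "zip_prefix": "101", "disease": "flu"},
]
print(is_p_sensitive_k_anonymous(table, ["age_range", "zip_prefix"],
                                 "disease", k=2, p=2))  # False
```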

Zhang, M., Wei, T., Li, Z., Zhou, Z.  2020.  A service-oriented adaptive anonymity algorithm. 2020 39th Chinese Control Conference (CCC). :7626–7631.

Recently, a large number of studies on privacy-preserving data publishing have been conducted. We find that most k-anonymity algorithms, when service-oriented, fail to consider the distribution characteristics of attribute values in the data and the differences in the contribution values of quasi-identifier attributes. In this paper, the importance of both factors to anonymization results is illustrated. To maximize the utility of released data, a service-oriented adaptive anonymity algorithm is proposed. We establish a model of reaction dispersion degree to quantify the characteristics of attribute-value distribution and introduce the concept of a utility weight related to the contribution value of each quasi-identifier attribute. A priority coefficient and a characterization coefficient of partition quality are defined to adaptively optimize the selection of the dimension and splitting value in the anonymity-group partitioning process, which reduces unnecessary information loss and thus further improves the utility of the anonymized data. The rationality and validity of the algorithm are verified by theoretical analysis and multiple experiments.
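The paper's exact priority and characterization coefficients are not given in the abstract; the sketch below only illustrates the general idea of utility-weighted dimension selection in a Mondrian-style partitioning, with made-up attribute names and weights.

```python
def choose_split_attribute(partition, quasi_ids, utility_weight):
    """Pick the quasi-identifier to split on next, scoring each attribute
    by its normalized value spread times its utility weight, so that
    attributes contributing more to the target service stay finer-grained."""
    def spread(attr):
        vals = [row[attr] for row in partition]
        return (max(vals) - min(vals)) / (max(vals) + 1e-9)
    return max(quasi_ids, key=lambda a: utility_weight[a] * spread(a))

rows = [{"age": 25, "income": 30000}, {"age": 60, "income": 32000}]
# With age weighted highly, the partition is split on age first.
print(choose_split_attribute(rows, ["age", "income"],
                             {"age": 0.8, "income": 0.2}))  # age
```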

Li, Y., Chen, J., Li, Q., Liu, A.  2020.  Differential Privacy Algorithm Based on Personalized Anonymity. 2020 5th IEEE International Conference on Big Data Analytics (ICBDA). :260–267.

The existing anonymized differential privacy model adopts a unified anonymity method that ignores differences in personal privacy, which may lead to excessive or insufficient protection of the original data [1]. Therefore, this paper proposes a personalized k-anonymity model for tuples (PKA) and a differential privacy data publishing algorithm (DPPA) based on personalized anonymity. First, tuples are classified according to the personality factor set by each user in the original data set, and the corresponding privacy-protection relevance is calculated. Then, according to the classified personality-factor values, the data set is clustered with different degrees of anonymity, and the quasi-identifier attributes of each cluster are aggregated and perturbed with noise to realize anonymized differential privacy. Finally, the subsets are merged to obtain a data set that meets the release requirements. The correctness of the algorithm is analyzed theoretically, and the feasibility and effectiveness of the proposed algorithm are verified by comparison with similar algorithms.

Fan, M., Yu, L., Chen, S., Zhou, H., Luo, X., Li, S., Liu, Y., Liu, J., Liu, T.  2020.  An Empirical Evaluation of GDPR Compliance Violations in Android mHealth Apps. 2020 IEEE 31st International Symposium on Software Reliability Engineering (ISSRE). :253–264.

The purpose of the General Data Protection Regulation (GDPR) is to provide improved privacy protection. If an app controls personal data from users, it needs to be compliant with GDPR. However, GDPR lists general rules rather than exact step-by-step guidelines about how to develop an app that fulfills the requirements. Therefore, there may exist GDPR compliance violations in existing apps, which would pose severe privacy threats to app users. In this paper, we take mobile health applications (mHealth apps) as a peephole to examine the status quo of GDPR compliance in Android apps. We first propose an automated system, named HPDROID, to bridge the semantic gap between the general rules of GDPR and the app implementations by identifying the data practices declared in the app privacy policy and the data relevant behaviors in the app code. Then, based on HPDROID, we detect three kinds of GDPR compliance violations, including the incompleteness of privacy policy, the inconsistency of data collections, and the insecurity of data transmission. We perform an empirical evaluation of 796 mHealth apps. The results reveal that 189 (23.7%) of them do not provide complete privacy policies. Moreover, 59 apps collect sensitive data through different measures, but 46 (77.9%) of them contain at least one inconsistent collection behavior. Even worse, among the 59 apps, only 8 apps try to ensure the transmission security of collected data. However, all of them contain at least one encryption or SSL misuse. Our work exposes severe privacy issues to raise awareness of privacy protection for app users and developers.

Inshi, S., Chowdhury, R., Elarbi, M., Ould-Slimane, H., Talhi, C.  2020.  LCA-ABE: Lightweight Context-Aware Encryption for Android Applications. 2020 International Symposium on Networks, Computers and Communications (ISNCC). :1–6.

Context-aware applications are becoming more readily available and are a major driver of the growth of future connected, smart, autonomous environments. However, with the increasing security risks around critical shared massive-data capabilities and growing regulatory requirements on privacy, there is a significant need for new paradigms to manage security and privacy compliance. These challenges call for context-aware, fine-grained security policies to be enforced in such dynamic environments in order to achieve efficient real-time authorization between applications and connected devices. In this work, we propose a novel solution that provides a context-aware security model for Android applications. Specifically, our proposal provides an automated context-aware access control model and leverages Attribute-Based Encryption (ABE) to secure data communications. Thorough experiments have been performed, and the evaluation results demonstrate that the proposed solution provides an effective, lightweight, and adaptable context-aware encryption model.

2021-01-25
Chen, J., Lin, X., Shi, Z., Liu, Y.  2020.  Link Prediction Adversarial Attack Via Iterative Gradient Attack. IEEE Transactions on Computational Social Systems. 7:1081–1094.
Deep neural networks are increasingly applied to graph-related tasks, such as node classification and link prediction. However, the vulnerability of deep models can be revealed by carefully crafted adversarial examples generated by various adversarial attack methods. To explore this security problem, we define the link prediction adversarial attack problem and put forward a novel iterative gradient attack (IGA) strategy that uses the gradient information in a trained graph autoencoder (GAE) model. Not surprisingly, the GAE can be fooled by an adversarial graph with only a few links perturbed in the clean one. Comprehensive experiments on different real-world graphs indicate that most deep models, and even state-of-the-art link prediction algorithms, cannot escape such an adversarial attack. The attack can also benefit users as an efficient privacy-protection tool that shields links from unwanted link prediction. Conversely, the adversarial attack serves as a robustness evaluation metric for the defensibility of current link prediction algorithms.
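A rough sketch of the iterative-gradient idea (not the paper's exact IGA), assuming a hypothetical `gae` callable that maps features and a dense adjacency matrix to node embeddings Z, with link scores given by sigmoid(Z @ Z.T); the goal here is to hide one target link.

```python
import torch

def iterative_gradient_attack(gae, features, adj, target, steps=10):
    """Iteratively flip the adjacency entry whose gradient most reduces
    the predicted score of the target link."""
    adj = adj.clone().float()
    i, j = target
    n = adj.size(0)
    for _ in range(steps):
        adj.requires_grad_(True)
        z = gae(features, adj)
        score = torch.sigmoid(z @ z.t())[i, j]  # probability of the target link
        score.backward()
        grad = adj.grad.detach()
        adj = adj.detach()
        # Expected drop in the score from flipping each entry: removing an
        # existing edge changes it by -grad, adding a missing one by +grad.
        drop = torch.where(adj > 0, grad, -grad)
        r, c = divmod(torch.argmax(drop).item(), n)
        adj[r, c] = 1.0 - adj[r, c]
        adj[c, r] = adj[r, c]  # keep the perturbed graph undirected
    return adj
```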
2021-01-18
Pattanayak, S., Ludwig, S. A.  2019.  Improving Data Privacy Using Fuzzy Logic and Autoencoder Neural Network. 2019 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). :1–6.
Data privacy is a very important problem to address when sharing data among multiple organizations, and it has become crucial in the health sector since organizations such as hospitals store patient data in the form of Electronic Health Records. The stored data are shared with other organizations or research analysts to improve patient health care. However, the data records contain sensitive information such as the age, sex, and date of birth of the patients, and revealing sensitive data can cause a privacy breach for the individuals. This has triggered research that has led to many different privacy-preserving techniques being introduced. We designed a technique that not only encrypts/hides the sensitive information but also sends the data to different organizations securely. To encrypt sensitive data we use different fuzzy logic membership functions. We then use an autoencoder neural network to send the modified data. The output data of the autoencoder can then be used by different organizations for research analysis.
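A minimal sketch of the kind of fuzzy-logic transformation described, using a standard triangular membership function on hypothetical age values (the paper's actual membership functions may differ): mapping a raw value to a membership degree hides the exact value while preserving coarse information.

```python
import numpy as np

def triangular_membership(x, a, b, c):
    """Triangular fuzzy membership: rises from a to a peak at b, then
    falls to c. Values outside [a, c] get membership 0."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

ages = np.array([25, 40, 67])
# Degrees of membership in a fuzzy "middle-aged" set peaking at 45.
print(triangular_membership(ages, 30, 45, 60))  # [0.  0.667  0.]
```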
Sebbah, A., Kadri, B.  2020.  A Privacy and Authentication Scheme for IoT Environments Using ECC and Fuzzy Extractor. 2020 International Conference on Intelligent Systems and Computer Vision (ISCV). :1–5.
The Internet of Things (IoT) consists of many complementary elements, each with its own specificities and capacities. These elements are gaining new applications and use cases in our lives. Nevertheless, they open up security and privacy issues that must be treated carefully before the deployment of any IoT. Recently, different works have emerged dealing with this branch of issues, such as the scheme of Yuwen Chen et al. called LightPriAuth, which has several drawbacks and weaknesses against various popular attacks such as the insider attack and the stolen-smart-card attack. Our objective in this paper is to propose a novel solution, a three-factor authentication scheme using ECC and a fuzzy extractor, to ensure security and privacy. The obtained results prove the superiority of our scheme's performance compared to that of LightPriAuth, while additionally defeating the weaknesses left by LightPriAuth.
2021-01-15
Pete, I., Hughes, J., Chua, Y. T., Bada, M.  2020.  A Social Network Analysis and Comparison of Six Dark Web Forums. 2020 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW). :484–493.

With increasing monitoring and regulation by platforms, communities with criminal interests are moving to the dark web, which hosts content ranging from whistle-blowing and privacy to drugs, terrorism, and hacking. Using post-discussion data from six dark web forums, we construct six interaction graphs and use social network analysis tools to study these underground communities. We observe the structure of each network to highlight structural patterns and identify nodes of importance through network centrality analysis. Our findings suggest that in the majority of the forums some members are highly connected and form hubs, while most members have a lower number of connections. When examining the posting activities of central nodes, we found that most of them post in sub-forums with broader topics, such as general discussions and tutorials. These members play different roles in the different forums, and within each forum we identified diverse user profiles.
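A small sketch of the centrality step using networkx, with an illustrative edge list standing in for the forum interaction data:

```python
import networkx as nx

# Build an interaction graph from (poster, replier) pairs, then rank
# members by degree centrality to find the highly connected "hubs"
# the study describes.
interactions = [("alice", "bob"), ("alice", "carol"),
                ("alice", "dave"), ("bob", "carol")]
G = nx.Graph(interactions)

centrality = nx.degree_centrality(G)
hubs = sorted(centrality, key=centrality.get, reverse=True)
print(hubs[:2])  # ['alice', 'bob'] -- 'alice' is the hub
```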

Ebrahimi, M., Samtani, S., Chai, Y., Chen, H.  2020.  Detecting Cyber Threats in Non-English Hacker Forums: An Adversarial Cross-Lingual Knowledge Transfer Approach. 2020 IEEE Security and Privacy Workshops (SPW). :20–26.

The regularity of devastating cyber-attacks has made cybersecurity a grand societal challenge. Many cybersecurity professionals are closely examining the international Dark Web to proactively pinpoint potential cyber threats. Despite its potential, the Dark Web contains hundreds of thousands of non-English posts. While machine translation (MT) is the prevailing approach to processing non-English text, applying MT to hacker forum text results in mistranslations. In this study, we draw upon Long Short-Term Memory (LSTM), Cross-Lingual Knowledge Transfer (CLKT), and Generative Adversarial Network (GAN) principles to design a novel Adversarial CLKT (A-CLKT) approach. A-CLKT operates on untranslated text to retain the original semantics of the language and leverages the collective knowledge about cyber threats across languages to create a language-invariant representation without any manual feature engineering or external resources. Three experiments demonstrate how A-CLKT outperforms state-of-the-art machine learning, deep learning, and CLKT algorithms in identifying cyber threats in French and Russian forums.

Liu, Y., Lin, F. Y., Ahmad-Post, Z., Ebrahimi, M., Zhang, N., Hu, J. L., Xin, J., Li, W., Chen, H.  2020.  Identifying, Collecting, and Monitoring Personally Identifiable Information: From the Dark Web to the Surface Web. 2020 IEEE International Conference on Intelligence and Security Informatics (ISI). :1–6.

Personally identifiable information (PII) has become a major target of cyber-attacks, causing severe losses to data breach victims. To protect data breach victims, researchers focus on collecting exposed PII to assess privacy risk and identify at-risk individuals. However, existing studies mostly rely on exposed PII collected from either the dark web or the surface web. Due to the wide exposure of PII on both the dark web and surface web, collecting from only the dark web or the surface web could result in an underestimation of privacy risk. Despite its research and practical value, jointly collecting PII from both sources is a non-trivial task. In this paper, we summarize our effort to systematically identify, collect, and monitor a total of 1,212,004,819 exposed PII records across both the dark web and surface web. Our effort resulted in 5.8 million stolen SSNs, 845,000 stolen credit/debit cards, and 1.2 billion stolen account credentials. From the surface web, we identified and collected over 1.3 million PII records of the victims whose PII is exposed on the dark web. To the best of our knowledge, this is the largest academic collection of exposed PII, which, if properly anonymized, enables various privacy research inquiries, including assessing privacy risk and identifying at-risk populations.

2021-01-11
Johnson, N., Near, J. P., Hellerstein, J. M., Song, D.  2020.  Chorus: a Programming Framework for Building Scalable Differential Privacy Mechanisms. 2020 IEEE European Symposium on Security and Privacy (EuroS&P). :535–551.
Differential privacy is fast becoming the gold standard in enabling statistical analysis of data while protecting the privacy of individuals. However, practical use of differential privacy still lags behind research progress because research prototypes cannot satisfy the scalability requirements of production deployments. To address this challenge, we present Chorus, a framework for building scalable differential privacy mechanisms which is based on cooperation between the mechanism itself and a high-performance production database management system (DBMS). We demonstrate the use of Chorus to build the first highly scalable implementations of complex mechanisms like Weighted PINQ, MWEM, and the matrix mechanism. We report on our experience deploying Chorus at Uber, and evaluate its scalability on real-world queries.
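Chorus's own API is not reproduced here; the sketch below shows only the core primitive such frameworks push down into the DBMS, the Laplace mechanism applied to a counting query:

```python
import numpy as np

def dp_count(values, predicate, epsilon):
    """Counting queries have sensitivity 1 (one person changes the count
    by at most 1), so Laplace noise with scale 1/epsilon yields
    epsilon-differential privacy for the released count."""
    true_count = sum(1 for v in values if predicate(v))
    return true_count + np.random.laplace(scale=1.0 / epsilon)

ages = [23, 35, 47, 52, 61, 70]
print(dp_count(ages, lambda a: a >= 50, epsilon=0.5))  # noisy answer near 3
```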
Jiang, P., Liao, S.  2020.  Differential Privacy Online Learning Based on the Composition Theorem. 2020 IEEE 10th International Conference on Electronics Information and Emergency Communication (ICEIEC). :200–203.
Privacy protection is becoming more and more important in the era of big data. Differential privacy is a rigorous and provable privacy protection method that can protect the privacy of a single piece of data. However, existing differentially private online learning methods are greatly limited in their scope of application and accuracy. To address this problem, we propose a more general and accurate algorithm for differentially private online learning, named DPOL-CT. We first distinguish the differences in differential privacy protection between offline and online learning. We then prove that the DPOL-CT algorithm achieves (ε, δ)-differential privacy for online learning under the Gaussian, Laplace, and Staircase mechanisms and enjoys a sublinear expected regret bound. We further discuss the trade-off between the differential privacy level and the regret bound. Theoretical analysis and experimental results show that the DPOL-CT algorithm has good performance guarantees.
Lobo-Vesga, E., Russo, A., Gaboardi, M.  2020.  A Programming Framework for Differential Privacy with Accuracy Concentration Bounds. 2020 IEEE Symposium on Security and Privacy (SP). :411–428.
Differential privacy offers a formal framework for reasoning about privacy and accuracy of computations on private data. It also offers a rich set of building blocks for constructing private data analyses. When carefully calibrated, these analyses simultaneously guarantee the privacy of the individuals contributing their data, and the accuracy of the data analyses results, inferring useful properties about the population. The compositional nature of differential privacy has motivated the design and implementation of several programming languages aimed at helping a data analyst in programming differentially private analyses. However, most of the programming languages for differential privacy proposed so far provide support for reasoning about privacy but not for reasoning about the accuracy of data analyses. To overcome this limitation, in this work we present DPella, a programming framework providing data analysts with support for reasoning about privacy, accuracy and their trade-offs. The distinguishing feature of DPella is a novel component which statically tracks the accuracy of different data analyses. In order to make tighter accuracy estimations, this component leverages taint analysis for automatically inferring statistical independence of the different noise quantities added for guaranteeing privacy. We evaluate our approach by implementing several classical queries from the literature and showing how data analysts can figure out the best manner to calibrate privacy to meet the accuracy requirements.
Li, Y., Chang, T.-H., Chi, C.-Y.  2020.  Secure Federated Averaging Algorithm with Differential Privacy. 2020 IEEE 30th International Workshop on Machine Learning for Signal Processing (MLSP). :1–6.
Federated learning (FL), a recent advance in distributed machine learning, can learn a model over the network without directly accessing clients' raw data. Nevertheless, clients' sensitive information can still be exposed to adversaries via differential attacks on messages exchanged between the parameter server and clients. In this paper, we consider the widely used federated averaging (FedAvg) algorithm and propose to enhance data privacy with the differential privacy (DP) technique, which obfuscates the exchanged messages by properly adding Gaussian noise. We analytically show that the proposed secure FedAvg algorithm maintains an O(1/T) convergence rate, where T is the total number of stochastic gradient descent (SGD) updates for the local model parameters. Moreover, we demonstrate how various algorithm parameters impact communication efficiency. Experimental results are presented to justify the analytical results on the performance of the proposed algorithm in terms of testing accuracy.
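A simplified sketch of the generic DP-FedAvg pattern the paper builds on: each client update is clipped and perturbed with Gaussian noise before averaging. The clipping bound, noise scale, and data below are illustrative, not the paper's calibration.

```python
import numpy as np

def dp_fedavg_round(global_model, client_updates, clip, sigma, rng):
    """One round of federated averaging with Gaussian-mechanism DP:
    clip each client's update to L2 norm `clip`, add Gaussian noise
    with standard deviation sigma*clip, then average the noisy updates."""
    noisy = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        u = u * min(1.0, clip / (norm + 1e-12))         # clip the update
        u = u + rng.normal(0.0, sigma * clip, u.shape)  # obfuscate with noise
        noisy.append(u)
    return global_model + np.mean(noisy, axis=0)

rng = np.random.default_rng(0)
w = np.zeros(4)
updates = [rng.normal(size=4) for _ in range(5)]  # stand-in local SGD deltas
w = dp_fedavg_round(w, updates, clip=1.0, sigma=0.5, rng=rng)
```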
Farokhi, F.  2020.  Temporally Discounted Differential Privacy for Evolving Datasets on an Infinite Horizon. 2020 ACM/IEEE 11th International Conference on Cyber-Physical Systems (ICCPS). :1–8.
We define discounted differential privacy as an alternative to (conventional) differential privacy for investigating the privacy of evolving datasets containing time series over an unbounded horizon. We use privacy loss as a measure of the amount of information leaked by the reports at a certain fixed time. We observe that privacy losses are weighted equally across time in the definition of differential privacy, and therefore the magnitude of the privacy-preserving additive noise must grow without bound to ensure differential privacy over an infinite horizon. Motivated by the discounted utility theory in the economics literature, we use exponential and hyperbolic discounting of privacy losses across time to relax the definition of differential privacy under continual observations. This implies that privacy losses in the distant past are less important to an individual than current ones. We use discounted differential privacy to investigate the privacy of evolving datasets using additive Laplace noise and show that the magnitude of the additive noise can remain bounded under discounted differential privacy. We illustrate the quality of privacy-preserving mechanisms satisfying discounted differential privacy on smart-meter measurement time series of real households, made publicly available by Ausgrid (an Australian electricity distribution company).
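A back-of-the-envelope illustration of why discounting keeps the noise bounded: with exponential discounting at a factor g < 1, the discounted sum of constant per-report Laplace privacy losses converges to a geometric-series limit. This is a simplification of the paper's definition, with illustrative parameter values.

```python
# With exponential discounting at factor g < 1, the discounted cumulative
# privacy loss of publishing one Laplace(b)-noised report per step stays
# bounded even as the horizon grows, since sum_t g**t = 1/(1 - g).
b, sensitivity, g = 2.0, 1.0, 0.9
eps = sensitivity / b                # per-report privacy loss
T = 1000                             # long horizon standing in for infinity
discounted = sum(eps * g**t for t in range(T))
print(discounted, eps / (1 - g))     # ~5.0, matching the closed form
```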
Wu, N., Farokhi, F., Smith, D., Kaafar, M. A.  2020.  The Value of Collaboration in Convex Machine Learning with Differential Privacy. 2020 IEEE Symposium on Security and Privacy (SP). :304–317.
In this paper, we apply machine learning to distributed private data owned by multiple data owners, entities with access to non-overlapping training datasets. We use noisy, differentially-private gradients to minimize the fitness cost of the machine learning model using stochastic gradient descent. We quantify the quality of the trained model, using the fitness cost, as a function of privacy budget and size of the distributed datasets to capture the trade-off between privacy and utility in machine learning. This way, we can predict the outcome of collaboration among privacy-aware data owners prior to executing potentially computationally-expensive machine learning algorithms. Particularly, we show that the difference between the fitness of the model trained with differentially-private gradient queries and the fitness of the model trained in the absence of any privacy concerns is inversely proportional to the size of the training datasets squared and the privacy budget squared. We successfully validate the performance prediction with the actual performance of the proposed privacy-aware learning algorithms, applied to: financial datasets for determining interest rates of loans using regression; and detecting credit card frauds using support vector machines.
Lyu, L.  2020.  Lightweight Crypto-Assisted Distributed Differential Privacy for Privacy-Preserving Distributed Learning. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.
The emergence of distributed learning allows multiple participants to collaboratively train a global model, where instead of directly sharing their private training data with the server, participants iteratively share their local model updates (parameters). However, recent attacks demonstrate that sharing local model updates is not sufficient to provide reasonable privacy guarantees, as local model updates may result in significant privacy leakage about the local training data of participants. To address this issue, in this paper we present an alternative approach that combines distributed differential privacy (DDP) with a three-layer encryption protocol to achieve a better privacy-utility tradeoff than existing DP-based approaches. An unbiased encoding algorithm is proposed to cope with floating-point values while largely reducing the mean squared error due to rounding. Our approach dispenses with the need for any trusted server and enables each party to add less noise to achieve the same privacy and similar utility guarantees as centralized differential privacy. Preliminary analysis and performance evaluation confirm the effectiveness of our approach, which achieves significantly higher accuracy than the local differential privacy approach and accuracy comparable to the centralized differential privacy approach.
Wang, J., Wang, A.  2020.  An Improved Collaborative Filtering Recommendation Algorithm Based on Differential Privacy. 2020 IEEE 11th International Conference on Software Engineering and Service Science (ICSESS). :310–315.
In this paper, a differential privacy protection method is applied to the matrix factorization approach used to solve the recommendation problem. For centralized recommendation scenarios, a collaborative filtering recommendation model based on matrix factorization is established, and a matrix factorization mechanism satisfying ε-differential privacy is proposed. First, the latent feature matrices of users and items are constructed. Second, noise is added to the matrices by the method of objective perturbation so that the differential privacy constraint is satisfied, yielding the noisy matrix factorization model, whose parameters are obtained by the stochastic gradient descent algorithm. Finally, the differentially private matrix factorization model is used for rating prediction. The effectiveness of the algorithm is evaluated on public datasets including MovieLens and Netflix. The experimental results show that, compared with existing typical recommendation methods, the new privacy-preserving matrix factorization method can produce recommendations within a certain range of accuracy loss while protecting users' privacy.
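A toy sketch of noisy matrix factorization trained by SGD, with Laplace-perturbed gradients standing in for the paper's perturbation mechanism; the rank, learning rate, and rating matrix are illustrative.

```python
import numpy as np

def dp_matrix_factorization(R, rank, epsilon, lr=0.01, epochs=50, lam=0.1):
    """Factor a partially observed rating matrix R into user and item
    latent matrices U, V by SGD, adding Laplace noise to each gradient
    step; larger epsilon (weaker privacy) means less noise."""
    rng = np.random.default_rng(0)
    n_users, n_items = R.shape
    U = rng.normal(0, 0.1, (n_users, rank))
    V = rng.normal(0, 0.1, (n_items, rank))
    obs = np.argwhere(~np.isnan(R))      # indices of observed ratings
    scale = 1.0 / epsilon                # noise scale grows as budget shrinks
    for _ in range(epochs):
        for i, j in obs:
            err = R[i, j] - U[i] @ V[j]
            noise = rng.laplace(0, scale, rank)
            U[i] += lr * ((err * V[j] - lam * U[i]) + noise)
            V[j] += lr * ((err * U[i] - lam * V[j]) + noise)
    return U, V

R = np.array([[5.0, 3.0, np.nan], [4.0, np.nan, 1.0]])
U, V = dp_matrix_factorization(R, rank=2, epsilon=1.0)
print(U @ V.T)  # noisy rating predictions
```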
Dikii, D. I.  2020.  Remote Access Control Model for MQTT Protocol. 2020 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus). :288–291.
The author considers Internet of Things security problems, namely the organization of secure access control when using the MQTT protocol. Security mechanisms and methods employed or supported by the MQTT protocol are analyzed: the protocol employs authentication by login and password and supports cryptographic protection of transferred data via the TLS protocol; third-party services based on the OAuth protocol can be used for authentication; and authorization takes place by configuring ACL files or via third-party services and databases. The author suggests a discretionary access control model for machine-to-machine interaction of devices under the MQTT protocol, based on the HRU model. The model entails six operators: the addition and deletion of a subject, the addition and deletion of an object, and the addition and deletion of access privileges. The access control model is presented in the form of an access matrix with three types of privileges: read, write, and ownership. The model is composed so as to be compatible with the widespread protocol version v3.1.1. The message types available in the MQTT protocol allow for the adjustment of access privileges. The author considers an algorithm that builds the service data unit so that it can easily be distinguished in the message body. Implementing the suggested model minimizes administrator involvement, since devices can determine access privileges to an information resource without human involvement. The author also offers recommendations for security policies when organizing information exchange in accordance with the MQTT protocol.
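A minimal sketch of the described HRU-style access matrix with the six operators and read/write/ownership privileges; the device and topic names are hypothetical.

```python
class MqttAccessMatrix:
    """HRU-style discretionary access matrix for MQTT machine-to-machine
    interaction: subjects are devices, objects are topics, and privileges
    are 'read', 'write', and 'own'."""
    def __init__(self):
        self.subjects, self.objects = set(), set()
        self.rights = {}  # (subject, topic) -> set of privileges

    def add_subject(self, s): self.subjects.add(s)
    def add_object(self, o): self.objects.add(o)
    def delete_subject(self, s):
        self.subjects.discard(s)
        self.rights = {k: v for k, v in self.rights.items() if k[0] != s}
    def delete_object(self, o):
        self.objects.discard(o)
        self.rights = {k: v for k, v in self.rights.items() if k[1] != o}
    def grant(self, s, o, priv):
        self.rights.setdefault((s, o), set()).add(priv)
    def revoke(self, s, o, priv):
        self.rights.get((s, o), set()).discard(priv)
    def check(self, s, o, priv):
        return priv in self.rights.get((s, o), set())

acl = MqttAccessMatrix()
acl.add_subject("sensor-1"); acl.add_object("home/temperature")
acl.grant("sensor-1", "home/temperature", "write")
print(acl.check("sensor-1", "home/temperature", "write"))  # True: PUBLISH allowed
```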
Huang, K., Yang, T.  2020.  Additive and Subtractive Cuckoo Filters. 2020 IEEE/ACM 28th International Symposium on Quality of Service (IWQoS). :1–10.
Bloom filters (BFs) are fast and space-efficient data structures used for set membership queries in many applications. BFs are required to satisfy three key requirements: low space cost, high-speed lookups, and fast updates. Prior works do not satisfy these requirements at the same time. The standard BF does not support deletion of items, and the variants that support deletion need additional space or incur performance overhead. The state-of-the-art cuckoo filter (CF) has high performance with a seemingly low space cost. However, the CF suffers from a critical issue of varying space cost per item: the exclusive-OR (XOR) operation used by the CF requires the total number of buckets to be a power of two, leading to space inflation. To address this issue, in this paper we propose a scalable variant of the cuckoo filter called the additive and subtractive cuckoo filter (ASCF). We aim to improve the space efficiency while sustaining comparably high performance. The ASCF uses addition and subtraction (ADD/SUB) operations instead of the XOR operation to compute an item's two candidate bucket indexes based on its fingerprint. Experimental results show that the ASCF achieves both low space cost and high performance. Compared to the CF, the ASCF reduces the space cost per item by up to 1.9x while maintaining the same lookup and update throughput. In addition, the ASCF outperforms other filters in both space cost and performance.
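The index arithmetic being contrasted can be shown in a few lines. With XOR partial-key hashing, the partner bucket i2 = i1 XOR hash(fp) is only self-inverting when the bucket count is a power of two; modular addition/subtraction works for any bucket count. A simplified illustration, with `hash_fp` and the bucket count as stand-ins:

```python
def partner_bucket(index, fingerprint, n_buckets, hash_fp=hash):
    """Return both candidate partner buckets reachable from `index`
    by modular addition and subtraction of the fingerprint hash; one
    of them is the item's other candidate, whichever way we came."""
    h = hash_fp(fingerprint) % n_buckets
    return (index + h) % n_buckets, (index - h) % n_buckets

i1 = 7
fp = 0xAB
add, sub = partner_bucket(i1, fp, n_buckets=10)  # 10 is not a power of two
back = partner_bucket(add, fp, 10)
print(i1 in back)  # True: subtraction recovers the original bucket
```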
Zhang, H., Zhang, D., Chen, H., Xu, J.  2020.  Improving Efficiency of Pseudonym Revocation in VANET Using Cuckoo Filter. 2020 IEEE 20th International Conference on Communication Technology (ICCT). :763–769.
In VANETs, pseudonyms are often used in place of vehicle identities in communication. When vehicles drive out of the network or misbehave, their pseudonym certificates need to be revoked by the certificate authority (CA). Certificate revocation lists (CRLs) are usually used to store the revoked certificates before their expiration, but using CRLs incurs additional storage, communication, and computation overhead. Some existing schemes have proposed using a Bloom Filter to compress the original CRLs, but they are unable to delete expired certificates and introduce the false-positive problem. In this paper, we propose an improved pseudonym certificate revocation scheme that uses a Cuckoo Filter for compression to reduce the impact of these problems. To optimize deletion efficiency, we propose the concept of a Certificate Expiration List (CEL), which can be implemented with a priority queue. The experimental results show that our scheme can effectively reduce the storage and communication overhead of pseudonym certificate revocation while retaining moderately low false-positive rates. In addition, our scheme can greatly improve lookup performance on CRLs and reduce revocation operation costs by allowing deletion.
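A minimal sketch of the CEL idea using a priority queue (Python's heapq), with hypothetical certificate identifiers; the caller would also delete each expired entry from the cuckoo filter.

```python
import heapq

class CertificateExpirationList:
    """Revoked pseudonym certificates kept in a min-heap ordered by
    expiration time, so expired entries can be popped off cheaply."""
    def __init__(self):
        self.heap = []

    def revoke(self, cert_id, expires_at):
        heapq.heappush(self.heap, (expires_at, cert_id))

    def purge_expired(self, now):
        """Return certificates whose lifetime has ended; the caller then
        removes each one from the cuckoo filter as well."""
        expired = []
        while self.heap and self.heap[0][0] <= now:
            expired.append(heapq.heappop(self.heap)[1])
        return expired

cel = CertificateExpirationList()
cel.revoke("pseudo-42", expires_at=100)
cel.revoke("pseudo-17", expires_at=50)
print(cel.purge_expired(now=60))  # ['pseudo-17']
```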
Awad, M. A., Ashkiani, S., Porumbescu, S. D., Owens, J. D.  2020.  Dynamic Graphs on the GPU. 2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS). :739–748.
We present a fast dynamic graph data structure for the GPU. Our dynamic graph structure uses one hash table per vertex to store adjacency lists and achieves 3.4-14.8x faster insertion rates over the state of the art across a diverse set of large datasets, as well as deletion speedups up to 7.8x. The data structure supports queries and dynamic updates through both edge and vertex insertion and deletion. In addition, we define a comprehensive evaluation strategy based on operations, workloads, and applications that we believe better characterize and evaluate dynamic graph data structures.
Cao, S., Zou, J., Du, X., Zhang, X.  2020.  A Successive Framework: Enabling Accurate Identification and Secure Storage for Data in Smart Grid. ICC 2020 - 2020 IEEE International Conference on Communications (ICC). :1–6.
Due to risks of malicious eavesdropping, forgery, and more, it is challenging to process and store power data collected from the smart grid in a secure manner. Blockchain technology has become a novel way to solve these problems because of its decentralization and tamper-proof characteristics. Because data stored in a blockchain cannot be changed, it is vital to have mechanisms ensuring that data are of high quality (namely, that the power data are accurate) before they are stored in the blockchain; this avoids the losses that would arise when low-quality data need modification or deletion in the smart grid. Thus, we apply parallel vision theory to the identification of meter readings to obtain accurate power data. A cloud-blockchain fusion model (CBFM) is proposed for the storage of accurate power data, allowing flexible transactions to be conducted securely. Only power data calculated by the parallel visual system, rather than the image data originally collected via robot, are stored in the blockchain. Hence, we define quality assurance before data are uploaded to the blockchain and security guarantees after data are stored in the blockchain as a successive framework, which is a brand-new solution to manage efficiency and security as a whole for power data, and for similar data in other scenarios. Security analysis and performance evaluations are performed, proving that CBFM is highly secure and efficient.