Biblio
Filters: Keyword is data deletion
Blockchain-based cloud storage integrity verification scheme for recoverable data. 2022 7th International Conference on Intelligent Informatics and Biomedical Science (ICIIBMS). 7:280–285.
2022. With the advent of the era of big data, the number of files that need to be stored in storage systems is increasing exponentially. Cloud storage has become the most popular data storage method due to its convenience and storage capacity. However, in order to save costs, some cloud service providers maliciously delete users' infrequently accessed data, causing users to suffer losses. Aiming at data integrity and privacy issues, a blockchain-based cloud storage integrity verification scheme for recoverable data is proposed. The scheme uses the Merkle tree properties, anonymity, immutability, and smart contracts of the blockchain to effectively solve the problems of cloud storage integrity verification and data damage recovery, and testing and analysis show that the scheme is safe and effective.
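The abstract above relies on the standard audit property of Merkle trees. As a rough illustration of that property only (not the paper's contract logic or recovery mechanism), the following Python sketch builds a Merkle root over data blocks and verifies a single block against its audit path; the block contents and hash choice are assumptions.

```python
# Illustrative sketch (not the paper's scheme): how a Merkle root and an audit
# path can verify that one stored block is intact without fetching the whole file.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_levels(blocks):
    """Return all levels of the Merkle tree, leaves first."""
    level = [h(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate last node on odd-sized levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def audit_path(levels, index):
    """Collect the sibling hashes needed to recompute the root for one leaf."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = index ^ 1
        path.append((level[sibling], sibling < index))   # (hash, sibling_is_left)
        index //= 2
    return path

def verify(block, path, root):
    node = h(block)
    for sibling, sibling_is_left in path:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

blocks = [b"chunk-0", b"chunk-1", b"chunk-2", b"chunk-3"]
levels = merkle_levels(blocks)
root = levels[-1][0]                      # the value a verifier (or contract) would pin
proof = audit_path(levels, 2)
print(verify(b"chunk-2", proof, root))    # True
print(verify(b"tampered", proof, root))   # False
```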
Research on the Application of Computer Big Data Technology in the Health Monitoring of the Bridge Body of Cross-river Bridge. 2022 IEEE Asia-Pacific Conference on Image Processing, Electronics and Computers (IPEC). :1516–1520.
2022. This article proposes a health monitoring system platform for cross-river bridges based on big data. The system can realize regionalized bridge operation and maintenance management. It provides functions such as registration, modification, and deletion of sensor equipment; user registration, modification, and deletion; real-time display and storage of sensor monitoring data; and evaluation and early warning of bridge structural safety. The sensors are connected to the lower computer through serial ports, analog signals, fiber grating signals, etc. The lower computer converts the various signals into digital signals through single-chip A/D sampling, demodulators, etc., and transmits them to the upper computer through the serial port. The upper computer uses an ARM Cortex-A9 to run the main program and realize multi-threaded network communication. To test the validity of the model, the system platform uses a variety of model verification methods for evaluation to ensure the reliability of the big data analysis method.
Missing Values for Classification of Machine Learning in Medical data. 2022 5th International Conference on Artificial Intelligence and Big Data (ICAIBD). :101–106.
2022. Missing values are an unavoidable problem for classification tasks of machine learning in medical data. With the rapid development of the medical system, large-scale medical data is increasing. Missing values increase the difficulty of mining hidden but useful information in these medical datasets. Deletion and imputation methods are the most popular methods for dealing with missing values. Existing studies have neglected to compare and discuss deletion and imputation methods for missing values under both the row missing rate and the total missing rate. Meanwhile, they rarely used experimental data sets that are mixed-type and large scale. In this work, medical data sets of various sizes and mixed types are used. At the same time, the performance differences of deletion and imputation methods are compared under the MCAR (Missing Completely At Random) mechanism in a baseline task that uses LR (Linear Regression) and SVM (Support Vector Machine) classifiers, with the same row and total missing rates. Experimental results show that under the MCAR missing mechanism, the performance of the two types of processing methods is related to the size of the datasets and the missing rates. As the missing rate increases, the performance of both types of methods decreases, but the deletion method decreases faster, and the imputation methods based on machine learning have more stable and better classification performance on average. In addition, small data sets are easily affected by the processing methods for missing values.
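A hedged sketch of the kind of comparison the abstract describes: inject MCAR missingness, then evaluate listwise deletion against mean imputation for a downstream classifier. The dataset, missing rate, and the use of logistic regression as the stand-in classifier are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: compare listwise deletion vs. mean imputation under MCAR missingness.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)
X = X.copy()
mask = rng.random(X.shape) < 0.1            # MCAR: each cell missing with prob. 0.1
X[mask] = np.nan

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 1) Listwise deletion: drop training rows that contain any missing value.
complete = ~np.isnan(X_tr).any(axis=1)
clf_del = LogisticRegression(max_iter=1000).fit(X_tr[complete], y_tr[complete])

# 2) Imputation: fill missing cells with the column mean.
imputer = SimpleImputer(strategy="mean").fit(X_tr)
clf_imp = LogisticRegression(max_iter=1000).fit(imputer.transform(X_tr), y_tr)

X_te_filled = imputer.transform(X_te)       # test rows are imputed for both models
print("deletion  :", accuracy_score(y_te, clf_del.predict(X_te_filled)))
print("imputation:", accuracy_score(y_te, clf_imp.predict(X_te_filled)))
```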
Data Imputation Techniques: An Empirical Study using Chronic Kidney Disease and Life Expectancy Datasets. 2022 International Conference on Innovative Trends in Information Technology (ICITIIT). :1–7.
2022. Data is a collection of information from the activities of the real world. The file in which such data is stored after being transformed into a form that machines can process is generally known as a data set. In the real world, many data sets are not complete, and they contain various types of noise; missing values are one such kind. Thus, imputing these missing values is one of the significant tasks of data pre-processing. This paper deals with two real-world health care data sets, namely the life expectancy (LE) dataset and the chronic kidney disease (CKD) dataset, which are very different in nature. This paper provides insights into various data imputation techniques for filling missing values by analyzing them. When it comes to data imputation, it is very common to impute missing values with measures of central tendency such as the mean, median, or mode, which can represent the central value of a distribution, but making the right choice is a real challenge. To the best of our knowledge, this is the first paper that provides a complete analysis of the impact of basic data imputation techniques on various data distributions, which can be classified based on the size of the data set, the number of missing values, the type of data (categorical/numerical), etc. This paper compares and analyzes the original data distribution with the data distribution after each imputation in terms of skewness, outliers, and various descriptive statistical parameters.
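The following minimal sketch illustrates the kind of before-and-after comparison discussed above: how mean versus median imputation shifts the descriptive statistics and skewness of a right-skewed column. The synthetic column and 15% missing fraction are assumptions for illustration only.

```python
# Sketch: measure how mean vs. median imputation changes a skewed distribution.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
col = pd.Series(rng.lognormal(mean=0.0, sigma=1.0, size=1000))   # right-skewed data
col[col.sample(frac=0.15, random_state=1).index] = np.nan        # knock out 15% of cells

report = pd.DataFrame({
    "original (non-missing)": col.dropna().describe(),
    "mean-imputed": col.fillna(col.mean()).describe(),
    "median-imputed": col.fillna(col.median()).describe(),
})
report.loc["skew"] = [col.dropna().skew(),
                      col.fillna(col.mean()).skew(),
                      col.fillna(col.median()).skew()]
print(report.round(3))
```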
Dynamic Functional Dependency Discovery with Dynamic Hitting Set Enumeration. 2022 IEEE 38th International Conference on Data Engineering (ICDE). :286–298.
2022. Functional dependencies (FDs) are widely applied in data management tasks. Since FDs on data are usually unknown, FD discovery techniques are studied for automatically finding hidden FDs from data. In this paper, we develop techniques to dynamically discover FDs in response to changes on data. Formally, given the complete set Σ of minimal and valid FDs on a relational instance r, we aim to find the complete set Σ′ of minimal and valid FDs on r ⊕ Δr, where Δr is a set of tuple insertions and deletions. Different from the batch approaches that compute Σ′ on r ⊕ Δr from scratch, our dynamic method computes Σ′ in response to Δr by leveraging the known Σ on r, and avoids processing the whole of r for each update from Δr. We tackle dynamic FD discovery on r ⊕ Δr by dynamic hitting set enumeration on the difference-set of r ⊕ Δr. Specifically, (1) leveraging auxiliary structures built on r, we first present an efficient algorithm to update the difference-set of r to that of r ⊕ Δr. (2) We then compute Σ′ by recasting dynamic FD discovery as dynamic hitting set enumeration on the difference-set of r ⊕ Δr and developing novel techniques for dynamic hitting set enumeration. (3) We finally experimentally verify the effectiveness and efficiency of our approaches, using real-life and synthetic data. The results show that our dynamic FD discovery method outperforms the batch counterparts on most tested data, even when Δr is up to 30% of r.
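The abstract builds on the classical correspondence between minimal FDs and minimal hitting sets of difference-sets. The brute-force Python sketch below shows only that static correspondence on a toy relation; the paper's contribution, maintaining the result dynamically under tuple insertions and deletions, is not reproduced here.

```python
# Sketch: X -> A is valid iff X hits every difference-set that contains A (minus A).
from itertools import combinations

def difference_sets(rows, attrs):
    """Attribute sets on which each tuple pair disagrees."""
    diffs = set()
    for r1, r2 in combinations(rows, 2):
        d = frozenset(a for a in attrs if r1[a] != r2[a])
        if d:
            diffs.add(d)
    return diffs

def minimal_fds(rows, attrs):
    diffs = difference_sets(rows, attrs)
    fds = []
    for a in attrs:
        targets = [d - {a} for d in diffs if a in d]
        if any(not t for t in targets):          # some pair differs only on A: no valid LHS
            continue
        others = [x for x in attrs if x != a]
        found = []
        for k in range(0, len(others) + 1):      # enumerate candidate LHSs by size
            for lhs in combinations(others, k):
                s = set(lhs)
                if any(f <= s for f in found):   # a smaller hitting set exists: not minimal
                    continue
                if all(s & t for t in targets):  # hits every difference-set
                    found.append(s)
        fds.extend((sorted(s), a) for s in found)
    return fds

rows = [
    {"emp": 1, "dept": "A", "mgr": "alice"},
    {"emp": 2, "dept": "A", "mgr": "alice"},
    {"emp": 3, "dept": "B", "mgr": "bob"},
]
# [(['emp'], 'dept'), (['mgr'], 'dept'), (['emp'], 'mgr'), (['dept'], 'mgr')]
print(minimal_fds(rows, ["emp", "dept", "mgr"]))
```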
ATDD: Fine-Grained Assured Time-Sensitive Data Deletion Scheme in Cloud Storage. ICC 2022 - IEEE International Conference on Communications. :3448–3453.
2022. With the rapid development of general cloud services, more and more individuals or collectives use cloud platforms to store data. Assured data deletion deserves investigation in cloud storage. In time-sensitive data storage scenarios, it is necessary for cloud platforms to automatically destroy data after the data owner-specified expiration time. Therefore, assured time-sensitive data deletion should be sought. In this paper, a fine-grained assured time-sensitive data deletion (ATDD) scheme in cloud storage is proposed by embedding the time trapdoor in Ciphertext-Policy Attribute-Based Encryption (CP-ABE). Time-sensitive data is self-destructed after the data owner-specified expiration time so that the authorized users cannot get access to the related data. In addition, a credential is returned to the data owner for data deletion verification. This proposed scheme provides solutions for fine-grained access control and verifiable data self-destruction. Detailed security and performance analysis demonstrate the security and the practicability of the proposed scheme.
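As a loose illustration of the general idea of time-sensitive assured deletion (ciphertext remains stored, but destroying the key after the owner-specified expiration renders the data unrecoverable), here is a hedged Python sketch. It deliberately substitutes simple symmetric encryption for the paper's CP-ABE time-trapdoor construction and omits the deletion credential.

```python
# Simplified stand-in, NOT the paper's CP-ABE scheme: once the expiration passes,
# the key is destroyed, leaving only undecryptable ciphertext in the cloud.
import time
from cryptography.fernet import Fernet   # pip install cryptography

class TimeSensitiveObject:
    def __init__(self, plaintext: bytes, lifetime_seconds: float):
        self._key = Fernet.generate_key()
        self.ciphertext = Fernet(self._key).encrypt(plaintext)   # what the cloud stores
        self.expires_at = time.time() + lifetime_seconds

    def read(self) -> bytes:
        if self._key is None or time.time() >= self.expires_at:
            self._key = None                  # destroy key material on first late access
            raise PermissionError("data expired and key destroyed")
        return Fernet(self._key).decrypt(self.ciphertext)

obj = TimeSensitiveObject(b"patient record #42", lifetime_seconds=1.0)
print(obj.read())        # accessible before expiration
time.sleep(1.1)
try:
    obj.read()
except PermissionError as e:
    print(e)             # afterwards, only undecryptable ciphertext remains
```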
A Review on Cloud Data Assured Deletion. 2022 Global Conference on Robotics, Artificial Intelligence and Information Technology (GCRAIT). :451–457.
2022. At present, cloud service providers control the direct management rights over cloud data, and cloud data cannot be deleted in an effective and assured manner, which may easily lead to security problems such as data residue and user privacy leakage. This paper analyzes the related research work on cloud data assured deletion in recent years from three aspects: encryption key deletion, multi-replica association deletion, and verifiable deletion. The advantages and disadvantages of various deletion schemes are analyzed in detail, and finally the prospects for future research on assured deletion of cloud data are given.
A Secure and Efficient Fine-Grained Deletion Approach over Encrypted Data. 2022 IEEE 46th Annual Computers, Software, and Applications Conference (COMPSAC). :1123–1128.
2022. Documents are a common method of storing information and one of the most conventional forms of expressing ideas. Cloud servers store a user's documents alongside those of thousands of other users in place of physical storage devices. Indexes corresponding to the documents are also stored at the cloud server to enable users to retrieve documents of interest. The index includes keywords and the identities of the documents in which the keywords appear, along with Term Frequency-Inverse Document Frequency (TF-IDF) values, which reflect the keywords' relevance scores within the dataset. Currently, there are no efficient methods to delete keywords from millions of documents on cloud servers while avoiding any compromise of user privacy. Most of the existing approaches use algorithms that divide a bigger problem into sub-problems and then combine them, as in divide-and-conquer, and do not focus entirely on fine-grained deletion. This work focuses on achieving fine-grained deletion of keywords by keeping the size of the TF-IDF matrix constant after processing the deletion query, which comprises the keywords to be deleted. The experimental results of the proposed approach confirm that the precision of ranked search remains very high after deletion without recalculation of the TF-IDF matrix.
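A plaintext sketch of the core idea described above: keywords are "deleted" by zeroing their columns in the TF-IDF matrix, so the matrix shape stays constant and ranked search continues without recomputing TF-IDF. The toy documents are assumptions, and the paper's encrypted-index machinery is omitted.

```python
# Sketch: fine-grained keyword deletion as column zeroing in a fixed-size TF-IDF matrix.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["secure deletion in cloud storage",
        "cloud storage integrity verification",
        "ranked keyword search over encrypted data"]
vec = TfidfVectorizer()
tfidf = vec.fit_transform(docs).tolil()          # docs x keywords, editable sparse format
vocab = vec.vocabulary_                          # keyword -> column index

def delete_keywords(matrix, keywords):
    for kw in keywords:
        col = vocab.get(kw)
        if col is not None:
            matrix[:, col] = 0                   # delete one keyword = zero one column
    return matrix

def ranked_search(matrix, query_kw, top_k=2):
    col = vocab.get(query_kw)
    if col is None:
        return []
    scores = matrix[:, col].toarray().ravel()
    order = sorted(range(len(docs)), key=lambda i: -scores[i])
    return [(i, round(scores[i], 3)) for i in order[:top_k] if scores[i] > 0]

print(tfidf.shape)                      # shape before deletion
delete_keywords(tfidf, ["cloud"])
print(tfidf.shape)                      # unchanged after deletion
print(ranked_search(tfidf, "cloud"))    # deleted keyword no longer ranks documents
print(ranked_search(tfidf, "storage"))  # other keywords keep their scores
```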
CPSD: A data security deletion algorithm based on copyback command. 2022 IEEE International Conference on Artificial Intelligence and Computer Applications (ICAICA). :1036–1041.
2022. Secure data deletion in storage media is an important function of data security management. The internal physical properties of SSDs differ from those of hard disks, so secure deletion methods for disks cannot be applied to SSDs directly. The copyback operation is used to improve the data migration performance of SSDs but is rarely used due to error accumulation issues. We propose a secure data deletion algorithm based on the copyback operation, which improves the efficiency of secure deletion without affecting data reliability. First, this paper shows that the secure delete operation takes a long time on the channel bus, increasing I/O overhead and reducing the performance of the SSD. Second, this paper designs an efficient data deletion algorithm that can process read requests quickly. The experimental results show that the proposed algorithm can reduce the response time of read requests by 21% and the response time of delete requests by 18.7% compared with the existing algorithm.
Multi-authoritative Users Assured Data Deletion Scheme in Cloud Computing. 2022 52nd Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W). :147–154.
2022. With the rapid development of cloud storage technology, an increasing number of enterprises and users choose to store data in the cloud, which can reduce local overhead and ensure safe storage, sharing, and deletion. In cloud storage, safe data deletion is a critical and challenging problem. This paper proposes an assured data deletion scheme based on multi-authoritative users in a semi-trusted cloud storage scenario (MAU-AD), which aims to realize secure key management without introducing any trusted third party and to achieve assured deletion of cloud data. MAU-AD uses access policy graphs to achieve fine-grained access control and data sharing. In addition, data security is guaranteed by mutual restriction between authoritative users, and system robustness is improved by having multiple authoritative users jointly manage keys. Furthermore, traceability of misconduct in the system can be realized with blockchain technology. Through simulation experiments and comparison with related schemes, MAU-AD is shown to be safe and effective, and it provides a novel application scenario for the assured deletion of cloud storage data.
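One standard building block for letting several authoritative users jointly control a key is (k, n) secret sharing, sketched below in Python. It is an illustrative stand-in for the joint key management mentioned in the abstract, not the MAU-AD construction itself; the field size and parameters are assumptions.

```python
# Sketch: Shamir (k, n) sharing of a data key among authoritative users.
import random

P = 2**127 - 1   # a Mersenne prime, large enough to hold a 126-bit key

def split(secret: int, n: int, k: int):
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

key = random.randrange(P)                 # symmetric data key guarded by authorities
shares = split(key, n=5, k=3)             # 5 authoritative users, any 3 may act together
assert reconstruct(shares[:3]) == key     # enough authorities: key recovered
assert reconstruct(shares[:2]) != key     # too few: key stays secret (w.h.p.)
```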
Support Forward Secure Smart Grid Data Deduplication and Deletion Mechanism. 2021 2nd Asia Symposium on Signal Processing (ASSP). :67–76.
2021. With the vigorous development of the Internet and the widespread popularity of smart devices, the amount of data they generate has increased exponentially, which has also promoted the emergence and development of cloud computing and big data. Given cloud computing and big data technology, cloud storage has become a good solution for people to store and manage data at this stage. However, as cloud storage manages and regulates massive amounts of data, its security issues have become increasingly prominent. Aiming at a series of security problems caused by malicious users' illegal operations on cloud storage and the loss of all data, this paper proposes a threshold signature scheme in which signatures are produced with a private key shared among multiple users. When key operations of cloud storage are performed with this method, multiple people are required to sign, which effectively prevents a small number of malicious users from performing illegal data operations. At the same time, the threshold signature method in this paper uses a double update factor algorithm: even if an attacker obtains the key information for the current period, he cannot calculate the complete key information for earlier or later periods, thus providing two-way security and greatly improving the security of the data in cloud storage.
Facilitating the Efficiency of Secure File Data and Metadata Deletion on SMR-based Ext4 File System. 2021 26th Asia and South Pacific Design Automation Conference (ASP-DAC). :728–733.
2021. The efficiency of secure deletion is highly dependent on the data layout of the underlying storage devices. In particular, owing to the sequential-write constraint of the emerging Shingled Magnetic Recording (SMR) technology, an improper data layout could lead to serious write amplification and hinder the performance of secure deletion. The performance degradation of secure deletion on SMR drives is further aggravated by the need to securely erase the file system metadata of deleted files, owing to the small-size nature of file system metadata. This observation motivates us to propose a secure-deletion and SMR-aware space allocation (SSSA) strategy to facilitate the process of securely erasing both the deleted files and their metadata simultaneously. The proposed strategy is integrated within the widely used extended file system 4 (ext4) and is evaluated through a series of experiments to demonstrate its effectiveness. The evaluation results show that the proposed strategy can reduce the secure deletion latency by 91.3% on average when compared with the naive SMR-based ext4 file system.
A Simple Data Augmentation Method to Improve the Performance of Named Entity Recognition Models in Medical Domain. 2021 6th International Conference on Computer Science and Engineering (UBMK). :763–768.
2021. Easy Data Augmentation was originally developed for text classification tasks. It consists of four basic methods: Synonym Replacement, Random Insertion, Random Deletion, and Random Swap. They yield accuracy improvements on several deep neural network models. In this study we apply these methods to a new domain: we augment Named Entity Recognition datasets from the medical domain. Although the augmentation task is much more difficult due to the nature of named entities, which consist of words or word groups within the sentences, we show that we can improve named entity recognition performance.
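Two of the EDA operations mentioned above, Random Deletion and Random Swap, are easy to sketch; the version below keeps NER labels aligned with their tokens, which is the main complication in this domain. The probabilities, the entity-preserving rule, and the sample sentence are illustrative assumptions.

```python
# Sketch: label-aware Random Deletion and Random Swap for NER data augmentation.
import random

def random_deletion(tokens, labels, p=0.1):
    """Drop each non-entity token (and its label) with probability p."""
    kept = [(t, l) for t, l in zip(tokens, labels)
            if l != "O" or random.random() > p]      # never delete entity tokens
    if not kept:                                     # keep at least one token
        kept = [random.choice(list(zip(tokens, labels)))]
    toks, labs = zip(*kept)
    return list(toks), list(labs)

def random_swap(tokens, labels, n_swaps=1):
    """Swap n random token/label pairs, preserving token-label alignment."""
    tokens, labels = tokens[:], labels[:]
    for _ in range(n_swaps):
        i, j = random.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
        labels[i], labels[j] = labels[j], labels[i]
    return tokens, labels

toks = ["Aspirin", "was", "given", "to", "the", "patient", "in", "Ankara"]
labs = ["B-DRUG", "O", "O", "O", "O", "O", "O", "B-LOC"]
print(random_deletion(toks, labs))
print(random_swap(toks, labs))
```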
WAS-Deletion: Workload-Aware Secure Deletion Scheme for Solid-State Drives. 2021 IEEE 39th International Conference on Computer Design (ICCD). :244–247.
2021. Due to the intrinsic properties of Solid-State Drives (SSDs), invalid data remain in SSDs until erased by a garbage collection process, which increases the risk of being attacked by adversaries. Previous studies use erase-based and cryptography-based schemes to purposely delete target data but face extremely large overhead. In this paper, we propose a Workload-Aware Secure Deletion scheme, called WAS-Deletion, to reduce the overhead of secure deletion through three major components. First, the WAS-Deletion scheme efficiently splits invalid and valid data into different blocks based on workload characteristics. Second, the WAS-Deletion scheme uses a new encryption allocation scheme, making the encryption follow the same direction as writes across multiple blocks and vertically encrypting pages with the same key within one block. Finally, a new adaptive scheduling scheme can dynamically change the configurations of different regions to further reduce secure deletion overhead based on the current workload. The experimental results indicate that the newly proposed WAS-Deletion scheme can reduce the secure deletion cost by about 1.2x to 12.9x compared to previous studies.
Set Reconciliation for Blockchains with Slepian-Wolf Coding: Deletion Polar Codes. 2021 13th International Conference on Wireless Communications and Signal Processing (WCSP). :1–5.
2021. In this paper, we propose a polar coding based scheme for set reconciliation between two network nodes. The system is modeled as a well-known Slepian-Wolf setting induced by a fixed number of deletions. The set reconciliation process is divided into two phases: 1) a deletion polar code is employed to help one node identify the possible deletion indices, which may be more numerous than the genuine deletions; 2) a lossless compression polar code is then designed to feed back those indices with minimum overhead. Our scheme can be viewed as a generalization of polar codes to some emerging network-based applications such as package synchronization in blockchains. The total overhead is linear in the number of packages and immune to the package size.
Where is our data? A Blockchain-based Information Chain of Custody Model for Privacy Improvement. 2021 IEEE 24th International Conference on Computer Supported Cooperative Work in Design (CSCWD). :329–334.
2021. The advancement of Information and Communication Technologies has brought numerous facilities and benefits to society. In this environment surrounded by technologies, data and personal information have become an essential and coveted asset for many sectors. In this scenario, where a large amount of data is collected, stored, and shared, privacy concerns arise, especially when dealing with sensitive data such as health data. The information owner generally has no control over his information, which can bring serious consequences such as increases in health insurance prices or put the individual in an uncomfortable situation through the disclosure of his physical or mental health. While privacy regulations, like the General Data Protection Regulation (GDPR), make it clear that the information owner must have full control and management over their data, disparities have been observed in most systems and platforms, and owners are often not able to give consent or exercise control and management over their data. For users to exercise their right to privacy and have sufficient control over their data, they must know everything that happens to their data, where it is, and where it has been. The entire life cycle, from generation to deletion of data, must be managed by its owner. To this end, this article presents an Information Chain of Custody Model based on Blockchain technology, which ranges from the traceability of information to tools that enable effective data management, offering total control to its owner. The results showed that the prototype was very useful for information traceability, making the technical feasibility of this research clear.
Deletion Error Correction based on Polar Codes in Skyrmion Racetrack Memory. 2021 IEEE Wireless Communications and Networking Conference (WCNC). :1–6.
2021. Skyrmion racetrack memory (Sk-RM) is a new storage technology in which skyrmions are used to represent data bits to provide high storage density. During the reading procedure, the skyrmion is driven by a current and sensed by a fixed read head. However, synchronization errors may happen if the skyrmion does not pass the read head on time. In this paper, a polar coding scheme is proposed to correct the synchronization errors in the Sk-RM. Firstly, we build two error correction models for the reading operation of Sk-RM. By connecting polar codes with the marker codes, the number of deletion errors can be determined. We also redesign the decoding algorithm to recover the information bits from the readout sequence, where a tighter bound of the segmented deletion errors is derived and a novel parity check strategy is designed for better decoding performance. Simulation results show that the proposed coding scheme can efficiently improve the decoding performance.
Data Wiping Tool: ByteEditor Technique. 2021 3rd International Cyber Resilience Conference (CRC). :1–6.
2021. This wiping tool is an anti-forensic tool built to wipe data permanently from a laptop's storage. The tool is able to ensure that the data cannot be recovered with any recovery tool. The objective of building this wiping tool is to maintain the confidentiality and integrity of the data against unauthorized access. People tend to delete files in the normal way; however, such files face the risk of being recovered, so the integrity and confidentiality of the deleted file cannot be protected. With wiping tools, files are overwritten with random strings to make them no longer readable, and thus the integrity and the confidentiality of the file can be protected. Nowadays, many wiping tools face issues such as data breaches because they are unable to delete data permanently from devices; this situation might affect their main function and poses a threat to their users. Hence, a new wiping tool is developed to overcome the problem. The new tool, named Data Wiping tool, applies two wiping techniques. The first technique is Randomized Data, while the second is an enhanced wiping technique known as ByteEditor, a combination of two different techniques: byte editing and byte deletion. The wiping tool is built following an object-oriented methodology consisting of analysis, design, implementation, and testing. The tool is analyzed and compared with other wiping tools before its design starts; once the design is done, the implementation phase takes place. The code of the tool is written in C# using Visual Studio 2010, and its functionality is tested to ensure that the developed tool meets the objectives of the project. This tool is believed to be able to contribute to the development of wiping tools and to solve problems related to other wiping tools.
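A hedged sketch of the overwrite-before-delete idea behind such wiping tools, written in Python rather than the C# the authors used: fill the file with random bytes over several passes, flush each pass to disk, then unlink it. Real tools must also handle file-system journaling, SSD wear leveling, and slack space, which this sketch ignores.

```python
# Sketch: overwrite a file's contents with random bytes before removing it.
import os

def wipe_file(path: str, passes: int = 3, chunk: int = 64 * 1024) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                n = min(chunk, remaining)
                f.write(os.urandom(n))          # overwrite with random bytes
                remaining -= n
            f.flush()
            os.fsync(f.fileno())                # push this pass to stable storage
    os.remove(path)                             # finally unlink the overwritten file

# Example: wipe_file("secret_report.docx")
```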
Deletion-Compliance in the Absence of Privacy. 2021 18th International Conference on Privacy, Security and Trust (PST). :1–10.
2021. Garg, Goldwasser and Vasudevan (Eurocrypt 2020) invented the notion of deletion-compliance to formally model the “right to be forgotten”, a concept that confers individuals more control over their digital data. A requirement of deletion-compliance is strong privacy for the deletion requesters, since no outside observer must be able to tell if deleted data was ever present in the first place. Naturally, many real-world systems where information can flow across users are automatically ruled out. The main thesis of this paper is that deletion-compliance is a standalone notion, distinct from privacy. We present an alternative definition that meaningfully captures deletion-compliance without any privacy implications. This allows a broader class of data collectors to demonstrate compliance with deletion requests and to be paired with various notions of privacy. Our new definition has several appealing properties: (i) it is implied by the stronger definition of Garg et al. under natural conditions, and is equivalent when we add a strong privacy requirement; (ii) it is naturally composable with minimal assumptions; (iii) its requirements are met by data structure implementations that do not reveal the order of operations, a concept known as history-independence. Along the way, we discuss the many challenges that remain in providing a universal definition of compliance with the “right to be forgotten.”
Hybrid Data Fast Distribution Algorithm for Wireless Sensor Networks in Visual Internet of Things. 2021 International Conference on Big Data Analysis and Computer Science (BDACS). :166–169.
2021. With the maturity of Internet of Things technology, massive data transmission has become a focus of research. In order to solve the problem of the low speed of traditional hybrid data fast distribution algorithms for wireless sensor networks, a hybrid data fast distribution algorithm for wireless sensor networks based on the visual Internet of Things is designed. The logical structure of the mixed data input gate in the wireless sensor network is designed through the visual Internet of Things, and an objective function for fast distribution of mixed data in the wireless sensor network is proposed. The number of copies of the data to be distributed is dynamically calculated and the message deletion strategy is determined. Then the distribution parameters are calibrated, and fitness ranking is performed according to the distribution quantity to complete the algorithm design. The experimental results show that the distribution rate of the designed algorithm is significantly higher than that of the control group, which solves the low-speed problem of traditional fast data distribution algorithms.
Pattern Recognition and Reconstruction: Detecting Malicious Deletions in Textual Communications. 2021 IEEE International Conference on Big Data (Big Data). :2574–2582.
2021. Digital forensic artifacts aim to provide evidence from digital sources for attributing blame to suspects, assessing their intents, corroborating their statements or alibis, etc. Textual data is a significant source of artifacts, which can take various forms, for instance in the form of communications. E-mails, memos, tweets, and text messages are all examples of textual communications. Complex statistical, linguistic and other scientific procedures can be manually applied to this data to uncover significant clues that point the way to factual information. While expert investigators can undertake this task, there is a possibility that critical information is missed or overlooked. The primary objective of this work is to aid investigators by partially automating the detection of suspicious e-mail deletions. Our approach consists in building a dynamic graph to represent the temporal evolution of communications, and then using a Variational Graph Autoencoder to detect possible e-mail deletions in this graph. Our model uses multiple types of features for representing node and edge attributes, some of which are based on metadata of the messages and the rest are extracted from the contents using natural language processing and text mining techniques. We use the autoencoder to detect missing edges, which we interpret as potential deletions; and to reconstruct their features, from which we emit hypotheses about the topics of deleted messages. We conducted an empirical evaluation of our model on the Enron e-mail dataset, which shows that our model is able to accurately detect a significant proportion of missing communications and to reconstruct the corresponding topic vectors.
Remote Access Control Model for MQTT Protocol. 2020 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus). :288–291.
2020. The author considers Internet of Things security problems, namely the organization of secure access control when using the MQTT protocol. Security mechanisms and methods that are employed or supported by the MQTT protocol have been analyzed. The protocol employs authentication by login and password, and it supports cryptographic protection of transferred data via the TLS protocol. Third-party services based on the OAuth protocol can be used for authentication. Authorization takes place by configuring ACL files or via third-party services and databases. The author suggests a discretionary access control model for machine-to-machine interaction of devices under the MQTT protocol, based on the HRU model. The model entails six operators: the addition and deletion of a subject, the addition and deletion of an object, and the addition and deletion of access privileges. The access control model is presented in the form of an access matrix and has three types of privileges: read, write, and ownership. The model is composed in a way that makes it compatible with the widespread protocol version v3.1.1. The available message types in the MQTT protocol allow for the adjustment of access privileges. The author considers an algorithm that builds the service data unit so that it can easily be distinguished in the message body. The implementation of the suggested model will minimize administrator involvement, since devices can determine access privileges to an information resource without human involvement. The author also gives recommendations for security policies when organizing information exchange in accordance with the MQTT protocol.
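A minimal sketch of the described access matrix with the six operators (add/delete subject, add/delete object, grant/revoke a privilege) and the three privileges read, write, and ownership. The MQTT-specific enforcement and service data unit handling are omitted, and the topic names are assumptions.

```python
# Sketch: HRU-style access matrix with six operators and three privileges.
RIGHTS = {"read", "write", "own"}

class AccessMatrix:
    def __init__(self):
        self.matrix = {}                              # subject -> object -> set of rights

    def add_subject(self, s):     self.matrix.setdefault(s, {})
    def delete_subject(self, s):  self.matrix.pop(s, None)

    def add_object(self, o):
        for row in self.matrix.values():
            row.setdefault(o, set())
    def delete_object(self, o):
        for row in self.matrix.values():
            row.pop(o, None)

    def grant(self, s, o, right):
        assert right in RIGHTS
        self.matrix[s].setdefault(o, set()).add(right)
    def revoke(self, s, o, right):
        self.matrix.get(s, {}).get(o, set()).discard(right)

    def allowed(self, s, o, right):
        return right in self.matrix.get(s, {}).get(o, set())

acl = AccessMatrix()
acl.add_subject("sensor-42"); acl.add_subject("dashboard")
acl.add_object("building/temperature")
acl.grant("sensor-42", "building/temperature", "write")   # publisher may write
acl.grant("dashboard", "building/temperature", "read")    # subscriber may read
print(acl.allowed("dashboard", "building/temperature", "write"))   # False
```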
Additive and Subtractive Cuckoo Filters. 2020 IEEE/ACM 28th International Symposium on Quality of Service (IWQoS). :1–10.
2020. Bloom filters (BFs) are fast and space-efficient data structures used for set membership queries in many applications. BFs are required to satisfy three key requirements: low space cost, high-speed lookups, and fast updates. Prior works do not satisfy these requirements at the same time. The standard BF does not support deletion of items, and the variants that support deletion need additional space or incur performance overhead. The state-of-the-art cuckoo filter (CF) has high performance with seemingly low space cost. However, the CF suffers from a critical issue of varying space cost per item. This is because the exclusive-OR (XOR) operation used by the CF requires the total number of buckets to be a power of two, leading to space inflation. To address this issue, in this paper we propose a scalable variant of the cuckoo filter called the additive and subtractive cuckoo filter (ASCF). We aim to improve space efficiency while sustaining comparably high performance. The ASCF uses addition and subtraction (ADD/SUB) operations instead of the XOR operation to compute an item's two candidate bucket indexes based on its fingerprint. Experimental results show that the ASCF achieves both low space cost and high performance. Compared to the CF, the ASCF reduces the space cost per item by up to 1.9x while maintaining the same lookup and update throughput. In addition, the ASCF outperforms other filters in both space cost and performance.
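The bucket-indexing idea is small enough to sketch: a standard cuckoo filter derives an item's alternate bucket with XOR, which forces the bucket count to be a power of two, whereas ADD/SUB indexing works for any table size. The hash and fingerprint choices below are illustrative assumptions, and the full filter (insertion, kicking, lookup) is omitted.

```python
# Sketch: candidate-bucket computation with XOR vs. ADD/SUB partial-key cuckoo hashing.
import hashlib

def h(x: bytes) -> int:
    return int.from_bytes(hashlib.blake2b(x, digest_size=8).digest(), "big")

def fingerprint(item: bytes, bits: int = 8) -> int:
    return (h(b"fp" + item) % (2**bits - 1)) + 1     # non-zero fingerprint

# XOR variant (standard cuckoo filter): only consistent when m is a power of two.
def xor_buckets(item: bytes, m: int):
    fp = fingerprint(item)
    i1 = h(item) % m
    i2 = (i1 ^ h(fp.to_bytes(1, "big"))) % m
    return i1, i2

# ADD/SUB variant (ASCF-style): i2 = i1 + fp (mod m), i1 = i2 - fp (mod m), for any m.
def addsub_buckets(item: bytes, m: int):
    fp = fingerprint(item)
    i1 = h(item) % m
    i2 = (i1 + fp) % m
    return i1, i2

m = 1000                       # not a power of two
i1, i2 = addsub_buckets(b"hello", m)
fp = fingerprint(b"hello")
assert (i2 - fp) % m == i1     # alternate bucket recoverable from either side
print(i1, i2, fp)
```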
Improving Efficiency of Pseudonym Revocation in VANET Using Cuckoo Filter. 2020 IEEE 20th International Conference on Communication Technology (ICCT). :763–769.
2020. In VANETs, pseudonyms are often used to replace the identities of vehicles in communication. When vehicles drive out of the network or misbehave, their pseudonym certificates need to be revoked by the certificate authority (CA). Certificate revocation lists (CRLs) are usually used to store the revoked certificates before their expiration. However, using CRLs incurs additional storage, communication, and computation overhead. Some existing schemes have proposed using a Bloom filter to compress the original CRLs, but they are unable to delete expired certificates and introduce a false positive problem. In this paper, we propose an improved pseudonym certificate revocation scheme that uses a Cuckoo filter for compression to reduce the impact of these problems. In order to optimize deletion efficiency, we propose the concept of a Certificate Expiration List (CEL), which can be implemented with a priority queue. The experimental results show that our scheme can effectively reduce the storage and communication overhead of pseudonym certificate revocation while retaining moderately low false positive rates. In addition, our scheme can also greatly improve lookup performance on CRLs and reduce revocation operation costs by allowing deletion.
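The Certificate Expiration List idea can be sketched with a binary heap: revoked pseudonym certificates are pushed with their expiration times, and expired ones are popped and removed from the revocation structure. A Python set stands in for the cuckoo filter here; the identifiers and times are illustrative.

```python
# Sketch: a CEL as a min-heap keyed by expiration time, driving deletions from the filter.
import heapq
import time

revocation_filter = set()      # stand-in for the compressed CRL (cuckoo filter)
cel = []                       # min-heap of (expiration_time, certificate_id)

def revoke(cert_id: str, expires_at: float) -> None:
    revocation_filter.add(cert_id)
    heapq.heappush(cel, (expires_at, cert_id))

def purge_expired(now: float) -> None:
    """Remove certificates whose validity period has ended from the filter."""
    while cel and cel[0][0] <= now:
        _, cert_id = heapq.heappop(cel)
        revocation_filter.discard(cert_id)

now = time.time()
revoke("pseudonym-A", now + 10)       # still valid for 10 s, keep in filter
revoke("pseudonym-B", now - 5)        # already expired, can be purged
purge_expired(now)
print("pseudonym-B" in revocation_filter)   # False: expired entry deleted
print("pseudonym-A" in revocation_filter)   # True: still revoked
```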
Dynamic Graphs on the GPU. 2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS). :739–748.
2020. We present a fast dynamic graph data structure for the GPU. Our dynamic graph structure uses one hash table per vertex to store adjacency lists and achieves 3.4-14.8x faster insertion rates over the state of the art across a diverse set of large datasets, as well as deletion speedups up to 7.8x. The data structure supports queries and dynamic updates through both edge and vertex insertion and deletion. In addition, we define a comprehensive evaluation strategy based on operations, workloads, and applications that we believe better characterize and evaluate dynamic graph data structures.
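A CPU-side sketch of the layout described above: one hash table per vertex holding its adjacency list, giving expected constant-time edge insertion, deletion, and membership queries. Python sets stand in for the GPU hash tables, and the GPU-specific memory management is omitted.

```python
# Sketch: dynamic graph with one hash set per vertex for its adjacency list.
class DynamicGraph:
    def __init__(self):
        self.adj = {}                        # vertex -> hash set of neighbors

    def insert_vertex(self, v):   self.adj.setdefault(v, set())
    def delete_vertex(self, v):
        self.adj.pop(v, None)
        for nbrs in self.adj.values():
            nbrs.discard(v)                  # remove dangling edges

    def insert_edge(self, u, v):
        self.insert_vertex(u); self.insert_vertex(v)
        self.adj[u].add(v)
    def delete_edge(self, u, v):
        if u in self.adj:
            self.adj[u].discard(v)

    def has_edge(self, u, v):
        return u in self.adj and v in self.adj[u]

g = DynamicGraph()
g.insert_edge(1, 2); g.insert_edge(1, 3); g.insert_edge(2, 3)
g.delete_edge(1, 3)
g.delete_vertex(2)
print(g.has_edge(1, 2), g.adj)    # False {1: set(), 3: set()}
```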