Biblio

Found 7524 results

Filters: Keyword is Metrics
2020-05-15
Lian, Mengyun, Wang, Jian, Lu, Jinzhi.  2018.  A New Hardware Logic Circuit for Evaluating Multi-Processor Chip Security. 2018 Eighth International Conference on Instrumentation Measurement, Computer, Communication and Control (IMCCC). :1571—1574.
NoC (Network-on-Chip) is widely studied by the academic community as a new inter-core interconnection method that replaces the bus. As the complexity of on-chip systems increases, better communication performance and scalability are required, so optimizing communication performance has become a research hotspot. While NoC is developing rapidly, it is threatened by hardware Trojans inserted during the design or manufacturing process, which attackers can exploit to attack on-chip systems. To address this problem, we design and implement a replay-type hardware Trojan inserted into the NoC, aiming to provide a benchmark test set that promotes defense strategies for NoC hardware security. Experiments show that the designed Trojan accounts for less than one thousandth of the entire NoC's power consumption and area. Simulation experiments further reveal that this replay-type hardware Trojan can reduce network throughput.
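A rough software analogy may help picture the replay behavior described above. The sketch below is a hypothetical behavioral model, not the paper's logic circuit; the packet list, the `replay_every` parameter and the goodput measure are assumptions for illustration. A Trojan that snoops one flit and periodically re-injects copies lowers the fraction of useful traffic on the link.

```python
def simulate_replay_trojan(packets, replay_every=3):
    """Model a link carrying `packets` while a replay Trojan re-injects a
    captured flit every `replay_every` cycles. Returns the observed traffic
    and the goodput fraction (useful packets / total link traffic)."""
    link, captured = [], None
    for i, p in enumerate(packets):
        link.append(p)
        if captured is None:
            captured = p                      # Trojan snoops the first flit
        elif i % replay_every == 0:
            link.append(captured)             # duplicate traffic injected
    return link, len(packets) / len(link)
```

With six packets and one replayed copy, goodput drops below 1, qualitatively mirroring the throughput reduction the abstract reports.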
2019-02-18
Gu, Bin, Yuan, Xiao-Tong, Chen, Songcan, Huang, Heng.  2018.  New Incremental Learning Algorithm for Semi-Supervised Support Vector Machine. Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. :1475–1484.
Semi-supervised learning is especially important in data mining applications because it can make use of plentiful unlabeled data to train high-quality learning models. The Semi-Supervised Support Vector Machine (S3VM) is a powerful semi-supervised learning model. However, high computational cost and non-convexity severely impede the S3VM method in large-scale applications. Although several learning algorithms have been proposed for S3VM, scaling up S3VM is still an open problem. To address this challenging problem, in this paper we propose a new incremental learning algorithm to scale up S3VM (IL-S3VM) based on the path following technique in the framework of Difference of Convex (DC) programming. Traditional DC programming based algorithms need multiple outer loops and are not suitable for incremental learning, and traditional path following algorithms are limited to convex problems. Our new IL-S3VM algorithm, based on the path-following technique, can directly update the solution of S3VM to converge to a local minimum within one outer loop, so that efficient incremental learning can be achieved. More importantly, we provide a finite convergence analysis for our new algorithm. To the best of our knowledge, our new IL-S3VM algorithm is the first efficient path following algorithm for a non-convex problem (i.e., S3VM) with a local minimum convergence guarantee. Experimental results on a variety of benchmark datasets not only confirm the finite convergence of IL-S3VM, but also show a large reduction in computational time compared with existing batch and incremental learning algorithms, while retaining similar generalization performance.
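For readers unfamiliar with why S3VM is non-convex, here is a minimal sketch of the standard linear S3VM objective (an illustrative textbook form with assumed constants, not the paper's IL-S3VM algorithm): the hinge loss on labeled points is convex, but the symmetric "hat" loss on unlabeled points is not.

```python
def s3vm_objective(w, b, labeled, unlabeled, lam=1.0, C=1.0, C_star=0.5):
    """Standard linear S3VM objective: margin regularizer + hinge loss on
    labeled points + non-convex 'hat' loss on unlabeled points."""
    dot = lambda u, v: sum(a * c for a, c in zip(u, v))
    reg = 0.5 * lam * dot(w, w)
    hinge = sum(max(0.0, 1 - y * (dot(w, x) + b)) for x, y in labeled)
    # |w.x + b| penalizes unlabeled points inside the margin on either side;
    # this absolute value is what makes the objective non-convex.
    hat = sum(max(0.0, 1 - abs(dot(w, x) + b)) for x in unlabeled)
    return reg + C * hinge + C_star * hat
```

Evaluating this objective along a line in weight space exhibits multiple local minima, which is why the paper's finite convergence guarantee to a local minimum is the relevant notion.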
2019-12-18
Mohammed, Saif Saad, Hussain, Rasheed, Senko, Oleg, Bimaganbetov, Bagdat, Lee, JooYoung, Hussain, Fatima, Kerrache, Chaker Abdelaziz, Barka, Ezedin, Alam Bhuiyan, Md Zakirul.  2018.  A New Machine Learning-based Collaborative DDoS Mitigation Mechanism in Software-Defined Network. 2018 14th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob). :1–8.
Software Defined Network (SDN) is a revolutionary idea that realizes software-driven networks through the separation of control and data planes. In essence, SDN addresses the problems faced by traditional network architectures; however, it may also expose the network to new attacks. Among other attacks, distributed denial of service (DDoS) attacks are hard to contain in such software-based networks. Existing DDoS mitigation techniques either lack performance or jeopardize the accuracy of attack detection. To fill these voids, we propose in this paper a machine learning-based DDoS mitigation technique for SDN. First, we create a model for DDoS detection in SDN using the NSL-KDD dataset; after training the model on this dataset, we use real DDoS attacks to assess our proposed model. The obtained results show that the proposed technique compares favorably with current techniques, with increased performance and accuracy.
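As a toy illustration of threshold-based flow classification (the feature, the threshold search and the labels are assumptions; the paper trains a full model on NSL-KDD features), a one-feature decision stump might be trained like this:

```python
def train_stump(flows):
    """flows: list of (feature_value, label), label 'ddos' or 'benign'.
    Pick the threshold that best separates the classes on this feature."""
    best_t, best_acc = None, -1.0
    for t, _ in flows:
        acc = sum((v >= t) == (y == 'ddos') for v, y in flows) / len(flows)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def classify(threshold, feature_value):
    return 'ddos' if feature_value >= threshold else 'benign'
```

In an SDN setting such a rule would run at the controller over per-flow statistics; a real detector combines many features and a stronger learner.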
2019-02-25
Yi, Weiming, Dong, Peiwu, Wang, Jing.  2018.  Node Risk Propagation Capability Modeling of Supply Chain Network Based on Structural Attributes. Proceedings of the 2018 9th International Conference on E-business, Management and Economics. :50–54.
This paper first defines importance indices for several types of nodes based on the local and global attributes of the supply chain network, analyzes the propagation effect of a node once a risk arises from the perspective of network topology, and forms multidimensional structural attributes that describe the node risk propagation capabilities of the supply chain network. The indicators of the supply chain network's structural attributes are then simplified using PCA (Principal Component Analysis). Finally, a risk assessment model of node risk propagation is constructed using a BP neural network. The paper takes 4G smartphone industry chain data as an example to verify the validity of the proposed model.
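The PCA step can be sketched with a power iteration that extracts the leading principal component of the node indicators (a generic PCA sketch under an assumed data layout, not the paper's exact pipeline):

```python
def leading_component(rows, iters=200):
    """Return the unit-length leading principal component of `rows`
    (each row is one node's indicator vector), via power iteration."""
    n = len(rows[0])
    means = [sum(r[i] for r in rows) / len(rows) for i in range(n)]
    centered = [[r[i] - means[i] for i in range(n)] for r in rows]
    # Sample covariance matrix of the centered indicators.
    cov = [[sum(a[i] * a[j] for a in centered) / (len(rows) - 1)
            for j in range(n)] for i in range(n)]
    v = [1.0] * n
    for _ in range(iters):                       # power iteration
        v = [sum(cov[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    return v
```

Projecting each node's indicators onto the top few such components yields the simplified inputs that would feed the BP neural network.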
2019-01-21
Tang, Yutao, Li, Ding, Li, Zhichun, Zhang, Mu, Jee, Kangkook, Xiao, Xusheng, Wu, Zhenyu, Rhee, Junghwan, Xu, Fengyuan, Li, Qun.  2018.  NodeMerge: Template Based Efficient Data Reduction For Big-Data Causality Analysis. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :1324–1337.
Today's enterprises are exposed to sophisticated attacks, such as Advanced Persistent Threat (APT) attacks, which usually consist of multiple stealthy steps. To counter these attacks, enterprises often rely on causality analysis of system activity data collected through ubiquitous system monitoring to discover the initial penetration point, and from there identify previously unknown attack steps. However, one major challenge for causality analysis is that ubiquitous system monitoring generates a colossal amount of data, and hosting such a huge amount of data is prohibitively expensive. Thus, there is a strong demand for techniques that reduce the storage of data for causality analysis while preserving its quality. To address this problem, in this paper we propose NodeMerge, a template-based data reduction system for online system event storage. Specifically, our approach can work directly on the stream of system dependency data and achieve data reduction on read-only file events based on their access patterns. It can either reduce the storage cost or improve the performance of causality analysis under the same budget. With only a reasonable amount of resources for online data reduction, it almost completely preserves the accuracy of causality analysis. The reduced form of data can be used directly with little overhead. To evaluate our approach, we conducted a set of comprehensive evaluations, which show that for different categories of workloads, our system can reduce the storage of raw system dependency data by as much as 75.7 times, and that of the state-of-the-art approach by as much as 32.6 times. Furthermore, the results demonstrate that our approach keeps all the causality analysis information and has a reasonably small overhead in memory and on disk.
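A highly simplified version of the template idea can be sketched as follows (the event format is an assumption; the real NodeMerge works online on event streams and learns templates from process startup access patterns):

```python
from collections import defaultdict

def reduce_events(events):
    """events: (process, file) read-only access events. Each process's file
    set becomes a template; identical sets share one template id."""
    by_proc = defaultdict(set)
    for proc, f in events:
        by_proc[proc].add(f)
    templates, reduced = {}, {}
    for proc, files in by_proc.items():
        # Processes that read the same file set reference one stored template.
        tid = templates.setdefault(frozenset(files), len(templates))
        reduced[proc] = tid
    return templates, reduced
```

Storing one template plus per-process references, instead of every raw read event, is what drives the storage reduction, while the dependency information needed for causality analysis is retained.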
Ahmed, Chuadhry Mujeeb, Zhou, Jianying, Mathur, Aditya P..  2018.  Noise Matters: Using Sensor and Process Noise Fingerprint to Detect Stealthy Cyber Attacks and Authenticate Sensors in CPS. Proceedings of the 34th Annual Computer Security Applications Conference. :566–581.
A novel scheme is proposed to authenticate sensors and detect data integrity attacks in a Cyber Physical System (CPS). The proposed technique uses the hardware characteristics of a sensor and the physics of a process to create unique patterns (herein termed fingerprints) for each sensor. The sensor fingerprint is a function of the sensor and process noise embedded in sensor measurements. Uniqueness in the noise arises from the manufacturing imperfections of a sensor and from the unique features of a physical process. To create a sensor's fingerprint, a system-model-based approach is used. A noise-based fingerprint is created during the normal operation of the system. It is shown that under data injection attacks on sensors, deviations of the noise pattern from the fingerprinted pattern enable the proposed scheme to detect attacks. Experiments are performed on a dataset from a real-world water treatment (SWaT) facility. A class of stealthy attacks is designed against the proposed scheme and extensive security analysis is carried out. Results show that a range of sensors can be uniquely identified with an accuracy as high as 98%. Extensive sensor identification experiments are carried out on a set of sensors in the SWaT testbed. The proposed scheme is tested on a variety of attack scenarios from the reference literature, which are detected with high accuracy.
2019-02-14
Eclarin, Bobby A., Fajardo, Arnel C., Medina, Ruji P..  2018.  A Novel Feature Hashing With Efficient Collision Resolution for Bag-of-Words Representation of Text Data. Proceedings of the 2nd International Conference on Natural Language Processing and Information Retrieval. :12–16.
Text mining is widely used in many areas, transforming unstructured text data from sources such as patient records, social media networks, insurance data, and news, among others, into an invaluable source of information. The Bag of Words (BoW) representation is a means of extracting features from text data for use in modeling. In text classification, a word in a document is assigned a weight according to its frequency within the document and its frequency across different documents; these words, together with their weights, form the BoW. One way to handle voluminous data is to use the feature hashing method, or hashing trick. However, collisions are inevitable and might change the result of the whole process of feature generation and selection. Using a vector data structure, lookup performance is improved while resolving collisions, and memory usage is also efficient.
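The idea of resolving hash collisions with per-bucket vectors can be sketched as follows (the bucket count and hash choice are assumptions; md5 is used here only because it is deterministic across runs):

```python
import hashlib

def hash_bow(words, n_buckets=8):
    """Bag-of-words via the hashing trick; each bucket holds a small vector
    of [word, count] pairs so colliding words stay distinguishable."""
    table = [[] for _ in range(n_buckets)]
    for w in words:
        idx = int(hashlib.md5(w.encode()).hexdigest(), 16) % n_buckets
        for entry in table[idx]:              # scan the bucket's vector
            if entry[0] == w:
                entry[1] += 1
                break
        else:
            table[idx].append([w, 1])         # new word, new vector slot
    return table
```

Plain feature hashing would silently sum the counts of colliding words; keeping the word alongside its weight preserves the exact BoW at a modest memory cost.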
2019-10-07
Monge, Marco Antonio Sotelo, Vidal, Jorge Maestre, Villalba, Luis Javier García.  2018.  A Novel Self-Organizing Network Solution Towards Crypto-ransomware Mitigation. Proceedings of the 13th International Conference on Availability, Reliability and Security. :48:1–48:10.
In the last decade, crypto-ransomware evolved from a family of malicious software with scarce repercussion in the research community into a sophisticated and highly effective intrusion method positioned in the spotlight of the main organizations for cyberdefense. Its modus operandi is characterized by fetching the assets to be blocked, encrypting them, and triggering an extortion process that leads the victim to pay for the key that allows their recovery. This paper reviews the evolution of crypto-ransomware, focusing on the implications of the different advances in communication technologies that empowered its popularization. In addition, a novel defensive approach based on the Self-Organizing Network paradigm and emergent communication technologies (e.g. Software-Defined Networking, Network Function Virtualization, Cloud Computing, etc.) is proposed. These enhance the orchestration of smart defensive deployments that adapt to the status of the monitored environment and facilitate the adoption of previously defined risk management policies. In this way it is possible to efficiently coordinate the efforts of sensors and actuators distributed throughout the protected environment without supervision by human operators, resulting in greater protection with increased viability.
2019-02-18
Zhu, Mengeheng, Shi, Hong.  2018.  A Novel Support Vector Machine Algorithm for Missing Data. Proceedings of the 2nd International Conference on Innovation in Artificial Intelligence. :48–53.
The missing data problem often occurs in data analysis. The most common way to solve it is imputation, but imputation methods are only suitable for a low proportion of missing data, under the assumption that the missing data satisfy MCAR (Missing Completely at Random) or MAR (Missing at Random). In this paper, considering the reasons for missing data, we propose a novel support vector machine method using a new kernel function to solve problems with a relatively large proportion of missing data. This method makes full use of the observed data to reduce the error caused by filling in a large number of missing values. We validate our method on 4 data sets from the UCI Machine Learning Repository. Accuracy, F-score, Kappa statistics and recall are used to evaluate performance. Experimental results show that our method achieves significant improvement in classification results compared with common imputation methods, even when the proportion of missing data is high.
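One simple way to build a kernel that tolerates missing values, shown purely for illustration (the paper's kernel is different, and the rescaling rule here is an assumption), is to evaluate an RBF-style similarity only over features observed in both samples:

```python
import math

def partial_rbf(x, y, gamma=1.0):
    """RBF-style kernel over the features observed (not None) in both x and
    y, so missing values never need to be imputed."""
    pairs = [(a, b) for a, b in zip(x, y) if a is not None and b is not None]
    if not pairs:
        return 0.0
    # Rescale by the fraction of usable features to keep magnitudes comparable.
    sq = sum((a - b) ** 2 for a, b in pairs) * len(x) / len(pairs)
    return math.exp(-gamma * sq)
```

Such a kernel uses only observed data, which is the property the abstract emphasizes, though note this naive form is not guaranteed positive semi-definite in general.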
2019-11-18
Hall-Andersen, Mathias, Wong, David, Sullivan, Nick, Chator, Alishah.  2018.  nQUIC: Noise-Based QUIC Packet Protection. Proceedings of the Workshop on the Evolution, Performance, and Interoperability of QUIC. :22–28.
We present nQUIC, a variant of QUIC-TLS that uses the Noise protocol framework for its key exchange and as the basis of its packet protection, with no semantic transport changes. nQUIC is designed for deployment in systems and for applications that assert trust in raw public keys rather than PKI-based certificate chains. It uses a fixed key exchange algorithm, trading agility for ease of implementation and verification. nQUIC provides mandatory server and optional client authentication, resistance to Key Compromise Impersonation attacks, and forward and future secrecy of traffic key derivation, which makes it preferable to QUIC-TLS for long-lived QUIC connections in comparable applications. We developed two interoperable prototype implementations written in Go and Rust. Experimental results show that nQUIC finishes its handshake in a comparable amount of time to QUIC-TLS.
2019-02-08
Isaacson, D. M..  2018.  The ODNI-OUSD(I) Xpress Challenge: An Experimental Application of Artificial Intelligence Techniques to National Security Decision Support. 2018 IEEE 8th Annual Computing and Communication Workshop and Conference (CCWC). :104-109.
Current methods for producing and disseminating analytic products contribute to the latency of relaying actionable information and analysis to the U.S. Intelligence Community's (IC's) principal customers, U.S. policymakers and warfighters. To circumvent these methods, which can often serve as a bottleneck, we report on the results of a public prize challenge that explored the potential for artificial intelligence techniques to generate useful analytic products. The challenge tasked solvers to develop algorithms capable of searching and processing nearly 15,000 unstructured text files into a 1-2 page analytic product without human intervention; these analytic products were subsequently evaluated and scored using established IC methodologies and criteria. Experimental results from this challenge demonstrate the promise of machine generation of analytic products to ensure that the IC warns and informs in a more timely fashion.
2019-05-01
Gautier, Adam M., Andel, Todd R., Benton, Ryan.  2018.  On-Device Detection via Anomalous Environmental Factors. Proceedings of the 8th Software Security, Protection, and Reverse Engineering Workshop. :5:1–5:8.
Embedded Systems (ES) underlie society's critical cyberinfrastructure and comprise the vast majority of consumer electronics, making them a prized target for dangerous malware and hardware Trojans. Malicious intrusion into these systems presents a threat to national security and economic stability, as globalized supply chains and tight network integration make ES more susceptible to attack than ever. High-end ES like the Xilinx Zynq-7020 system on a chip are widely used in the field and provide a representative platform for investigating the methods of cybercriminals. This research suggests a novel anomaly detection framework that could be used to detect potential zero-day exploits, undiscovered rootkits, or even maliciously implanted hardware by leveraging the Zynq architecture and real-time device-level measurements of thermal side-channels. The results of an initial investigation showed different processor workloads produce distinct thermal fingerprints that are detectable by out-of-band, digital logic-based thermal sensors.
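The fingerprinting idea can be sketched abstractly (the fingerprint form and tolerance are assumptions; the actual work uses on-chip digital thermal sensors and richer features than a mean temperature):

```python
import statistics

def thermal_fingerprint(trace):
    """Summarize a workload's thermal trace as (mean, stdev)."""
    return statistics.mean(trace), statistics.stdev(trace)

def is_anomalous(known_fps, trace, tol=2.0):
    """Flag a trace whose mean temperature is far from every known workload."""
    mu, _ = thermal_fingerprint(trace)
    return all(abs(mu - m) > tol for m, _ in known_fps)
```

A trace matching no enrolled workload fingerprint would be the trigger for further inspection, e.g. for a suspected rootkit or implanted hardware.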
2019-08-05
Hu, Xinyi, Zhao, Yaqun.  2018.  One to One Identification of Cryptosystem Using Fisher's Discriminant Analysis. Proceedings of the 6th ACM/ACIS International Conference on Applied Computing and Information Technology. :7–12.
Distinguishing analysis is an important part of cryptanalysis. A key task in distinguishing analysis is identifying which cryptosystem produced a given ciphertext when only the ciphertext is known. In this paper, Fisher's discriminant analysis (FDA), a statistical machine learning method, is used to identify 4 stream ciphers and 7 block ciphers one to one by extracting 9 different features. The results show that the accuracy rate of the FDA can reach 80% when distinguishing files encrypted by a stream cipher from files encrypted by a block cipher in ECB mode, and files encrypted by a block cipher in ECB mode from those in CBC mode. The average one-to-one identification accuracy rates of the stream ciphers RC4, Grain and Sosemanuk are more than 55%. The maximum accuracy rate can reach 60% when identifying SMS4 one to one among block ciphers in CBC mode. The identification accuracy rate of entropy-based features is notably higher than that of probability-based features.
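One natural entropy-based feature of the kind the abstract highlights is byte-level Shannon entropy of the ciphertext, computable as follows (the standard definition; which of the 9 features the paper uses exactly is not specified here):

```python
import math
from collections import Counter

def byte_entropy(data):
    """Shannon entropy of a byte string, in bits per byte (maximum 8.0)."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

Well-encrypted data approaches 8 bits/byte, but mode-dependent regularities (e.g. repeated blocks under ECB) perturb such statistics, which is what gives a discriminant classifier something to separate.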
2019-01-21
Solanki, Deepak.  2018.  Optical Wireless Communication. Proceedings of the 24th Annual International Conference on Mobile Computing and Networking. :858–860.
Data is the new currency impacting everybody's lives. As the modern world sends and receives millions of terabytes of data every day, present-day wireless data communication technologies, comprising Wi-Fi and 4G-LTE, are on the verge of becoming partially inept for information exchange as they suffer from spectrum congestion in both controlled and uncontrolled environments. Li-Fi, also known as light fidelity, is a full duplex communication network enabling the transmittal of data. The potency of bidirectional Visible Light Communication allows us to build an ideal medium, independent of congested radio frequencies and interference from electromagnetic waves, thus resulting in faster data transfer. The inception of LED technology for lighting in the 90s paved the way for the high growth trajectory of the LED lighting industry witnessed over the last two decades. As semiconductors, LEDs were poised to develop much bigger applications, such as integrated sensors, apart from normal dimming and ambient lighting. Li-Fi is a technology which creates a bridge between the world of data communication and LED lighting. Multiple forward and backward integrations are poised to happen in coming years as lighting players develop enterprise communication-enabled lighting products. System integrators will also look to Li-Fi-enabled luminaires for establishing wireless networks. Li-Fi is seen as a big step forward in enabling 5G telecommunication networks. Security benefits and outdoor long-range communication capabilities make Li-Fi a potential technology for defence and smart city applications. Li-Fi uses the visible and invisible frequency band (380nm - 1500nm), which is 10,000 times broader than the usable RF frequency band. The property of the light spectrum to be unlicensed and free from any health regulations makes it even more desirable.
Its applications can extend to areas where RF technology lacks presence, such as aircraft, hospitals (operating theatres), power plants and various other areas where electromagnetic (radio) interference is of great concern for the safety and security of equipment and people. Since there is no potential health hazard associated with light, it can be used safely in such locations. Li-Fi / OWC has applications in both indoor and outdoor scenarios.
Lian, J., Wang, X., Noshad, M., Brandt-Pearce, M..  2018.  Optical Wireless Interception Vulnerability Analysis of Visible Light Communication System. 2018 IEEE International Conference on Communications (ICC). :1–6.
Visible light communication is a solution for high-security wireless data transmission. In this paper, we first analyze the potential vulnerability of the system to eavesdropping from outside the room. By setting a signal-to-noise ratio threshold, we define a vulnerable area outside the room, through a window. We compute the receiver aperture needed to capture the signal and determine which portion of the space is most vulnerable to eavesdropping. Based on this analysis, we propose a solution that improves security by optimizing the modulation efficiency of each LED in the indoor lamp. The simulation results show that the proposed solution can improve security considerably while maintaining indoor communication performance.
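The threshold idea can be sketched with a generic log-distance path-loss model (all constants and the model form are assumptions for illustration, not the paper's indoor VLC channel model): a point lies in the vulnerable zone while the eavesdropper's SNR still exceeds the chosen threshold.

```python
import math

def snr_db(distance_m, tx_power_db=30.0, path_loss_exp=2.0, noise_db=-20.0):
    """Received SNR (dB) under a simple log-distance path-loss model."""
    rx_db = tx_power_db - 10 * path_loss_exp * math.log10(distance_m)
    return rx_db - noise_db

def in_vulnerable_zone(distance_m, threshold_db=10.0):
    """Eavesdropping is considered feasible while SNR stays above threshold."""
    return snr_db(distance_m) >= threshold_db
```

Sweeping distance (and, in the paper's setting, the geometry through the window and the receiver aperture) traces out the boundary of the vulnerable area.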
2019-08-26
Chiu, Pei-Ling, Lee, Kai-Hui.  2018.  Optimization Based Adaptive Tagged Visual Cryptography. Proceedings of the Genetic and Evolutionary Computation Conference Companion. :33–34.
The Tagged Visual Cryptography Scheme (TVCS) adds tag images to the noise-like shares generated by the traditional VCS to improve share management. However, existing TVCSs suffer from degraded visual quality of the recovered secret image, and there may be pixel expansion. This study proposes a Threshold Adaptive Tagged Visual Cryptography Scheme ((k, n)-ATVCS) to solve the above-mentioned problems. The ATVCS encryption problem is formulated as a mathematical optimization model, and an evolutionary algorithm is developed to find the optimal solution. The proposed (k, n)-ATVCS enables the encryptor to adjust the visual quality between the tag image and the secret image by tuning parameters. Experimental results show the correctness and effectiveness of this study.
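For context, the classic (2, 2) visual cryptography construction that TVCS-style schemes build on can be sketched as follows (a textbook construction, not the proposed ATVCS): each secret pixel expands into a two-subpixel pattern on each share.

```python
import random

def make_shares(secret_bits, seed=0):
    """(2, 2) visual secret sharing: 0 = white pixel, 1 = black pixel."""
    rng = random.Random(seed)
    s1, s2 = [], []
    for bit in secret_bits:
        pattern = rng.choice([(0, 1), (1, 0)])
        s1.append(pattern)
        # White (0): identical subpixels, stacking stays half-black;
        # black (1): complementary subpixels, stacking turns all-black.
        s2.append(pattern if bit == 0 else (1 - pattern[0], 1 - pattern[1]))
    return s1, s2

def stack(s1, s2):
    """Physically overlaying transparencies is a per-subpixel OR."""
    return [(a[0] | b[0], a[1] | b[1]) for a, b in zip(s1, s2)]
```

The pixel expansion and halved contrast visible here are exactly the quality issues the (k, n)-ATVCS optimization targets.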
2019-01-21
Han, Xu, Tian, Daxin, Duan, Xuting, Sheng, Zhengguo, Wang, Yunpeng, Leung, Victor C.M..  2018.  Optimized Anonymity Updating in VANET Based on Information and Privacy Joint Metrics. Proceedings of the 8th ACM Symposium on Design and Analysis of Intelligent Vehicular Networks and Applications. :63–69.
With the continuous development of the vehicular ad hoc network (VANET), many challenges related to network security have arisen one after another, among which privacy issues are particularly prominent. To help each network user decide when and where to protect their privacy, we suggest creating a user-centric privacy computing system in VANET. A risk assessment function and a set of decision weights are proposed to simulate the driver's decision-making intent in the vehicular network. In addition, the proposed joint information and privacy metrics are used as the key indicators for dynamic selection of a Mix-zone. Finally, a Mix-zone creation mechanism is defined to achieve privacy protection in VANET, considering three influencing factors: maximum road capacity, user-centric quantitative privacy and attacker information measurement.
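A common way to quantify the privacy gained from a Mix-zone, shown here only as background (the paper's joint metric is more elaborate), is the entropy of the attacker's probability distribution over candidate vehicles:

```python
import math

def anonymity_entropy(probs):
    """Entropy (bits) of the attacker's distribution over candidate
    vehicles; higher means a more effective Mix-zone."""
    return -sum(p * math.log2(p) for p in probs if p > 0)
```

Four equally plausible candidates give 2 bits of anonymity; a uniquely identified vehicle gives 0, so a Mix-zone is worthwhile where it raises this value enough to justify its cost.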
2020-04-20
Liu, Kai-Cheng, Kuo, Chuan-Wei, Liao, Wen-Chiuan, Wang, Pang-Chieh.  2018.  Optimized Data de-Identification Using Multidimensional k-Anonymity. 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/ 12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :1610–1614.
In the globalized knowledge economy, big data analytics have been widely applied in diverse areas. A critical issue in big data analysis of personal information is the possible leak of personal privacy. Therefore, it is necessary to have an anonymization-based de-identification method to avoid undesirable privacy leaks. Such a method can prevent published data from being traced back to personal privacy. Prior empirical research has provided approaches to reduce privacy leak risk, e.g. Maximum Distance to Average Vector (MDAV), the Condensation Approach and Differential Privacy. However, previous methods inevitably generate synthetic data of different sizes and are thus unsuitable for general use. To satisfy the need for general use, k-anonymity can be chosen as the privacy protection mechanism in the de-identification process to ensure the data are not distorted, because k-anonymity is strong in both protecting privacy and preserving data authenticity. Accordingly, this study proposes an optimized multidimensional method for anonymizing data based on both a priority weight-adjusted method and a mean difference recommending tree method (MDR tree method). The results of this study reveal that the new method generates more reliable anonymous data and reduces the information loss rate.
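As a minimal illustration of k-anonymity on a single quasi-identifier (a naive interval-widening sketch, not the paper's MDR-tree method), one can widen the generalization interval until every group holds at least k records:

```python
from collections import Counter

def k_anonymize_ages(ages, k=2):
    """Generalize ages into intervals wide enough that every interval
    contains at least k records (naive single-attribute k-anonymity)."""
    width = 1
    while width <= max(ages) + 1:
        groups = Counter(a // width for a in ages)
        if min(groups.values()) >= k:
            return [(a // width * width, a // width * width + width - 1)
                    for a in ages]
        width *= 2
    return [(0, max(ages))] * len(ages)   # fall back to full generalization
```

The information loss the abstract discusses corresponds to the final interval width: smarter multidimensional grouping (the paper's contribution) achieves k-anonymity with narrower, less lossy generalizations.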
2019-01-16
Pan, Cheng, Hu, Xiameng, Zhou, Lan, Luo, Yingwei, Wang, Xiaolin, Wang, Zhenlin.  2018.  PACE: Penalty Aware Cache Modeling with Enhanced AET. Proceedings of the 9th Asia-Pacific Workshop on Systems. :19:1–19:8.
Past cache modeling techniques are typically limited to cache systems with a fixed cache line/block size. This limitation is not a problem for a hardware cache, where the cache line size is uniform. However, modern in-memory software caches, such as Memcached and Redis, are able to cache varied-size data objects. A software cache supports update and delete operations in addition to the reads and writes of a hardware cache. Moreover, existing cache models often assume that the penalty for each cache miss is identical, which is not true, especially for software caches targeting web services, so past cache management policies that aim only to improve cache hit rate are no longer sufficient. We propose a more general cache model that can handle varied cache block sizes, nonuniform miss penalties, and diverse cache operations. In this paper, we first extend a state-of-the-art cache model to accurately predict cache miss ratios for variable cache sizes when object size, updates and deletions are considered. We then apply this model to drive cache management when miss penalty is taken into consideration. Our approach delivers better results than a recent penalty-aware cache management scheme, Hyperbolic Caching, especially when the cache budget is tight. Another advantage of our approach is that it provides predictable and controllable cache management of cache space allocation, especially when multiple applications share the cache space.
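The classic fixed-block-size model that the paper generalizes can be sketched via LRU stack distances (uniform block size and uniform miss penalty assumed, i.e. exactly the limitation the paper removes):

```python
def miss_ratio(trace, cache_size):
    """LRU miss ratio for a block trace via stack (reuse) distances;
    assumes uniform block size and uniform miss penalty."""
    stack, misses = [], 0
    for block in trace:
        if block in stack:
            depth = stack.index(block)        # stack distance of this reuse
            if depth >= cache_size:
                misses += 1                   # reuse falls outside the cache
            stack.pop(depth)
        else:
            misses += 1                       # cold miss
        stack.insert(0, block)                # promote to most-recently-used
    return misses / len(trace)
```

Evaluating this over a range of cache sizes gives the miss ratio curve; the paper's extension makes the analogous prediction work for variable object sizes, updates, deletions and nonuniform penalties.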
2019-09-23
Ramijak, Dusan, Pal, Amitangshu, Kant, Krishna.  2018.  Pattern Mining Based Compression of IoT Data. Proceedings of the Workshop Program of the 19th International Conference on Distributed Computing and Networking. :12:1–12:6.
The increasing proliferation of Internet of Things (IoT) devices and systems results in large amounts of highly heterogeneous data to be collected. Although at least some of the collected sensor data is often consumed by the real-time decision making and control of the IoT system, that is not the only use of such data. Invariably, the collected data is stored, perhaps in some filtered or down-selected fashion, so that it can be used for a variety of lower-frequency operations. It is expected that in a smart city environment with numerous IoT deployments, the volume of such data can become enormous. Therefore, mechanisms for lossy data compression that provide a trade-off between compression ratio and data usefulness for offline statistical analysis become necessary. In this paper, we discuss several simple pattern mining based compression strategies for multi-attribute IoT data streams. For each method, we evaluate its compressibility versus the level of similarity between the original and compressed time series in the context of a home energy management system.
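One of the simplest pattern-based strategies can be sketched as frequent-tuple substitution (an illustrative simplification with an assumed dictionary size; the paper evaluates several mining-based, lossy variants):

```python
from collections import Counter

def compress(readings, dict_size=2):
    """Replace the `dict_size` most frequent reading tuples with short
    dictionary indices; rarer tuples pass through verbatim."""
    common = [t for t, _ in Counter(readings).most_common(dict_size)]
    table = {t: i for i, t in enumerate(common)}
    return table, [table.get(r, r) for r in readings]
```

Sensor streams dominated by a few recurring (temperature, humidity)-style tuples compress well; making the substitution lossy (mapping near-identical tuples to one pattern) trades reconstruction fidelity for a higher compression ratio, which is the trade-off the paper measures.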
2019-02-18
Afsharinejad, Armita, Hurley, Neil.  2018.  Performance Analysis of a Privacy Constrained kNN Recommendation Using Data Sketches. Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining. :10–18.
This paper evaluates two algorithms, BLIP and JLT, for creating differentially private data sketches of user profiles, in terms of their ability to protect a kNN collaborative filtering algorithm from an inference attack by third parties. The transformed user profiles are employed in a user-based top-N collaborative filtering system. For the first time, a theoretical analysis of the BLIP is carried out, to derive expressions that relate its parameters to its performance. This allows the two techniques to be fairly compared. The impact of deploying these approaches on the utility of the system—its ability to make good recommendations—and on its privacy level—the ability of third parties to make inferences about the underlying user preferences—is examined. An active inference attack is evaluated, consisting of the injection of a number of tailored sybil profiles into the system database. User profile data of targeted users is then inferred from the recommendations made to the sybils. Although the differentially private sketches are designed to allow the transformed user profiles to be published without compromising privacy, the attack we examine does not use such information and depends only on some pre-existing knowledge of some user preferences as well as the neighbourhood size of the kNN algorithm. Our analysis therefore assesses in practical terms a relatively weak privacy attack, which is extremely simple to apply in systems that allow low-cost generation of sybils. We find that, for a given differential privacy level, the BLIP injects less noise into the system, but for a given level of noise, the JLT offers a more compact representation.
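The BLIP construction can be sketched as a Bloom filter followed by per-bit randomized response (the parameter values below are arbitrary assumptions; md5 is used only as a deterministic hash):

```python
import hashlib
import random

def blip(items, m=64, k=2, p=0.1, seed=0):
    """Hash each profile item into an m-bit Bloom filter with k hashes,
    then flip every bit independently with probability p (randomized
    response) before publishing the sketch."""
    bits = [0] * m
    for item in items:
        for i in range(k):
            h = int(hashlib.md5(f"{i}:{item}".encode()).hexdigest(), 16)
            bits[h % m] = 1
    rng = random.Random(seed)
    return [b ^ (rng.random() < p) for b in bits]
```

The flip probability p sets the differential privacy level, and with it the noise injected into the kNN similarity computations that the paper's analysis relates to recommendation utility.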
2019-10-15
Vyakaranal, S., Kengond, S..  2018.  Performance Analysis of Symmetric Key Cryptographic Algorithms. 2018 International Conference on Communication and Signal Processing (ICCSP). :0411–0415.
Data security, an important aspect of today's internet, is gaining more importance day by day. With the increase in online data exchange, transactions and payments, secure payment and secure data transfer have become an area of concern. Cryptography makes data transmission over the internet secure through various methods and algorithms, and helps prevent unauthorized access to data by providing authentication, confidentiality, integrity and non-repudiation. Many cryptographic algorithms are available for transmitting data securely, but the algorithm to be used should be robust, efficient, cost-effective, high-performing and easily deployable. Choosing an algorithm that suits the customer's requirements is a task of utmost importance. The proposed work discusses different symmetric key cryptographic algorithms, namely DES, 3DES, AES and Blowfish, by considering encryption time, decryption time, entropy, memory usage, throughput, avalanche effect and energy consumption in a practical implementation using Java. The practical implementation highlights the trade-off in performance in terms of the cost of various parameters, rather than mere theoretical concepts. Battery consumption and the avalanche effect of the algorithms have been discussed. The results reveal that AES performs very well in the overall performance analysis among the considered algorithms.
2019-05-01
Georgiadis, Ioannis, Dossis, Michael, Kontogiannis, Sotirios.  2018.  Performance Evaluation on IoT Devices Secure Data Delivery Processes. Proceedings of the 22Nd Pan-Hellenic Conference on Informatics. :306–311.
This paper presents existing cryptographic technologies used by the IoT industry. The authors review the security capabilities of existing IoT protocols such as LoRaWAN, IEEE 802.15.4, BLE and RF-based protocols. They also experiment with the cryptographic efficiency and energy consumption of existing cryptography algorithms implemented on embedded systems, evaluating the performance of a single 32-bit ARM Cortex microprocessor, an 8-bit Atmel ATmega32u4 microcontroller, and the Parallella board's Xilinx Zynq FPGA parallel co-processors. From the experimental results, the authors identify the requirements of the next generation of IoT security protocols and provide useful guidelines.
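A cryptographic-efficiency comparison of this kind reduces to timing each primitive over representative payloads. The harness below is a host-side sketch only (the paper measures on embedded hardware, and energy consumption needs physical instrumentation); the SHA-256 stand-in and payload size are assumptions:

```python
import hashlib
import time

def benchmark(transform, payload: bytes, runs: int = 1000) -> float:
    """Rough wall-clock throughput of a transform, in bytes per second.
    On-target timing and power probes are needed for embedded results."""
    start = time.perf_counter()
    for _ in range(runs):
        transform(payload)
    elapsed = time.perf_counter() - start
    return runs * len(payload) / elapsed

# Example: time a keyed-hash stand-in over a LoRaWAN-sized payload.
throughput = benchmark(lambda m: hashlib.sha256(m).digest(), b"x" * 51)
print(throughput)
```

Running the same harness per algorithm on each target (ARM Cortex, ATmega32u4, Zynq) yields the comparable efficiency figures the abstract refers to.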
2019-03-18
Jia, Xiaoqi, He, Yun, Wu, Xiyao, Sun, Huiqi.  2018.  Performing Trusted Computing Actively Using Isolated Security Processor. Proceedings of the 1st Workshop on Security-Oriented Designs of Computer Architectures and Processors. :2–7.
Trusted computing is one of the main development trends in information security. However, there are still two limitations in the existing trusted computing model. One is that its measurement process can be bypassed. The other is that it lacks effective runtime detection methods to protect the system, or even the measurement process itself. In this paper, we introduce an active trusted computing model that addresses these two problems. Our active trusted computing model comprises two components: a normal computation world and an isolated security world. All the security tasks of the model are assigned to the isolated security world. In this model, the static trusted measurement measures the BIOS and operating system at the start-up of the computer system, while the dynamic trusted measurement measures the code segment, the data segment, and other critical structures actively and periodically at runtime. We have implemented a prototype of the active trusted computing model and performed a preliminary evaluation. Our experimental results show that this prototype can perform trusted computing on-the-fly effectively with an acceptable performance overhead.
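The dynamic measurement loop can be sketched as hashing a critical region against a baseline and re-checking periodically. This is purely an illustration: it runs in the same world as the measured code, whereas the paper's guarantees depend on the monitor living on an isolated security processor that the measured system cannot bypass:

```python
import hashlib
import threading

def measure(region: bytes) -> str:
    """Hash a critical region as its trusted-measurement value."""
    return hashlib.sha256(region).hexdigest()

class ActiveMonitor:
    """Periodically re-measure a region and flag divergence from baseline.
    Stand-in for the isolated security world's dynamic measurement."""
    def __init__(self, get_region, interval=1.0):
        self.get_region = get_region
        self.baseline = measure(get_region())
        self.interval = interval
        self.tampered = False

    def check(self) -> bool:
        """Return True while the region still matches its baseline."""
        if measure(self.get_region()) != self.baseline:
            self.tampered = True
        return not self.tampered

    def start(self):
        # Re-check on a timer, mimicking periodic runtime measurement.
        if self.check():
            threading.Timer(self.interval, self.start).start()

# Example: monitor a stand-in "code segment" held in a byte buffer.
code_segment = bytearray(b"\x90\x90\xc3critical-code")
monitor = ActiveMonitor(lambda: bytes(code_segment))
print(monitor.check())   # True while the segment is unchanged
```

A real deployment would hash actual code pages from the isolated world, so that tampering with the normal world cannot also disable the check.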