Biblio
Filters: Keyword is Libraries
Privacy-Preserving Biometric Matching Using Homomorphic Encryption. 2021 IEEE 20th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :494–505.
2021. Biometric matching involves storing and processing sensitive user information. Maintaining the privacy of this data is thus a major challenge, and homomorphic encryption offers a possible solution. We propose a privacy-preserving biometrics-based authentication protocol based on fully homomorphic encryption, where the biometric sample for a user is gathered by a local device but matched against a biometric template by a remote server operating solely on encrypted data. The design ensures that 1) the user's sensitive biometric data remains private, and 2) the user and client device are securely authenticated to the server. A proof-of-concept implementation building on the TFHE library is also presented, which includes the underlying basic operations needed to execute the biometric matching. Performance results from the implementation show how challenging it is to make FHE practical in this context, but they suggest that, with implementation optimisations and improvements, the protocol could be used for real-world applications.
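The following minimal Python sketch illustrates the kind of computation such a protocol delegates: a server computing a Hamming distance over encrypted bit vectors. The EncBit class is a hypothetical stand-in, not the TFHE API; the bit patterns and threshold are invented.

```python
# Illustrative sketch only: a mock "bitwise FHE" stand-in showing how a server
# could compute a Hamming distance between an encrypted probe and an encrypted
# template without seeing either. Real TFHE operates on ciphertexts gate by
# gate; EncBit here is a hypothetical placeholder, not the TFHE API.

class EncBit:
    """Pretend ciphertext wrapping a single bit."""
    def __init__(self, bit: int):
        self._b = bit & 1          # a real scheme would store an encryption

    def xor(self, other: "EncBit") -> "EncBit":
        return EncBit(self._b ^ other._b)   # homomorphic XOR gate

    def decrypt(self) -> int:      # only the key holder could do this
        return self._b

def encrypt_bits(bits):
    return [EncBit(b) for b in bits]

def encrypted_hamming(enc_probe, enc_template):
    """Server side: XOR bit by bit; the sum is revealed only after decryption."""
    return [p.xor(t) for p, t in zip(enc_probe, enc_template)]

template = [1, 0, 1, 1, 0, 0, 1, 0]       # enrolled biometric template
probe    = [1, 0, 0, 1, 0, 0, 1, 1]       # fresh sample from the local device

diff = encrypted_hamming(encrypt_bits(probe), encrypt_bits(template))
distance = sum(bit.decrypt() for bit in diff)    # client-side decryption
print("match" if distance <= 2 else "no match")  # threshold is illustrative
```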
LiONv2: An Experimental Network Construction Tool Considering Disaggregation of Network Configuration and Device Configuration. 2021 IEEE 7th International Conference on Network Softwarization (NetSoft). :171–175.
2021. An experimental network environment plays an important role in examining new systems and protocols. We have developed an experimental network construction tool called LiONv1 (Lightweight On-Demand Networking, ver.1). LiONv1 satisfies the following four requirements: a programmer-friendly configuration file based on Infrastructure as Code, multiple virtualization technologies for virtual nodes, physical-topology-conscious virtual node placement, and L3-protocol-agnostic virtual networks. No existing experimental network environment satisfies all four requirements. In this paper, we develop LiONv2, which satisfies three more requirements: diversity of available network devices, Internet-scale deployment, and disaggregation of network configuration and device configuration. LiONv2 employs NETCONF and YANG to achieve diversity of available network devices and Internet-scale deployment. LiONv2 also defines two YANG models which disaggregate network configuration and device configuration. LiONv2 is implemented in the Go and C languages with public libraries for Go. Measurement results show that the construction time of a virtual network is independent of the number of virtual nodes as long as a single virtual node is created per physical node.
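As a rough illustration of the NETCONF/YANG workflow the tool builds on, the sketch below pushes a configuration fragment to a device with the Python ncclient library. The host, credentials, and payload are placeholders, and LiONv2's own YANG models are not reproduced here; a standard ietf-interfaces snippet stands in.

```python
# Hedged sketch: staging and committing device configuration over NETCONF
# with ncclient, in the spirit of LiONv2's NETCONF/YANG-based approach.
# Host, credentials, and the XML payload are hypothetical placeholders;
# the device must support the candidate datastore for this flow.
from ncclient import manager

DEVICE = dict(host="192.0.2.10", port=830, username="admin",
              password="admin", hostkey_verify=False)

# A generic ietf-interfaces snippet standing in for a device-config model.
CONFIG = """
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
    <interface>
      <name>eth0</name>
      <enabled>true</enabled>
    </interface>
  </interfaces>
</config>
"""

with manager.connect(**DEVICE) as m:
    m.edit_config(target="candidate", config=CONFIG)  # stage the change
    m.commit()                                        # apply atomically
```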
Research on Data Classification of Intelligent Connected Vehicles Based on Scenarios. 2021 International Conference on E-Commerce and E-Management (ICECEM). :153–158.
2021. The intelligent connected vehicle industry has entered a period of opportunity, industry data is accumulating rapidly, and the formulation of industry standards to regulate big data management and application is imminent. As the basis of data security, data classification has received unprecedented attention. By reviewing the research and development status of data classification in various industries, this article combines industry characteristics, re-examines the framework of industry data classification from the aspects of information security and data assetization, and tries to find the balance point between data security and data value. Because the intelligent connected vehicle industry underpins big data applications, this article combines the characteristics of the connected vehicle industry, re-examines the data characteristics of the industry from the two aspects of information security and data assetization, and ultimately proposes a scene-based hierarchical classification framework. The framework includes the complete classification process, model, and quantifiable parameters, providing a solution and theoretical support for the construction of a big data automatic classification system for the intelligent connected vehicle industry and for safe, open data applications.
PDGraph: A Large-Scale Empirical Study on Project Dependency of Security Vulnerabilities. 2021 51st Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN). :161–173.
2021. The reuse of libraries in software development has become prevalent for improving development efficiency and software quality. However, security vulnerabilities of reused libraries propagated through software project dependency pose a severe security threat, and they have not yet been well studied. In this paper, we present the first large-scale empirical study of project dependencies with respect to security vulnerabilities. We developed PDGraph, an innovative approach for analyzing publicly known security vulnerabilities among numerous project dependencies, which provides a new perspective for assessing security risks in the wild. Our large-scale collection comprises 337,415 projects and 1,385,338 dependency relations. In particular, PDGraph generates a project dependency graph, where each node is a project and each edge indicates a dependency relationship. We conducted experiments to validate the efficacy of PDGraph and characterized its features for security analysis. We revealed that 1,014 projects have publicly disclosed vulnerabilities and that more than 67,806 projects are directly dependent on them. Among these, 42,441 projects still manifest 67,581 insecure dependency relationships, indicating that they are built on vulnerable versions of reused libraries even though the vulnerabilities are publicly known. During our eight-month observation period, only 1,266 insecure edges were fixed by updating the corresponding vulnerable libraries to secure versions. Furthermore, we uncovered four underlying dependency risks that can significantly reduce the difficulty of compromising systems, and we conducted a quantitative analysis of these dependency risks on the PDGraph.
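A toy reconstruction of the core graph query, assuming a networkx digraph where an edge A → B means "A depends on B"; project names and the vulnerable set are invented.

```python
# Projects are nodes; a directed edge A -> B means "A depends on B".
# Given publicly known vulnerable projects, find everything that depends
# on them, directly or transitively.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("app-a", "lib-x"), ("app-b", "lib-x"),
    ("app-b", "lib-y"), ("lib-x", "lib-z"),
])
vulnerable = {"lib-z"}

# Direct dependents: projects one edge away from a vulnerable library.
direct = {p for v in vulnerable for p in g.predecessors(v)}

# Transitive dependents: every project with some dependency path to it.
transitive = {p for v in vulnerable for p in nx.ancestors(g, v)}

print("direct:", direct)          # {'lib-x'}
print("transitive:", transitive)  # {'lib-x', 'app-a', 'app-b'}
```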
One Layer for All: Efficient System Security Monitoring for Edge Servers. 2021 IEEE International Performance, Computing, and Communications Conference (IPCCC). :1–8.
2021. Edge computing promises higher bandwidth and lower latency to end-users. However, edge servers usually have limited computing resources and are geographically distributed over the edge. This imposes new challenges for efficient system monitoring and control of edge servers. In this paper, we propose EdgeVMI, a framework to monitor and control services running on edge servers with lightweight virtual machine introspection (VMI). The key to our technique is to run the monitor in a lightweight virtual machine which can leverage hardware events for monitoring memory reads and writes. In addition, the small binary size and memory footprint of the monitor reduce the start/stop time of the service, the runtime overhead, and the deployment effort. Inspired by unikernels, we build our monitor with only the system modules, libraries, and functionality necessary for a specific monitoring task. To reduce the security risk of the monitoring behavior, we separate the monitor into two isolated modules: one acts as a sensor that collects security information, and the other acts as an actuator that executes control commands. Our evaluation shows the effectiveness and efficiency of the monitoring system, with an average performance overhead of 2.7%.
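A plain-Python sketch of the sensor/actuator separation described above, with queues standing in for the isolation boundary; the event fields and the policy rule are invented, not EdgeVMI's.

```python
# One isolated component only collects security events, another only
# executes control commands, and a policy step mediates between them.
import queue

events = queue.Queue()    # sensor -> policy
commands = queue.Queue()  # policy -> actuator

def sensor():
    # In EdgeVMI this would observe memory reads/writes via hardware events.
    events.put({"pid": 42, "op": "write", "region": "kernel_text"})

def policy():
    e = events.get()
    if e["region"] == "kernel_text":          # illustrative rule
        commands.put({"action": "suspend", "pid": e["pid"]})

def actuator():
    c = commands.get()
    print(f"executing {c['action']} on pid {c['pid']}")

sensor(); policy(); actuator()
```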
Enhancing Cloud Data Privacy Using Pre-Internet Data Encryption. 2021 18th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP). :446–449.
2021. Cloud computing is one of the most powerful and authoritative paradigms in computing, as it provides access to various third-party services at a lower cost. However, cloud computing faces various security challenges, especially regarding data privacy, and these are most critical when dealing with sensitive personal or organizational data. Cloud service providers encrypt data in transit from the local hard drive to the cloud server and again at the server side; the problem is that the encryption key is stored by the service provider, meaning the provider can decrypt the data. This paper discusses how cloud security can be enhanced by using client-side data encryption (pre-internet encryption), which allows clients to encrypt data before uploading it to the cloud and to store the key themselves. Data is thus delivered to the cloud in an unreadable, secure format that cannot be accessed by unauthorized persons.
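A minimal sketch of pre-internet (client-side) encryption with the Python cryptography package's Fernet recipe; the plaintext is a placeholder, and key management is left to the client as the paper suggests.

```python
# Data is encrypted locally and only ciphertext ever leaves the machine,
# so the cloud provider never holds the key.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # kept by the client, never uploaded
f = Fernet(key)

plaintext = b"quarterly-report.xlsx contents"
ciphertext = f.encrypt(plaintext)    # upload this to the cloud

# Later, after downloading the ciphertext back:
assert f.decrypt(ciphertext) == plaintext
```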
MiniMod: A Modular Miniapplication Benchmarking Framework for HPC. 2021 IEEE International Conference on Cluster Computing (CLUSTER). :12–22.
2021. The HPC application community has proposed many new application communication structures, middleware interfaces, and communication models to improve HPC application performance. Modifying proxy applications is the standard practice for evaluating these novel methodologies. Currently, this requires creating a new version of the proxy application for each combination of approaches being tested. In this article, we present a modular proxy-application framework, MiniMod, that enables evaluation of a combination of independently written computation kernels, data transfer logic, communication access, and threading libraries. MiniMod is designed to allow rapid development of individual modules which can be combined at runtime. Through MiniMod, developers need only a single implementation to evaluate application impact under a variety of scenarios. We demonstrate the flexibility of MiniMod's design by using it to implement versions of a heat diffusion kernel and the miniFE finite element proxy application, along with a variety of communication, granularity, and threading modules. We examine how changing communication libraries, communication granularities, and threading approaches impacts these applications on an HPC system. These experiments demonstrate that MiniMod can rapidly improve the ability to assess new middleware techniques for scientific computing applications and next-generation hardware platforms.
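The sketch below conveys the modular idea in plain Python: kernels and communication modules are registered by name and combined at run time. The decorator interface and module names are invented for illustration, not MiniMod's actual API.

```python
# Independently written kernels and communication modules, combined at
# run time by name. A print stub stands in for MPI/GASNet-style exchange.
KERNELS, COMMS = {}, {}

def kernel(name):
    def reg(fn):
        KERNELS[name] = fn
        return fn
    return reg

def comm(name):
    def reg(fn):
        COMMS[name] = fn
        return fn
    return reg

@kernel("heat")
def heat_step(grid):
    return [(grid[max(i - 1, 0)] + grid[i] + grid[min(i + 1, len(grid) - 1)]) / 3
            for i in range(len(grid))]

@comm("print")
def exchange(halo):
    print("exchanging halo:", halo)   # stand-in for a real transport

def run(kernel_name, comm_name, grid, steps=2):
    k, c = KERNELS[kernel_name], COMMS[comm_name]
    for _ in range(steps):
        c((grid[0], grid[-1]))        # communicate boundary values
        grid = k(grid)                # then compute locally
    return grid

print(run("heat", "print", [0.0, 10.0, 0.0, 0.0]))
```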
Scalable Call Graph Constructor for Maven. 2021 IEEE/ACM 43rd International Conference on Software Engineering: Companion Proceedings (ICSE-Companion). :99–101.
2021. As a rich source of data, Call Graphs are used for various applications, including security vulnerability detection. Despite multiple studies showing that Call Graphs can drastically improve the accuracy of analysis, existing ecosystem-scale tools like Dependabot do not use Call Graphs and work at the package level. Using Call Graphs in ecosystem use cases is not practical because of the scalability problems of Call Graph generators. Call Graph generation is usually considered a "full program analysis", resulting in large Call Graphs and expensive computation. This pragmatic approach does not transfer to ecosystem scale, because the number of possible ways a particular artifact can be combined into a full program explodes. Therefore, it is necessary to make the analysis incremental. There are existing studies on different types of incremental program analysis; however, none of them focuses on Call Graph generation for an entire ecosystem. In this paper, we propose an incremental implementation of the CHA algorithm that can generate Call Graphs on demand by stitching together partial Call Graphs that have previously been extracted for libraries. Our preliminary evaluation results show that the proposed approach scales well and outperforms the most scalable existing framework, OPAL.
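A toy illustration of the stitching idea, assuming partial call graphs are shipped as adjacency maps; all method names are invented, and real CHA resolution over class hierarchies is elided.

```python
# Each library ships a partial call graph over its own methods plus
# unresolved external targets; an on-demand pass merges the partial
# graphs and keeps the edges that are now resolved.
partial_app = {
    "App.main": ["Lib.parse", "App.helper"],
    "App.helper": [],
}
partial_lib = {
    "Lib.parse": ["Lib.tokenize"],
    "Lib.tokenize": [],
}

def stitch(*partials):
    merged = {}
    for p in partials:
        for caller, callees in p.items():
            merged.setdefault(caller, []).extend(callees)
    # Keep only edges whose targets are resolved in the merged graph.
    return {c: [t for t in ts if t in merged] for c, ts in merged.items()}

full = stitch(partial_app, partial_lib)
print(full["App.main"])   # ['Lib.parse', 'App.helper']
```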
An Exploration of Microprocessor Self-Test Optimisation Based On Safe Faults. 2021 IEEE International Symposium on Defect and Fault Tolerance in VLSI and Nanotechnology Systems (DFT). :1–6.
2021. Microprocessor software test libraries (STLs) must provide maximum fault coverage with minimum overhead. Pruning safe faults, which cannot cause errors in the output of the processor, from the fault list can increase fault coverage without adding test overhead. Applying more application-specific constraints can lead to the identification of more safe faults, and some such constraints are yet to be explored. This work explores the use of signal combination-based constraints alongside well-known constant signal-based constraints for identifying safe faults. Also, for the first time, information on safe faults is utilised during test compaction in order to further minimise test overhead. Results for an OpenRISC processor design show up to 2.33% improvement in fault coverage with the use of the proposed constraints. In one test program, a code segment contributing only to the coverage of safe faults is identified, with its removal providing a 1.09% code size reduction on top of existing compaction techniques. The results may vary for a larger and more complex commercial design with greater scope for redundant logic.
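Back-of-envelope arithmetic for why pruning safe faults raises coverage without new patterns, with invented numbers:

```python
# Coverage is detected / (total - safe); removing safe faults shrinks
# the denominator while the detected count stays the same.
total, detected = 10_000, 8_500
for safe in (0, 300):
    coverage = detected / (total - safe)
    print(f"safe faults pruned: {safe:4d} -> coverage {coverage:.2%}")
# 85.00% without pruning vs. 87.63% after removing 300 safe faults.
```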
High-Assurance Cryptography in the Spectre Era. 2021 IEEE Symposium on Security and Privacy (SP). :1884–1901.
2021. High-assurance cryptography leverages methods from program verification and cryptography engineering to deliver efficient cryptographic software with machine-checked proofs of memory safety, functional correctness, provable security, and absence of timing leaks. Traditionally, these guarantees are established under a sequential execution semantics. However, this semantics is not aligned with the behavior of modern processors that make use of speculative execution to improve performance. This mismatch, combined with the high-profile Spectre-style attacks that exploit speculative execution, naturally casts doubt on the robustness of high-assurance cryptography guarantees. In this paper, we dispel these doubts by showing that the benefits of high-assurance cryptography extend to speculative execution, at the cost of only a modest performance overhead. We build atop the Jasmin verification framework an end-to-end approach for proving properties of cryptographic software under speculative execution, and validate our approach experimentally with efficient, functionally correct assembly implementations of ChaCha20 and Poly1305, which are secure against both traditional timing attacks and speculative execution attacks.
Secure Compilation of Constant-Resource Programs. 2021 IEEE 34th Computer Security Foundations Symposium (CSF). :1–12.
2021. Observational non-interference (ONI) is a generic information-flow policy for side-channel leakage. Informally, a program is ONI-secure if observing program leakage during execution does not reveal any information about secrets. Formally, ONI is parametrized by a leakage function l, and different instances of ONI can be recovered through different instantiations of l. One popular instance of ONI is the cryptographic constant-time (CCT) policy, which is widely used in cryptographic libraries to protect against timing and cache attacks. Informally, a program is CCT-secure if it does not branch on secrets and does not perform secret-dependent memory accesses. Another instance of ONI is the constant-resource (CR) policy, a relaxation of the CCT policy which is used in Amazon's s2n implementation of TLS and in several other security applications. Informally, a program is CR-secure if its cost (modelled by a tick operator over an arbitrary semi-group) does not depend on secrets. In this paper, we consider the problem of preserving ONI by compilation. Prior work on the preservation of the CCT policy develops proof techniques for showing that the main compiler optimisations preserve the CCT policy. However, these proof techniques critically rely on the fact that the semi-group used for modelling leakage satisfies the non-cancelling property $\ell_1 + \ell_1' = \ell_2 + \ell_2' \Rightarrow \ell_1 = \ell_2 \wedge \ell_1' = \ell_2'$. Unfortunately, this property fails for the CR policy, because its underlying semi-group is $(\mathbb{N}, +)$, and it is currently not known how to extend existing techniques to policies that do not satisfy non-cancellation. We propose a methodology for proving the preservation of the CR policy during a program transformation. We present an implementation of some elementary compiler passes, and apply the methodology to prove the preservation of these passes. Our results have been mechanically verified using the Coq proof assistant.
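A hedged formalization of the two notions above, in our own notation rather than the paper's:

```latex
% ONI: any two executions that agree on public inputs leak identically.
\[
  \forall s_1 \approx_{\mathrm{pub}} s_2 . \quad \ell(c, s_1) = \ell(c, s_2)
\]
% The non-cancelling property on which the CCT preservation proofs rely:
\[
  \ell_1 + \ell_1' = \ell_2 + \ell_2'
  \;\Longrightarrow\; \ell_1 = \ell_2 \wedge \ell_1' = \ell_2'
\]
% It fails over the CR cost model $(\mathbb{N}, +)$:
% $1 + 2 = 2 + 1$ yet $1 \neq 2$.
```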
Good Bot, Bad Bot: Characterizing Automated Browsing Activity. 2021 IEEE Symposium on Security and Privacy (SP). :1589–1605.
2021. As the web keeps increasing in size, the number of vulnerable and poorly managed websites increases commensurately. Attackers rely on armies of malicious bots to discover these vulnerable websites, compromise their servers, and exfiltrate sensitive user data. It is, therefore, crucial for the security of the web to understand the population and behavior of malicious bots. In this paper, we report on the design, implementation, and results of Aristaeus, a system for deploying large numbers of "honeysites", i.e., websites that exist for the sole purpose of attracting and recording bot traffic. Through a seven-month-long experiment with 100 dedicated honeysites, Aristaeus recorded 26.4 million requests sent by more than 287K unique IP addresses, 76,396 of which belong to clearly malicious bots. By analyzing the types of requests and payloads that these bots send, we discover that the average honeysite received more than 37K requests each month, with more than 50% of these requests attempting to brute-force credentials, fingerprint the deployed web applications, and exploit large numbers of different vulnerabilities. By comparing the declared identity of these bots with their TLS handshakes and HTTP headers, we uncover that more than 86.2% of bots claim to be Mozilla Firefox or Google Chrome yet are built on simple HTTP libraries and command-line tools.
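A toy version of the declared-identity consistency check, using only HTTP headers; the header heuristic is illustrative and far simpler than Aristaeus's TLS-handshake fingerprinting.

```python
# Flag clients that claim a mainstream browser User-Agent but omit
# headers real browsers always send.
BROWSER_HEADERS = {"accept", "accept-language", "accept-encoding"}

def looks_like_bot(headers: dict) -> bool:
    h = {k.lower(): v for k, v in headers.items()}
    ua = h.get("user-agent", "").lower()
    claims_browser = "chrome" in ua or "firefox" in ua
    return claims_browser and not BROWSER_HEADERS <= set(h)

print(looks_like_bot({"User-Agent": "Mozilla/5.0 ... Chrome/91.0"}))   # True
print(looks_like_bot({
    "User-Agent": "Mozilla/5.0 ... Firefox/89.0",
    "Accept": "*/*", "Accept-Language": "en-US", "Accept-Encoding": "gzip",
}))                                                                    # False
```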
Extending Chromium: Memento-Aware Browser. 2021 ACM/IEEE Joint Conference on Digital Libraries (JCDL). :310–311.
2021. Users rely on their web browser to provide information about the websites they are visiting, such as the security state of the web page they're viewing. Current browsers do not differentiate between the live Web and the past Web. If a user loads an archived web page, known as a memento, they have to rely on user interface (UI) elements within the page itself to inform them that the page they are viewing is not the live Web. Memento-awareness extends beyond recognizing a page that has already been archived: the browser should also give users the ability to easily archive live web pages as they browse. This report presents a proof-of-concept memento-aware browser created by extending Google's open-source web browser Chromium.
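A small sketch of how memento detection can work at the protocol level: per RFC 7089, archived resources reply with a Memento-Datetime header. The URL below is illustrative.

```python
# A memento-aware client can flag past-Web content by checking for the
# Memento-Datetime response header defined by the Memento protocol.
import requests

resp = requests.get("https://web.archive.org/web/2021/https://example.com/")
memento_dt = resp.headers.get("Memento-Datetime")
if memento_dt:
    print(f"Viewing the past Web: archived at {memento_dt}")
else:
    print("Viewing the live Web")
```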
Deep Reinforcement Learning for Mitigating Cyber-Physical DER Voltage Unbalance Attacks. 2021 American Control Conference (ACC). :2861–2867.
2021. The deployment of DER with smart-inverter functionality is increasing the controllable assets on power distribution networks and, consequently, the cyber-physical attack surface. Within this work, we consider the use of reinforcement learning as an online controller that adjusts DER Volt/Var and Volt/Watt control logic to mitigate network voltage unbalance. We specifically focus on the case where a network-aware cyber-physical attack has compromised a subset of single-phase DER, causing a large voltage unbalance. We show how deep reinforcement learning successfully learns a policy minimizing the unbalance, both during normal operation and during a cyber-physical attack. In mitigating the attack, the learned stochastic policy operates alongside legacy equipment on the network, i.e., tap-changing transformers, optimally adjusting predefined DER control logic.
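As a heavily simplified, hypothetical stand-in for the control problem, the sketch below runs tabular Q-learning on a scalar "unbalance" toy plant; the paper uses deep RL on a three-phase feeder, and nothing here comes from it.

```python
import random

ACTIONS = [-0.1, 0.0, 0.1]   # crude Volt/Var set-point adjustments
q = {}                        # (discretized unbalance, action) -> value
alpha, gamma, eps = 0.5, 0.9, 0.2

def step(u, a):
    """Toy plant: the action shifts the unbalance, plus small noise."""
    u2 = max(0.0, u + a + random.uniform(-0.02, 0.02))
    return u2, -u2            # reward: negative unbalance

def disc(u):
    return round(u, 1)        # coarse state discretization

for _ in range(3000):
    u = random.uniform(0.0, 1.0)
    for _ in range(20):
        s = disc(u)
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q.get((s, x), 0.0))
        u, r = step(u, a)
        best = max(q.get((disc(u), x), 0.0) for x in ACTIONS)
        old = q.get((s, a), 0.0)
        q[(s, a)] = old + alpha * (r + gamma * best - old)

# Greedy action at a heavily unbalanced state: should push unbalance down.
print(max(ACTIONS, key=lambda x: q.get((0.5, x), 0.0)))
```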
APIScanner - Towards Automated Detection of Deprecated APIs in Python Libraries. 2021 IEEE/ACM 43rd International Conference on Software Engineering: Companion Proceedings (ICSE-Companion). :5–8.
2021. Python libraries are widely used for machine learning and scientific computing tasks today. APIs in Python libraries are deprecated due to feature enhancements and bug fixes, in the same way as in other languages. These deprecated APIs are discouraged from being used in further software development. Manually detecting and replacing deprecated APIs is a tedious and time-consuming task due to the large number of API calls used in projects. Moreover, the lack of proper documentation for these deprecated APIs makes the task challenging. To address this challenge, we propose an algorithm and a tool, APIScanner, that automatically detects deprecated APIs in Python libraries. The algorithm parses the source code of the libraries using abstract syntax trees (ASTs) and identifies the deprecated APIs via decorators, hard-coded warnings, or comments. APIScanner is a Visual Studio Code extension that highlights deprecated API elements and warns developers about their use while writing source code. The tool can help developers avoid using deprecated API elements without executing the code. We tested our algorithm and tool on six popular Python libraries, detecting 838 of 871 deprecated API elements. Demo of APIScanner: https://youtu.be/1hy_ugf-iek. Documentation, tool, and source code can be found here: https://rishitha957.github.io/APIScanner.
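A condensed sketch of the decorator-based detection path using Python's ast module; the sample source is invented, and the real tool also matches hard-coded warnings and comments.

```python
# Walk a module's AST and report functions carrying a decorator whose
# name contains "deprecated".
import ast

source = '''
@deprecated
def old_api():
    pass

def new_api():
    pass
'''

tree = ast.parse(source)
for node in ast.walk(tree):
    if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
        for dec in node.decorator_list:
            name = dec.id if isinstance(dec, ast.Name) else ast.dump(dec)
            if "deprecated" in name:
                print(f"deprecated API: {node.name} (line {node.lineno})")
```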
Comparison of Full-Text Articles and Abstracts for Visual Trend Analytics through Natural Language Processing. 2020 24th International Conference Information Visualisation (IV). :360–367.
2020. Scientific publications are an essential resource for detecting emerging trends and innovations at a very early stage, far earlier than patents allow. Visual Analytics systems enable such deep analysis by applying commonly unsupervised machine learning methods to massive amounts of data. A main question from the Visual Analytics viewpoint in this context is whether abstracts of scientific publications provide analysis capability similar to that of their corresponding full texts; if so, a massive number of text documents could be processed much faster. In this paper, we compare the topic extraction methods LSI and LDA on full-text articles and their corresponding abstracts to determine which method and which data are better suited for a Visual Analytics system for Technology and Corporate Foresight. Based on an easily replicable natural language processing approach, we further investigate the impact of lemmatization on LDA and LSI. The comparison is performed qualitatively and quantitatively, to capture both human perception in visual systems and coherence values. Based on an application scenario, a visual trend analytics system illustrates the outcomes.
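A minimal gensim sketch of the LDA-vs-LSI comparison; the toy corpus, topic count, and preprocessing are placeholders for the paper's pipeline, which also evaluates lemmatization and coherence.

```python
from gensim import corpora, models

docs = [
    ["network", "security", "encryption", "protocol"],
    ["topic", "model", "visual", "analytics"],
    ["encryption", "key", "protocol", "security"],
    ["visual", "trend", "analytics", "model"],
]
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0)
lsi = models.LsiModel(corpus, num_topics=2, id2word=dictionary)

print(lda.print_topics())
print(lsi.print_topics())
```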
Analyzing Cryptographic API Usages for Android Applications Using HMM and N-Gram. 2020 International Symposium on Theoretical Aspects of Software Engineering (TASE). :153–160.
2020. Recent research shows that 88% of Android applications that use cryptographic APIs make at least one mistake. For this reason, several tools have been proposed to detect crypto API misuses, such as CryptoLint, CMA, and CogniCryptSAST. However, these tools depend heavily on manually designed rules, which require much cryptographic knowledge and can be error-prone. In this paper, we propose an approach based on probabilistic models, namely the hidden Markov model and the n-gram model, to analyze crypto API usages in Android applications. The difficulty lies in the fact that crypto APIs are sensitive not only to API order but also to their arguments. To address this, we have created a dataset consisting of crypto API sequences with arguments, obtained via symbolic execution. Experiments on our models show that (i) our models are effective in capturing usages and in detecting and locating misuses; (ii) our models perform better than those built without symbolic execution, especially in misuse detection; and (iii) compared with CogniCryptSAST, our models can detect several new misuses.
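A sketch of the n-gram half of the approach: score an API-call sequence by bigram probabilities learned from known-good sequences and flag low-probability sequences. The training sequences are invented.

```python
from collections import Counter

good = [
    ["Cipher.getInstance", "init", "doFinal"],
    ["Cipher.getInstance", "init", "update", "doFinal"],
]
bigrams = Counter((s[i], s[i + 1]) for s in good for i in range(len(s) - 1))
unigrams = Counter(c for s in good for c in s[:-1])

def score(seq):
    """Product of conditional bigram probabilities; 0 means unseen order."""
    p = 1.0
    for a, b in zip(seq, seq[1:]):
        p *= bigrams[(a, b)] / unigrams[a] if unigrams[a] else 0.0
    return p

print(score(["Cipher.getInstance", "init", "doFinal"]))   # high
print(score(["Cipher.getInstance", "doFinal", "init"]))   # 0.0 -> flag
```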
A Threat Analysis Methodology for Security Requirements Elicitation in Machine Learning Based Systems. 2020 IEEE 20th International Conference on Software Quality, Reliability and Security Companion (QRS-C). :426–433.
2020. Machine learning (ML) models are now a key component of many applications. However, machine learning based systems (MLBSs), i.e., the systems that incorporate such models, have proven vulnerable to various new attacks as a result. Currently, there exists no systematic process for eliciting security requirements for MLBSs that combines the identification of adversarial machine learning (AML) threats with that of traditional non-MLBS threats. In this research study, we explore the applicability of traditional threat modeling and existing attack libraries in addressing MLBS security in the requirements phase. Using an example MLBS, we examined the applicability of 1) DFD and STRIDE in enumerating AML threats; 2) the Microsoft SDL AI/ML Bug Bar in ranking the impact of the identified threats; and 3) the Microsoft AML attack library in eliciting threat mitigations for MLBSs. Such a method has the potential to assist team members, even those with only domain-specific knowledge, in collaboratively mitigating MLBS threats.
ACETA: Accelerating Encrypted Traffic Analytics on Network Edge. ICC 2020 - 2020 IEEE International Conference on Communications (ICC). :1–6.
2020. Applying machine learning techniques to detect malicious encrypted network traffic has become a challenging research topic. Traditional approaches based on studying network patterns fail to operate on encrypted data, especially without compromising the integrity of the encryption. In addition, the requirement of rendering network-wide intelligent protection in a timely manner further exacerbates the problem. In this paper, we propose to leverage x86 multicore platforms provisioned at enterprises' network edge, together with software accelerators, to design an encrypted traffic analytics (ETA) system with accelerated speed. Specifically, we explore a suite of data features and machine learning models with an open dataset. We then show that by using the Intel DAAL and OpenVINO libraries in model training and inference, we are able to reduce the training and inference time by up to 31× and 46×, respectively, while retaining the model accuracy.
A Dynamic Multi-Threaded Queuing Mechanism for Reducing the Inter-Process Communication Latency on Multi-Core Chips. 2020 3rd International Conference on Data Intelligence and Security (ICDIS). :12–19.
2020. Reducing latency in inter-process/inter-thread communication is one of the key challenges in parallel and distributed computing, because as the number of threads in an application increases, the communication overhead also increases; the presence of background load increases the latency further. Reducing communication latency can have a significant impact on multi-threaded application performance in multi-core environments. In the wide range of applications that utilize queueing mechanisms, inter-process/inter-thread communication typically involves enqueuing and dequeuing. This paper presents a queueing technique called eLCRQ, a lock-free, block-when-necessary, multi-producer multi-consumer (MPMC) FIFO queue. It is designed for scenarios where the queue can randomly and frequently become empty during runtime. By combining lock-free performance with blocking resource efficiency, it delivers improved performance. Specifically, it achieves a 1.7X reduction in latency and a 2.3X reduction in CPU usage compared to existing message-passing mechanisms, including PIPE and Sockets, when running on multi-core Linux-based systems. The proposed scheme also provides a 3.4X decrease in CPU usage while maintaining comparable latency compared to other lock-free MPMC queues in low-load scenarios. Our work is based on open-source Linux and support libraries.
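A conceptual Python rendering of the block-when-necessary policy: consumers poll briefly (standing in for the lock-free fast path) and fall back to a condition variable when the queue stays empty. eLCRQ itself is a native lock-free structure; only the hybrid policy is illustrated.

```python
import threading
from collections import deque

class HybridQueue:
    def __init__(self, spin=1000):
        self._q = deque()                  # deque ops are thread-safe
        self._cv = threading.Condition()
        self._spin = spin

    def put(self, item):
        self._q.append(item)
        with self._cv:
            self._cv.notify()

    def get(self):
        for _ in range(self._spin):        # fast path: busy-poll
            try:
                return self._q.popleft()
            except IndexError:
                pass
        with self._cv:                     # slow path: block until notified
            while True:
                try:
                    return self._q.popleft()
                except IndexError:
                    self._cv.wait()

q = HybridQueue()
threading.Thread(target=lambda: q.put("msg")).start()
print(q.get())
```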
Study on Possibility of Estimating Smartphone Inputs from Tap Sounds. 2020 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). :1425–1429.
2020. Smartphone keystrokes are subject to side-channel attacks in which the input is inferred from tapping sounds. Ilia et al. reported that keystrokes can be predicted with 61% accuracy from tapping sounds picked up by the built-in microphone of a legitimate user's device. Li et al. reported that by emitting sonar sounds from an attacker smartphone's built-in speaker and analyzing the waves reflected from a legitimate user's finger at the time of tap input, keystrokes can be estimated with 90% accuracy. However, the method proposed by Ilia et al. requires prior penetration of the target smartphone, and the attack scenario lacks plausibility: if the attacker can penetrate the smartphone, a keylogger can directly acquire the legitimate user's keystrokes. The method proposed by Li et al. is a side-channel attack in which the attacker actively interferes with the terminals of legitimate users, and can be described as an active attack scenario. Herein, we analyze the extent to which a user's keystrokes are leaked to an attacker in a passive attack scenario, where the attacker wiretaps the sounds of the legitimate user's keystrokes using an external microphone. First, we limit the keystrokes to personal identification number input. Subsequently, mel-frequency cepstral coefficients of the tapping sound data are represented as image data. Consequently, we find that the input can be discriminated with high accuracy by using a convolutional neural network to estimate the key input.
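A sketch of the feature-extraction step with librosa, turning a tap-sound clip into an MFCC "image" for a CNN; the file name is a placeholder and model training is omitted.

```python
import librosa

y, sr = librosa.load("tap_sample.wav", sr=None)   # hypothetical recording
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
print(mfcc.shape)   # (13, frames) -- treated as a single-channel image

# Normalize to [0, 1] so clips can be stacked as CNN inputs.
img = (mfcc - mfcc.min()) / (mfcc.max() - mfcc.min() + 1e-9)
```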
RetroWrite: Statically Instrumenting COTS Binaries for Fuzzing and Sanitization. 2020 IEEE Symposium on Security and Privacy (SP). :1497–1511.
2020. Analyzing the security of closed-source binaries is currently impractical for end-users, and even for developers who rely on third-party libraries. Such analysis relies on automatic vulnerability discovery techniques, most notably fuzzing with sanitizers enabled. The current state of the art for applying fuzzing or sanitization to binaries is dynamic binary translation, which has prohibitive performance overhead. The alternate technique, static binary rewriting, cannot fully recover symbolization information and hence has difficulty modifying binaries to track code coverage for fuzzing or to add security checks for sanitizers. The ideal solution for binary security analysis would be a static rewriter that can intelligently add the required instrumentation as if it were inserted at compile time. Such instrumentation requires an analysis to statically disambiguate between references and scalars, a problem known to be undecidable in the general case. We show that recovering this information is possible in practice for the most common class of software and libraries: 64-bit, position-independent code. Based on this observation, we develop RetroWrite, a binary-rewriting instrumentation framework supporting American Fuzzy Lop (AFL) and Address Sanitizer (ASan), and show that it can achieve compiler-level performance while retaining precision. Binaries rewritten for coverage-guided fuzzing using RetroWrite are identical in performance to compiler-instrumented binaries and outperform the default QEMU-based instrumentation by 4.5x while triggering more bugs. Our implementation of binary-only Address Sanitizer is 3x faster than Valgrind's memcheck, the state-of-the-art binary-only memory checker, and detects 80% more bugs in our evaluation.
Stealthy Privacy Attacks Against Mobile AR Apps. 2020 IEEE Conference on Communications and Network Security (CNS). :1–5.
2020. The proliferation of mobile augmented reality applications and the toolkits to create them have serious implications for user privacy. In this paper, we explore how malicious AR app developers can leverage capabilities offered by commercially available AR libraries, and describe how edge computing can be used to address this privacy problem.
A Trust-based Message Passing Algorithm against Persistent SSDF. 2020 IEEE 20th International Conference on Communication Technology (ICCT). :1112–1115.
2020. As a key technology in cognitive radio, cooperative spectrum sensing has received more and more attention. Multi-user cooperative spectrum sensing can effectively alleviate the performance degradation caused by multipath effects and shadow fading, and improve spectrum utilization. However, there may be malicious users among the cooperating sensing users who send forged messages to the fusion center or to neighbor nodes to mislead them into wrong judgments, which greatly reduces spectrum utilization. To solve this problem, this paper proposes an intelligent defense against spectrum sensing data falsification (SSDF) attacks using a trust-based, non-consensus message-passing algorithm. In this scheme, only one round of sensing is needed, and the historical propagation path of each message is used as the basis for calculating the reputation of each cognitive user. Whenever a node receives differing messages from the same cognitive user, there must be malicious users on one of the propagation paths. We reward, with reputation value, the nodes that appear more often across different paths, and punish the nodes that appear less often. Finally, the true value of a tampered message is restored according to the calculated reputation values. MATLAB results show that the proposed scheme achieves a high recovery rate for messages and can simultaneously identify malicious users in the network.
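A toy version of the path-based reputation rule; the reported values and propagation paths are invented.

```python
# When copies of the same message disagree, nodes on paths that carried
# the majority value gain reputation; nodes on deviating paths lose it.
from collections import defaultdict

# (value reported, propagation path) for one message from cognitive user U.
copies = [(1, ["U", "A", "B"]), (1, ["U", "C"]), (0, ["U", "A", "D"])]

majority = max({v for v, _ in copies},
               key=lambda v: sum(1 for val, _ in copies if val == v))

rep = defaultdict(float)
for value, path in copies:
    delta = 1.0 if value == majority else -1.0
    for node in path[1:]:          # skip the originating user
        rep[node] += delta

print(dict(rep))   # D (only on the tampered path) ends up negative
```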
Concurrency Analysis of Go and Java. 2020 5th International Conference on Computing, Communication and Security (ICCCS). :1–6.
2020. There has been tremendous progress in the past few decades towards developing applications that receive and send data concurrently. In such a day and age, there is a need for a language that performs optimally in such environments. Currently, the two most popular languages in that respect are Go and Java. In this paper, we analyze the concurrency features of Go and Java through a complete programming-language performance analysis, looking at their compile time, run time, binary sizes, and each language's unique concurrency features. This is done by experimenting with the two languages using the matrix multiplication and PageRank algorithms. To the best of our knowledge, this is the first work to use the PageRank algorithm to analyze concurrency. Considering the results of this paper, application developers and researchers can hypothesize about an appropriate language for their concurrent programming activity. The results show that Go performs better for smaller numbers of computations but is soon overtaken by Java as the number of computations increases drastically. The trend is the opposite for thread creation and management, where Java performs better with fewer computations but Go does better as the count grows. Regarding concurrency features, both Java, with its ExecutorService library, and Go have their own advantages that make them better for specific applications.