Biblio
As mechanisms for sharing government information resources are built and deployed, protecting citizens' privacy has become a vital issue for government departments and the public alike. This paper discusses the risk of citizens' privacy disclosure arising from data sharing among government departments and analyzes the major current privacy protection models for data sharing. To address the low efficiency and low reliability of existing e-government applications, a framework for sharing statistical data among government departments based on local differential privacy and blockchain is established, and its applicability and advantages are illustrated through an example analysis. The characteristics of the private blockchain enhance the security, credibility, and responsiveness of information sharing between departments, while local differential privacy provides better usability and security for shared statistics: it keeps the statistics usable while protecting the privacy of citizens.
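The abstract does not specify which local differential privacy mechanism is used; the following is a minimal sketch of one standard LDP primitive, randomized response for a binary attribute, with function names chosen here for illustration:

```python
import math
import random

def randomized_response(bit: bool, epsilon: float) -> bool:
    """Report the true bit with probability e^eps / (e^eps + 1),
    otherwise flip it; this satisfies epsilon-LDP."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p else not bit

def estimate_proportion(reports, epsilon: float) -> float:
    """Unbiased estimate of the true share of 1s from the noisy reports."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed + p - 1.0) / (2.0 * p - 1.0)
```

Each citizen's record is perturbed locally before it ever reaches a department, so the aggregator can recover accurate population statistics without seeing any individual's true value.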
False alarms and misses are the two general kinds of alarm errors, and both can decrease an operator's trust in the alarm system. Specifically, there are two distinct forms of trust in such systems, represented in this research by two kinds of responses to alarms: compliance and reliance. Beyond false alarms and misses, the two responses are differentially affected by properties of the alarm system, situational factors, and operator factors. However, most existing studies have only qualitatively analyzed the relationship between a single variable and the two responses. In this research, all available experimental studies were identified through database searches using the keywords "compliance and reliance", with no restriction on year of publication up to December 2017. Six relevant studies and fifty-two sets of key data were obtained as the data base of this research. A neural network is then adopted as a tool to establish the quantitative relationship between multiple factors and each of the two forms of trust. The results will support further study of how human decision making influences the overall fault detection rate and false alarm rate of the human-machine system.
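The abstract does not describe the network architecture; a minimal sketch, assuming a small feed-forward regressor and hypothetical predictor columns (false-alarm rate, miss rate, workload), might look like this:

```python
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# One row per experimental data set; the columns and values here are
# illustrative only, not the paper's actual 52 data sets.
X = [[0.30, 0.10, 0.5], [0.10, 0.30, 0.7], [0.20, 0.25, 0.4]]
y = [[0.65, 0.80], [0.85, 0.55], [0.75, 0.60]]   # [compliance, reliance]

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)
model.fit(X, y)
print(model.predict([[0.20, 0.20, 0.6]]))  # predicted (compliance, reliance)
```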
The e-government concept and healthcare have usually been studied separately. Even where e-government and healthcare systems were combined in a study, the roles of e-government in healthcare were not examined. As a result, the complementarity of the systems poses potential challenges. The interpretive approach was applied in this study. Existing materials in the areas of healthcare and e-government were used as data from a qualitative-method viewpoint, and the dimension of change from the perspective of structuration theory was employed to guide the data analysis. From the analysis, six factors were found to constitute the main roles of e-government in the implementation and application of e-health in the delivery of healthcare services. An understanding of these roles promotes complementarity, which in turn enhances healthcare service delivery to the community.
Parfait [1] is a static analysis tool originally developed to find implementation defects in C/C++ systems code. Parfait focuses on achieving both high precision (a low false-positive rate) and scalability to systems with millions of lines of code (typically requiring 10 minutes of analysis time per million lines). Parfait has since been extended to detect security vulnerabilities in applications code, supporting the Java EE and PL/SQL server stack. In this abstract we describe some of the challenges we encountered in this process, including differences observed between the applications code being analysed, the solutions that enable us to analyse a variety of applications, and a summary of the challenges that remain.
Cryptographic protocols are the basis for the security of any protected system, including electronic voting systems. One of the most effective ways to analyze protocol security is to use verifiers. In this paper, the formal verifier SPIN, which is based on model checking with linear temporal logic (LTL), was used to analyze the security of a cryptographic protocol for e-voting. The cryptographic protocol of electronic voting is described, along with the main structural units of the Promela language used for simulation in the SPIN verifier. A model of the electronic voting protocol in Promela is given, describing the interacting parties, the transferred data, and the order of the messages transmitted between the parties. The security of the cryptographic protocol is then verified with the SPIN tool, and the protocol is simulated with an active intruder who mounts a man-in-the-middle (MITM) attack to substitute data. The simulation results establish that the protocol correctly handles the case of an active attack on the parties' authentication.
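The abstract does not reproduce the verified properties, but LTL checks of voting protocols are typically of the following shapes; the predicate names below are illustrative, not the paper's:

```latex
% Safety: the teller accepts a ballot only from an authenticated voter.
\square \big(\mathit{accept}_T(V, b) \rightarrow \mathit{auth}(V)\big)
% Liveness: every ballot cast by a voter is eventually counted.
\square \big(\mathit{cast}_V(b) \rightarrow \lozenge\, \mathit{counted}(b)\big)
```

In SPIN, such formulas are negated and compiled into never claims that run alongside the Promela model of the honest parties and the intruder.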
This Research to Practice Full Paper presents a new methodology in cybersecurity education. In the context of the cybersecurity profession, the `isolation problem' refers to the observed isolation of different knowledge units, as well as the isolation of technical and business perspectives. Due to limitations in existing cybersecurity education, professionals entering the field are often trapped in microscopic perspectives and struggle to extend their findings to grasp the big picture in a target network scenario. Guided by a previously developed and published framework named the "cross-layer situation knowledge reference model" (SKRM), which delivers comprehensive, big-picture situation awareness, our new methodology aims to develop suites of teaching modules that address the above issues. The modules, featuring interactive hands-on labs that emulate real-world multiple-step attacks, will help students form a knowledge network instead of isolated conceptual knowledge units. Students will not only be required to leverage various techniques/tools to analyze breakpoints and complete individual modules; they will also be required to logically connect the outputs of these techniques/tools to infer the ground truth and gain big-picture awareness of the cyber situation. The modules can be used separately or as a whole in a typical network security course.
Monitoring for security and well-being in highly populated areas is a critical issue for city administrators, policy makers, and urban planners. As an essential part of many dynamic and critical data-driven tasks, situational awareness (SAW) gives decision-makers deeper insight into the meaning of urban surveillance, so surveillance measures are increasingly needed. However, traditional surveillance platforms do not scale as more cameras are added to the network. In this work, smart surveillance as an edge service is proposed. To accomplish the object detection, identification, and tracking tasks at the edge-fog layers, two novel lightweight algorithms are proposed, one for detection and one for tracking. A prototype has been built to validate the feasibility of the idea, and the test results are very encouraging.
The "aging" phenomenon occurs after the long-term running of software, with the fault rate rising and running efficiency dropping. As there is no corresponding testing type for this phenomenon among conventional software tests, "software runtime accumulative testing" is proposed. Through analyzing several examples of software aging causing serious accidents, software is placed in the system environment required for running and the occurrence mechanism of software aging is analyzed. In addition, corresponding testing contents and recommended testing methods are designed with regard to all factors causing software aging, and the testing process and key points of testing requirement analysis for carrying out runtime accumulative testing are summarized, thereby providing a method and guidance for carrying out "software runtime accumulative testing" in software engineering.
With wide applications such as surveillance and imaging, securing underwater acoustic Mobile Ad-hoc NETworks (MANETs) is a double-edged sword for oceanographic operations. Underwater acoustic MANETs inherit vulnerabilities from 802.11-based MANETs, which render traditional cryptographic approaches defenseless. A Trust Management Framework (TMF), which maintains confidence among participating nodes with metrics built from their communication activities, promises secure, efficient, and reliable access in terrestrial MANETs. However, a TMF cannot be directly applied to the underwater environment, because marine characteristics make it difficult to differentiate natural turbulence from intentional misbehavior. This work proposes a trust model that defends underwater acoustic MANETs against attacks using a machine learning method with carefully chosen communication metrics, and a cloud model to address the uncertainty of trust in harsh underwater environments. By integrating the communication trust framework with the cloud model to combat two kinds of uncertainty, fuzziness and randomness, trust management is greatly improved for underwater acoustic MANETs.
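The abstract does not give the cloud model's formulation; a minimal sketch of the standard normal cloud generator, which represents a trust opinion by expectation Ex, entropy En (fuzziness), and hyper-entropy He (randomness of the fuzziness), could look like this (all names illustrative):

```python
import math
import random

def cloud_drops(Ex: float, En: float, He: float, n: int):
    """Yield n (trust_value, membership) 'drops' from a normal cloud."""
    for _ in range(n):
        En_i = random.gauss(En, He)          # randomness of the fuzziness
        x = random.gauss(Ex, abs(En_i))      # one sampled trust value
        mu = math.exp(-(x - Ex) ** 2 / (2.0 * En_i ** 2 + 1e-12))
        yield x, mu

# e.g. a node judged trustworthy on average, with noisy evidence:
for x, mu in cloud_drops(Ex=0.8, En=0.05, He=0.01, n=3):
    print(f"trust={x:.3f}, membership={mu:.3f}")
```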
Because testing the delay time of passive barriers is costly and time-consuming, and because the range of passive barrier elements and of tools for breaching them is wide, such testing is feasible only as an experimental means of verifying expert judgements of those delay times. The article focuses on creating and utilizing a new method for acquiring delay-time values for various passive barrier elements from expert judgements, which could contribute to charts in which each interaction between an element of a mechanical barrier and a potential bypassing tool is assigned a temporal value. The article includes a basic description of expert-judgement methods previously applied to prognoses of socio-economic development and other societal areas, the so-called soft systems. For the delay-time problem, the method had to be modified so that its output is expressible as a specific quantitative value. To achieve this, each stage of the expert judgement was adjusted with suitable scientific methods, first to select appropriate experts and then to obtain and process the expert data. High emphasis was placed on evaluating the quality and reliability of the expert judgements, taking into account the specifics of expert selection such as the experts' low numbers, specialization, and practical experience.
Existing approaches to cyber defense have been inadequate at defending targets against advanced persistent threats (APTs). APTs are stealthy, orchestrated attacks that target both corporations and governments to exfiltrate important data. In this paper, we present a novel comprehensibility manipulation framework (CMF) that generates a haystack of hard-to-comprehend fake documents, which can be used to deceive attackers and increase the cost of data exfiltration by wasting their time and resources. CMF takes an original document as input and generates fake documents that are believable and readable for the attacker, contain no important information, and are hard to comprehend. To evaluate CMF, we experimented with college aptitude tests and compared the performance of many readers on separate reading comprehension exercises with fake and original content. Our results showed a statistically significant difference in correct responses to the same questions across the fake and original exercises, validating the effectiveness of CMF at misleading readers.
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks: adversarial examples, obtained by adding delicately crafted distortions to original legal inputs, can mislead a DNN into classifying them as any target label. In a successful adversarial attack, the targeted misclassification should be achieved with minimal added distortion. In the literature, the added distortions are usually measured by the L0, L1, L2, and L∞ norms, giving L0, L1, L2, and L∞ attacks, respectively. However, a versatile framework covering all types of adversarial attacks has been lacking. This work for the first time unifies the methods of generating adversarial examples by leveraging ADMM (Alternating Direction Method of Multipliers), an operator-splitting optimization approach, such that L0, L1, L2, and L∞ attacks can all be implemented within one general framework with minor modifications. Compared with the state-of-the-art attacks in each category, our ADMM-based attacks are so far the strongest, achieving both a 100% attack success rate and the minimal distortion.
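The abstract does not spell out the splitting; a sketch of the generic ADMM formulation such a framework plausibly uses, where x_0 is the legal input, delta the distortion, f a loss enforcing the targeted misclassification, and rho the penalty parameter (all notation ours):

```latex
\begin{aligned}
\min_{\delta,\,z}\ & \|\delta\|_p + f(x_0 + z) \quad \text{s.t. } \delta = z,\\
\delta^{k+1} &= \arg\min_{\delta}\ \|\delta\|_p + \tfrac{\rho}{2}\,\|\delta - z^k + u^k\|_2^2,\\
z^{k+1} &= \arg\min_{z}\ f(x_0 + z) + \tfrac{\rho}{2}\,\|\delta^{k+1} - z + u^k\|_2^2,\\
u^{k+1} &= u^k + \delta^{k+1} - z^{k+1}.
\end{aligned}
```

The delta-update is the proximal operator of the chosen norm, which has a closed form for p = 0, 1, 2, and ∞; swapping that one step is what lets a single framework cover all four attack types.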
This paper advocates programming high-performance code using partial evaluation. We present a clean-slate programming system with a simple, annotation-based, online partial evaluator that operates on a CPS-style intermediate representation. Our system exposes code generation for accelerators (vectorization/parallelization for CPUs and GPUs) via compiler-known higher-order functions that can be subjected to partial evaluation. This way, generic implementations can be instantiated with target-specific code at compile time. In our experimental evaluation we present three extensive case studies from image processing, ray tracing, and genome sequence alignment. We demonstrate that using partial evaluation, we obtain high-performance implementations for CPUs and GPUs from one language and one code base in a generic way. The performance of our codes is mostly within 10% of, and often closer to, the performance of multi-man-year, industry-grade, manually optimized expert codes that are considered to be among the top contenders in their fields.
Passive radio frequency identification (RFID) tags are ubiquitous today due to their low cost (a few cents), relatively long communication range (~7-11 m), ease of deployment, lack of battery, and small form factor. Hence, they are an attractive foundation for environmental sensing. Although RFID-based sensors have been studied in the research literature and are also available commercially, manufacturing them has been a technically challenging task that is typically undertaken only by experienced researchers. In this paper, we show how even hobbyists can transform commodity RFID tags into sensors by physically altering (`hacking') them using COTS sensors, a pair of scissors, and clear adhesive tape. Importantly, this requires no change to commercial RFID readers. We also propose a new legacy-compatible tag reading protocol called Differential Minimum Response Threshold (DMRT) that is robust to changes in the RF environment. To validate our vision, we develop RFID-based sensors for illuminance, temperature, touch, and gestures. We believe that our approach has the potential to open up the field of batteryless backscatter-based RFID sensing to the research community, making it an exciting area for future work.
Software-defined networking (SDN) continues to grow in popularity because of its programmable and extensible control plane realized through network applications (apps). However, apps introduce significant security challenges that can systemically disrupt network operations, since apps must access or modify data in a shared control plane state. If our understanding of how such data propagate within the control plane is inadequate, apps can co-opt other apps, causing them to poison the control plane’s integrity.
We present a class of SDN control plane integrity attacks that we call cross-app poisoning (CAP), in which an unprivileged app manipulates the shared control plane state to trick a privileged app into taking actions on its behalf. We demonstrate how role-based access control (RBAC) schemes are insufficient for preventing such attacks because they neither track information flow nor enforce information flow control (IFC). We also present a defense, ProvSDN, that uses data provenance to track information flow and serves as an online reference monitor to prevent CAP attacks. We implement ProvSDN on the ONOS SDN controller and demonstrate that information flow can be tracked with low latency overhead.
Mining is the foundation of blockchain-based cryptocurrencies such as Bitcoin, rewarding the miner who finds blocks for new transactions. The Monero currency enables mining with standard hardware, in contrast to the special hardware (ASICs) often used for Bitcoin, paving the way for in-browser mining as a new revenue model for website operators. In this work, we study the prevalence of this new phenomenon. We identify and classify mining websites among 138M domains and present a new fingerprinting method that finds up to a factor of 5.7 more miners than publicly available block lists. Our work identifies and dissects Coinhive as the major browser-mining stakeholder. Further, we present a new method to associate mined blocks in the Monero blockchain with mining pools and uncover that Coinhive currently contributes 1.18% of mined blocks, having turned over 1,293 Moneros in June 2018.
A multitude of Channel Assignment (CA) schemes have created a paradox of plenty, making CA selection for Wireless Mesh Networks (WMNs) an onerous task. CA performance prediction (CAPP) metrics are novel tools that address the problem of appropriate CA selection. However, most CAPP metrics depend on a variety of factors, such as the WMN topology, the type of CA scheme, and the connectedness of the underlying graph. In this work, we propose an improved Channel Assignment Link-Weight Metric (iCALM) that is independent of these constraints. To the best of our knowledge, iCALM is the first universal CAPP metric for WMNs. To evaluate iCALM, we design two WMN topologies that conform to the attributes of real-world mesh network deployments, and run rigorous simulations in ns-3. We compare iCALM to four existing CAPP metrics and demonstrate that it performs exceedingly well regardless of the CA type and the WMN layout.
The number of new malware samples and new malware variants keeps increasing. Security experts analyze malware to capture its malicious properties and to generate signatures or detection rules, but the analysis overhead keeps growing with the increasing volume of malware. Analyzing a large amount of malware requires various kinds of automatic analysis methods. Recently, deep learning techniques such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs) have been applied to malware classification. The features used in previous approaches are mostly based on API (Application Programming Interface) information, and API invocation information can be obtained through dynamic analysis. However, the invocation information may not reflect the malicious behaviors of malware, because malware developers use various analysis avoidance techniques. Therefore, deep learning-based malware analysis using other features still needs to be developed to improve malware analysis performance. In this paper, we propose a malware classification method using a deep learning algorithm based on byte information. Our proposed method uses images generated from malware byte information that can reflect malware behavioral context, and convolutional neural network-based sentence analysis is used to process the generated images. We performed several experiments to show the effectiveness of our proposed method; the experimental results show that it achieved higher accuracy than the naive CNN model, with a detection accuracy of about 99%.
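The abstract does not fix the exact preprocessing or architecture; a minimal sketch of the byte-image idea, assuming a plain grayscale encoding and a small Keras CNN (the 64x64 size and `num_families` label count are our assumptions, and the paper's sentence-analysis CNN would differ in detail):

```python
import numpy as np
import tensorflow as tf

def bytes_to_image(path: str, width: int = 64) -> np.ndarray:
    """Reshape a binary's raw bytes into a fixed-size grayscale image."""
    data = np.frombuffer(open(path, "rb").read(), dtype=np.uint8)
    rows = len(data) // width
    img = data[: rows * width].reshape(rows, width).astype("float32") / 255.0
    return tf.image.resize(img[..., None], (64, 64)).numpy()

num_families = 9  # hypothetical number of malware families
model = tf.keras.Sequential([
    tf.keras.layers.Input((64, 64, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_families, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```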
Mobile two-factor authentication (2FA) has become commonplace along with the popularity of mobile devices. Current mobile 2FA solutions all require some form of user effort, which may seriously affect the experience of mobile users, especially senior citizens or users with disabilities such as visual impairment. In this paper, we propose Proximity-Proof, a secure and usable mobile 2FA system that involves no user interaction. Proximity-Proof automatically transmits a user's 2FA response to the login browser via inaudible OFDM-modulated acoustic signals. We propose a novel technique to extract the individual speaker and microphone fingerprints of a mobile device to defend against the powerful man-in-the-middle (MiM) attack, and Proximity-Proof additionally explores two-way acoustic ranging to thwart the co-located attack. To the best of our knowledge, Proximity-Proof is the first mobile 2FA scheme resilient to both the MiM and co-located attacks. We analyze empirically that Proximity-Proof is at least as secure as existing mobile 2FA solutions while being highly usable, and we prototype it to confirm its high security, usability, and efficiency through comprehensive user experiments.
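The abstract does not detail the ranging step; two-way acoustic ranging conventionally estimates distance from round-trip time of flight, as in this illustrative sketch (names and numbers are ours, not the paper's):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

def acoustic_range(t_round_trip: float, t_processing: float) -> float:
    """Distance from a two-way acoustic exchange: one-way flight time is
    half the round trip minus the responder's reported processing delay."""
    t_flight = (t_round_trip - t_processing) / 2.0
    return SPEED_OF_SOUND * t_flight

# A 100 ms round trip with 97 ms of processing implies ~0.51 m separation,
# close enough to pass a proximity check; a relayed (co-located) attack
# would inflate the flight time and fail it.
print(acoustic_range(0.100, 0.097))
```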
The successive interference cancellation (SIC) receiver is adopted in power-domain non-orthogonal multiple access (NOMA) as the baseline receiver scheme, taking the expected evolution of mobile devices into account. Advanced techniques are being considered to achieve power saving in many networks and to reach sustainability and reliability in communication under the envisioned huge amount of data delivery. In this paper, we propose a novel NOMA-SIC scheme to balance the trade-off between system performance and complexity. In the proposed scheme, each SIC stage comprises a matched filter (MF), an MF detector, and a regenerator. In simulations, the proposed scheme demonstrates the best power-saving performance, with energy efficiency increasing as the number of NOMA device pairs grows.
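The abstract does not give the receiver mathematics; a minimal sketch of plain two-user power-domain SIC under BPSK (ignoring the paper's MF structure and any coding) illustrates the detect-regenerate-cancel loop:

```python
import numpy as np

def sic_decode(y, gains, constellation=np.array([-1.0, 1.0])):
    """Detect each NOMA layer in descending power order, then
    regenerate and subtract it before decoding the next layer."""
    y = np.asarray(y, dtype=float)
    decisions = []
    for g in gains:  # strongest user first
        idx = np.argmin(np.abs(y[:, None] - g * constellation[None, :]), axis=1)
        symbols = constellation[idx]      # nearest-point detection
        decisions.append(symbols)
        y = y - g * symbols               # cancel this layer's contribution
    return decisions

# Two users superposed at amplitudes 2.0 and 0.7 over a noisy channel:
strong, weak = np.array([1, -1, 1]), np.array([-1, -1, 1])
rx = 2.0 * strong + 0.7 * weak + 0.05 * np.random.randn(3)
print(sic_decode(rx, gains=[2.0, 0.7]))
```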
While existing research has explored the trade-off between security and performance, these efforts primarily focus on software consumers and often overlook the effectiveness and productivity of software producers. In this paper, we highlight an established security practice, air-gap isolation, and some challenges it uniquely instigates. To better understand and begin quantifying the impacts of air-gap isolation on software development productivity, we conducted a survey at a commercial software company, Analytical Graphics, Inc. Based on our insights from dealing with air-gap isolation daily, we suggest some possible directions for future research. Our goal is to bring attention to this neglected area of research and to start a discussion in the SE community about the struggles faced by many commercial and governmental organizations.
Deep learning models are vulnerable to adversarial inputs: samples modified to maximize the error of the system. We introduce Spartan Networks, deep learning models that are inherently more resistant to adversarial examples, without any input preprocessing outside the network and without adversarial training. These networks contain a layer designed to starve the network of information, using a new activation function to discard data; this layer trains the neural network to filter out usually-irrelevant parts of its input. The resulting models have slightly lower precision but report higher robustness under attack than unprotected models.
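The abstract does not define the activation; one information-discarding activation in this spirit, sketched here as a per-sample top-k filter (our construction, not necessarily the paper's), keeps only the strongest responses and zeroes the rest:

```python
import numpy as np

def topk_starve(x: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest activations in each row and zero the others,
    deliberately discarding low-magnitude (usually irrelevant) detail."""
    thresh = np.sort(x, axis=1)[:, -k][:, None]  # per-row k-th largest value
    return np.where(x >= thresh, x, 0.0)

batch = np.array([[0.1, 0.9, 0.3, 0.7],
                  [0.5, 0.2, 0.8, 0.4]])
print(topk_starve(batch, k=2))
# row 0 keeps 0.9 and 0.7; row 1 keeps 0.8 and 0.5
```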
As a valuable source of information, word of mouth has always been valued by consumers and business marketers, and the Internet provides a new medium for word-of-mouth communication. Consumers share their views and comments on products, services, brands, and enterprises through online platforms, forming Internet word of mouth, which is of great importance to B2C enterprises. However, disturbing and even false information, as well as the uncertainties and risks of the online communication environment, lead to a crisis of online trust. Accordingly, this study constructs a trust mechanism model of the Internet word-of-mouth effect, which shows that the professionalism of communicators, online relationship strength, communication channels, and product involvement are key factors significantly affecting the word-of-mouth effect. This model can provide theoretical guidance for word-of-mouth marketing and the operation of B2C e-commerce enterprises.
This paper aims to explain static analysis techniques in detail and to highlight the weaknesses and challenges that face them. To this end, more than 80 static analysis-based frameworks have been studied, and in their light the process of detecting malicious applications is divided into four phases, explained schematically. The features used in static analysis are also discussed in detail, divided into four categories: manifest-based features, code-based features, semantic features, and app metadata-based features. The challenges facing static analysis-based methods are likewise discussed in detail. Finally, a case study was conducted to test the strength of some well-known commercial antiviruses and one state-of-the-art academic static analysis framework against the obfuscation techniques used by developers of malicious applications. The results showed a significant impact on the performance of most of the tested antiviruses and frameworks, reflecting the urgent need for more accurate tools.