Biblio
Computing systems today have a large number of security configuration settings that enforce security properties. However, vulnerabilities and incorrect configurations increase the potential for attacks. Provable verification and simulation tools have been introduced to eliminate configuration conflicts and weaknesses, which can increase system robustness against attacks. Most of these tools require specialized knowledge of formal methods and precise specification of requirements in dedicated languages, in addition to their excessive need for computing resources. Video games have been utilized by researchers to make educational software more attractive and engaging. Publishing these games for crowdsourcing can also stimulate competition between players and increase a game's educational value. In this paper we introduce a game interface, called NetMaze, that represents the network configuration verification problem as a video game and allows for attack analysis. We aim to make security analysis and hardening usable and accurately achievable, using the power of video games and the wisdom of crowdsourcing. Players can easily discover weaknesses in network configurations and investigate new attack scenarios. In addition, the gameplay scenarios can be used to analyze and learn attack attribution while considering human factors. In this paper, we present a provable mapping from the network configuration to 3D game objects.
Newly emerging cyber-physical systems (CPS) discover events from multiple, distributed sources with multiple levels of detail and heterogeneous data formats, which can be difficult to compare and integrate and hard to combine into a determination for action. While existing efforts have mainly focused on a uniform CPS event representation with spatio-temporal attributes, in this paper we propose a new event model with a two-layer structure: a Basic Event Model (BEM) and an Extended Information Set (EIS). A BEM can be extended with an EIS through semantic adaptors to enrich spatio-temporal and other attributes. In particular, we define event processing functions, such as event attribute extraction and composition determination, that trigger CPS actions by exploiting the Complex Event Processing (CEP) engine Esper. Examples show that this event model provides several advantages in terms of extensibility, flexibility, and support for heterogeneity, and lays a foundation for event-based system design in CPS.
By representing large corpora with concise and meaningful elements, topic-based generative models aim to reduce the dimension and understand the content of documents. These techniques originally analyzed only the words in documents, but their extensions now accommodate metadata such as authorship information, which has proved useful for textual modeling. Learning authorship makes it possible to extract author interests and assign authors to anonymous texts. The Author-Topic (AT) model, an unsupervised learning technique, successfully exploits authorship information to model both documents and author interests using topic representations. However, the AT model assumes that each author contributes equally to multiple-author documents. To overcome this limitation, we assume that authors contribute to a document in different degrees, modeled with a Dirichlet distribution. This automatically transforms the unsupervised AT model into the Supervised Author-Topic (SAT) model, which enables authorship prediction on anonymous texts. The SAT model outperforms the AT model in identifying authors of documents written by either single or multiple authors, with a better Receiver Operating Characteristic (ROC) curve and a significantly higher Area Under the Curve (AUC). The SAT model not only achieves performance competitive with state-of-the-art techniques such as random forests but also retains the characteristics of unsupervised models for information discovery, i.e., word distributions of topics, author interests, and author contributions.
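As a rough illustration of the generative intuition behind unequal author contributions (not the SAT model's actual inference), the following sketch draws per-document contribution weights from a Dirichlet distribution and mixes hypothetical author-topic and topic-word distributions; all names and dimensions are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 authors, 5 topics, 20-word vocabulary.
n_authors, n_topics, vocab = 3, 5, 20
author_topic = rng.dirichlet(np.ones(n_topics), size=n_authors)   # author interests
topic_word = rng.dirichlet(np.ones(vocab), size=n_topics)         # topic-word distributions

def generate_document(author_ids, n_words=50, alpha=1.0):
    """Generate a document whose authors contribute with Dirichlet-drawn weights."""
    # Unequal contributions: the key difference from the plain AT model.
    contrib = rng.dirichlet(alpha * np.ones(len(author_ids)))
    doc_topics = contrib @ author_topic[author_ids]        # mixture of author interests
    word_dist = doc_topics @ topic_word                    # induced word distribution
    return rng.choice(vocab, size=n_words, p=word_dist), contrib

def score_author(words, author_id):
    """Log-likelihood of a word sequence under a single author's interests."""
    word_dist = author_topic[author_id] @ topic_word
    return np.log(word_dist[words]).sum()

words, contrib = generate_document([0, 2])
print("contributions:", contrib)
print("best author:", max(range(n_authors), key=lambda a: score_author(words, a)))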
In interconnected power systems, dynamic model reduction can be applied to generators outside the area of interest (i.e., the study area) to reduce the computational cost of transient stability studies. This paper presents a method of deriving a reduced dynamic model of the external area based on dynamic response measurements. The method consists of three steps, namely dynamic-feature extraction, attribution, and reconstruction (DEAR). In this method, a feature extraction technique, such as singular value decomposition (SVD), is applied to the measured generator dynamics after a disturbance. Characteristic generators are then identified in the feature attribution step as those matching the extracted dynamic features with the highest similarity, forming a suboptimal “basis” of system dynamics. In the reconstruction step, generator state variables such as rotor angles and voltage magnitudes are approximated by a linear combination of the characteristic generators, resulting in a quasi-nonlinear reduced model of the original system. The network model is unchanged in the DEAR method. Tests on several IEEE standard systems show that the proposed method yields a better reduction ratio and smaller response errors than traditional coherency-based reduction methods.
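A minimal sketch of the three DEAR steps on synthetic data, assuming measured trajectories are rows of a matrix; the similarity measure (absolute correlation) and the least-squares reconstruction are illustrative choices, not necessarily those of the paper.

import numpy as np

def dear_reduce(X, n_features=3):
    """Sketch of DEAR: X is (n_generators, n_timesteps) of measured post-disturbance
    dynamics (e.g., rotor angles). Returns characteristic generator indices and
    coefficients approximating every generator as a linear combination of them."""
    # 1) Dynamic-feature extraction: SVD of the measured response matrix.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    features = Vt[:n_features]                       # dominant temporal features

    # 2) Feature attribution: pick, for each feature, the generator whose
    #    trajectory is most similar (highest absolute correlation) to it.
    def similarity(traj, feat):
        return abs(np.corrcoef(traj, feat)[0, 1])
    characteristic = []
    for feat in features:
        scores = [similarity(X[g], feat) for g in range(X.shape[0])]
        g_best = int(np.argmax(scores))
        if g_best not in characteristic:
            characteristic.append(g_best)

    # 3) Reconstruction: least-squares fit of every generator's trajectory
    #    onto the characteristic generators' trajectories.
    basis = X[characteristic]                        # (k, n_timesteps)
    coeffs, *_ = np.linalg.lstsq(basis.T, X.T, rcond=None)
    return characteristic, coeffs.T                  # coeffs: (n_generators, k)

# Tiny synthetic example: 6 generators driven by 2 underlying modes.
t = np.linspace(0, 5, 200)
modes = np.vstack([np.sin(2 * np.pi * t), np.exp(-t) * np.cos(5 * t)])
mix = np.random.default_rng(1).normal(size=(6, 2))
X = mix @ modes
chars, coeffs = dear_reduce(X, n_features=2)
print("characteristic generators:", chars)
print("max reconstruction error:", np.max(np.abs(coeffs @ X[chars] - X)))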
Multiple-object tracking is an important task in automated video surveillance. In this paper, we present a multiple-human-tracking approach that takes single-frame human detection results as input and associates them to form trajectories, while improving the original detection results by exploiting reliable temporal information in a closed-loop manner. It works by first forming tracklets, from which reliable temporal information is extracted, and then refining the detection responses inside the tracklets, which in turn improves the accuracy of the tracklets. After this, local conservative tracklet association is performed and reliable temporal information is propagated across tracklets so that more detection responses can be refined. Global tracklet association is performed last to resolve association ambiguities. Experimental results show that the proposed approach improves both the association and detection results. Comparison with several state-of-the-art approaches demonstrates its effectiveness.
Near Field Communication (NFC)-based mobile phone services offer a lifeline to the under-appreciated multiapplication smart card initiative. The initiative could effectively replace heavy wallets full of smart cards for mundane tasks. However, the issue of the deployment model still lingers on. Possible approaches include, but are not restricted to, the User Centric Smart card Ownership Model (UCOM), GlobalPlatform Consumer Centric Model, and Trusted Service Manager (TSM). In addition, multiapplication smart card architecture can be a GlobalPlatform Trusted Execution Environment (TEE) and/or User Centric Tamper-Resistant Device (UCTD), which provide cross-device security and privacy preservation platforms to their users. In the multiapplication smart card environment, there might not be a prior off-card trusted relationship between a smart card and an application provider. Therefore, as a possible solution to overcome the absence of prior trusted relationships, this paper proposes the concept of Trusted Platform Module (TPM) for smart cards (embedded devices) that can act as a point of reference for establishing the necessary trust between the device and an application provider, and among applications.
As a new computing paradigm, cloud computing can provide users with virtualized and scalable web services, which, however, face serious security challenges. Access control is one of the most important measures for ensuring the security of cloud computing. However, directly applying traditional access control models to the cloud cannot resolve the uncertainty and vulnerability caused by its open conditions. In a cloud computing environment, data security can be effectively guaranteed during interactions between users and the cloud only when the security and reliability of both interacting parties are ensured. Therefore, building a mutual trust relationship between users and the cloud platform is the key to implementing new kinds of access control in cloud computing environments. Combining this with Trust Management (TM), a mutual trust based access control (MTBAC) model is proposed in this paper. The MTBAC model takes both the user's behavioral trust and the cloud service node's credibility into consideration. Trust relationships between users and cloud service nodes are established by a mutual trust mechanism. Security problems of access control are addressed by applying the MTBAC model in a cloud computing environment. Simulation experiments show that the MTBAC model can guarantee secure interaction between users and cloud service nodes.
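A minimal sketch of the mutual-trust idea (not the paper's MTBAC model itself), assuming hypothetical per-operation thresholds: access is granted only when both the user's behavioral trust and the cloud node's credibility clear the threshold for the requested operation.

from dataclasses import dataclass

@dataclass
class Principal:
    name: str
    trust: float          # in [0, 1], updated from past behaviour

POLICY = {"read": 0.3, "write": 0.6, "admin": 0.9}   # hypothetical thresholds

def authorize(user: Principal, node: Principal, operation: str) -> bool:
    """Mutual check: both interacting parties must meet the operation's threshold."""
    threshold = POLICY[operation]
    return user.trust >= threshold and node.trust >= threshold

alice = Principal("alice", trust=0.72)
node7 = Principal("cloud-node-7", trust=0.65)
print(authorize(alice, node7, "write"))   # True: both sides clear the 0.6 bar
print(authorize(alice, node7, "admin"))   # False: mutual trust insufficient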
Detection of high-risk network flows and high-risk hosts is becoming ever more important and more challenging. In order to apply deep packet inspection (DPI) selectively, one has to isolate high-risk network activities in real time within a huge number of monitored network flows. To help address this problem, we propose an iterative methodology for simultaneously assessing risk scores for both hosts and network flows. The proposed approach measures the risk scores of hosts and flows in an interdependent manner: the risk score of a flow influences the risk scores of its source and destination hosts, and the risk score of a host is in turn evaluated by taking into account the risk scores of flows initiated by or terminated at that host. Our experimental results show that such an approach is not only effective in detecting high-risk hosts and flows but, when deployed in high-throughput networks, is also more efficient than PageRank-based algorithms.
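A simplified sketch of such interdependent scoring, with an assumed per-flow prior risk and a damping parameter; the exact update rules in the paper may differ.

import numpy as np

def mutual_risk_scores(flows, n_hosts, base_flow_risk, n_iter=50, damping=0.5):
    """Sketch of interdependent risk scoring: flows is a list of (src, dst) host
    indices, base_flow_risk a per-flow prior (e.g., from lightweight inspection).
    Host and flow scores are iterated until they stabilize."""
    flow_risk = np.array(base_flow_risk, dtype=float)
    host_risk = np.zeros(n_hosts)
    for _ in range(n_iter):
        # Host risk: aggregate risk of flows it initiates or terminates.
        new_host = np.zeros(n_hosts)
        counts = np.zeros(n_hosts)
        for (src, dst), r in zip(flows, flow_risk):
            new_host[src] += r; counts[src] += 1
            new_host[dst] += r; counts[dst] += 1
        new_host = np.divide(new_host, np.maximum(counts, 1))
        # Flow risk: its prior blended with the risk of its endpoints.
        new_flow = np.array([
            (1 - damping) * base + damping * 0.5 * (new_host[s] + new_host[d])
            for (s, d), base in zip(flows, base_flow_risk)
        ])
        host_risk, flow_risk = new_host, new_flow
    return host_risk, flow_risk

hosts, flows = 4, [(0, 1), (1, 2), (2, 3), (0, 3)]
h, f = mutual_risk_scores(flows, hosts, base_flow_risk=[0.1, 0.9, 0.2, 0.1])
print("host risk:", h.round(3), "flow risk:", f.round(3))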
It has gradually been recognized in industry that the increasing complexity of cloud computing, arising from the interaction of technology, business, society, and the like, cannot be addressed simply through research on information technology; it should instead be explained and studied from a systematic, scientific perspective based on the theory and methods of complex adaptive systems (CAS). Addressing basic problems in the CAS theoretical framework, this article studies the definition of the active adaptive agents that constitute a cloud computing system, and proposes a service agent concept and basic model by abstracting commonalities at two basic levels, cloud computing technology and business, thus laying a foundation for further research on cloud computing complexity as well as for multi-agent based simulation of cloud computing environments.
Mutation analysis generates tests that distinguish variations, or mutants, of an artifact from the original. Mutation analysis is widely considered to be a powerful approach to testing, and hence is often used to evaluate other test criteria in terms of mutation score, which is the fraction of mutants killed by a test set. But mutation analysis is also known to produce large numbers of redundant mutants, and these mutants can inflate the mutation score. While mutation approaches broadly characterized as reduced mutation try to eliminate redundant mutants, the literature lacks a theoretical result that articulates just how many mutants are needed in any given situation. Hence, there is, at present, no way to characterize the contribution of, for example, a particular approach to reduced mutation with respect to any theoretical minimal set of mutants. This paper's contribution is to provide such a theoretical foundation for mutant set minimization. The central theoretical result of the paper shows how to efficiently minimize mutant sets with respect to a set of test cases. We evaluate our method with a widely-used benchmark.
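One straightforward way to minimize a mutant set with respect to a test set is via dynamic subsumption over the kill matrix; the sketch below illustrates that idea with a hypothetical kill matrix, and is not necessarily the paper's algorithm.

def minimal_mutant_set(kill_matrix):
    """Sketch of mutant-set minimization w.r.t. a test set.
    kill_matrix[m] is the set of tests that kill mutant m.
    Mutant a subsumes b when a's killing tests are a non-empty subset of b's,
    so any test that kills a also kills b; subsumed and duplicate mutants are
    redundant and can be dropped."""
    killed = {m: frozenset(tests) for m, tests in kill_matrix.items() if tests}
    minimal, seen = [], set()
    for m, tm in killed.items():
        if tm in seen:
            continue  # duplicate kill behaviour
        if any(tn < tm for tn in killed.values()):
            continue  # strictly subsumed by some other mutant
        seen.add(tm)
        minimal.append(m)
    return minimal

# Hypothetical kill matrix: mutants -> tests that kill them.
kills = {"m1": {"t1"}, "m2": {"t1", "t2"}, "m3": {"t2"}, "m4": {"t1"}, "m5": set()}
print(minimal_mutant_set(kills))   # ['m1', 'm3']: any tests killing these also kill m2 and m4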
Smart grids utilize computation and communication to improve the efficacy and dependability of power generation, transmission, and distribution. As such, they are among the most critical and complex cyber-physical systems. The success of smart grids in achieving their stated goals is yet to be rigorously proven. In this paper, our focus is on improvements (or lack thereof) in reliability. We discuss vulnerabilities in the smart grid and their potential impact on its reliability, both generally and for the specific example of the IEEE-14 bus system. We conclude the paper by presenting a preliminary Markov imbedded systems model for reliability of smart grids and describe how it can be evolved to capture the vulnerabilities discussed.
Remote data integrity checking is of crucial importance in cloud storage. It enables clients to verify whether their outsourced data is kept intact without downloading the whole data. In some application scenarios, clients have to store their data on multicloud servers. At the same time, the integrity checking protocol must be efficient in order to save the verifier's cost. Motivated by these two points, we propose a novel remote data integrity checking model: ID-DPDP (identity-based distributed provable data possession) in multicloud storage. The formal system model and security model are given. Based on bilinear pairings, a concrete ID-DPDP protocol is designed. The proposed ID-DPDP protocol is provably secure under the hardness assumption of the standard CDH (computational Diffie-Hellman) problem. In addition to the structural advantage of eliminating certificate management, our ID-DPDP protocol is also efficient and flexible. Based on the client's authorization, the proposed ID-DPDP protocol can realize private verification, delegated verification, and public verification.
In this paper we explore the potential for securing a distributed Arabic Optical Character Recognition (OCR) system via cloud computing technology in a pervasive and mobile environment. The goal of the system is to achieve full accuracy, high speed, and security while handling large vocabularies and large numbers of documents. This is addressed by integrating the recognition process and the security mechanisms with multiprocessing and distributed computing technologies.
In this paper, parallelization and high performance computing are utilized to enable ultrafast transient stability analysis that can be used in a real-time environment to quickly perform “what-if” simulations involving system dynamics phenomena. EPRI's Extended Transient Midterm Simulation Program (ETMSP) is modified and enhanced for this work. Contingency analysis is scaled to large numbers of contingencies using Message Passing Interface (MPI) based parallelization. Simulations of thousands of contingencies on a high performance computing machine are performed, and results show that parallelization over contingencies with MPI provides good scalability and computational gains. Different ways to reduce the Input/Output (I/O) bottleneck are explored, and the findings indicate that architecting a machine with a larger local disk and maintaining a local file system significantly improves the scaling results. Thread-level parallelization of the sparse linear solver is also explored using the SuperLU_MT library.
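A minimal sketch of MPI-based parallelization over contingencies using mpi4py (not ETMSP itself): each rank simulates its share of contingencies independently and rank 0 gathers the results; simulate() is a placeholder for a transient stability run.

from mpi4py import MPI

def simulate(contingency_id):
    """Placeholder for a transient stability run of one contingency."""
    return {"id": contingency_id, "stable": contingency_id % 7 != 0}

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

all_contingencies = list(range(1000))
my_share = all_contingencies[rank::size]          # static round-robin split
my_results = [simulate(c) for c in my_share]

gathered = comm.gather(my_results, root=0)        # collect per-rank result lists
if rank == 0:
    results = [r for part in gathered for r in part]
    unstable = [r["id"] for r in results if not r["stable"]]
    print(f"{len(unstable)} unstable contingencies out of {len(results)}")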
This paper presents a credibility model to assess trust in Web services. The model relies on consumers' ratings, whose accuracy can be questioned due to various biases. A category of consumers, known as strict raters, is usually excluded from the process of reaching a majority consensus. We demonstrate that this exclusion is unwarranted. The proposed model reduces the gap between these consumers' ratings and the current majority rating. Fuzzy clustering is used to compute consumers' credibility. A set of experiments is carried out to validate the model.
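A simplified stand-in for the paper's fuzzy-clustering step: each consumer's credibility is treated as a fuzzy membership that decays with distance from the credibility-weighted majority rating, and the two quantities are iterated; the rating scale and spread parameter are assumptions.

import numpy as np

def credibility_weighted_rating(ratings, n_iter=20, spread=1.0):
    """Sketch of credibility-based aggregation: credibility decays with distance
    from the current weighted majority rating, which is then recomputed."""
    ratings = np.asarray(ratings, dtype=float)
    credibility = np.ones_like(ratings)
    for _ in range(n_iter):
        majority = np.average(ratings, weights=credibility)
        # Gaussian-style fuzzy membership in the "agrees with majority" cluster.
        credibility = np.exp(-((ratings - majority) / spread) ** 2)
    return majority, credibility

# Hypothetical ratings of one Web service on a 1-5 scale; the 1s are strict raters.
maj, cred = credibility_weighted_rating([5, 4, 4, 5, 1, 1])
print("majority rating:", round(maj, 2))
print("credibilities:", cred.round(2))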
This study describes and evaluates a novel trust model for a range of collaborative applications. The model assumes that humans routinely choose to trust their peers by relying on a few recurrent presumptions, which are domain independent and which form a recognisable trust expertise. We refer to these presumptions as trust schemes, a specialised version of Walton's argumentation schemes. Evidence for the efficacy of trust schemes is provided through a detailed experiment on an online community of 80,000 members. Results show that the proposed trust schemes are more effective in trust computation when they are combined and when their plausibility in the selected context is considered.
Mobile social networks (MSNs) facilitate connections between mobile users and allow them to find other potential users who have similar interests through mobile devices, communicate with them, and benefit from their information. As MSNs are distributed public virtual social spaces, the available information may not be trustworthy to all. Therefore, mobile users are often at risk since they may not have any prior knowledge about others who are socially connected. To address this problem, trust inference plays a critical role in establishing social links between mobile users in MSNs. Taking into account the non-semantic representation of trust between users in existing trust models for social networks, this paper proposes a new fuzzy inference mechanism, namely MobiFuzzyTrust, for inferring trust semantically from one mobile user to another who may not be directly connected in the trust graph of an MSN. First, a mobile context including an intersection of prestige of users, location, time, and social context is constructed. Second, a mobile context aware trust model is devised to evaluate the trust value between two mobile users efficiently. Finally, a fuzzy linguistic technique is used to express the trust between two mobile users and enhance human understanding of trust. A real-world mobile dataset is adopted to evaluate the performance of the MobiFuzzyTrust inference mechanism. The experimental results demonstrate that MobiFuzzyTrust can efficiently infer trust with high precision.
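A toy sketch of fuzzy linguistic trust and path-based inference, with assumed triangular membership functions over a [0, 1] trust scale and a multiplicative propagation rule discounted by a context-similarity weight; MobiFuzzyTrust's actual rules are richer than this.

def triangular(x, a, b, c):
    """Triangular fuzzy membership."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical linguistic terms over a [0, 1] trust scale.
TERMS = {
    "low":    (-0.01, 0.0, 0.5),
    "medium": (0.0,   0.5, 1.0),
    "high":   (0.5,   1.0, 1.01),
}

def linguistic_trust(value):
    """Map a numeric trust value to the linguistic term with highest membership."""
    return max(TERMS, key=lambda t: triangular(value, *TERMS[t]))

def infer_path_trust(edge_trusts, context_weight=0.9):
    """Sketch: propagate trust along an indirect path by multiplication,
    discounted by a mobile-context similarity weight in [0, 1]."""
    trust = 1.0
    for t in edge_trusts:
        trust *= t
    return context_weight * trust

# A -> B -> C with direct trusts 0.9 and 0.8 and similar contexts.
t = infer_path_trust([0.9, 0.8], context_weight=0.95)
print(round(t, 3), linguistic_trust(t))   # e.g. 0.684 'medium'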
This article addresses trust in computer systems as a social phenomenon, which depends on the type of relationship that is established through the computer or with other individuals. It starts by theoretically contextualizing trust and then situates trust in the field of computer science. It then describes the proposed model, which builds on what one perceives to be trustworthy and is influenced by a number of factors such as the history of participation and the user's perceptions. It concludes by positioning the proposed model as a key contribution for leveraging trustful interactions and by proposing its use as a complement to foster users' trust needs in human-computer interaction and computer-mediated interactions.
The longstanding debate on a fundamental science of security has led to advances in systems, software, and network security. However, existing efforts have done little to inform how an environment should react to emerging and ongoing threats and compromises. The authors explore the goals and structures of a new science of cyber-decision-making in the Cyber-Security Collaborative Research Alliance, which seeks to develop a fundamental theory for reasoning, under uncertainty, about the best possible action in a given cyber environment. They also explore the needs and limitations of detection mechanisms; agile systems; and the users, adversaries, and defenders that use and exploit them, and conclude by considering how environmental security can be cast as a continuous optimization problem.
Secure information flow guarantees the secrecy and integrity of data, preventing an attacker from learning secret information (secrecy) or injecting untrusted information (integrity). Covert channels can be used to subvert these security guarantees; for example, timing and termination channels can, either intentionally or inadvertently, violate them by modifying the timing or termination behavior of a program based on secret or untrusted data. Attacks using these covert channels have been published and are known to work in practice. As techniques to prevent non-covert channels become increasingly practical, covert channels are likely to become even more attractive for attackers to exploit. The goal of this paper is to understand the subtleties of timing- and termination-sensitive noninterference, explore the space of possible strategies for enforcing noninterference guarantees, and formalize the exact guarantees that these strategies can enforce. As a result of this effort, we create a novel strategy that provides stronger security guarantees than existing work, and we clarify claims in existing work about what guarantees can be made.
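An illustrative timing-channel example (not from the paper): a byte-by-byte secret comparison whose running time depends on how much of the secret is matched, alongside a constant-time alternative; the per-byte sleep is an assumption added to exaggerate the leak.

import time, hmac

SECRET = b"s3cret-token"

def insecure_compare(guess: bytes) -> bool:
    """Leaks through a timing channel: returns as soon as a byte differs,
    so comparison time depends on how long a prefix of the secret is matched."""
    if len(guess) != len(SECRET):
        return False
    for a, b in zip(guess, SECRET):
        if a != b:
            return False
        time.sleep(0.001)   # exaggerate per-byte work to make the leak visible
    return True

def constant_time_compare(guess: bytes) -> bool:
    """Timing-insensitive comparison via hmac.compare_digest."""
    return hmac.compare_digest(guess, SECRET)

def measure(fn, guess):
    start = time.perf_counter()
    fn(guess)
    return time.perf_counter() - start

print("wrong first byte :", round(measure(insecure_compare, b"x3cret-token"), 4))
print("wrong last byte  :", round(measure(insecure_compare, b"s3cret-tokex"), 4))
print("constant time    :", round(measure(constant_time_compare, b"x3cret-token"), 6))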
The home computer user is often said to be the weakest link in computer security. Home users do not always follow security advice, and they take actions, as in phishing, that compromise themselves. In general, we do not understand why users do not always behave safely, which would seem to be in their best interest. This paper reviews the literature of surveys and studies of factors that influence security decisions for home computer users. We organize the review into four sections: understanding of threats, perceptions of risky behavior, efforts to avoid security breaches, and attitudes to security interventions. We find that these studies reveal many reasons why current security measures may not match the needs or abilities of home computer users, and we suggest future work needed to inform how security is delivered to this user group.