Biblio

Found 5882 results

Filters: Keyword is composability
2018-11-28
Biswas, Swarnendu, Cao, Man, Zhang, Minjia, Bond, Michael D., Wood, Benjamin P..  2017.  Lightweight Data Race Detection for Production Runs. Proceedings of the 26th International Conference on Compiler Construction. :11–21.

To detect data races that harm production systems, program analysis must target production runs. However, sound and precise data race detection adds too much run-time overhead for use in production systems. Even existing approaches that provide soundness or precision incur significant limitations. This work addresses the need for soundness (no missed races) and precision (no false races) by introducing novel, efficient production-time analyses that address each need separately. (1) Precise data race detection is useful for developers, who want to fix bugs but loathe false positives. We introduce a precise analysis called RaceChaser that provides low, bounded run-time overhead. (2) Sound race detection benefits analyses and tools whose correctness relies on knowledge of all potential data races. We present a sound, efficient approach called Caper that combines static and dynamic analysis to catch all data races in observed runs. RaceChaser and Caper are useful not only on their own; we introduce a framework that combines these analyses, using Caper as a sound filter for precise data race detection by RaceChaser. Our evaluation shows that RaceChaser and Caper are efficient and effective, and compare favorably with existing state-of-the-art approaches. These results suggest that RaceChaser and Caper enable practical data race detection that is precise and sound, respectively, ultimately leading to more reliable software systems.

2018-02-21
Kotel, Sonia, Zeghid, Medien, Machhout, Mohsen, Tourki, Rached.  2017.  Lightweight Encryption Algorithm Based on Modified XTEA for Low-Resource Embedded Devices. Proceedings of the 21st International Database Engineering & Applications Symposium. :192–199.

The number of resource-limited wireless devices utilized in many areas of the Internet of Things is growing rapidly, raising concerns about privacy and security. Various lightweight block ciphers have been proposed; this work presents a modified lightweight block cipher algorithm. A Linear Feedback Shift Register is used to replace the key generation function in the XTEA1 Algorithm. Using the same evaluation conditions, we analyzed the software implementation of the modified XTEA using FELICS (Fair Evaluation of Lightweight Cryptographic Systems), a benchmarking framework which calculates RAM footprint, ROM occupation and execution time on three widely used embedded devices: an 8-bit AVR microcontroller, a 16-bit MSP microcontroller and a 32-bit ARM microcontroller. Implementation results show that it requires fewer software resources than the original XTEA. We enhanced the security level and the software performance.
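
As a rough illustration of the modification this abstract describes, the sketch below shows XTEA encryption in which the standard key-schedule lookup is replaced by subkeys drawn from a 32-bit LFSR seeded from the cipher key. The LFSR taps and the exact way subkeys substitute for the key schedule are assumptions for illustration, not the authors' design.

```python
# Sketch: XTEA encryption with the key-schedule lookup replaced by an
# LFSR-derived subkey stream (illustrative only; the paper's exact LFSR
# taps and subkey substitution are assumed here, not taken from the paper).

MASK32 = 0xFFFFFFFF
DELTA = 0x9E3779B9

def lfsr32_stream(seed):
    """32-bit Fibonacci LFSR using a common maximal-length tap set (32, 22, 2, 1)."""
    state = seed & MASK32 or 1  # avoid the all-zero state
    while True:
        yield state
        bit = ((state >> 31) ^ (state >> 21) ^ (state >> 1) ^ state) & 1
        state = ((state << 1) | bit) & MASK32

def modified_xtea_encrypt(block, key, rounds=32):
    """Encrypt one 64-bit block (v0, v1) under a 128-bit key (4 x 32-bit words)."""
    v0, v1 = block
    subkeys = lfsr32_stream(key[0] ^ key[1] ^ key[2] ^ key[3])
    s = 0
    for _ in range(rounds):
        v0 = (v0 + ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + next(subkeys)))) & MASK32
        s = (s + DELTA) & MASK32
        v1 = (v1 + ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + next(subkeys)))) & MASK32
    return v0, v1

print(modified_xtea_encrypt((0x01234567, 0x89ABCDEF),
                            (0xDEADBEEF, 0x0BADF00D, 0x12345678, 0x9ABCDEF0)))
```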

2018-05-16
Salman, A., Diehl, W., Kaps, J. P..  2017.  A light-weight hardware/software co-design for pairing-based cryptography with low power and energy consumption. 2017 International Conference on Field Programmable Technology (ICFPT). :235–238.

Embedded electronic devices and sensors such as smartphones, smart watches, medical implants, and Wireless Sensor Nodes (WSN) are making the “Internet of Things” (IoT) a reality. Such devices often require cryptographic services such as authentication, integrity and non-repudiation, which are provided by Public-Key Cryptography (PKC). As these devices are severely resource-constrained, choosing a suitable cryptographic system is challenging. Pairing Based Cryptography (PBC) is among the best candidates to implement PKC in lightweight devices. In this research, we present a fast and energy efficient implementation of PBC based on Barreto-Naehrig (BN) curves and optimal Ate pairing using hardware/software co-design. Our solution consists of a hardware-based Montgomery multiplier, and pairing software running on an ARM Cortex A9 processor in a Zynq-7020 System-on-Chip (SoC). The multiplier is protected against simple power analysis (SPA) and differential power analysis (DPA), and can be instantiated with a variable number of processing elements (PE). Our solution improves performance (in terms of latency) over an open-source software PBC implementation by factors of 2.34 and 2.02, for 256- and 160-bit field sizes, respectively, as measured in the Zynq-7020 SoC.

2018-02-06
Pappu, Aasish, Blanco, Roi, Mehdad, Yashar, Stent, Amanda, Thadani, Kapil.  2017.  Lightweight Multilingual Entity Extraction and Linking. Proceedings of the Tenth ACM International Conference on Web Search and Data Mining. :365–374.

Text analytics systems often rely heavily on detecting and linking entity mentions in documents to knowledge bases for downstream applications such as sentiment analysis, question answering and recommender systems. A major challenge for this task is to be able to accurately detect entities in new languages with limited labeled resources. In this paper we present an accurate and lightweight, multilingual named entity recognition (NER) and linking (NEL) system. The contributions of this paper are three-fold: 1) Lightweight named entity recognition with competitive accuracy; 2) Candidate entity retrieval that uses search click-log data and entity embeddings to achieve high precision with a low memory footprint; and 3) efficient entity disambiguation. Our system achieves state-of-the-art performance on TAC KBP 2013 multilingual data and on English AIDA CONLL data.

2017-12-12
Dai, D., Chen, Y., Carns, P., Jenkins, J., Ross, R..  2017.  Lightweight Provenance Service for High-Performance Computing. 2017 26th International Conference on Parallel Architectures and Compilation Techniques (PACT). :117–129.

Provenance describes detailed information about the history of a piece of data, containing the relationships among elements such as users, processes, jobs, and workflows that contribute to the existence of data. Provenance is key to supporting many data management functionalities that are increasingly important in operations such as identifying data sources, parameters, or assumptions behind a given result; auditing data usage; or understanding details about how inputs are transformed into outputs. Despite its importance, however, provenance support is largely underdeveloped in highly parallel architectures and systems. One major challenge is the demanding requirements of providing provenance service in situ. The need to remain lightweight and to be always on often conflicts with the need to be transparent and offer an accurate catalog of details regarding the applications and systems. To tackle this challenge, we introduce a lightweight provenance service, called LPS, for high-performance computing (HPC) systems. LPS leverages a kernel instrument mechanism to achieve transparency and introduces representative execution and flexible granularity to capture comprehensive provenance with controllable overhead. Extensive evaluations and use cases have confirmed its efficiency and usability. We believe that LPS can be integrated into current and future HPC systems to support a variety of data management needs.

2018-06-07
Obster, M., Kowalewski, S..  2017.  A live static code analysis architecture for PLC software. 2017 22nd IEEE International Conference on Emerging Technologies and Factory Automation (ETFA). :1–4.

Static code analysis is a convenient technique to support the development of software. Without a prior test setup, information about later runtime behavior can be inferred and errors in the code can be found before using a regular compiler. Solutions for applying static code analysis to PLC software following IEC 61131-3 already exist, but using these separate tools usually creates a gap in the development process. In this paper we introduce an architecture for using static analysis directly in a development environment and giving developers instant feedback while they are still editing the PLC software.

2018-01-16
He, Z., Zhang, T., Lee, R. B..  2017.  Machine Learning Based DDoS Attack Detection from Source Side in Cloud. 2017 IEEE 4th International Conference on Cyber Security and Cloud Computing (CSCloud). :114–120.

Denial of service (DOS) attacks are a serious threat to network security. These attacks are often sourced from virtual machines in the cloud, rather than from the attacker's own machine, to achieve anonymity and higher network bandwidth. Past research focused on analyzing traffic on the destination (victim's) side with predefined thresholds. These approaches have significant disadvantages: they are only passive defenses after the attack, they cannot use the outbound statistical features of attacks, and it is hard to trace back to the attacker with them. In this paper, we propose a DOS attack detection system on the source side in the cloud, based on machine learning techniques. This system leverages statistical information from both the cloud server's hypervisor and the virtual machines to prevent network packets from being sent out to the outside network. We evaluate nine machine learning algorithms and carefully compare their performance. Our experimental results show that more than 99.7% of four kinds of DOS attacks are successfully detected. Our approach does not degrade performance and can be easily extended to broader DOS attacks.
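
As a hedged sketch of the kind of comparison described above, the snippet below evaluates a few scikit-learn classifiers on windowed outbound-traffic statistics; the feature names, synthetic data, and the specific classifiers are illustrative assumptions, not the paper's exact setup or its nine algorithms.

```python
# Sketch: comparing classifiers on per-VM outbound traffic statistics collected
# at the source (hypervisor/VM) side. Features and data are illustrative only.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
# Each row: [outbound packets/s, outbound bytes/s, distinct dest IPs, SYN ratio]
X_normal = rng.normal([50, 4e4, 5, 0.1], [10, 1e4, 2, 0.05], size=(500, 4))
X_attack = rng.normal([900, 6e5, 80, 0.8], [200, 1e5, 20, 0.1], size=(500, 4))
X = np.vstack([X_normal, X_attack])
y = np.array([0] * 500 + [1] * 500)          # 0 = benign, 1 = DoS traffic

for clf in (RandomForestClassifier(n_estimators=100, random_state=0),
            SVC(kernel="rbf"),
            GaussianNB()):
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
    print(type(clf).__name__, round(scores.mean(), 4))
```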

2018-06-11
Anderson, Blake, McGrew, David.  2017.  Machine Learning for Encrypted Malware Traffic Classification: Accounting for Noisy Labels and Non-Stationarity. Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. :1723–1732.

The application of machine learning for the detection of malicious network traffic has been well researched over the past several decades; it is particularly appealing when the traffic is encrypted because traditional pattern-matching approaches cannot be used. Unfortunately, the promise of machine learning has been slow to materialize in the network security domain. In this paper, we highlight two primary reasons why this is the case: inaccurate ground truth and a highly non-stationary data distribution. To demonstrate and understand the effect that these pitfalls have on popular machine learning algorithms, we design and carry out experiments that show how six common algorithms perform when confronted with real network data. With our experimental results, we identify the situations in which certain classes of algorithms underperform on the task of encrypted malware traffic classification. We offer concrete recommendations for practitioners given the real-world constraints outlined. From an algorithmic perspective, we find that the random forest ensemble method outperformed competing methods. More importantly, feature engineering was decisive; we found that iterating on the initial feature set, and including features suggested by domain experts, had a much greater impact on the performance of the classification system. For example, linear regression using the more expressive feature set easily outperformed the random forest method using a standard network traffic representation on all criteria considered. Our analysis is based on millions of TLS encrypted sessions collected over 12 months from a commercial malware sandbox and two geographically distinct, large enterprise networks.

2018-01-16
Yamaç, M., Sankur, B., Cemgil, A. T..  2017.  Malicious users discrimination in organized attacks using structured sparsity. 2017 25th European Signal Processing Conference (EUSIPCO). :266–270.

Communication networks can be the targets of organized and distributed attacks, such as flooding-type DDoS attacks, in which malicious users aim to cripple a network server or a network domain. For the attack to have a major effect on the network, malicious users must act in a coordinated and time-correlated manner. For instance, the members of a flooding attack increase their message transmission rates rapidly but also synchronously. Even though detection and prevention of flooding attacks are well studied at the network and transport layers, the emergence and wide deployment of new systems such as VoIP (Voice over IP) have turned flooding attacks at the session layer into a new defense challenge. In this study, a structured sparsity based group anomaly detection system is proposed that can not only detect synchronized attacks, but also distinguish the malicious groups from normal users by jointly estimating their members, structure, and starting and end points. Although we mainly focus on the security of SIP (Session Initiation Protocol) servers/proxies, which are widely used for signaling in VoIP systems, the proposed scheme can be easily adapted to any type of communication network system at any layer.

2018-02-02
Kokaly, S..  2017.  Managing Assurance Cases in Model Based Software Systems. 2017 IEEE/ACM 39th International Conference on Software Engineering Companion (ICSE-C). :453–456.

Software has emerged as a significant part of many domains, including financial service platforms, social networks and vehicle control. Standards organizations have responded to this by creating regulations to address issues such as safety and privacy. In this context, compliance of software with standards has emerged as a key issue. For software development organizations, compliance is a complex and costly goal to achieve and is often accomplished by producing so-called assurance cases, which demonstrate that the system indeed satisfies the property imposed by a standard (e.g., safety, privacy, security). As systems and standards undergo evolution for a variety of reasons, maintaining assurance cases multiplies the effort. In this work, we propose to exploit the connection between the field of model management and the problem of compliance management and propose methods that use model management techniques to address compliance scenarios such as assurance case evolution and reuse. For validation, we ground our approaches on the automotive domain and the ISO 26262 standard for functional safety of road vehicles.

2018-03-19
Runge, Isabel Madeleine, Kolla, Reiner.  2017.  MCGC: A Network Coding Approach for Reliable Large-Scale Wireless Networks. Proceedings of the First ACM International Workshop on the Engineering of Reliable, Robust, and Secure Embedded Wireless Sensing Systems. :16–23.

The application of mobile Wireless Sensor Networks (WSNs) with a large number of participants poses many challenges. For instance, high transmission loss rates, caused among other things by collisions, might occur. Additionally, WSNs frequently operate under harsh conditions, where a high probability of link or node failures is inherently given. This makes reliable data maintenance a key issue. Existing approaches developed to keep data dependably in WSNs often either perform well in highly dynamic or in completely static scenarios, or require complex calculations. Herein, we present the Network Coding based Multicast Growth Codes (MCGC), which represent a solution for reliable data maintenance in large-scale WSNs. MCGC are able to tolerate high fault rates and reconstruct more originally collected data in a shorter period of time than the existing approaches compared against. Simulation results show performance improvements of up to 75% in comparison to Growth Codes (GC). These results are achieved independently of the system's dynamics and despite high fault probabilities.

2018-02-02
Saarela, Marko, Hosseinzadeh, Shohreh, Hyrynsalmi, Sami, Leppänen, Ville.  2017.  Measuring Software Security from the Design of Software. Proceedings of the 18th International Conference on Computer Systems and Technologies. :179–186.

With the increasing use of mobile phones in contemporary society, more and more networked computers are connected to each other. This has brought along security issues. To solve these issues, both the research and development communities are trying to build more secure software. However, the question remains of how secure software is defined and how security can be measured. In this paper, we study this problem by examining what kinds of security measurement tools (i.e. metrics) are available, and what these tools and metrics reveal about the security of software. As a result of the study, we noticed that security verification activities fall into two main categories, evaluation and assurance. There exist 34 metrics for measuring security, of which 29 are assurance metrics and 5 are evaluation metrics. Evaluating and studying these metrics leads us to the conclusion that the general quality of the security metrics is not at a satisfactory level for use in daily engineering workflows. They have both theoretical and practical issues that require further research, and need to be improved.

2018-01-10
Stoughton, A., Varia, M..  2017.  Mechanizing the Proof of Adaptive, Information-Theoretic Security of Cryptographic Protocols in the Random Oracle Model. 2017 IEEE 30th Computer Security Foundations Symposium (CSF). :83–99.

We report on our research on proving the security of multi-party cryptographic protocols using the EASYCRYPT proof assistant. We work in the computational model using the sequence of games approach, and define honest-but-curious (semi-honest) security using a variation of the real/ideal paradigm in which, for each protocol party, an adversary chooses protocol inputs in an attempt to distinguish the party's real and ideal games. Our proofs are information-theoretic, instead of being based on complexity theory and computational assumptions. We employ oracles (e.g., random oracles for hashing) whose encapsulated states depend on dynamically-made, non-programmable random choices. By limiting an adversary's oracle use, one may obtain concrete upper bounds on the distances between a party's real and ideal games that are expressed in terms of game parameters. Furthermore, our proofs work for adaptive adversaries, ones that, when choosing the value of a protocol input, may condition this choice on their current protocol view and oracle knowledge. We provide an analysis in EASYCRYPT of a three-party private count retrieval protocol. We emphasize the lessons learned from completing this proof.

2017-12-12
Sowmyadevi, D., Karthikeyan, K..  2017.  Merkle-Hellman knapsack-side channel monitoring based secure scheme for detecting provenance forgery and selfish nodes in wireless sensor networks. 2017 Second International Conference on Electrical, Computer and Communication Technologies (ICECCT). :1–8.

Provenance forgery and packet loss attacks are considered threats in large-scale wireless sensor networks, which are deployed for diverse application domains. The variety of information sources makes it necessary to ensure the trustworthiness of information, so that only truthful information is used in the decision process. Details about the sensor nodes play a major role in determining their trust values. In this paper, a novel lightweight secure provenance method is introduced for improving the security of provenance data transmission. The proposed system comprises provenance verification and reconstruction at the base station by means of Merkle-Hellman knapsack algorithm based secure provenance encoding in the Bloom filter framework. Side Channel Monitoring (SCM) is exploited for detecting the presence of selfish nodes and packet drop behavior. This lightweight secure provenance method decreases energy and bandwidth utilization with well-organized storage and secure data transmission. The experimental results establish the efficacy and competence of the secure provenance scheme by efficiently detecting provenance forgery and packet drop attacks, as seen from the assessment in terms of provenance verification failure rate, collection error, packet drop rate, space complexity, energy consumption, true positive rate, false positive rate and packet drop attack detection.
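
A minimal sketch of the in-packet Bloom-filter provenance encoding referenced above is given below; the hash construction, filter size, and node identifiers are assumptions, and the Merkle-Hellman knapsack protection and side-channel monitoring described in the paper are omitted.

```python
# Sketch: in-packet Bloom filter provenance encoding and base-station check
# (illustrative parameters; the knapsack-based protection is not shown).
import hashlib

M_BITS = 256   # filter size carried in each packet (assumed)
K_HASH = 3     # number of hash functions (assumed)

def _positions(item, seq_no):
    for i in range(K_HASH):
        digest = hashlib.sha256(f"{i}|{seq_no}|{item}".encode()).digest()
        yield int.from_bytes(digest[:4], "big") % M_BITS

def encode_hop(bloom, node_id, seq_no):
    """Each forwarding node embeds (node_id, seq_no) into the packet's filter."""
    for pos in _positions(node_id, seq_no):
        bloom |= 1 << pos
    return bloom

def verify_path(bloom, expected_path, seq_no):
    """Base station checks that every expected node is present in the filter."""
    return all(all((bloom >> pos) & 1 for pos in _positions(n, seq_no))
               for n in expected_path)

bloom = 0
for node in ["n7", "n3", "sink"]:          # nodes on the packet's path
    bloom = encode_hop(bloom, node, seq_no=42)
print(verify_path(bloom, ["n7", "n3", "sink"], 42))   # True
print(verify_path(bloom, ["n7", "n9", "sink"], 42))   # almost surely False
```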

2018-08-23
Karvelas, Nikolaos P., Senftleben, Marius, Katzenbeisser, Stefan.  2017.  Microblogging in a Privacy-Preserving Way. Proceedings of the 12th International Conference on Availability, Reliability and Security. :48:1–48:6.

Microblogging is a popular activity within the spectrum of Online Social Networking (OSN), which allows users to quickly exchange short messages. Such systems can be based on mobile clients that exchange their group-encrypted messages utilizing local communications such as Bluetooth. However, since in such cases users do not want to disclose their group memberships and thus have to wait for other group members to appear in their proximity, message spread can be slow to non-existent. In this paper, we solve this problem and facilitate a higher message spread by employing a server that stores the messages of multiple groups in an Oblivious RAM (ORAM) data structure. The server can be accessed by the clients on demand to read or write their group-encrypted messages. Thus our solution can be used to add access pattern privacy on top of existing microblogging peer-to-peer architectures, and using an ORAM is a promising candidate for the given application scenario.

2018-05-02
Jian, R., Chen, Y., Cheng, Y., Zhao, Y..  2017.  Millimeter Wave Microstrip Antenna Design Based on Swarm Intelligence Algorithm in 5G. 2017 IEEE Globecom Workshops (GC Wkshps). :1–6.

In order to solve the problem of millimeter wave (mm-wave) antenna impedance mismatch in 5G communication systems, an optimization algorithm, Particle Swarm Ant Colony Optimization (PSACO), is proposed to optimize antenna patch parameters. It is shown that the proposed method can effectively achieve impedance matching at the 28 GHz center frequency, and the return loss characteristic is obviously improved. At the same time, a nonlinear regression model is used to capture the nonlinear relationship between the resonant frequency and the patch parameters. An Elman Neural Network (Elman NN) model is used to verify the reliability of the PSACO and nonlinear regression models. Patch parameters optimized by PSACO were introduced into the nonlinear relationship, which yielded an error within 2%. The method proposed in this paper improves efficiency in antenna design.
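
For orientation only, here is a plain particle swarm optimization sketch over patch (length, width), not the paper's PSACO hybrid; the stand-in return-loss objective and parameter bounds are placeholder assumptions, since the real objective comes from an EM simulation of the antenna.

```python
# Sketch: plain PSO minimizing a stand-in return-loss objective over patch
# (length, width). NOT the paper's PSACO hybrid; objective is a placeholder.
import random

def return_loss(params):                        # placeholder objective (assumed)
    L, W = params
    return (L - 3.2) ** 2 + (W - 4.1) ** 2      # pretend optimum near 3.2 x 4.1 mm

def pso(objective, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    dim = len(bounds)
    pos = [[random.uniform(*bounds[d]) for d in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    gbest = pbest[min(range(n_particles), key=lambda i: pbest_val[i])][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < objective(gbest):
                    gbest = pos[i][:]
    return gbest

print(pso(return_loss, bounds=[(1.0, 6.0), (1.0, 6.0)]))   # converges near [3.2, 4.1]
```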

2018-06-07
Bui, Thang, Stoller, Scott D., Li, Jiajie.  2017.  Mining Relationship-Based Access Control Policies. Proceedings of the 22nd ACM on Symposium on Access Control Models and Technologies. :239–246.

Relationship-based access control (ReBAC) provides a high level of expressiveness and flexibility that promotes security and information sharing. We formulate ReBAC as an object-oriented extension of attribute-based access control (ABAC) in which relationships are expressed using fields that refer to other objects, and path expressions are used to follow chains of relationships between objects. ReBAC policy mining algorithms have potential to significantly reduce the cost of migration from legacy access control systems to ReBAC, by partially automating the development of a ReBAC policy from an existing access control policy and attribute data. This paper presents an algorithm for mining ReBAC policies from access control lists (ACLs) and attribute data represented as an object model, and an evaluation of the algorithm on four sample policies and two large case studies. Our algorithm can be adapted to mine ReBAC policies from access logs and object models. It is the first algorithm for these problems.
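
To make the path-expression idea concrete, here is a small hedged sketch of checking a ReBAC condition by following chains of object-valued fields; the object model, class names, and the example rule are assumptions for illustration, not taken from the paper.

```python
# Sketch: evaluating a ReBAC condition by following a chain of object fields
# (a "path expression"). Object model and rule below are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class User:
    name: str

@dataclass
class Team:
    members: List[User] = field(default_factory=list)

@dataclass
class Project:
    owner_team: Team = None

@dataclass
class Document:
    project: Project = None

def follow(obj, path):
    """Follow a dotted path expression such as 'project.owner_team.members'."""
    for attr in path.split("."):
        obj = getattr(obj, attr)
    return obj

def may_edit(user, doc):
    # Assumed rule: allow edit if the user is reachable via
    # doc.project.owner_team.members (a relationship chain).
    return user in follow(doc, "project.owner_team.members")

alice = User("alice")
doc = Document(Project(Team(members=[alice])))
print(may_edit(alice, doc))          # True
print(may_edit(User("bob"), doc))    # False
```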

2018-03-19
Amann, Johanna, Gasser, Oliver, Scheitle, Quirin, Brent, Lexi, Carle, Georg, Holz, Ralph.  2017.  Mission Accomplished?: HTTPS Security After DigiNotar. Proceedings of the 2017 Internet Measurement Conference. :325–340.

Driven by CA compromises and the risk of man-in-the-middle attacks, new security features have been added to TLS, HTTPS, and the web PKI over the past five years. These include Certificate Transparency (CT), for making the CA system auditable; HSTS and HPKP headers, to harden the HTTPS posture of a domain; the DNS-based extensions CAA and TLSA, for control over certificate issuance and pinning; and SCSV, for protocol downgrade protection. This paper presents the first large scale investigation of these improvements to the HTTPS ecosystem, explicitly accounting for their combined usage. In addition to collecting passive measurements at the Internet uplinks of large University networks on three continents, we perform the largest domain-based active Internet scan to date, covering 193M domains. Furthermore, we track the long-term deployment history of new TLS security features by leveraging passive observations dating back to 2012. We find that while deployment of new security features has picked up in general, only SCSV (49M domains) and CT (7M domains) have gained enough momentum to improve the overall security of HTTPS. Features with higher complexity, such as HPKP, are deployed scarcely and often incorrectly. Our empirical findings are placed in the context of risk, deployment effort, and benefit of these new technologies, and actionable steps for improvement are proposed. We cross-correlate use of features and find some techniques with significant correlation in deployment. We support reproducible research and publicly release data and code.
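
As a small hedged illustration of two of the hardening features this study measures, the snippet below checks whether a domain sets the HSTS and (now-deprecated) HPKP response headers; the example domain and live network access are assumptions of this sketch, not part of the study's methodology.

```python
# Sketch: checking a domain's HSTS and HPKP response headers.
import requests

def https_hardening_headers(domain):
    resp = requests.get(f"https://{domain}/", timeout=10)
    return {
        "Strict-Transport-Security": resp.headers.get("Strict-Transport-Security"),
        "Public-Key-Pins": resp.headers.get("Public-Key-Pins"),  # HPKP, deprecated
    }

print(https_hardening_headers("example.org"))
```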

2018-03-26
Algin, Ramazan, Tan, Huseyin O., Akkaya, Kemal.  2017.  Mitigating Selective Jamming Attacks in Smart Meter Data Collection Using Moving Target Defense. Proceedings of the 13th ACM Symposium on QoS and Security for Wireless and Mobile Networks. :1–8.

In Advanced Metering Infrastructure (AMI) networks, power data collections from smart meters are static. Due to such static nature, attackers may predict the transmission behavior of the smart meters which can be used to launch selective jamming attacks that can block the transmissions. To avoid such attack scenarios and increase the resilience of the AMI networks, in this paper, we propose dynamic data reporting schedules for smart meters based on the idea of moving target defense (MTD) paradigm. The idea behind MTD-based schedules is to randomize the transmission times so that the attackers will not be able to guess these schedules. Specifically, we assign a time slot for each smart meter and in each round we shuffle the slots with Fisher-Yates shuffle algorithm that has been shown to provide secure randomness. We also take into account the periodicity of the data transmissions that may be needed by the utility company. With the proposed approach, a smart meter is guaranteed to send its data at a different time slot in each round. We implemented the proposed approach in ns-3 using IEEE 802.11s wireless mesh standard as the communication infrastructure. Simulation results showed that our protocol can secure the network from the selective jamming attacks without sacrificing performance by providing similar or even better performance for collection time, packet delivery ratio and end-to-end delay compared to previously proposed protocols.
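
A minimal sketch of the slot-randomization step described above is shown below, using a Fisher-Yates shuffle driven by a cryptographically secure random source; the number of meters is arbitrary, and the paper's periodicity handling and the guarantee of a different slot each round are simplified away here.

```python
# Sketch: per-round randomization of smart-meter reporting slots with a
# Fisher-Yates shuffle and secure randomness.
import secrets

def fisher_yates(items):
    items = list(items)
    for i in range(len(items) - 1, 0, -1):
        j = secrets.randbelow(i + 1)       # unbiased, cryptographically secure
        items[i], items[j] = items[j], items[i]
    return items

meters = [f"meter-{k}" for k in range(8)]
for rnd in range(3):
    schedule = fisher_yates(meters)        # schedule[slot] -> meter for this round
    print(f"round {rnd}:", schedule)
```

Because each round's schedule is freshly shuffled with secure randomness, an observer of past rounds gains no advantage in predicting when a given meter will transmit next.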

2018-08-23
Deterding, Sebastian, Hook, Jonathan, Fiebrink, Rebecca, Gillies, Marco, Gow, Jeremy, Akten, Memo, Smith, Gillian, Liapis, Antonios, Compton, Kate.  2017.  Mixed-Initiative Creative Interfaces. Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems. :628–635.

Enabled by artificial intelligence techniques, we are witnessing the rise of a new paradigm of computational creativity support: mixed-initiative creative interfaces put human and computer in a tight interactive loop where each suggests, produces, evaluates, modifies, and selects creative outputs in response to the other. This paradigm could broaden and amplify creative capacity for all, but has so far remained mostly confined to artificial intelligence for game content generation, and faces many unsolved interaction design challenges. This workshop therefore convenes CHI and game researchers to advance mixed-initiative approaches to creativity support.

2018-05-24
Peisert, Sean, Bishop, Matt, Talbot, Ed.  2017.  A Model of Owner Controlled, Full-Provenance, Non-Persistent, High-Availability Information Sharing. Proceedings of the 2017 New Security Paradigms Workshop. :80–89.

In this paper, we propose principles of information control and sharing that support ORCON (ORiginator COntrolled access control) models while simultaneously improving components of confidentiality, availability, and integrity needed to inherently support, when needed, responsibility to share policies, rapid information dissemination, data provenance, and data redaction. This new paradigm of providing unfettered and unimpeded access to information by authorized users, while at the same time, making access by unauthorized users impossible, contrasts with historical approaches to information sharing that have focused on need to know rather than need to (or responsibility to) share.

2018-02-27
Moore, Michael R., Bridges, Robert A., Combs, Frank L., Starr, Michael S., Prowell, Stacy J..  2017.  Modeling Inter-Signal Arrival Times for Accurate Detection of CAN Bus Signal Injection Attacks: A Data-Driven Approach to In-Vehicle Intrusion Detection. Proceedings of the 12th Annual Conference on Cyber and Information Security Research. :11:1–11:4.

Modern vehicles rely on hundreds of on-board electronic control units (ECUs) communicating over in-vehicle networks. As external interfaces to the car control networks (such as the on-board diagnostic (OBD) port, auxiliary media ports, etc.) become common, and vehicle-to-vehicle / vehicle-to-infrastructure technology is in the near future, the attack surface for vehicles grows, exposing control networks to potentially life-critical attacks. This paper addresses the need for securing the controller area network (CAN) bus by detecting anomalous traffic patterns via unusual refresh rates of certain commands. While previous works have identified signal frequency as an important feature for CAN bus intrusion detection, this paper provides the first such algorithm with experiments using three attacks in five (total) scenarios. Our data-driven anomaly detection algorithm requires only five seconds of training time (on normal data) and achieves true positive / false discovery rates of 0.9998/0.00298, respectively (micro-averaged across the five experimental tests).
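
The sketch below conveys the general flavor of frequency-based CAN anomaly detection: learn per-arbitration-ID inter-arrival statistics from a few seconds of normal traffic, then flag deviations. The Gaussian z-score threshold and the toy traffic are illustrative assumptions, not the paper's exact detector.

```python
# Sketch: per-arbitration-ID inter-arrival-time model learned from normal CAN
# traffic, then used to flag injected messages (illustrative threshold model).
import statistics
from collections import defaultdict

def train(messages):
    """messages: iterable of (timestamp_seconds, can_id) from ~5 s of normal traffic."""
    gaps, last = defaultdict(list), {}
    for ts, cid in messages:
        if cid in last:
            gaps[cid].append(ts - last[cid])
        last[cid] = ts
    # Floor the std to avoid division blow-ups on nearly constant refresh rates.
    return {cid: (statistics.mean(g), max(statistics.pstdev(g), 1e-4))
            for cid, g in gaps.items() if len(g) >= 2}

def is_anomalous(model, cid, gap, z_max=4.0):
    if cid not in model:
        return True                       # unseen ID: treat as suspicious
    mean, std = model[cid]
    return abs(gap - mean) / std > z_max

# Normal traffic: ID 0x101 refreshes every ~10 ms.
normal = [(i * 0.010, 0x101) for i in range(500)]
model = train(normal)
print(is_anomalous(model, 0x101, 0.010))   # False: normal refresh interval
print(is_anomalous(model, 0x101, 0.001))   # True: extra injected frames
```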

2018-01-10
Vellingiri, Shanthi, Balakrishnan, Prabhakaran.  2017.  Modeling User Quality of Experience (QoE) through Position Discrepancy in Multi-Sensorial, Immersive, Collaborative Environments. Proceedings of the 8th ACM Multimedia Systems Conference (MMSys'17). :296–307.

Users' QoE (Quality of Experience) in Multi-sensorial, Immersive, Collaborative Environments (MICE) applications is mostly measured by psychometric studies. These studies provide a subjective insight into the performance of such applications. In this paper, we hypothesize that spatial coherence, or the lack of it, of the embedded virtual objects among users has a correlation to the QoE in MICE. We use Position Discrepancy (PD) to model this lack of spatial coherence in MICE. Based on that, we propose a Hierarchical Position Discrepancy Model (HPDM) that computes PD at multiple levels to derive the application/system-level PD as a measure of performance. Experimental results on an example task in MICE show that HPDM can objectively quantify the application performance and has a correlation to the psychometric study-based QoE measurements. We envisage that HPDM can provide more insight into the MICE application without the need for extensive user studies.

2017-12-20
Althamary, I. A., El-Alfy, E. S. M..  2017.  A more secure scheme for CAPTCHA-based authentication in cloud environment. 2017 8th International Conference on Information Technology (ICIT). :405–411.

Cloud computing is a remarkable model for permitting on-demand network access to an elastic collection of configurable adaptive resources and features, including storage, software, infrastructure, and platform. However, there are major concerns about security-related issues. A very critical security function is user authentication using passwords. Although many flaws have been discovered in password-based authentication, it remains the most convenient approach that people continue to utilize. Several schemes have been proposed to strengthen its effectiveness, such as salted hashes, one-time passwords (OTP), single sign-on (SSO) and multi-factor authentication (MFA). This study proposes a new authentication mechanism that combines the user's password with modified characters of a CAPTCHA to generate a passkey. The modification of the CAPTCHA depends on a secret agreed upon between the cloud provider and the user, which substitutes different characters for some characters in the CAPTCHA. This scheme prevents various attacks including short-password attacks, dictionary attacks, keyloggers, phishing, and social engineering. Moreover, it can resolve the issue of password guessing and the use of a single password for different cloud providers.
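
A hedged sketch of the passkey idea described above follows; the shared character-substitution map, the way password and CAPTCHA text are combined, and the use of PBKDF2 are all illustrative assumptions rather than the paper's exact construction.

```python
# Sketch: deriving a login passkey from the user's password plus a CAPTCHA whose
# characters are rewritten with a secret substitution map shared with the cloud
# provider. Map, combination rule, and PBKDF2 parameters are assumed.
import hashlib

SECRET_SUBSTITUTION = {"a": "4", "e": "3", "o": "0", "s": "$"}   # agreed out of band

def modify_captcha(captcha_text):
    return "".join(SECRET_SUBSTITUTION.get(c, c) for c in captcha_text)

def derive_passkey(password, captcha_text, salt=b"cloud-provider-salt"):
    material = (password + "|" + modify_captcha(captcha_text)).encode()
    return hashlib.pbkdf2_hmac("sha256", material, salt, 100_000).hex()

# The server displays CAPTCHA "horse42"; user and provider both apply the map,
# so an attacker who observes only the password or only the CAPTCHA cannot
# reproduce the passkey.
print(derive_passkey("correct battery staple", "horse42"))
```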

2018-05-24
Dey, A. K., Gel, Y. R., Poor, H. V..  2017.  Motif-Based Analysis of Power Grid Robustness under Attacks. 2017 IEEE Global Conference on Signal and Information Processing (GlobalSIP). :1015–1019.

Network motifs are often called the building blocks of networks. Analysis of motifs is found to be an indispensable tool for understanding local network structure, in contrast to measures based on node degree distribution and its functions that primarily address a global network topology. As a result, networks that are similar in terms of global topological properties may differ noticeably at a local level. In the context of power grids, this phenomenon of the impact of local structure has been recently documented in fragility analysis and power system classification. At the same time, most studies of power system networks still tend to focus on global topological measures of power grids, often failing to unveil hidden mechanisms behind vulnerability of real power systems and their dynamic response to malfunctions. In this paper a pilot study of motif-based analysis of power grid robustness under various types of intentional attacks is presented, with the goal of shedding light on local dynamics and vulnerability of power systems.
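
As a small hedged illustration of the motif counting that underlies this kind of analysis, the sketch below takes a census of connected 3-node motifs (wedges vs. triangles) with NetworkX and repeats it after removing a node, as a stand-in for an attack; the toy graph and the restriction to 3-node motifs are assumptions for brevity.

```python
# Sketch: counting connected 3-node motifs (wedges vs. triangles) before and
# after a simulated node removal. Toy topology is purely illustrative.
import networkx as nx
from itertools import combinations

G = nx.Graph([(1, 2), (2, 3), (3, 1), (3, 4), (4, 5), (5, 6), (6, 4), (2, 5)])

def three_node_motif_census(G):
    counts = {"triangle": 0, "wedge": 0}
    for trio in combinations(G.nodes, 3):
        sub = G.subgraph(trio)
        if sub.number_of_edges() == 3:
            counts["triangle"] += 1
        elif sub.number_of_edges() == 2:   # two edges on three nodes form a wedge
            counts["wedge"] += 1
    return counts

print(three_node_motif_census(G))
# Removing node 3 (an "attack") changes the local motif profile even though
# global degree statistics may shift only slightly.
print(three_node_motif_census(nx.Graph(G.subgraph(set(G) - {3}))))
```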