Biblio

Found 1126 results

Filters: First Letter Of Title is I
2017-02-21
L. Thiele, M. Kurras, S. Jaeckel, S. Fähse, W. Zirwas.  2015.  "Interference-floor shaping for liquid coverage zones in coordinated 5G networks". 2015 49th Asilomar Conference on Signals, Systems and Computers. :1102-1106.

Joint transmission coordinated multi-point (JT CoMP) combines the constructive and destructive superposition of several to potentially many signal components, with the goal of maximizing the desired receive signal while minimizing mutual interference. Destructive superposition in particular requires accurate alignment of phases and amplitudes. A 5G clean-slate approach therefore needs to incorporate the following enablers to overcome the challenging limitations of JT CoMP: accurate channel estimation of all relevant channel components; channel prediction for time-aligned precoder design; proper setup of cooperation areas, corresponding to user grouping, that limits feedback overhead (especially in FDD); and treatment of out-of-cluster interference (interference-floor shaping).
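To make the alignment requirement concrete, a generic JT CoMP receive model can be written as follows (notation is illustrative, not taken from the paper):

\[
y_k = \Big(\sum_{b} h_{k,b}\, w_{k,b}\Big) s_k + \sum_{j \neq k} \Big(\sum_{b} h_{k,b}\, w_{j,b}\Big) s_j + n_k,
\]

where h_{k,b} is the channel from cooperating transmission point b to user k and w_{j,b} is the precoding weight applied to user j's symbol s_j at point b. Constructive superposition maximizes the first sum, while destructive superposition drives each cross-term in the second sum toward zero; any phase or amplitude misalignment leaves a residual interference floor, which is exactly what the enablers above are meant to control.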

H. S. Jeon, H. Jung, W. Chun.  2015.  "ID Based Web Browser with P2P Property". 2015 9th International Conference on Future Generation Communication and Networking (FGCN). :41-44.

The main usage pattern of the Internet is shifting from the traditional host-to-host model to a content dissemination model, leading to rapid growth in Internet content. CDN and P2P are the two mainstream technologies for providing streaming content services on the current Internet. In recent years, some researchers have begun to focus on CDN-P2P-hybrid architectures and ISP-friendly P2P content delivery technology. Web applications have become one of the fundamental Internet services, and effectively supporting popular browser-based web applications is one of the keys to success for future Internet projects. This paper proposes an ID-based browser with caching in IDNet. IDNet consists of an id/locator separation scheme and a domain-insulated autonomous network architecture (DIANA), which redesigns the future Internet on a clean-slate basis. Experiments show that an ID web browser with a caching function can support content dissemination and can find the closest network node in IDNet holding identical content.

2017-02-14
S. Parimi, A. SaiKrishna, N. R. Kumar, N. R. Raajan.  2015.  "An imperceptible watermarking technique for copyright content using discrete cosine transformation". 2015 International Conference on Circuits, Power and Computing Technologies [ICCPCT-2015]. :1-5.

This paper proposes an image protection scheme for government sectors based on the discrete cosine transform (DCT) combined with digital watermarking. A cover image is broken down into 8 × 8 non-overlapping blocks and transformed from the spatial domain into the frequency domain. DCT-II of the DCT family is applied to each sub-block of the original image, and the watermark image is embedded into the sub-blocks. The inverse DCT-II is then applied, and the watermarked image is sent through the communication channel. To recover the watermark, the DCT and the watermark-extraction formula are applied to the sub-blocks. The experimental results show that the proposed watermarking procedure provides high security, and the watermark is retrieved successfully.
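As a rough illustration of the block-DCT embedding pipeline described above, here is a minimal Python sketch; the coefficient position (4, 3) and embedding strength alpha are illustrative choices, not parameters from the paper.

import numpy as np
from scipy.fft import dctn, idctn

def embed_watermark(cover, bits, alpha=10.0):
    """Embed one watermark bit per 8x8 block in a mid-frequency DCT coefficient."""
    out = cover.astype(float).copy()
    h, w = out.shape
    k = 0
    for i in range(0, h - 7, 8):
        for j in range(0, w - 7, 8):
            if k >= len(bits):
                return out
            block = dctn(out[i:i+8, j:j+8], norm='ortho')   # spatial -> frequency (DCT-II)
            block[4, 3] += alpha if bits[k] else -alpha     # shift one coefficient by the bit
            out[i:i+8, j:j+8] = idctn(block, norm='ortho')  # frequency -> spatial (inverse DCT)
            k += 1
    return out

Recovery reverses the process: take the DCT of each watermarked block and read the sign of the shift at the chosen coefficient, comparing against the original image.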

M. Bere, H. Muyingi.  2015.  "Initial investigation of Industrial Control System (ICS) security using Artificial Immune System (AIS)". 2015 International Conference on Emerging Trends in Networks and Computer Communications (ETNCC). :79-84.

Industrial Control Systems (ICS), which among others comprise Supervisory Control and Data Acquisition (SCADA) and Distributed Control Systems (DCS), are used to control industrial processes. ICS are now connected to other Information Technology (IT) systems and have as a result become vulnerable to Advanced Persistent Threats (APT). APTs are targeted attacks that use zero-day exploits to compromise systems. Current ICS security mechanisms fail to deter APTs from infiltrating ICS. An analysis of possible solutions to deter APTs was conducted. This paper proposes the use of Artificial Immune Systems to secure ICS against APTs.

2017-02-13
M. M. Olama, M. M. Matalgah, M. Bobrek.  2015.  "An integrated signaling-encryption mechanism to reduce error propagation in wireless communications: performance analyses". 2015 IEEE International Workshop Technical Committee on Communications Quality and Reliability (CQR). :1-6.

Traditional encryption techniques require packet overhead, produce processing time delay, and suffer from severe quality of service deterioration due to fades and interference in wireless channels. These issues reduce the effective transmission data rate (throughput) considerably in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and bit-error probability in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit-rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average bit-error probability and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading-type channels and compared to those of the conventional Advanced Encryption Standard (AES).
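The core mechanism, encrypting only a small slice of each frame and transforming the remainder against it, can be sketched as follows. This is a toy sketch: the XOR step is an illustrative stand-in for the authors' designed signaling transformation, and the 16-byte encrypted portion is an assumption.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def protect_frame(frame: bytes, key: bytes, enc_len: int = 16) -> bytes:
    """Encrypt only the first enc_len bytes; mix the rest with the ciphertext."""
    nonce = os.urandom(16)
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    head = enc.update(frame[:enc_len]) + enc.finalize()  # the small encrypted portion
    # Keyed transformation of the remainder (stand-in for the paper's signaling step).
    tail = bytes(b ^ head[i % enc_len] for i, b in enumerate(frame[enc_len:]))
    return nonce + head + tail

In the paper, error-correction coding (Hamming or convolutional) is additionally applied to the encrypted portion, since a single bit error there would otherwise corrupt the entire recovered frame.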

K. R. Kashwan, K. A. Dattathreya.  2015.  "Improved serial 2D-DWT processor for advanced encryption standard". 2015 IEEE International Conference on Computer Graphics, Vision and Information Security (CGVIS). :209-213.

This paper reports research on how to transmit secured image data using the Discrete Wavelet Transform (DWT) in combination with the Advanced Encryption Standard (AES) at low power and high speed, yielding a better-quality secured image with reduced latency and improved throughput. The combined DWT-AES model achieves a higher compression ratio and simultaneously provides high security while transmitting an image over the channel. The lifting-scheme algorithm is realized using a single serialized DWT processor that computes up to 3 levels of decomposition, improving speed and security. An ASIC circuit is designed using an RTL-to-GDSII flow to simulate the proposed technique in 65 nm CMOS technology. The ASIC occupies an area of about 0.76 mm² and its power consumption is estimated in the range of 10.7-19.7 mW at a frequency of 333 MHz, which is faster than other comparable designs reported.
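For readers unfamiliar with the lifting scheme the processor serializes, one level of a 1-D lifting pass (a floating-point 5/3-style predict/update pair with periodic boundary handling, assuming an even-length input) looks roughly like this; it illustrates the structure only, not the paper's hardware design.

import numpy as np

def lifting_forward(x):
    """One 1-D lifting level: split, predict, update; returns (approx, detail)."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    d = odd - 0.5 * (even + np.roll(even, -1))  # predict: detail from even neighbors
    a = even + 0.25 * (np.roll(d, 1) + d)       # update: correct the even samples
    return a, d

A 2-D DWT applies this pass along rows and then columns; three decomposition levels repeat it on the approximation band.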

2017-02-03
Miles Johnson, University of Illinois at Urbana-Champaign, Navid Aghasadeghi, University of Illinois at Urbana-Champaign, Timothy Bretl, University of Illinois at Urbana-Champaign.  2013.  Inverse Optimal Control for Deterministic Continuous-time Nonlinear Systems. 52nd Conference on Decision and Control.

Inverse optimal control is the problem of computing a cost function with respect to which observed state and input trajectories are optimal. We present a new method of inverse optimal control based on minimizing the extent to which observed trajectories violate first-order necessary conditions for optimality. We consider continuous-time deterministic optimal control systems with a cost function that is a linear combination of known basis functions. We compare our approach with three prior methods of inverse optimal control. We demonstrate the performance of these methods by performing simulation experiments using a collection of nominal system models. We compare the robustness of these methods by analyzing how they perform under perturbations to the system. For this purpose, we consider two scenarios: one in which we exactly know the set of basis functions in the cost function, and another in which the true cost function contains an unknown perturbation. Results from simulation experiments show that our new method is more computationally efficient than prior methods, performs similarly to prior approaches under large perturbations to the system, and better learns the true cost function under small perturbations.
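Because the cost is a linear combination of known basis functions, the first-order necessary conditions are linear in the unknown weights, so the core computation reduces to a unit-norm least-squares problem. A minimal numpy sketch, assuming the residual matrix A has already been assembled from the observed trajectory and the basis-function derivatives:

import numpy as np

def recover_cost_weights(A: np.ndarray) -> np.ndarray:
    """Solve min_c ||A c||_2 subject to ||c||_2 = 1 via the SVD."""
    # The minimizer is the right singular vector for the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    return vt[-1]

The residual norm ||A c|| then measures how far the observed trajectories are from exactly satisfying the optimality conditions.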

2017-02-02
Andrew Clark, Quanyan Zhu, University of Illinois at Urbana-Champaign, Radha Poovendran, Tamer Başar, University of Illinois at Urbana-Champaign.  2013.  An Impact-Aware Defense against Stuxnet. IFAC American Control Conference (ACC 2013).

The Stuxnet worm is a sophisticated malware designed to sabotage industrial control systems (ICSs). It exploits vulnerabilities in removable drives, local area communication networks, and programmable logic controllers (PLCs) to penetrate the process control network (PCN) and the control system network (CSN). Stuxnet was successful in penetrating the control system network and sabotaging industrial control processes since the targeted control systems lacked security mechanisms for verifying message integrity and source authentication. In this work, we propose a novel proactive defense system framework, in which commands from the system operator to the PLC are authenticated using a randomized set of cryptographic keys. The framework leverages cryptographic analysis and control- and game-theoretic methods to quantify the impact of malicious commands on the performance of the physical plant. We derive the worst-case optimal randomization strategy as a saddle-point equilibrium of a game between an adversary attempting to insert commands and the system operator, and show that the proposed scheme can achieve arbitrarily low adversary success probability for a sufficiently large number of keys. We evaluate our proposed scheme, using a linear-quadratic regulator (LQR) as a case study, through theoretical and numerical analysis.
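A toy calculation conveys the final claim: if each command is authenticated under a key drawn uniformly from a pool of N keys of which the adversary has compromised m, a single forged command passes verification with probability m/N, which can be driven arbitrarily low by growing N. This model is a deliberate simplification of the paper's game.

def forgery_success(m: int, N: int, commands: int = 1) -> float:
    """Probability that `commands` independent forgeries all pass verification."""
    return (m / N) ** commands

print(forgery_success(m=2, N=100))               # 0.02 for a single command
print(forgery_success(m=2, N=1000, commands=3))  # 8e-09 for three in a row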

2016-10-07
Pearson, C. J., Welk, A. K., Mayhorn, C. B..  2016.  In Automation We Trust? Identifying Varying Levels of Trust in Human and Automated Information Sources. Human Factors and Ergonomics Society. :201-205.

Humans can easily find themselves in high-cost situations where they must choose between suggestions made by an automated decision aid and a conflicting human decision aid. Previous research indicates that trust is an antecedent to reliance and often influences how individuals prioritize and integrate information presented by a human and/or automated information source. Expanding on previous work conducted by Lyons and Stokes (2012), the current experiment measured how trust in automated or human decision aids varies with perceived risk and workload. The simulated task required 126 participants to choose the safest route for a military convoy; they were presented with conflicting information regarding which route was safest from an automated tool and a human. Results demonstrated that as workload increased, trust in automation decreased. As perceived risk increased, trust in the human decision aid increased. Individual differences in dispositional trust correlated with increased trust in both decision aids. These findings can be used to inform training programs and systems for operators who may receive information from human and automated sources. Examples of this context include air traffic control, aviation, and signals intelligence.

2016-10-06
Aiping Xiong, Weining Yang, Ninghui Li, Robert Proctor.  2016.  Ineffectiveness of domain highlighting as a tool to help users identify phishing webpages. 60th Annual Meeting of Human Factors and Ergonomics Society.

Domain highlighting has been implemented by popular browsers with the aim of helping users identify which sites they are visiting. However, its effectiveness in helping users identify fraudulent webpages has not been stringently tested. We therefore conducted an online study to test the effectiveness of domain highlighting. 320 participants were recruited to evaluate the legitimacy of 6 webpages (half authentic and half fraudulent) in two study phases. In the first phase, participants were instructed to determine legitimacy based on any information on the webpage, whereas in the second phase they were instructed to focus specifically on the address bar. Webpages with domain highlighting were presented in the first block for half of the participants and in the second block for the remaining participants. Results showed that participants could differentiate the legitimate and fraudulent webpages to a significant extent. When participants were directed to focus on the address bar, correct decisions increased for fraudulent (unsafe) webpages but did not change significantly for authentic (safe) webpages. The percentage of correct judgments for fraudulent webpages showed no significant difference between the domain-highlighting and non-highlighting conditions, even when participants were directed to the address bar. Although directing the user's attention to the address bar provided some benefit for detecting fraudulent webpages, domain highlighting itself did not provide effective protection against phishing attacks, suggesting that other measures need to be taken for successful detection of deception.

2016-03-30
Elissa M. Redmiles, Amelia R. Malone, Michelle L. Mazurek.  2016.  I Think They're Trying to Tell Me Something: Advice Sources and Selection for Digital Security. IEEE Symposium on Security and Privacy.

Users receive a multitude of digital- and physical-security advice every day. Indeed, if we implemented all the security advice we received, we would never leave our houses or use the Internet. Instead, users selectively choose some advice to accept and some (most) to reject; however, it is unclear whether they are effectively prioritizing what is most important or most useful. If we can understand from where and why users take security advice, we can develop more effective security interventions.

As a first step, we conducted 25 semi-structured interviews of a demographically broad pool of users. These interviews resulted in several interesting findings: (1) participants evaluated digital-security advice based on the trustworthiness of the advice source, but evaluated physical-security advice based on their intuitive assessment of the advice content; (2) negative-security events portrayed in well-crafted fictional narratives with relatable characters (such as those shown in TV or movies) may be effective teaching tools for both digital- and physical-security behaviors; and (3) participants rejected advice for many reasons, including finding that the advice contains too much marketing material or threatens their privacy.

2015-11-17
Zhenqi Huang, University of Illinois at Urbana-Champaign, Chuchu Fan, University of Illinois at Urbana-Champaign, Alexandru Mereacre, University of Oxford, Sayan Mitra, University of Illinois at Urbana-Champaign, Marta Kwiatkowska, University of Oxford.  2014.  Invariant Verification of Nonlinear Hybrid Automata Networks of Cardiac Cells. 26th International Conference on Computer Aided Verification (CAV 2014).

Verification algorithms for networks of nonlinear hybrid automata (HA) can help us understand and control biological processes such as cardiac arrhythmia, formation of memory, and genetic regulation. We present an algorithm for over-approximating reach sets of networks of nonlinear HA which can be used for sound and relatively complete invariant checking. First, it uses automatically computed input-to-state discrepancy functions for the individual automata modules in the network A to construct a low-dimensional model M. Simulations of both A and M are then used to compute the reach tubes for A. These techniques enable us to handle a challenging verification problem involving a network of cardiac cells, where each cell has four continuous variables and 29 locations. Our prototype tool can check bounded-time invariants for networks with 5 cells (20 continuous variables, 29^5 locations) typically in less than 15 minutes for reasonable time horizons. From the computed reach tubes we can infer biologically relevant properties of the network from a set of initial states.
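A stripped-down sketch of the simulate-and-bloat idea behind reach-tube computation, assuming an exponential discrepancy bound (the form is illustrative; the paper computes input-to-state discrepancy functions automatically):

import numpy as np

def reach_tube(f, x0, delta, T, dt=0.01, gamma=1.0):
    """Over-approximate states reachable from the ball B(x0, delta) under
    dx/dt = f(x), assuming |x(t) - y(t)| <= |x(0) - y(0)| * exp(gamma * t)."""
    tube, x, t = [], np.asarray(x0, float), 0.0
    while t <= T:
        tube.append((x.copy(), delta * np.exp(gamma * t)))  # center plus bloat radius
        x = x + dt * f(x)                                   # Euler step of the nominal run
        t += dt
    return tube

# e.g. reach_tube(lambda x: -x, x0=[1.0], delta=0.1, T=1.0) for a stable scalar system

An invariant then holds if every ball in the tube stays inside the invariant set.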

2015-11-12
Laszka, Aron, Vorobeychik, Yevgeniy, Koutsoukos, Xenofon.  2015.  Integrity Assurance in Resource-bounded Systems Through Stochastic Message Authentication. Proceedings of the 2015 Symposium and Bootcamp on the Science of Security. :1:1–1:12.

Assuring communication integrity is a central problem in security. However, overhead costs associated with cryptographic primitives used towards this end introduce significant practical implementation challenges for resource-bounded systems, such as cyber-physical systems. For example, many control systems are built on legacy components which are computationally limited but have strict timing constraints. If integrity protection is a binary decision, it may simply be infeasible to introduce into such systems; without it, however, an adversary can forge malicious messages, which can cause significant physical or financial harm. We propose a formal game-theoretic framework for optimal stochastic message authentication, providing provable integrity guarantees for resource-bounded systems based on an existing MAC scheme. We use our framework to investigate attacker deterrence, as well as optimal design of stochastic message authentication schemes when deterrence is impossible. Finally, we provide experimental results on the computational performance of our framework in practice.
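A minimal sketch of the stochastic-authentication idea using Python's standard hmac module; verifying each tag only with probability p is the simplest reading of the scheme, and p = 0.3 is an arbitrary illustration, not a value from the paper.

import hashlib
import hmac
import random
from typing import Optional

def send(key: bytes, msg: bytes) -> bytes:
    """Append a standard HMAC-SHA256 tag to the message."""
    return msg + hmac.new(key, msg, hashlib.sha256).digest()

def receive(key: bytes, packet: bytes, p: float = 0.3) -> Optional[bytes]:
    """Verify the tag only with probability p, saving computation."""
    msg, tag = packet[:-32], packet[-32:]
    if random.random() < p:
        expected = hmac.new(key, msg, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return None  # forgery caught on a verified message
    return msg

Each injected forgery is then caught with probability p, and the game-theoretic analysis tunes p against the defender's computational budget and the attacker's payoff.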

Li, Bo, Vorobeychik, Yevgeniy, Li, Muqun, Malin, Bradley.  2015.  Iterative Classification for Sanitizing Large-Scale Datasets. SIAM International Conference on Data Mining.

Cheap ubiquitous computing enables the collection of massive amounts of personal data in a wide variety of domains. Many organizations aim to share such data while obscuring features that could disclose identities or other sensitive information. Much of the data now collected exhibits weak structure (e.g., natural language text), and machine learning approaches have been developed to identify and remove sensitive entities in such data. Learning-based approaches are never perfect, and relying upon them to sanitize data can leak sensitive information as a consequence. However, a small amount of risk is permissible in practice, and, thus, our goal is to balance the value of data published and the risk of an adversary discovering leaked sensitive information. We model data sanitization as a game between 1) a publisher who chooses a set of classifiers to apply to data and publishes only instances predicted to be non-sensitive and 2) an attacker who combines machine learning and manual inspection to uncover leaked sensitive entities (e.g., personal names). We introduce an iterative greedy algorithm for the publisher that provably executes no more than a linear number of iterations and ensures a low utility for a resource-limited adversary. Moreover, using several real-world natural language corpora, we illustrate that our greedy algorithm leaves virtually no automatically identifiable sensitive instances for a state-of-the-art learning algorithm, while sharing over 93% of the original data, and completes after at most 5 iterations.
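The publisher's iterative greedy loop can be schematized as follows, with logistic regression standing in for the chosen classifiers; the feature matrix X and binary labels y (1 = sensitive) are assumptions for illustration, not the paper's setup.

import numpy as np
from sklearn.linear_model import LogisticRegression

def sanitize(X, y, max_rounds=5):
    """Withhold instances that a retrained classifier still predicts sensitive."""
    published = np.ones(len(y), dtype=bool)
    for _ in range(max_rounds):
        if len(np.unique(y[published])) < 2:
            break  # degenerate case: only one class left to train on
        clf = LogisticRegression(max_iter=1000).fit(X[published], y[published])
        flagged = published & (clf.predict(X) == 1)  # still-identifiable sensitive items
        if not flagged.any():
            break  # no automatically identifiable sensitive instances remain
        published &= ~flagged
    return published  # boolean mask of instances deemed safe to publish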

2015-11-11
John C. Mace, Newcastle University, Charles Morisset, Newcastle University, Aad Van Moorsel, Newcastle University.  2015.  Impact of Policy Design on Workflow Resiliency Computation Time. Quantitative Evaluation of Systems (QEST 2015).

Workflows are complex operational processes that include security constraints restricting which users can perform which tasks. An improper user-task assignment may prevent the completion of the workflow, and deciding such an assignment at runtime is known to be complex, especially when considering user unavailability (known as the resiliency problem). Therefore, design tools are required that allow fast evaluation of workflow resiliency. In this paper, we propose a methodology for workflow designers to assess the impact of the security policy on computing the resiliency of a workflow. Our approach relies on encoding a workflow into the probabilistic model-checker PRISM, allowing its resiliency to be evaluated by solving a Markov Decision Process. We observe and illustrate that adding or removing some constraints has a clear impact on the resiliency computation time, and we compute the set of security constraints that can be artificially added to a security policy in order to reduce the computation time while maintaining the resiliency.
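As a toy stand-in for the PRISM encoding, the underlying question can be brute-forced for a tiny workflow: maximize, over assignments that respect the authorization policy, the probability that every assigned user is available. The policy and availability numbers below are invented; the paper's MDP formulation additionally lets assignments adapt as unavailability is observed.

from itertools import product

avail = {"alice": 0.9, "bob": 0.8, "carol": 0.7}     # P(user is available)
policy = {0: {"alice", "bob"}, 1: {"bob", "carol"}}  # task -> authorized users

def resiliency():
    best = 0.0
    for assign in product(avail, repeat=len(policy)):
        if all(u in policy[t] for t, u in enumerate(assign)):
            p = 1.0
            for u in set(assign):  # each distinct user must be available
                p *= avail[u]
            best = max(best, p)
    return best

print(resiliency())  # 0.8: assigning bob to both tasks is optimal here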

2015-05-06
Malik, O.A., Arosha Senanayake, S.M.N., Zaheer, D..  2015.  An Intelligent Recovery Progress Evaluation System for ACL Reconstructed Subjects Using Integrated 3-D Kinematics and EMG Features. Biomedical and Health Informatics, IEEE Journal of. 19:453-463.

An intelligent recovery evaluation system is presented for objective assessment and performance monitoring of anterior cruciate ligament reconstructed (ACL-R) subjects. The system acquires 3-D kinematics of the tibiofemoral joint and electromyography (EMG) data from surrounding muscles during various ambulatory and balance testing activities through wireless body-mounted inertial and EMG sensors, respectively. An integrated feature set is generated based on different features extracted from the data collected for each activity. Fuzzy clustering and adaptive neuro-fuzzy inference techniques are applied to these integrated feature sets in order to provide different recovery progress assessment indicators (e.g., current stage of recovery, percentage of recovery progress as compared to a healthy group, etc.) for ACL-R subjects. The system was trained and tested on data collected from a group of healthy and ACL-R subjects. For recovery stage identification, the average testing accuracy of the system was found to be above 95% (95-99%) for ambulatory activities and above 80% (80-84%) for balance testing activities. The overall recovery evaluation performed by the proposed system was found consistent with the assessment made by the physiotherapists using standard subjective/objective scores. The validated system can potentially be used as a decision-support tool by physiatrists, physiotherapists, and clinicians for quantitative rehabilitation analysis of ACL-R subjects in conjunction with existing recovery monitoring systems.

Huaqun Wang, Qianhong Wu, Bo Qin, Domingo-Ferrer, J..  2014.  Identity-based remote data possession checking in public clouds. Information Security, IET. 8:114-121.

Checking remote data possession is of crucial importance in public cloud storage. It enables users to check whether their outsourced data have been kept intact without downloading the original data. The existing remote data possession checking (RDPC) protocols have been designed in the PKI (public key infrastructure) setting. The cloud server has to validate the users' certificates before storing the data uploaded by the users in order to prevent spam. This incurs considerable costs since numerous users may frequently upload data to the cloud server. This study addresses this problem with a new model of identity-based RDPC (ID-RDPC) protocols. The authors present the first ID-RDPC protocol proven to be secure assuming the hardness of the standard computational Diffie-Hellman (CDH) problem. In addition to the structural advantage of eliminating certificate management and verification, the authors' ID-RDPC protocol also outperforms the existing RDPC protocols in the PKI setting in terms of computation and communication.
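For reference, the standard hardness assumption underpinning the security proof can be stated as:

\[
\text{CDH: given } (g,\ g^{a},\ g^{b}) \in G^{3} \text{ in a cyclic group } G = \langle g \rangle \text{ of prime order } q, \text{ with } a, b \text{ uniform in } \mathbb{Z}_q, \text{ compute } g^{ab}.
\]

The identity-based structure replaces certificate verification with keys derived from user identities, but unforgeability still reduces to this problem.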

Huaqun Wang.  2015.  Identity-Based Distributed Provable Data Possession in Multicloud Storage. Services Computing, IEEE Transactions on. 8:328-340.

Remote data integrity checking is of crucial importance in cloud storage. It enables clients to verify whether their outsourced data are kept intact without downloading the whole data. In some application scenarios, clients have to store their data on multicloud servers. At the same time, the integrity checking protocol must be efficient in order to save the verifier's cost. Motivated by these two points, we propose a novel remote data integrity checking model: ID-DPDP (identity-based distributed provable data possession) in multicloud storage. The formal system model and security model are given. Based on bilinear pairings, a concrete ID-DPDP protocol is designed. The proposed ID-DPDP protocol is provably secure under the hardness assumption of the standard CDH (computational Diffie-Hellman) problem. In addition to the structural advantage of eliminating certificate management, our ID-DPDP protocol is also efficient and flexible. Based on the client's authorization, the proposed ID-DPDP protocol can realize private verification, delegated verification, and public verification.

Namdari, Mehdi, Jazayeri-Rad, Hooshang.  2014.  Incipient Fault Diagnosis Using Support Vector Machines Based on Monitoring Continuous Decision Functions. Eng. Appl. Artif. Intell.. 28:22–35.

The Support Vector Machine (SVM), an innovative machine learning tool based on statistical learning theory, has recently been used in process fault diagnosis tasks. In the application of SVM to a fault diagnosis problem, typically a discrete decision function with discrete output values is utilized in order to solely define the label of the fault. However, for incipient faults, in which the fault steadily progresses over time and there is a changeover from normal operation to faulty operation, using a discrete decision function does not reveal any evidence about the progress and depth of the fault. Numerous process faults, such as reactor fouling and catalyst degradation, progress slowly and can be categorized as incipient faults. In this work, a continuous decision function is proposed. The decision function values not only define the fault label but also give qualitative evidence about the depth of the fault. The suggested method is applied to incipient fault diagnosis of a continuous binary-mixture distillation column, and the results demonstrate the practicability of the proposed approach. In incipient fault diagnosis tasks, the proposed approach outperformed some of the conventional techniques. Moreover, the performance of the proposed approach is better than that of typical discrete classification techniques on monitoring indexes such as the false alarm rate, detection time, and diagnosis time.
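The paper's central idea maps directly onto scikit-learn's continuous margin: instead of the discrete label from clf.predict, monitor clf.decision_function, whose drift away from the normal-operation region gives qualitative evidence of fault depth. The synthetic data below is an illustrative assumption standing in for real process measurements.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_normal = rng.normal(0.0, 1.0, (200, 4))  # stand-in for healthy operation
X_faulty = rng.normal(3.0, 1.0, (200, 4))  # stand-in for a fully developed fault
X = np.vstack([X_normal, X_faulty])
y = np.r_[np.zeros(200), np.ones(200)]

clf = SVC(kernel='rbf').fit(X, y)

# A slowly drifting sample mimics an incipient fault: the continuous margin
# crosses zero gradually, revealing fault depth rather than a bare label.
for depth in np.linspace(0.0, 3.0, 7):
    x_t = np.full((1, 4), depth)
    print(f"depth {depth:.1f}: margin {clf.decision_function(x_t)[0]:+.2f}")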

Junho Hong, Chen-Ching Liu, Govindarasu, M..  2014.  Integrated Anomaly Detection for Cyber Security of the Substations. Smart Grid, IEEE Transactions on. 5:1643-1653.

Cyber intrusions into substations of a power grid are a source of vulnerability since most substations are unmanned and have limited physical-security protection. In the worst case, simultaneous intrusions into multiple substations can lead to severe cascading events, causing catastrophic power outages. In this paper, an integrated Anomaly Detection System (ADS) is proposed which contains host- and network-based anomaly detection systems for the substations, and simultaneous anomaly detection for multiple substations. Potential scenarios of simultaneous intrusions into the substations have been simulated using a substation automation testbed. The host-based anomaly detection considers temporal anomalies in the substation facilities, e.g., user interfaces, Intelligent Electronic Devices (IEDs), and circuit breakers. The malicious behaviors of substation automation based on multicast messages, e.g., Generic Object Oriented Substation Event (GOOSE) and Sampled Measured Value (SMV), are incorporated in the proposed network-based anomaly detection. The proposed simultaneous intrusion detection method is able to identify the same type of attacks at multiple substations and their locations. The result is a new integrated tool for detection and mitigation of cyber intrusions at a single substation or multiple substations of a power grid.
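One host-based temporal check from the ADS idea can be sketched as a sliding-window rate monitor on control actions; the window length and threshold below are invented parameters, and a real deployment would tie this to IED and breaker event logs.

from collections import deque

class TemporalAnomalyDetector:
    """Flag bursts of control actions that normal operation would not produce."""
    def __init__(self, window_s=10.0, max_events=3):
        self.window_s, self.max_events = window_s, max_events
        self.events = deque()

    def observe(self, t: float) -> bool:
        """Record a control action at time t; return True if anomalous."""
        self.events.append(t)
        while self.events and t - self.events[0] > self.window_s:
            self.events.popleft()  # drop actions outside the sliding window
        return len(self.events) > self.max_events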

Boukhtouta, A., Lakhdari, N.-E., Debbabi, M..  2014.  Inferring Malware Family through Application Protocol Sequences Signature. New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on. :1-5.

The dazzling emergence of cyber-threats strains today's cyberspace, which needs practical and efficient capabilities for malware traffic detection. In this paper, we propose an extension to an initial research effort on fingerprinting malicious traffic, putting an emphasis on the attribution of maliciousness to malware families. The technique proposed in the previous work establishes a synergy between automatic dynamic analysis of malware and machine learning to fingerprint badness in network traffic. Machine learning algorithms are used with features that exploit only high-level properties of traffic packets (e.g., packet headers). Beyond the detection of malicious packets, we want to enhance the fingerprinting capability with the identification of the malware families responsible for generating the malicious packets. The underlying malware family is identified from the sequence of application protocols, which is used as a signature for the family in question. Furthermore, our results show that our technique achieves a promising malware-family identification rate with low false positives.
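The attribution step reduces to matching an observed, ordered sequence of application protocols against per-family signatures; a schematic sketch, with invented placeholder signatures rather than families or sequences from the paper:

SIGNATURES = {
    ("dns", "http", "irc"): "family_A",
    ("dns", "smtp"):        "family_B",
}

def attribute_family(protocol_sequence):
    """Map an observed application-protocol sequence to a malware family."""
    return SIGNATURES.get(tuple(protocol_sequence), "unknown")

print(attribute_family(["dns", "http", "irc"]))  # -> family_A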
