Biblio

Found 918 results

Filters: First Letter Of Title is T
2017-03-07
Wazzan, M. A., Awadh, M. H..  2015.  Towards Improving Web Attack Detection: Highlighting the Significant Factors. 2015 5th International Conference on IT Convergence and Security (ICITCS). :1–5.

Nowadays, with the rapid development of the Internet, Web use is increasing and Web applications have become a substantial part of people's daily life (e.g., e-government, e-health, and e-learning), as they allow information to be accessed and managed seamlessly. The main security concern for e-business is Web application security. Web applications have many vulnerabilities, such as Injection, Broken Authentication and Session Management, and Cross-site Scripting (XSS). Consequently, Web applications have become targets for attackers, and many cyber attacks have emerged to block the services of these Web applications (denial-of-service attacks). Developers are often unaware of these vulnerabilities and do not have enough time to secure their applications. There is therefore a significant need to study and improve attack detection for Web applications by determining the most significant factors for detection. To the best of our knowledge, no prior research summarizes the factors that influence the detection of Web attacks. In this paper, the author studies state-of-the-art techniques and research related to Web attack detection: the author analyses and compares different methods of Web attack detection and summarizes the most important factors for Web attack detection, independent of the type of vulnerability. Finally, the author gives recommendations for building a framework for Web application protection.

Tunc, C., Hariri, S., Montero, F. D. L. P., Fargo, F., Satam, P., Al-Nashif, Y..  2015.  Teaching and Training Cybersecurity as a Cloud Service. 2015 International Conference on Cloud and Autonomic Computing. :302–308.

The explosive growth of IT infrastructures, cloud systems, and the Internet of Things (IoT) has resulted in complex systems that are extremely difficult to secure and protect against cyberattacks, which are growing exponentially in complexity and in number. Overcoming the cybersecurity challenges is even more complicated due to the lack of training and of widely available cybersecurity environments to experiment with and evaluate new cybersecurity methods. The goal of our research is to address these challenges by exploiting cloud services. In this paper, we present the design, analysis, and evaluation of a cloud service that we refer to as Cybersecurity Lab as a Service (CLaaS), which offers virtual cybersecurity experiments that can be accessed from anywhere and from any device (desktop, laptop, tablet, smart mobile device, etc.) with Internet connectivity. In CLaaS, we exploit cloud computing systems and virtualization technologies to provide virtual cybersecurity experiments and hands-on experience of how vulnerabilities are exploited to launch cyberattacks, how they can be removed, and how cyber resources and services can be hardened or better protected. We also present our experimental results and evaluation of CLaaS virtual cybersecurity experiments that have been used by graduate students taking our cybersecurity class as well as by high school students participating in GenCyber camps.

Ceolin, Davide, Noordegraaf, Julia, Aroyo, Lora, van Son, Chantal.  2016.  Towards Web Documents Quality Assessment for Digital Humanities Scholars. Proceedings of the 8th ACM Conference on Web Science. :315–317.

We present a framework for assessing the quality of Web documents, and a baseline of three quality dimensions: trustworthiness, objectivity and basic scholarly quality. Assessing Web document quality is a "deep data" problem necessitating approaches to handle both data size and complexity.

Ahmed, Sadia.  2016.  Time and Frequency Domain Analysis and Measurement Results of Varying Acoustic Signal to Determine Water Pollutants in the Rio Grande River. Proceedings of the 11th ACM International Conference on Underwater Networks & Systems. :30:1–30:2.

Water covers about three fourths of the earth's surface. Water is directly and indirectly polluted in many ways. Therefore, it is of vital importance to monitor water pollution levels effectively and regularly. It is a well-known fact that changes in the water medium and its parameters directly affect the propagation of an acoustic signal through it. As a result, time and frequency domain analysis of an acoustic signal propagating through water can be a valuable indicator of water pollution. Preliminary investigative results on determining water contaminants using an acoustic signal in an indoor laboratory tank environment were presented in [1]. This paper presents an extended abstract of the continuing research, involving a time and frequency domain analysis of an acoustic signal in the presence of three water pollutants, namely fertilizer, household detergent, and pesticide. A measurement will be conducted in the Rio Grande River, Espanola, NM, at three different locations by transmitting a single pulse through the water at different depths and distances. The same measurement will be conducted in a tank with clean water and in a tank with the three pollutants added separately. The three sets of received signals from the three measurements will be compared to each other. The sets of received signals from the measurements will also be compared to the simulated time and frequency domain response of the acoustic signal for validation. To the best of the authors' knowledge, utilizing an acoustic signal and its properties to determine water pollutants using the proposed method is a new approach.
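
As a rough illustration of the kind of time/frequency-domain comparison described in this abstract, the following minimal sketch contrasts the spectrum of a received pulse in clean water with one that is more strongly attenuated. The sample rate, pulse shape, and attenuation values are hypothetical placeholders, not the Rio Grande measurement data.

```python
# Hedged sketch: compare a received pulse from "clean" water against one from
# "polluted" water in the frequency domain. Signals are synthetic placeholders.
import numpy as np

fs = 200e3                                   # sample rate (Hz), illustrative
t = np.arange(0, 2e-3, 1/fs)

clean = np.sin(2*np.pi*40e3*t) * np.exp(-t/5e-4)
polluted = 0.8*np.sin(2*np.pi*40e3*t) * np.exp(-t/3e-4)   # extra attenuation

def spectrum(x):
    X = np.fft.rfft(x * np.hanning(len(x)))
    return np.fft.rfftfreq(len(x), 1/fs), 20*np.log10(np.abs(X) + 1e-12)

f, S_clean = spectrum(clean)
_, S_polluted = spectrum(polluted)
bin_40k = np.argmin(np.abs(f - 40e3))
print("attenuation at 40 kHz: %.1f dB" % (S_clean[bin_40k] - S_polluted[bin_40k]))
```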

Queiroz, Rodrigo, Berger, Thorsten, Czarnecki, Krzysztof.  2016.  Towards Predicting Feature Defects in Software Product Lines. Proceedings of the 7th International Workshop on Feature-Oriented Software Development. :58–62.

Defect-prediction techniques can enhance the quality assurance activities for software systems. For instance, they can be used to predict bugs in source files or functions. In the context of a software product line, such techniques could ideally be used for predicting defects in features or combinations of features, which would allow developers to focus quality assurance on the error-prone ones. In this preliminary case study, we investigate how defect prediction models can be used to identify defective features using machine-learning techniques. We adapt process metrics and evaluate and compare three classifiers using an open-source product line. Our results show that the technique can be effective. Our best scenario achieves 73% accuracy in predicting features as defective or clean using a Naive Bayes classifier. Based on the results, we discuss directions for future work.
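
To make the workflow concrete, here is a minimal sketch of feature-level defect prediction with a Naive Bayes classifier; the process metrics, data values, and labels are invented for illustration and are not the study's dataset.

```python
# Hypothetical sketch of feature-level defect prediction with Naive Bayes.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Each row describes one product-line feature: [commits, distinct authors, lines changed]
X = np.array([
    [12, 3, 450],
    [ 2, 1,  30],
    [25, 5, 900],
    [ 4, 2, 120],
    [18, 4, 600],
    [ 1, 1,  15],
])
y = np.array([1, 0, 1, 0, 1, 0])   # 1 = defective, 0 = clean (illustrative labels)

clf = GaussianNB()
scores = cross_val_score(clf, X, y, cv=3)      # small-sample cross-validation
print("mean accuracy: %.2f" % scores.mean())

clf.fit(X, y)
print(clf.predict([[10, 2, 300]]))             # classify a new feature
```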

Purohit, Suchit S., Bothale, Vinod M., Gandhi, Savita R..  2016.  Towards M-gov in Solid Waste Management Sector Using RFID, Integrated Technologies. Proceedings of the Second International Conference on Information and Communication Technology for Competitive Strategies. :61:1–61:4.

Due to the explosive increase in teledensity and the penetration of mobile networks in urban as well as rural areas, m-governance in India is growing from infancy into a more mature shape. The Indian government has taken various steps to offer citizen services through the mobile platform, thereby providing a smooth transition from web-based e-gov services to more pervasive mobile-based services. Municipalities and municipal corporations in India already provide m-gov services such as property and professional tax transactions, birth and death registration, marriage registration, and dues of taxes and charges through SMS alerts or via call centers. To the best of our knowledge, no municipality offers mobile-based services in the solid waste management sector. This paper proposes an m-gov service implemented as an Android mobile application for the SWM department, AMC, Ahmadabad. The application operates on real-time data collected from a fully automated solid waste collection process integrated using RFID, GPS, GIS, and GPRS, proposed in the authors' preceding work. The mobile application allows citizens to interactively view the status of the cleaning process in their area, file complaints in case of failure, and follow up on the status of their complaints, which can be handled by SWM officials using the same application. The application also enables SWM officials to observe and analyze the real-time status of the collection process and to generate reports.

Saccá, Gary, Moreno-Llorena, Jaime.  2016.  Towards a Strategy for Recognition of Collective Emotions on Social Networking Sites. Proceedings of the XVII International Conference on Human Computer Interaction. :25:1–25:8.

Since the emergence of emotional theories and models that explain individuals' feelings and their emotional processes, diverse research areas have shown interest in studying these ideas in order to obtain relevant information about people's behavior, habits, and preferences. However, there are limitations on emotion recognition that have forced specialists to search for ways to achieve it in particular cases. This article addresses the recognition of collective emotions on social networking sites by applying a particular strategy, as follows. First, the state of the art is surveyed with regard to models for representing individual and collective emotions, and possible solutions from computing to collective-emotion problems are reviewed. Second, a collective emotion strategy was designed: a collection of data was retrieved from Twitter and cleaned and processed in order to keep the expressions as pure as possible; then, in the collective emotion tagging step, based on a consensus-theory approach, the majority-tagged feelings were grouped and recognized as collective emotions; finally, a prediction step modeled the collective data, with one part supplied to the machine learning algorithm for training and the other used to test its accuracy. Third, an evaluation was carried out to check the fit of the collective recognition strategy; the results place the proposed work on the right path, as the minor differences observed indicate high precision according to the distance measures used during the study.
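
A minimal sketch of the consensus-tagging and prediction steps described above: majority voting over per-post labels produces the collective tag, and a simple text classifier is trained on those tags. The posts, annotations, and the choice of a multinomial Naive Bayes classifier are assumptions for illustration, not the study's corpus or model.

```python
# Hedged sketch: majority-vote "collective" tags, then a toy classifier.
from collections import Counter
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

posts = ["what a great day", "this is terrible news",
         "so happy right now", "awful traffic again"]
annotations = [                     # one list of tagged feelings per post
    ["joy", "joy", "surprise"],
    ["sadness", "anger", "sadness"],
    ["joy", "joy", "joy"],
    ["anger", "anger", "sadness"],
]
collective = [Counter(a).most_common(1)[0][0] for a in annotations]   # majority vote

vec = CountVectorizer()
X = vec.fit_transform(posts)
clf = MultinomialNB().fit(X, collective)          # train on collective tags
print(clf.predict(vec.transform(["terrible day"])))
```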

You, Taewan.  2016.  Toward the future of internet architecture for IoE: Precedent research on evolving the identifier and locator separation schemes. 2016 International Conference on Information and Communication Technology Convergence (ICTC). :436–439.

The Internet has become the largest and most important communication network, serving as social, industrial, and public infrastructure, since its invention in the late 1960s. In a historical retrospect of the Internet's evolution, its architecture has evolved repeatedly by overcoming various technical challenges; for instance, in the early 1990s the Internet faced a scalability crisis that was shortly overcome through emerging techniques such as CIDR, NAT, and IPv6. This paper emphasizes scalability issues as the key technical challenge, forecasting that the Internet of Things era has arrived. First, we describe the identifier and locator separation scheme, which can achieve dramatic architectural evolution from a historical perspective. In addition, we review various kinds of identifier and locator separation schemes, because such schemes have recently become a major design pillar for the future Internet architecture, in both clean-slate future Internet architectures and evolving Internet architectures. Finally, we present the results of an analysis, summarized in a table, for the future Internet of Everything, in which the number of Internet-connected devices is expected to grow to more than 20 billion by 2020.

2017-02-23
Y. Cao, J. Yang.  2015.  "Towards Making Systems Forget with Machine Unlearning". 2015 IEEE Symposium on Security and Privacy. :463-480.

Today's systems produce a rapidly exploding amount of data, and the data further derives more data, forming a complex data propagation network that we call the data's lineage. There are many reasons that users want systems to forget certain data including its lineage. From a privacy perspective, users who become concerned with new privacy risks of a system often want the system to forget their data and lineage. From a security perspective, if an attacker pollutes an anomaly detector by injecting manually crafted data into the training data set, the detector must forget the injected data to regain security. From a usability perspective, a user can remove noise and incorrect entries so that a recommendation engine gives useful recommendations. Therefore, we envision forgetting systems, capable of forgetting certain data and their lineages, completely and quickly. This paper focuses on making learning systems forget, the process of which we call machine unlearning, or simply unlearning. We present a general, efficient unlearning approach by transforming learning algorithms used by a system into a summation form. To forget a training data sample, our approach simply updates a small number of summations – asymptotically faster than retraining from scratch. Our approach is general, because the summation form is from the statistical query learning in which many machine learning algorithms can be implemented. Our approach also applies to all stages of machine learning, including feature selection and modeling. Our evaluation, on four diverse learning systems and real-world workloads, shows that our approach is general, effective, fast, and easy to use.
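
To illustrate the summation-form idea, here is a minimal sketch of a learner kept as raw per-class counts, where forgetting a training sample only subtracts its contribution from those sums rather than retraining from scratch. The toy feature/label layout is an assumption, not one of the paper's evaluated systems.

```python
# Minimal sketch of "unlearning" for a summation-form learner (multinomial
# Naive Bayes kept as raw counts). Forgetting a sample only updates sums,
# which is asymptotically faster than retraining from scratch.
import math
from collections import defaultdict

class UnlearnableNB:
    def __init__(self):
        self.class_counts = defaultdict(int)                       # sum per class
        self.feature_counts = defaultdict(lambda: defaultdict(int))

    def learn(self, features, label):
        self.class_counts[label] += 1
        for f in features:
            self.feature_counts[label][f] += 1

    def unlearn(self, features, label):
        # Subtract the sample's contribution from each summation.
        self.class_counts[label] -= 1
        for f in features:
            self.feature_counts[label][f] -= 1

    def score(self, features, label, vocab_size, alpha=1.0):
        # Smoothed log-likelihood of the features under the class (illustrative).
        total = sum(self.feature_counts[label].values())
        return sum(
            math.log((self.feature_counts[label][f] + alpha) /
                     (total + alpha * vocab_size))
            for f in features
        )

nb = UnlearnableNB()
nb.learn(["GET", "/admin"], "attack")
nb.learn(["GET", "/index"], "benign")
nb.unlearn(["GET", "/admin"], "attack")   # forget the polluted sample in O(|features|)
```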

2017-02-21
W. Ketpan, S. Phonsri, R. Qian, M. Sellathurai.  2015.  "On the Target Detection in OFDM Passive Radar Using MUSIC and Compressive Sensing". 2015 Sensor Signal Processing for Defence (SSPD). :1-5.

The passive radar, also known as Green Radar, exploits available commercial communication signals and is useful for target tracking and detection in general. Recent communications standards frequently employ Orthogonal Frequency Division Multiplexing (OFDM) waveforms and wide bandwidths for broadcasting. This paper focuses on recent developments of target detection algorithms in the OFDM passive radar framework, where the channel estimates have been derived using the matched filter concept with knowledge of the transmitted signals. The MUSIC algorithm, which has been modified to solve this two-dimensional delay-Doppler detection problem, is first reviewed. As the target detection problem can be represented with sparse signals, this paper employs compressive sensing to compare with the detection capability of the 2-D MUSIC algorithm. It is found that the previously proposed single-time-sample compressive sensing cannot significantly reduce the leakage from the direct signal component. Furthermore, this paper proposes a compressive sensing method utilizing multiple time samples, namely l1-SVD, for the detection of multiple targets. In the comparison between MUSIC and compressive sensing, the results show that l1-SVD can decrease the direct signal leakage, but its computational requirements remain a major issue. This paper also presents the detection performance of these two algorithms for closely spaced targets.
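
For readers unfamiliar with MUSIC, the following is a minimal one-dimensional sketch of the pseudospectrum computation; the paper applies a two-dimensional delay-Doppler variant, and the sample rate, tone frequency, and window sizes here are illustrative assumptions.

```python
# Hedged 1-D sketch of the MUSIC principle: project steering vectors onto the
# noise subspace of the sample covariance and peak-pick the pseudospectrum.
import numpy as np

fs, N, M = 1000.0, 512, 64          # sample rate, samples, covariance window
t = np.arange(N) / fs
x = np.exp(2j*np.pi*123.0*t) + 0.1*(np.random.randn(N) + 1j*np.random.randn(N))

# Snapshot matrix and sample covariance.
snaps = np.array([x[i:i+M] for i in range(N - M)])
R = snaps.conj().T @ snaps / snaps.shape[0]

# Noise subspace = eigenvectors of the smallest eigenvalues (one source assumed).
eigval, eigvec = np.linalg.eigh(R)
En = eigvec[:, :-1]

freqs = np.linspace(0, fs/2, 2000)
a = np.exp(2j*np.pi*np.outer(np.arange(M), freqs) / fs)     # steering vectors
pseudo = 1.0 / np.sum(np.abs(En.conj().T @ a)**2, axis=0)   # MUSIC pseudospectrum
print("estimated tone: %.1f Hz" % freqs[np.argmax(pseudo)])
```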

R. Lee, L. Mullen, P. Pal, D. Illig.  2015.  "Time of flight measurements for optically illuminated underwater targets using Compressive Sampling and Sparse reconstruction". OCEANS 2015 - MTS/IEEE Washington. :1-6.

Compressive Sampling and Sparse reconstruction theory is applied to a linearly frequency modulated continuous wave hybrid lidar/radar system. The goal is to show that high resolution time of flight measurements to underwater targets can be obtained utilizing far fewer samples than dictated by Nyquist sampling theorems. Traditional mixing/down-conversion and matched filter signal processing methods are reviewed and compared to the Compressive Sampling and Sparse Reconstruction methods. Simulated evidence is provided to show the possible sampling rate reductions, and experiments are used to observe the effects that turbid underwater environments have on recovery. Results show that by using compressive sensing theory and sparse reconstruction, it is possible to achieve significant sample rate reduction while maintaining centimeter range resolution.
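
A minimal sketch of the compressive sampling / sparse reconstruction idea: a sparse range profile is recovered from far fewer random measurements than Nyquist sampling would require. Orthogonal matching pursuit is used here only as a stand-in solver, and the dimensions, sparsity, and noise level are illustrative assumptions rather than the hybrid lidar/radar system's parameters.

```python
# Hedged sketch of sparse recovery from sub-Nyquist random projections.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 512, 96, 3                 # signal length, measurements, nonzeros

x = np.zeros(n)                      # sparse target returns (time-of-flight bins)
x[[60, 200, 201]] = [1.0, 0.6, 0.4]

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = A @ x + 0.01 * rng.standard_normal(m)      # compressive measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(A, y)
x_hat = omp.coef_
print("recovered bins:", np.nonzero(np.abs(x_hat) > 0.05)[0])
```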

2017-02-14
M. Völp, N. Asmussen, H. Härtig, B. Nöthen, G. Fettweis.  2015.  "Towards dependable CPS infrastructures: Architectural and operating-system challenges". 2015 IEEE 20th Conference on Emerging Technologies Factory Automation (ETFA). :1-8.

Cyber-physical systems (CPSs), due to their direct influence on the physical world, have to meet extended security and dependability requirements. This is particularly true for CPS that operate in close proximity to humans or that control resources that, when tampered with, put all our lives at stake. In this paper, we review the challenges and some early solutions that arise at the architectural and operating-system level when we require cyber-physical systems and CPS infrastructure to withstand advanced and persistent threats. We found that although some of the challenges we identified are already matched by rudimentary solutions, further research is required to ensure sustainable and dependable operation of physically exposed CPS infrastructure and, more importantly, to guarantee graceful degradation in case of malfunction or attack.

M. Q. Ali, A. B. Ashfaq, E. Al-Shaer, Q. Duan.  2015.  "Towards a science of anomaly detection system evasion". 2015 IEEE Conference on Communications and Network Security (CNS). :460-468.

A fundamental drawback of current anomaly detection systems (ADSs) is the ability of a skilled attacker to evade detection. This is due to the flawed assumption that an attacker does not have any information about an ADS. Advanced persistent threats that are capable of monitoring network behavior can always estimate some information about ADSs which makes these ADSs susceptible to evasion attacks. Hence in this paper, we first assume the role of an attacker to launch evasion attacks on anomaly detection systems. We show that the ADSs can be completely paralyzed by parameter estimation attacks. We then present a mathematical model to measure evasion margin with the aim to understand the science of evasion due to ADS design. Finally, to minimize the evasion margin, we propose a key-based randomization scheme for existing ADSs and discuss its robustness against evasion attacks. Case studies are presented to illustrate the design methodology and extensive experimentation is performed to corroborate the results.
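
The paper's key-based randomization scheme is not specified in this abstract; as one hedged illustration of the general idea, the sketch below draws the detection threshold per time window from a keyed hash, so that an attacker probing the detector cannot estimate a single fixed operating point. All names and parameters are hypothetical.

```python
# Hedged sketch of keyed randomization of an anomaly detector's threshold.
import hashlib

SECRET_KEY = b"operator-secret"      # assumption: known only to the defender

def keyed_threshold(window_id, base=3.0, spread=1.0):
    # Deterministic for the defender, unpredictable without the key.
    digest = hashlib.sha256(SECRET_KEY + str(window_id).encode()).digest()
    u = int.from_bytes(digest[:8], "big") / 2**64          # uniform in [0, 1)
    return base + spread * u

def is_anomalous(window_id, score):
    return score > keyed_threshold(window_id)

print(is_anomalous(17, 3.4))   # decision depends on the keyed, per-window threshold
```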

A. A. Zewail, A. Yener.  2015.  "The two-hop interference untrusted-relay channel with confidential messages". 2015 IEEE Information Theory Workshop - Fall (ITW). :322-326.

This paper considers the two-user interference relay channel where each source wishes to communicate to its destination a message that is confidential from the other destination. Furthermore, the relay, that is the enabler of communication, due to the absence of direct links, is untrusted. Thus, the messages from both sources need to be kept secret from the relay as well. We provide an achievable secure rate region for this network. The achievability scheme utilizes structured codes for message transmission, cooperative jamming and scaled compute-and-forward. In particular, the sources use nested lattice codes and stochastic encoding, while the destinations jam using lattice points. The relay decodes two integer combinations of the received lattice points and forwards, using Gaussian codewords, to both destinations. The achievability technique provides the insight that we can utilize the untrusted relay node as an encryption block in a two-hop interference relay channel with confidential messages.
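
As a rough, hedged sketch of the setting (the notation, channel gains, and exact combination structure are assumptions, not the paper's formulation), the untrusted relay observes a superposition of the sources' lattice codewords and the destinations' lattice-based jamming signals and decodes only integer combinations of lattice points:

```latex
% Relay observation: sources' codewords x_1, x_2 and destinations' jamming u_1, u_2
y_R = h_1 x_1 + h_2 x_2 + g_1 u_1 + g_2 u_2 + z_R .
% Instead of the individual messages, the relay decodes two integer combinations
% of the received lattice points (compute-and-forward), e.g.
t_k = \left[ \sum_{i} a_{k,i}\, x_i + \sum_{j} b_{k,j}\, u_j \right] \bmod \Lambda ,
\qquad k = 1, 2,
% which it re-encodes with Gaussian codewords and forwards; each destination,
% knowing its own jamming signal, can recover its intended message while the
% relay learns neither confidential message.
```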

2016-12-09
Tao Xie, University of Illinois at Urbana-Champaign, William Enck, North Carolina State University.  2016.  Text Analytics for Security.

Invited Tutorial, Symposium and Bootcamp on the Science of Security (HotSoS 2016), April 2016.

2016-11-14
Dong Jin, Illinois Institute of Technology.  2016.  Towards a Secure and Resilient Industrial Control System with Software-Defined Networking.

Modern industrial control systems (ICSes) are increasingly adopting Internet technology to boost control efficiency, which unfortunately opens up a new frontier for cyber-security. People have typically applied existing Internet security techniques, such as firewalls, or anti-virus or anti-spyware software. However, those security solutions can only provide fine-grained protection at single devices. To address this, we design a novel software-defined networking (SDN) architecture that offers the global visibility of a control network infrastructure, and we investigate innovative SDN-based applications with the focus of ICS security, such as network verification and self-healing phasor measurement unit (PMU) networks. We are also conducting rigorous evaluation using the IIT campus microgrid as well as a high-fidelity testbed combining network emulation and power system simulation.

Illinois Lablet Information Trust Institute, Joint Trust and Security/Science of Security Seminar, by Dong (Kevin) Jin, March 15, 2016.
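
As a hedged illustration of the "network verification and self-healing" applications mentioned in this talk, the sketch below checks, on a toy topology, that every PMU still reaches the control center after a link failure and prints a backup path that a controller could install as new flow rules. The topology, node names, and use of networkx are assumptions for illustration, not the presented system.

```python
# Hedged sketch of a controller-side reachability check and reroute.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("pmu1", "sw1"), ("pmu2", "sw2"),
    ("sw1", "sw2"), ("sw1", "sw3"), ("sw2", "sw3"),
    ("sw3", "control_center"),
])

def verify_and_heal(graph, pmus, sink="control_center"):
    for pmu in pmus:
        if nx.has_path(graph, pmu, sink):
            print(pmu, "->", nx.shortest_path(graph, pmu, sink))   # backup path
        else:
            print(pmu, "is isolated: no flow rules can restore the path")

G.remove_edge("sw1", "sw3")                 # simulate a link failure
verify_and_heal(G, ["pmu1", "pmu2"])        # pmu1 reroutes via sw2 -> sw3
```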

2016-11-11
Dong Jin, Illinois Institute of Technology.  2016.  Towards a Secure and Resilient Industrial Control System with Software-Defined Networking.

Modern industrial control systems (ICSes) are increasingly adopting Internet technology to boost control efficiency, which unfortunately opens up a new frontier for cyber-security. People have typically applied existing Internet security techniques, such as firewalls, or anti-virus or anti-spyware software. However, those security solutions can only provide fine-grained protection at single devices. To address this, we design a novel software-defined networking (SDN) architecture that offers the global visibility of a control network infrastructure, and we investigate innovative SDN-based applications with the focus of ICS security, such as network verification and self-healing phasor measurement unit (PMU) networks. We are also conducting rigorous evaluation using the IIT campus microgrid as well as a high-fidelity testbed combining network emulation and power system simulation.

Presented at the Illinois ITI Trust and Security/Science of Security Seminar, March 15, 2016.

2016-10-06
Aiping Xiong, Robert Proctor, Wanling Zou, Ninghui Li.  2016.  Tracking users’ fixations when evaluating the validity of a web site.

Phishing refers to attacks over the Internet that often proceed in the following manner. An unsolicited email is sent by the deceiver posing as a legitimate party, with the intent of getting the user to click on a link that leads to a fraudulent webpage. This webpage mimics the authentic one of a reputable organization and requests personal information such as passwords and credit card numbers from the user. If the phishing attack is successful, that personal information can then be used for various illegal activities by the perpetrator. The most reliable sign of a phishing website may be that its domain name is incorrect in the address bar. In recognition of this, all major web browsers now use domain highlighting, that is, the domain name is shown in bold font. Domain highlighting is based on the assumption that users will attend to the address bar and that they will be able to distinguish legitimate from illegitimate domain names. We previously found little evidence for the effectiveness of domain highlighting, even when participants were directed to look at the address bar, in a study with many participants conducted online through Mechanical Turk. The present study was conducted in a laboratory setting that allowed us to have better control over the viewing conditions and measure the parts of the display at which the users looked. We conducted a laboratory experiment to assess whether directing users to attend to the address bar and the use of domain highlighting assist them at detecting fraudulent webpages. An Eyelink 1000plus eye tracker was used to monitor participants’ gaze patterns throughout the experiment. 48 participants were recruited from an undergraduate subject pool; half had been phished previously and half had not. They were required to evaluate the trustworthiness of webpages (half authentic and half fraudulent) in two trial blocks. In the first block, participants were instructed to judge the webpage’s legitimacy by any information on the page. In the second block, they were directed specifically to look at the address bar. Whether or not the domain name was highlighted in the address bar was manipulated between subjects. Results confirmed that the participants could differentiate the legitimate and fraudulent webpages to a significant extent. Participants rarely looked at the address bar during the trial block in which they were not directed to the address bar. The percentage of time spent looking at the address bar increased significantly when the participants were directed to look at it. The number of fixations on the address bar also increased, with both measures indicating that more attention was allocated to the address bar when it was emphasized. When participants were directed to look at the address bar, correct decisions were improved slightly for fraudulent webpages (“unsafe”) but not for the authentic ones (“safe”). Domain highlighting had little influence even when participants were directed to look at the address bar, suggesting that participants do not rely on the domain name for their decisions about webpage legitimacy. Without the general knowledge of domain names and specific knowledge about particular domain names, domain highlighting will not be effective.

2016-04-11
Haining Chen, Omar Chowdhury, Ninghui Li, Warut Khern-Am-Nuai, Suresh Chari, Ian Molloy, Youngja Park.  2016.  Tri-Modularization of Firewall Policies. ACM Symposium on Access Control Models and Technologies (SACMAT).

Firewall policies are notorious for having misconfiguration errors, which can defeat their intended purpose of protecting hosts in the network from malicious users. We believe this is because today's firewall policies are mostly monolithic. Inspired by ideas from modular programming and code refactoring, in this work we introduce three kinds of modules: primary, auxiliary, and template, which facilitate the refactoring of a firewall policy into smaller, reusable, comprehensible, and more manageable components. We present algorithms for generating each of the three modules for a given legacy firewall policy. We also develop ModFP, an automated tool for converting legacy firewall policies represented as access control lists to their modularized format. With the help of ModFP, when examining several real-world policies with sizes ranging from dozens to hundreds of rules, we were able to identify subtle errors.
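
To give a flavor of the refactoring idea (not ModFP's actual algorithm, which this abstract does not detail), the sketch below pulls a contiguous group of rules that repeats across policy sections into a named, reusable module; the rule syntax and module name are hypothetical.

```python
# Hedged sketch: factor a repeated rule group out of a monolithic policy.
from collections import Counter

policy = {
    "eth0": [("deny", "10.0.0.0/8", "any"), ("permit", "any", "tcp/443"), ("permit", "any", "tcp/80")],
    "eth1": [("permit", "any", "tcp/443"), ("permit", "any", "tcp/80"), ("deny", "any", "any")],
}

def extract_template(policy, min_sections=2, size=2):
    # Count contiguous rule groups of the given size across policy sections.
    groups = Counter()
    for rules in policy.values():
        for i in range(len(rules) - size + 1):
            groups[tuple(rules[i:i + size])] += 1
    common = [g for g, n in groups.items() if n >= min_sections]
    return {"web_access_template": list(common[0])} if common else {}

print(extract_template(policy))
# {'web_access_template': [('permit', 'any', 'tcp/443'), ('permit', 'any', 'tcp/80')]}
```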
