Biblio

Found 16998 results

Journal Article
Chatterjee, Urbi, Govindan, Vidya, Sadhukhan, Rajat, Mukhopadhyay, Debdeep, Chakraborty, Rajat Subhra, Mahata, Debashis, Prabhu, Mukesh M.  2019.  Building PUF Based Authentication and Key Exchange Protocol for IoT Without Explicit CRPs in Verifier Database. IEEE Transactions on Dependable and Secure Computing. 16:424–437.
Physically Unclonable Functions (PUFs) promise to be a critical hardware primitive for providing unique identities to billions of connected devices in the Internet of Things (IoT). In traditional authentication protocols, a user presents a set of credentials with an accompanying proof such as a password or digital certificate. However, IoT needs more evolved methods, as these classical techniques suffer from the pressing problems of password dependency and the inability to bind access requests to the “things” from which they originate. Additionally, the protocols need to be lightweight and heterogeneous. Although PUFs seem promising for developing such mechanisms, they pose an open problem: how to build such a mechanism without storing the secret challenge-response pairs (CRPs) explicitly at the verifier end. In this paper, we develop an authentication and key exchange protocol by combining the ideas of Identity-based Encryption (IBE), PUFs, and a keyed hash function to show that this combination can do away with that requirement. The security of the protocol is proved formally under Session Key Security and the Universal Composability Framework. A prototype of the protocol has been implemented to realize a secured video surveillance camera using a combination of an Intel Edison board and a Digilent Nexys-4 FPGA board with an Artix-7 FPGA, together serving as the IoT node. We show that, although the stand-alone video camera can be subjected to a man-in-the-middle attack via IP spoofing using standard network penetration tools, the camera augmented with the proposed protocol resists such attacks, making the protocol readily deployable in IoT infrastructures.
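
A minimal sketch may make the CRP-free pattern concrete. The sketch below idealizes the PUF as a device-unique keyed function and lets the verifier re-derive expected responses from a master secret instead of a stored CRP table; the names, the master-key derivation, and the HMAC construction are illustrative assumptions, not the paper's actual IBE-plus-keyed-hash protocol.

```python
# Toy sketch of CRP-free challenge-response authentication (assumed design,
# not the paper's protocol): the verifier recomputes expected PUF responses
# on the fly, so no challenge-response pair is ever stored.
import hashlib
import hmac
import os

MASTER_KEY = os.urandom(32)              # held only by the verifier

def provision(device_id: bytes) -> bytes:
    # Device-unique secret fixed at enrollment; stands in for the PUF.
    return hmac.new(MASTER_KEY, device_id, hashlib.sha256).digest()

def puf_response(device_secret: bytes, challenge: bytes) -> bytes:
    # Models PUF(challenge) as evaluated on the device.
    return hmac.new(device_secret, challenge, hashlib.sha256).digest()

def verify(device_id: bytes, challenge: bytes, response: bytes) -> bool:
    # Re-derive the expected response instead of looking up a CRP database.
    expected = puf_response(provision(device_id), challenge)
    return hmac.compare_digest(expected, response)

dev_id = b"camera-01"
secret = provision(dev_id)               # done once, inside the device
c = os.urandom(16)                       # verifier's fresh challenge
assert verify(dev_id, c, puf_response(secret, c))
```
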
Cuong Pham, University of Illinois at Urbana-Champaign, Zachary J. Estrada, University of Illinois at Urbana-Champaign, Phuong Cao, University of Illinois at Urbana-Champaign, Zbigniew Kalbarczyk, University of Illinois at Urbana-Champaign, Ravishankar K. Iyer, University of Illinois at Urbana-Champaign.  2014.  Building Reliable and Secure Virtual Machines using Architectural Invariants. IEEE Security and Privacy. 12(5):82-85.

Reliability and security tend to be treated separately because they appear orthogonal: reliability focuses on accidental failures, security on intentional attacks. Because of the apparent dissimilarity between the two, tools to detect and recover from different classes of failures and attacks are usually designed and implemented differently. So, integrating support for reliability and security in a single framework is a significant challenge.

Here, we discuss how to address this challenge in the context of cloud computing, for which reliability and security are growing concerns. Because cloud deployments usually consist of commodity hardware and software, efficient monitoring is key to achieving resiliency. Although reliability and security monitoring might use different types of analytics, the same sensing infrastructure can provide inputs to monitoring modules.

We split monitoring into two phases: logging and auditing. Logging captures data or events; it constitutes the framework’s core and is common to all monitors. Auditing analyzes data or events; it’s implemented and operated independently by each monitor. To support a range of auditing policies, logging must capture a complete view, including both actions and states of target systems. It must also provide useful, trustworthy information regarding the captured view.

We applied these principles when designing HyperTap, a hypervisor-level monitoring framework for virtual machines (VMs). Unlike most VM-monitoring techniques, HyperTap employs hardware architectural invariants (hardware invariants, for short) to establish the root of trust for logging. Hardware invariants are properties defined and enforced by a hardware platform (for example, the x86 instruction set architecture). Additionally, HyperTap supports continuous, event-driven VM monitoring, which enables both capturing the system state and responding rapidly to actions of interest.

Kounelis, I., Baldini, G., Neisse, R., Steri, G., Tallacchini, M., Guimaraes Pereira, A.  2014.  Building Trust in the Human–Internet of Things Relationship. Technology and Society Magazine, IEEE. 33:73-80.

Our vision in this paper is that agency, the individual ability to intervene in and tailor the system, is a crucial element in building trust in IoT technologies. Following up on this vision, we first address the issue of agency, namely the individual capability to make free decisions, as a relevant driver in building trusted human-IoT relations, and discuss how agency should be embedded in digital systems. We then present the main challenges posed by existing approaches to implementing this vision. Finally, we describe our proposal for a model-based approach that realizes the agency concept, including a prototype implementation.

Zhuo Lu, Wenye Wang, Wang, C.  2015.  Camouflage Traffic: Minimizing Message Delay for Smart Grid Applications under Jamming. Dependable and Secure Computing, IEEE Transactions on. 12:31-44.

Smart grid is a cyber-physical system that integrates power infrastructures with information technologies. To facilitate efficient information exchange, wireless networks have been proposed to be widely used in the smart grid. However, the jamming attack that constantly broadcasts radio interference is a primary security threat to prevent the deployment of wireless networks in the smart grid. Hence, spread spectrum systems, which provide jamming resilience via multiple frequency and code channels, must be adapted to the smart grid for secure wireless communications, while at the same time providing latency guarantees for control messages. An open question is how to minimize message delay for timely smart grid communication under any potential jamming attack. To address this issue, we provide a paradigm shift from the case-by-case methodology, which is widely used in existing works to investigate well-adopted attack models, to the worst-case methodology, which offers delay performance guarantees for smart grid applications under any attack. We first define a generic jamming process that characterizes a wide range of existing attack models. Then, we show that in all strategies under the generic process, the worst-case message delay is a U-shaped function of network traffic load. This indicates that, interestingly, increasing a fair amount of traffic can in fact improve the worst-case delay performance. As a result, we demonstrate a lightweight yet promising system, transmitting adaptive camouflage traffic (TACT), to combat jamming attacks. TACT minimizes the message delay by generating extra traffic called camouflage to balance the network load at the optimum. Experiments show that TACT can decrease the probability that a message is not delivered on time by an order of magnitude.
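
The U-shaped delay result lends itself to a small numeric illustration. In the sketch below the delay curve is an invented stand-in for the paper's model and the application load value is an assumption; the point is only that TACT-style camouflage tops up the load to the curve's minimum.

```python
# Hypothetical U-shaped worst-case delay: high at low load (sparse messages
# are easy jamming targets) and high at heavy load (congestion).
import numpy as np

def worst_case_delay(load):
    return 1.0 / (load + 0.05) + 8.0 * load ** 2

loads = np.linspace(0.01, 1.0, 1000)
optimal_load = loads[np.argmin(worst_case_delay(loads))]

app_load = 0.1                                   # actual application traffic
camouflage = max(0.0, optimal_load - app_load)   # extra traffic TACT adds
print(f"optimal load = {optimal_load:.2f}, camouflage rate = {camouflage:.2f}")
```
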

Anwar, Z., Malik, A.W.  2014.  Can a DDoS Attack Meltdown My Data Center? A Simulation Study and Defense Strategies. Communications Letters, IEEE. 18:1175-1178.

The goal of this letter is to explore the extent to which the vulnerabilities plaguing the Internet, particularly susceptibility to distributed denial-of-service (DDoS) attacks, impact the Cloud. DDoS has been known to disrupt Cloud services, but could it do worse by permanently damaging server and switch hardware? Services are hosted in data centers with thousands of servers generating large amounts of heat. Heating, ventilation, and air-conditioning (HVAC) systems prevent server downtime due to overheating. These are remotely managed using network management protocols that are susceptible to network attacks. Recently, Cloud providers have experienced outages due to HVAC malfunctions. Our contributions include a network simulation to study the feasibility of such an attack motivated by our experiences of such a security incident in a real data center. It demonstrates how a network simulator can study the interplay of the communication and thermal properties of a network and help prevent the Cloud provider's worst nightmare: meltdown of the data center as a result of a DDoS attack.

Manson, Daniel, Pike, Ronald.  2014.  The Case for Depth in Cybersecurity Education. ACM Inroads. 5:47–52.

In his book Outliers, Malcolm Gladwell describes the 10,000-Hour Rule, a key to success in any field, as simply a matter of practicing a specific task for 20 hours of work a week over 10 years [10]. Ongoing changes in technology and national security needs require aspiring excellent cybersecurity professionals to set a goal of 10,000 hours of relevant, hands-on skill development. The education system today is ill prepared to meet the challenge of producing an adequate number of cybersecurity professionals, but programs that use competitions and learning environments that teach depth are filling this void.

Rastogi, V., Yan Chen, Xuxian Jiang.  2014.  Catch Me If You Can: Evaluating Android Anti-Malware Against Transformation Attacks. Information Forensics and Security, IEEE Transactions on. 9:99-108.

Mobile malware threats (e.g., on Android) have recently become a real concern. In this paper, we evaluate state-of-the-art commercial mobile anti-malware products for Android and test how resistant they are to various common obfuscation techniques (even with known malware). Such an evaluation is important for not only measuring the available defense against mobile malware threats but also proposing effective, next-generation solutions. We developed DroidChameleon, a systematic framework with various transformation techniques, and used it for our study. Our results on 10 popular commercial anti-malware applications for Android are worrisome: none of these tools is resistant to common malware transformation techniques. In addition, a majority of them can be trivially defeated by applying slight transformations to known malware, with little effort required of malware authors. Finally, in light of our results, we propose possible remedies for improving the current state of malware detection on mobile devices.

Al-Dhaqm, A., Razak, S. A., Dampier, D. A., Choo, K. R., Siddique, K., Ikuesan, R. A., Alqarni, A., Kebande, V. R.  2020.  Categorization and Organization of Database Forensic Investigation Processes. IEEE Access. 8:112846–112858.
Database forensic investigation (DBFI) is an important area of research within digital forensics. Its importance is growing as digital data becomes more extensive and commonplace. The challenges associated with DBFI are numerous, and one of them is the lack of a harmonized DBFI process for investigators to follow. In this paper, therefore, we survey the existing literature to understand the body of work already accomplished. Furthermore, we build on the existing literature to present a harmonized DBFI process using design science research methodology. This harmonized DBFI process has been developed around three key categories (i.e., planning, preparation and pre-response; acquisition and preservation; and analysis and reconstruction). Furthermore, the process has been designed to avoid confusion and ambiguity, and to provide practitioners with a systematic method of performing DBFI with a higher degree of certainty.
Yang, Kun, Forte, Domenic, Tehranipoor, Mark M.  2017.  CDTA: A Comprehensive Solution for Counterfeit Detection, Traceability, and Authentication in the IoT Supply Chain. ACM Transactions on Design Automation of Electronic Systems (TODAES). 22:42:1-42:31.

The Internet of Things (IoT) is transforming the way we live and work by increasing the connectedness of people and things on a scale that was once unimaginable. However, the vulnerabilities in the IoT supply chain have raised serious concerns about the security and trustworthiness of IoT devices and the components within them. Testing for device provenance, detection of counterfeit integrated circuits (ICs) and systems, and traceability of IoT devices are challenging issues to address. In this article, we develop a novel radio-frequency identification (RFID)-based system suitable for counterfeit detection, traceability, and authentication in the IoT supply chain called CDTA. CDTA is composed of different types of on-chip sensors and in-system structures that collect necessary information to detect multiple counterfeit IC types (recycled, cloned, etc.), track and trace IoT devices, and verify the overall system authenticity. Central to CDTA is an RFID tag employed as storage and a channel to read the information from different types of chips on the printed circuit board (PCB) in both power-on and power-off scenarios. CDTA sensor data can also be sent to a remote server for authentication via an encrypted Ethernet channel when the IoT device is deployed in the field. A novel board ID generator is implemented by combining outputs of physical unclonable functions (PUFs) embedded in the RFID tag and different chips on the PCB. A lightweight RFID protocol is proposed to enable mutual authentication between RFID readers and tags. We also implement secure inter-chip communication on the PCB. Simulations and experimental results using Spartan 3E FPGAs demonstrate the effectiveness of this system. The efficiency of the radio-frequency (RF) communication has also been verified via a PCB prototype with a printed slot antenna.
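
One piece of the design above, the board ID generator, can be sketched generically: combine the PUF responses of the RFID tag and the on-board chips into a single identifier. Hashing the concatenated responses, as below, is an assumed combiner, not necessarily the paper's construction.

```python
# Assumed combiner: derive a 128-bit board ID from the tag PUF response plus
# the PUF responses of the chips on the PCB (response order must be fixed).
import hashlib

def board_id(puf_responses: list[bytes]) -> bytes:
    h = hashlib.sha256()
    for r in puf_responses:
        h.update(r)
    return h.digest()[:16]

print(board_id([b"tag-puf-bits", b"chip1-puf-bits", b"chip2-puf-bits"]).hex())
```
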

Awad, A., Matthews, A., Qiao, Y., Lee, B.  2017.  Chaotic Searchable Encryption for Mobile Cloud Storage. IEEE Transactions on Cloud Computing. PP:1–1.

This paper considers the security problem of outsourcing storage from user devices to the cloud. A secure searchable encryption scheme is presented to enable searching of encrypted user data in the cloud. The scheme simultaneously supports fuzzy keyword searching and matched results ranking, which are two important factors in facilitating practical searchable encryption. A chaotic fuzzy transformation method is proposed to support secure fuzzy keyword indexing, storage and query. A secure posting list is also created to rank the matched results while maintaining the privacy and confidentiality of the user data, and saving the resources of the user mobile devices. Comprehensive tests have been performed and the experimental results show that the proposed scheme is efficient and suitable for a secure searchable cloud storage system.

Ching-Kun Chen, Chun-Liang Lin, Shyan-Lung Lin, Yen-Ming Chiu, Cheng-Tang Chiang.  2014.  A Chaotic Theoretical Approach to ECG-Based Identity Recognition [Application Notes]. Computational Intelligence Magazine, IEEE. 9:53-63.

Sophisticated technologies realized from applying the idea of biometric identification are increasingly applied in entrance security management systems, private document protection, and security access control. Common biometric identifiers include voice, attitude, keystroke, signature, iris, face, and palm or fingerprints. In addition, novel identification technologies based on the individual's biometric features are under development [1-4].

Vijayakumar, P., Bose, S., Kannan, A.  2014.  Chinese remainder theorem based centralised group key management for secure multicast communication. Information Security, IET. 8:179-187.

Designing a centralised group key management scheme with minimal computation complexity to support dynamic secure multicast communication is a challenging issue in secure multimedia multicast. In this study, the authors propose a Chinese remainder theorem-based group key management scheme that drastically reduces the computation complexity of the key server. The computation complexity of the key server is reduced to O(1) in the proposed algorithm. Moreover, the computation complexity of a group member is also minimised: one modulo division operation is performed when a user join or leave operation occurs in a multicast group. The proposed algorithm has been implemented and tested using a key-star-based key management scheme, and it has been observed that it reduces the computation complexity significantly.
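
The generic CRT pattern behind such schemes is compact enough to sketch; the masking and rekeying details below are assumptions rather than the paper's exact construction. Each member i privately holds a pairwise-coprime modulus n_i and key k_i, and a single broadcast value lets every member recover the group key with one modulo division.

```python
# Sketch of CRT-based group key distribution (assumed details): the server
# solves X mod n_i = group_key XOR k_i for all members, so each member
# recovers the key with a single modulo division plus an XOR.
from sympy.ntheory.modular import crt

members = {"u1": (1000003, 42), "u2": (1000033, 77), "u3": (1000037, 99)}

def broadcast_value(group_key: int) -> int:
    moduli = [n for n, _ in members.values()]
    residues = [group_key ^ k for _, k in members.values()]
    x, _ = crt(moduli, residues)     # solved once at the key server
    return int(x)

group_key = 123456
X = broadcast_value(group_key)
for n, k in members.values():
    assert (X % n) ^ k == group_key  # one modulo division per member
```
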

Vizer, L. M., Sears, A.  2015.  Classifying Text-Based Computer Interactions for Health Monitoring. IEEE Pervasive Computing. 14:64–71.

Detecting early trends indicating cognitive decline can allow older adults to better manage their health, but current assessments present barriers precluding the use of such continuous monitoring by consumers. To explore the effects of cognitive status on computer interaction patterns, the authors collected typed text samples from older adults with and without pre-mild cognitive impairment (PreMCI) and constructed statistical models from keystroke and linguistic features for differentiating between the two groups. Using both feature sets, they obtained a 77.1 percent correct classification rate with 70.6 percent sensitivity, 83.3 percent specificity, and a 0.808 area under the curve (AUC). These results are in line with current assessments for MCI, a more advanced condition, but are obtained using an unobtrusive method. This research contributes a combination of features for text and keystroke analysis and enhances understanding of how clinicians or older adults themselves might monitor for PreMCI through patterns in typed text. It has implications for embedded systems that can enable healthcare providers and consumers to proactively and continuously monitor changes in cognitive function.

Lei, X., Liao, X., Huang, T., Li, H.  2014.  Cloud Computing Service: the Case of Large Matrix Determinant Computation. Services Computing, IEEE Transactions on. PP:1-1.

The cloud computing paradigm provides an alternative and economical service for resource-constrained clients to perform large-scale data computation. Since large matrix determinant computation (DC) is ubiquitous in the fields of science and engineering, a first step is taken in this paper to design a protocol that enables clients to securely, verifiably, and efficiently outsource DC to a malicious cloud. The main idea for protecting privacy is to apply transformations to the original matrix to obtain an encrypted matrix that is sent to the cloud, and then to transform the result returned from the cloud to recover the correct determinant of the original matrix. Afterwards, a randomized Monte Carlo verification algorithm with one-sided error is introduced, whose superiority in designing inexpensive result-verification algorithms for secure outsourcing is well demonstrated. In addition, it is analytically shown that the proposed protocol simultaneously fulfills the goals of correctness, security, robust cheating resistance, and high efficiency. Extensive theoretical analysis and experimental evaluation also show its high efficiency and immediate practicability. It is hoped that the proposed protocol can shed light on the design of other secure outsourcing protocols and inspire companies and working groups to build comprehensive software systems for outsourced scientific computation, which could be profitable by providing large-scale scientific computation services to many potential clients.
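
The transform-and-recover idea is easy to demonstrate on a small instance. The sketch below uses random permutations and a secret diagonal scaling as the hiding transformation, which is an assumed instantiation rather than the paper's exact scheme, and it omits the Monte Carlo verification step.

```python
# Client hides A as A_enc = P1 @ D @ A @ P2 (P1, P2 secret permutations,
# D a secret positive diagonal), outsources det(A_enc), then recovers
# det(A) = det(A_enc) / (det(P1) * det(P2) * prod(D)).
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))                 # client's private matrix

P1 = np.eye(n)[rng.permutation(n)]
P2 = np.eye(n)[rng.permutation(n)]
d = rng.uniform(1.0, 2.0, size=n)               # secret scaling factors
A_enc = P1 @ np.diag(d) @ A @ P2                # sent to the cloud

det_enc = np.linalg.det(A_enc)                  # computed by the cloud

sign = np.linalg.det(P1) * np.linalg.det(P2)    # +1 or -1
det_A = det_enc / (sign * d.prod())             # client-side recovery
assert np.isclose(det_A, np.linalg.det(A))
```
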

Torkura, Kennedy A., Sukmana, Muhammad I. H., Cheng, Feng, Meinel, Christoph.  2020.  CloudStrike: Chaos Engineering for Security and Resiliency in Cloud Infrastructure. IEEE Access. 8:123044–123060.
Most cyber-attacks and data breaches in cloud infrastructure are due to human error and misconfiguration vulnerabilities. Cloud customer-centric tools are imperative for mitigating these issues; however, existing cloud security models are largely unable to tackle these security challenges. Novel security mechanisms are therefore imperative, and we propose Risk-driven Fault Injection (RDFI) techniques to address them. RDFI applies the principles of chaos engineering to cloud security and leverages feedback loops to execute, monitor, analyze, and plan security fault injection campaigns based on a knowledge base. The knowledge base consists of fault models designed from secure baselines, cloud security best practices, and observations derived during iterative fault injection campaigns. These observations help identify vulnerabilities while verifying the correctness of security attributes (integrity, confidentiality, and availability). Furthermore, RDFI proactively supports risk analysis and security hardening efforts by sharing security information with security mechanisms. We have designed and implemented the RDFI strategies, including various chaos engineering algorithms, as a software tool: CloudStrike. Several evaluations have been conducted with CloudStrike against infrastructure deployed on two major public cloud platforms: Amazon Web Services and Google Cloud Platform. The time overhead increases linearly, proportional to increasing attack rates. Also, the analysis of vulnerabilities detected via security fault injection has been used to harden the security of cloud resources, demonstrating the effectiveness of the security information provided by CloudStrike. We therefore contend that our approaches are suitable for overcoming contemporary cloud security issues.
Stanisavljevic, Z., Stanisavljevic, J., Vuletic, P., Jovanovic, Z.  2014.  COALA - System for Visual Representation of Cryptography Algorithms. Learning Technologies, IEEE Transactions on. 7:178-190.

Educational software systems have an increasingly significant presence in the engineering sciences. They aim to improve students' attitudes and knowledge acquisition, typically through visual representation and simulation of complex algorithms, mechanisms, or hardware systems that are often not available to educational institutions. This paper presents a novel software system for CryptOgraphic ALgorithm visuAl representation (COALA), which was developed to support a Data Security course at the School of Electrical Engineering, University of Belgrade. The system allows users to follow the execution of several complex algorithms (DES, AES, RSA, and Diffie-Hellman) on real-world examples in a detailed step-by-step view, with the possibility of forward and backward navigation. The benefits of the COALA system for students are observed through increases in the percentage of students who passed the exam and in the average exam grade during one school year.

Wenli Liu, Xiaolong Zheng, Tao Wang, Hui Wang.  2014.  Collaboration Pattern and Topic Analysis on Intelligence and Security Informatics Research. Intelligent Systems, IEEE. 29:39-46.

In this article, researcher collaboration patterns and research topics in Intelligence and Security Informatics (ISI) are investigated using social network analysis approaches. The collaboration networks exhibit the scale-free property and the small-world effect. From these networks, the authors identify the key researchers, institutions, and three important topics.

Hilal, Allaa R., Basir, Otman.  2016.  A Collaborative Energy-Aware Sensor Management System Using Team Theory. ACM Trans. Embed. Comput. Syst. 15:52:1–52:26.

With limited battery supply, power is a scarce commodity in wireless sensor networks. Thus, to prolong the lifetime of the network, it is imperative that the sensor resources are managed effectively. This task is particularly challenging in heterogeneous sensor networks, for which decisions and compromises regarding sensing strategies must be made under time and resource constraints. In such networks, a sensor has to reason about its current state to take actions that are deemed appropriate with respect to its mission, its energy reserve, and the survivability of the overall network. Sensor management controls and coordinates the use of the sensory suites in a manner that maximizes the success rate of the system in achieving its missions. This article focuses on formulating and developing an autonomous energy-aware sensor management system that strives to achieve network objectives while maximizing network lifetime. A team-theoretic formulation based on the Belief-Desire-Intention (BDI) model and the Joint Intention theory is proposed as a mechanism for effective and energy-aware collaborative decision-making. The proposed system models the collective behavior of the sensor nodes using the Joint Intention theory to enhance sensors’ collaboration and success rate. Moreover, the BDI modeling of the sensor operation and reasoning allows a sensor node to adapt to environment dynamics, the situation-criticality level, and the availability of its own resources. The simulation scenario selected in this work is the surveillance of Waterloo International Airport. Various experiments are conducted to investigate the effect of varying the network size, number of threats, threat agility, and environment dynamism, as well as tracking quality and energy consumption, on the performance of the proposed system. The experimental results demonstrate the merits of the proposed approach compared to the state-of-the-art centralized approach adapted from Atia et al. [2011] and the localized approach of Hilal and Basir [2015] in terms of energy consumption, adaptability, and network lifetime. The results show that the proposed approach consumes 12× less energy than the popular centralized approach.

Dua, Akshay, Bulusu, Nirupama, Feng, Wu-Chang, Hu, Wen.  2014.  Combating Software and Sybil Attacks to Data Integrity in Crowd-Sourced Embedded Systems. ACM Trans. Embed. Comput. Syst. 13:154:1–154:19.

Crowd-sourced mobile embedded systems allow people to contribute sensor data for critical applications, including transportation, emergency response, and eHealth. Data integrity becomes imperative, as malicious participants can launch software and Sybil attacks that modify the sensing platform and data. To address these attacks, we develop (1) a Trusted Sensing Peripheral (TSP) enabling collection of high-integrity raw or aggregated data, and participation in applications requiring additional modalities; and (2) a Secure Tasking and Aggregation Protocol (STAP) enabling aggregation of TSP trusted readings by untrusted intermediaries, while efficiently detecting fabricators. Evaluations demonstrate that TSP and STAP are practical and energy-efficient.

Ding, Shuai, Yang, Shanlin, Zhang, Youtao, Liang, Changyong, Xia, Chenyi.  2014.  Combining QoS Prediction and Customer Satisfaction Estimation to Solve Cloud Service Trustworthiness Evaluation Problems. Know.-Based Syst. 56:216–225.

The collection and combination of assessment data in the trustworthiness evaluation of cloud services is challenging, notably because QoS values may be missing in offline evaluation situations due to the time-consuming and costly nature of cloud service invocation. Considering the fact that many trustworthiness evaluation problems require not only objective measurement but also subjective perception, this paper designs a novel framework named CSTrust for conducting cloud service trustworthiness evaluation by combining QoS prediction and customer satisfaction estimation. The proposed framework considers how to improve the accuracy of QoS value prediction on quantitative trustworthy attributes, as well as how to estimate the customer satisfaction of the target cloud service by taking advantage of the perception ratings on qualitative attributes. The proposed methods are validated through simulations, demonstrating that CSTrust can effectively predict assessment data and produce trustworthiness evaluation results.

Christopher Hannon, Illinois Institute of Technology, Jiaqi Yan, Illinois Institute of Technology, Dong Jin, Illinois Institute of Technology, Chen Chen, Argonne National Laboratory, Jianhui Wang, Argonne National Laboratory.  2018.  Combining Simulation and Emulation Systems for Smart Grid Planning and Evaluation. ACM Transactions on Modeling and Computer Simulation (TOMACS) – Special Issue on PADS. 28(4)

Software-defined networking (SDN) enables efficient network management. As the technology matures, utilities are looking to integrate its benefits into their operations technology (OT) networks. To help the community better understand and evaluate the effects of such integration, we develop DSSnet, a testing platform that combines a power distribution system simulator and an SDN-based network emulator for smart grid planning and evaluation. DSSnet relies on a container-based virtual time system to achieve efficient synchronization between the simulation and emulation systems. To enhance system scalability and usability, we extend DSSnet to support a distributed controller environment. To enhance system fidelity, we extend the virtual time system to support kernel-based switches. We also evaluate the performance of DSSnet and demonstrate its usability with a resilient demand response application case study.

Rommel García, Ignacio Algredo-Badillo, Miguel Morales-Sandoval, Claudia Feregrino-Uribe, René Cumplido.  2014.  A compact FPGA-based processor for the Secure Hash Algorithm SHA-256. Computers & Electrical Engineering. 40:194-202 (40th-year commemorative issue).

This work reports an efficient and compact FPGA processor for the SHA-256 algorithm. The novel processor architecture is based on a custom datapath that exploits the reuse of modules, having as its main component a 4-input Arithmetic Logic Unit not previously reported. This ALU is designed as a result of studying the types of operations in the SHA algorithm, their execution sequence, and the associated dataflow. The processor hardware architecture was modeled in VHDL and implemented in FPGAs. The results obtained from the implementation on a Virtex5 device demonstrate that the proposed design uses fewer resources while achieving higher performance and efficiency, outperforming previous approaches in the literature focused on compact designs and saving around 60% of FPGA slices with increased throughput (Mbps) and efficiency (Mbps/Slice). The proposed SHA processor is well suited for applications like Wi-Fi, TMP (Trusted Mobile Platform), and MTM (Mobile Trusted Module), where the data transfer speed is around 50 Mbps.
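
The hardware motivation is visible in the round computation itself: every SHA-256 round is a handful of 32-bit rotations, bitwise mixes, and multi-operand additions, which is why a shared multi-input ALU pays off. Below is a plain-Python reference of the standard round core (per FIPS 180-4), not a model of the paper's FPGA datapath.

```python
# Reference SHA-256 round: note the multi-operand 32-bit additions (t1 sums
# five words) that a reusable multi-input adder/ALU can serve in hardware.
def rotr(x, n):
    return ((x >> n) | (x << (32 - n))) & 0xFFFFFFFF

def sha256_round(a, b, c, d, e, f, g, h, k, w):
    s1 = rotr(e, 6) ^ rotr(e, 11) ^ rotr(e, 25)
    ch = (e & f) ^ (~e & g)
    t1 = (h + s1 + ch + k + w) & 0xFFFFFFFF
    s0 = rotr(a, 2) ^ rotr(a, 13) ^ rotr(a, 22)
    maj = (a & b) ^ (a & c) ^ (b & c)
    t2 = (s0 + maj) & 0xFFFFFFFF
    # State words shift down one position, with two updated entries.
    return ((t1 + t2) & 0xFFFFFFFF, a, b, c, (d + t1) & 0xFFFFFFFF, e, f, g)
```
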

Thirunarayan, Krishnaprasad, Anantharam, Pramod, Henson, Cory, Sheth, Amit.  2014.  Comparative Trust Management with Applications: Bayesian Approaches Emphasis. Future Gener. Comput. Syst. 31:182–199.

Trust relationships occur naturally in many diverse contexts such as collaborative systems, e-commerce, interpersonal interactions, social networks, and semantic sensor web. As agents providing content and services become increasingly removed from the agents that consume them, the issue of robust trust inference and update becomes critical. There is a need to find online substitutes for traditional (direct or face-to-face) cues to derive measures of trust, and create efficient and robust systems for managing trust in order to support decision-making. Unfortunately, there is neither a universal notion of trust that is applicable to all domains nor a clear explication of its semantics or computation in many situations. We motivate the trust problem, explain the relevant concepts, summarize research in modeling trust and gleaning trustworthiness, and discuss challenges confronting us. The goal is to provide a comprehensive broad overview of the trust landscape, with the nitty-gritties of a handful of approaches. We also provide details of the theoretical underpinnings and comparative analysis of Bayesian approaches to binary and multi-level trust, to automatically determine trustworthiness in a variety of reputation systems including those used in sensor networks, e-commerce, and collaborative environments. Ultimately, we need to develop expressive trust networks that can be assigned objective semantics.
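
For the binary-trust case, the Beta-distribution approach analyzed in the survey admits a compact illustration: positive and negative interaction outcomes update a Beta(alpha, beta) posterior whose mean serves as the trust estimate. The decay parameter below is an assumed evidence-aging mechanism, one common variant among the reputation systems discussed.

```python
# Beta reputation for binary trust: outcomes update Beta(alpha, beta);
# expected trustworthiness is the posterior mean alpha / (alpha + beta).
class BetaTrust:
    def __init__(self, alpha: float = 1.0, beta: float = 1.0):
        self.alpha, self.beta = alpha, beta      # uniform prior

    def update(self, positive: bool, decay: float = 1.0) -> None:
        self.alpha *= decay                      # optionally age old evidence
        self.beta *= decay
        if positive:
            self.alpha += 1
        else:
            self.beta += 1

    def expected_trust(self) -> float:
        return self.alpha / (self.alpha + self.beta)

t = BetaTrust()
for outcome in (True, True, False, True):
    t.update(outcome)
print(round(t.expected_trust(), 3))              # 0.667 after 3 of 4 successes
```
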

Perzyk, Marcin, Kochanski, Andrzej, Kozlowski, Jacek, Soroczynski, Artur, Biernacki, Robert.  2014.  Comparison of Data Mining Tools for Significance Analysis of Process Parameters in Applications to Process Fault Diagnosis. Inf. Sci. 259:380–392.

This paper presents an evaluation of various methodologies used to determine relative significances of input variables in data-driven models. Significance analysis applied to manufacturing process parameters can be a useful tool in fault diagnosis for various types of manufacturing processes. It can also be applied to building models that are used in process control. The relative significances of input variables can be determined by various data mining methods, including relatively simple statistical procedures as well as more advanced machine learning systems. Several methodologies suitable for carrying out classification tasks which are characteristic of fault diagnosis were evaluated and compared from the viewpoint of their accuracy, robustness of results and applicability. Two types of testing data were used: synthetic data with assumed dependencies and real data obtained from the foundry industry. The simple statistical method based on contingency tables revealed the best overall performance, whereas advanced machine learning models, such as ANNs and SVMs, appeared to be of less value.
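
The contingency-table method that performed best here is essentially an independence test between a discretized process parameter and the fault class; the chi-squared test and the counts in the sketch below are illustrative assumptions.

```python
# Cross-tabulate a discretized parameter (rows: low/high) against the
# outcome (cols: ok/faulty); a small p-value marks the parameter as
# significant for fault diagnosis. Counts are invented.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[80, 20],
                  [40, 60]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.4g}")
```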