Biblio
Data mining visualization is an important aspect of big data visualization and analysis. The impact of nature-inspired algorithms, together with that of established computing traditions, on the visualization of storage and data communication needs has been studied. This paper also explores the possibilities of hybridizing data mining with cloud computing, and examines the data-analytical view of these approaches with respect to data storage in big data. Based on these aspects, the methodological advances and open problem statements are analyzed, which helps to explore the computational capabilities and new insights in this domain.
Internet of Things devices and data sources are seeing increased use in various application areas. The proliferation of cheaper sensor hardware has allowed for wider-scale data collection deployments. With increased numbers of deployed sensors and the use of heterogeneous sensor types, there is increased scope for collecting erroneous, inaccurate or inconsistent data. This in turn may lead to inaccurate models built from this data. It is important to evaluate this data as it is collected to determine its validity. This paper presents an analysis of data quality as it is represented in Internet of Things (IoT) systems and some of the limitations of this representation. The paper discusses the use of trust as a heuristic to drive data quality measurements. Trust is a well-established metric that has been used to determine the validity of a piece or source of data in crowd-sourced or other unreliable data collection techniques. The analysis extends to detail an appropriate framework for representing data quality effectively within the big data model and why a trust-backed framework is important, especially for heterogeneously sourced IoT data streams.
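To make the trust heuristic concrete, here is a minimal sketch (not taken from the paper) of how a source-level trust rating could be combined with simple plausibility checks into a per-reading quality score; the `Reading` structure, the 50/50 weighting, and the range check are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    source_id: str
    value: float
    timestamp: float

def quality_score(reading, source_trust, expected_range=(0.0, 100.0)):
    """Combine a source-level trust rating with a simple validity check.

    `source_trust` maps source_id -> trust in [0, 1] (e.g. built up from past
    agreement with neighbouring sensors); the range check stands in for richer
    plausibility rules.
    """
    lo, hi = expected_range
    in_range = 1.0 if lo <= reading.value <= hi else 0.0
    trust = source_trust.get(reading.source_id, 0.5)  # unknown sources get neutral trust
    return 0.5 * trust + 0.5 * in_range

trust = {"sensor-a": 0.9, "sensor-b": 0.2}
print(quality_score(Reading("sensor-b", 250.0, 0.0), trust))  # low trust, out of range -> 0.1
```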
One important aspect in protecting Cyber Physical Systems (CPS) is ensuring that the proper control and measurement signals are propagated within the control loop. The CPS research community has been developing a large set of check blocks that can be integrated within the control loop to check signals against various types of attacks (e.g., false data injection attacks). Unfortunately, it is not possible to integrate all these “checks” within the control loop, as the overhead introduced when checking signals may violate the delay constraints of the control loop. Moreover, these blocks do not operate in complete isolation of each other, as dependencies exist among them in terms of their effectiveness against detecting a subset of attacks. Thus, it becomes a challenging and complex problem to assign the proper checks, especially in the presence of a rational adversary who can observe the assigned check blocks and optimize her own attack strategies accordingly. This paper tackles the inherent state-action space explosion that arises in securing CPS by developing DeepBLOC (DB), a framework in which Deep Reinforcement Learning algorithms are utilized to provide optimal/sub-optimal assignments of check blocks to signals. The framework models stochastic games between the adversary and the CPS defender and derives mixed strategies for assigning check blocks that ensure the integrity of the propagated signals while abiding by the real-time constraints dictated by the control loop. Through extensive simulation experiments and a real implementation on a water purification system, we show that DB achieves assignment strategies that outperform other strategies and heuristics.
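As a rough illustration of the underlying assignment problem (not DeepBLOC's actual DRL solution), the sketch below brute-forces the subset of check blocks that maximizes attack coverage within a control-loop delay budget; the check names, delays, and covered attack classes are invented for the example.

```python
from itertools import combinations

# Hypothetical check blocks: (name, added delay in ms, attack classes detected).
CHECKS = [
    ("range_check", 0.4, {"false_data"}),
    ("crc_check",   0.2, {"bit_flip"}),
    ("model_check", 1.1, {"false_data", "replay"}),
    ("ts_check",    0.3, {"replay"}),
]

def best_assignment(delay_budget_ms):
    """Brute-force the subset of checks covering the most attack classes without
    exceeding the control loop's delay budget (a stand-in for the policy that
    DeepBLOC learns over much larger state-action spaces)."""
    best, best_cover = (), set()
    for r in range(len(CHECKS) + 1):
        for subset in combinations(CHECKS, r):
            delay = sum(c[1] for c in subset)
            cover = set().union(*(c[2] for c in subset)) if subset else set()
            if delay <= delay_budget_ms and len(cover) > len(best_cover):
                best, best_cover = subset, cover
    return [c[0] for c in best], best_cover

print(best_assignment(1.0))  # e.g. (['range_check', 'crc_check', 'ts_check'], {...})
```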
In recent years, cloud computing has become one of the emerging fields. It is a platform for maintaining the data and privacy of users. To process and regulate data with high security, access control methods are used. The cloud environment always faces several challenges, such as robustness and security issues. Conventional methods like Ciphertext-Policy Attribute-Based Encryption (CP-ABE) are regarded as providing strong security, but problems remain, such as the absence of attribute revocation and limited efficiency. Hence, this research focuses particularly on the attribute-based mechanism to maximize efficiency. The first objective of this work is to define the attributes for a set of users. Secondly, the data is re-encrypted based on the access policies defined for the particular file. The re-encryption process provides information to the cloud server for verifying the authenticity of the user even when the owner is offline. The main advantage of this work is that it evaluates multiple attributes and allows the respective users who possess those attributes to access the data. The results show that the proposed data sharing scheme supports revocation under a fine-grained attribute structure.
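The access-control idea can be illustrated without any real cryptography: the toy sketch below checks an attribute set against a file's policy and shows access being lost after an attribute is revoked. In actual CP-ABE the policy is enforced cryptographically inside the ciphertext; the attribute names here are hypothetical.

```python
# Conceptual illustration only: access decisions in an attribute-based scheme.
# Real CP-ABE binds the policy into the ciphertext; here it is shown in the clear.

POLICY = {"department:finance", "role:auditor"}   # attributes required by the file

def can_access(user_attributes):
    return POLICY.issubset(user_attributes)

users = {
    "alice": {"department:finance", "role:auditor"},
    "bob":   {"department:finance", "role:clerk"},
}
print(can_access(users["alice"]))  # True
print(can_access(users["bob"]))    # False

# Revocation: the owner removes an attribute and the cloud re-encrypts the file
# under the updated policy, so revoked users cannot decrypt future versions.
users["alice"].discard("role:auditor")
print(can_access(users["alice"]))  # False after revocation
```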
Since remote ages, queues and delays have been a rather exasperating reality of human daily life. Today, they pursue us everywhere: in technical, social, socio-technical, and even control systems, dramatically deteriorating their performance. In this variety, it is computer systems that cause growing anxiety in our digital era. Although for everyday Internet surfing a long-lasting and annoying delay is an unpleasant but not dangerous situation, for industrial control systems, especially those dealing with critical infrastructures, such behavior is unacceptable. The article presents a deterministic approach to solving some digital control system problems associated with delays and backlogs. Being based on Network calculus, in contrast to the statistical methods of Queuing theory, it provides worst-case results, which are eminently desirable for critical infrastructures. The article covers the basics of Network calculus, a theory of deterministic queuing systems, its evolution regarding the relationship between backlog bound and delay, and a technique for handling empirical data. The problems solved by the deterministic approach are also thoroughly discussed: standard calculation of network performance measures, estimation of database maximum updating time, and cybersecurity assessment, including such issues as the CIA triad representation, operational technology influence, and the understanding of availability with a focus on its correlation with delay.
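A classical Network calculus result of the kind such an approach builds on: for a token-bucket arrival curve α(t) = b + r·t and a rate-latency service curve β(t) = R·(t − T)⁺, the worst-case delay is bounded by T + b/R and the backlog by b + r·T (provided r ≤ R). The short sketch below evaluates these bounds for illustrative parameter values.

```python
def delay_bound(b, r, R, T):
    """Worst-case delay for a token-bucket flow alpha(t) = b + r*t served by a
    rate-latency node beta(t) = R*(t - T)+: D <= T + b/R, assuming r <= R."""
    if r > R:
        raise ValueError("unstable: arrival rate exceeds service rate")
    return T + b / R

def backlog_bound(b, r, R, T):
    """Worst-case backlog for the same curves: B <= b + r*T."""
    if r > R:
        raise ValueError("unstable: arrival rate exceeds service rate")
    return b + r * T

# Example: burst 4 kbit, sustained 10 Mbit/s, server 20 Mbit/s with 2 ms latency.
print(delay_bound(b=4e3, r=10e6, R=20e6, T=2e-3))    # 0.0022 s
print(backlog_bound(b=4e3, r=10e6, R=20e6, T=2e-3))  # 24000 bits
```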
In the last couple of years, the move to cyberspace has provided a fertile environment for ransomware criminals like never before. Notably, since the introduction of WannaCry, numerous ransomware detection solutions have been proposed. However, ransomware incident reports show that most organizations impacted by ransomware were running state-of-the-art ransomware detection tools. Hence, an alternative solution is urgently required, as the existing detection models are not sufficient to spot emerging ransomware threats. With this motivation, our work proposes "DeepGuard," a novel concept of modeling user behavior for ransomware detection. The main idea is to log the file-interaction pattern of typical user activity and pass it through a deep generative autoencoder architecture to recreate the input. With sufficient training data, the model learns how to reconstruct typical user activity (the input) with minimal reconstruction error. Hence, by applying the three-sigma limit rule to the model's output, DeepGuard can distinguish ransomware activity from user activity. The experimental results show that DeepGuard effectively detects a variety of ransomware classes with minimal false-positive rates. Overall, modeling attack detection around user behavior gives the proposed strategy deep visibility into various ransomware families.
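A minimal sketch of the detection principle (not the paper's actual architecture): a model is trained to reconstruct benign activity features, a three-sigma threshold is derived from the benign reconstruction errors, and inputs whose error exceeds it are flagged. The feature layout and the small sklearn MLP used as a stand-in for the deep autoencoder are assumptions for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical per-interval features of benign file activity
# (files read, files written, renames, entropy of written data).
benign = rng.normal(loc=[20, 5, 1, 0.4], scale=[5, 2, 0.5, 0.05], size=(500, 4))

# Tiny autoencoder stand-in: an MLP trained to reconstruct its own input.
ae = MLPRegressor(hidden_layer_sizes=(2,), max_iter=3000, random_state=0)
ae.fit(benign, benign)

def reconstruction_error(x):
    return np.mean((ae.predict(x) - x) ** 2, axis=1)

# Three-sigma rule on the benign reconstruction errors sets the alarm threshold.
errors = reconstruction_error(benign)
threshold = errors.mean() + 3 * errors.std()

# Ransomware-like burst: many writes/renames with high-entropy content.
suspicious = np.array([[300, 280, 250, 0.99]])
print(reconstruction_error(suspicious) > threshold)  # expected: [ True ]
```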
Today, Internet of Things (IoT) devices mostly operate in enclosed, proprietary environments. To unfold the full potential of IoT applications, a unifying and permissionless environment is crucial, in which all IoT devices, even those unknown to each other, would be able to trade services and assets across various domains. In order to realize such applications, uniquely resolvable identities are essential. However, quantifiable trust in identities and their authentication is not trivially provided in such an environment due to the absence of a trusted authority. This research presents a new identity and trust framework for IoT devices based on Distributed Ledger Technology (DLT). IoT devices assign identities to themselves, which are managed publicly and in a decentralized manner on the DLT network as Self-Sovereign Identities (SSI). In addition to the Identity Management System (IdMS), the framework provides a Web of Trust (WoT) approach to enable automatic trust rating of arbitrary identities. The framework uses the IOTA Tangle to access and store data, achieving high scalability and low computational overhead. To demonstrate the feasibility of our framework, we provide a proof-of-concept implementation and evaluate the set objectives for real-world applicability as well as the vulnerability against common threats in IdMSs and WoTs.
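The web-of-trust rating can be sketched independently of any IOTA client library: identities endorse one another, endorsements are assumed to be anchored on the ledger, and an unknown identity's trust is derived from its endorsers' trust, damped per hop. The DIDs and the damping factor below are purely illustrative.

```python
# Illustrative web-of-trust rating; not the framework's actual data structures.
ENDORSEMENTS = {            # endorser -> endorsed identities (assumed ledger-anchored)
    "did:ex:anchor":   ["did:ex:gateway1", "did:ex:gateway2"],
    "did:ex:gateway1": ["did:ex:sensor42"],
}

def trust_rating(target, roots, damping=0.5):
    """Breadth-first trust propagation from fully trusted root identities."""
    frontier, score = dict(roots), {}
    while frontier:
        nxt = {}
        for ident, t in frontier.items():
            score[ident] = max(score.get(ident, 0.0), t)
            for endorsed in ENDORSEMENTS.get(ident, []):
                nxt[endorsed] = max(nxt.get(endorsed, 0.0), t * damping)
        frontier = {k: v for k, v in nxt.items() if v > score.get(k, 0.0)}
    return score.get(target, 0.0)

print(trust_rating("did:ex:sensor42", roots={"did:ex:anchor": 1.0}))  # 0.25
```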
While internet technologies are developing day by day, threats against them are increasing at the same speed. One of the most serious and common types of attack is the Distributed Denial of Service (DDoS) attack. The DDoS intrusion detection approach proposed in this study is based on fuzzy logic and entropy. The network is modeled as a graph, and graph-based features are used to distinguish attack traffic from non-attack traffic. Fuzzy clustering is applied to these features to indicate the tendency of IP addresses or port numbers to be in the same cluster. Based on this uncertainty, attack and non-attack traffic are modeled. The detection stage uses a fuzzy relevance function. The algorithm was tested on real data collected from the Boğaziçi University network.
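One of the entropy-style features such approaches commonly rely on can be shown in a few lines: the Shannon entropy of the empirical distribution of, say, destination ports collapses when flood traffic converges on a single victim service. This only illustrates the entropy ingredient, not the paper's fuzzy clustering procedure.

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Entropy of the empirical distribution of e.g. source IPs or ports in a window."""
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# During a flood, traffic concentrates on one victim port: the destination-port
# entropy collapses, while the (often spoofed) source-IP entropy rises sharply.
normal_ports = [80, 443, 22, 80, 443, 8080, 53, 80]
attack_ports = [80] * 200

print(shannon_entropy(normal_ports))  # > 0, mixed destinations
print(shannon_entropy(attack_ports))  # 0.0, all traffic to one port
```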
Human behaviors are often prohibited or permitted by social norms. Therefore, if autonomous agents interact with humans, they also need to reason about various legal rules and social and ethical norms, so that they will be trusted and accepted by humans. Inverse Reinforcement Learning (IRL) can be used by autonomous agents to learn social norm-compliant behavior via expert demonstrations. However, norms are context-sensitive, i.e. different norms are activated in different contexts. For example, the privacy norm is activated for a domestic robot entering a bathroom where a person may be present, whereas it is not activated when the robot enters the kitchen. Representing various contexts in the state space of the robot, as well as obtaining expert demonstrations under all possible tasks and contexts, is extremely challenging. Inspired by recent work on Modularized Normative MDPs (MNMDP) and early work on context-sensitive RL, we propose a new IRL framework, Context-Sensitive Norm IRL (CNIRL). CNIRL treats states and contexts separately and assumes that the expert determines the priority of every possible norm in the environment, where each norm is associated with a distinct reward function. The agent chooses actions to maximize its cumulative reward. We present the CNIRL model and show that its computational complexity scales in the number of norms. We also show, via two experimental scenarios, that CNIRL can handle problems with changing context spaces.
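A toy sketch of the context-activation idea (the norm names, weights, and reward terms are invented, not taken from the paper): each norm carries its own reward function and priority, and only norms active in the current context contribute to the effective reward the agent optimizes.

```python
NORMS = {
    # name: (priority weight, contexts in which it is active, reward function)
    "privacy":  (2.0, {"bathroom"},            lambda s, a: -10.0 if a == "enter" else 0.0),
    "tidiness": (0.5, {"kitchen", "bathroom"}, lambda s, a: 1.0 if a == "clean" else 0.0),
}

def effective_reward(state, action, context, task_reward):
    """Task reward plus the weighted rewards of norms active in this context."""
    r = task_reward(state, action)
    for weight, contexts, norm_reward in NORMS.values():
        if context in contexts:
            r += weight * norm_reward(state, action)
    return r

task = lambda s, a: 5.0 if a == "enter" else 0.0   # the robot's own goal: enter the room
print(effective_reward("door", "enter", context="kitchen", task_reward=task))   # 5.0
print(effective_reward("door", "enter", context="bathroom", task_reward=task))  # -15.0
```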
In this study, the aim was to recognize emotional states from facial images using deep learning. In the study, which was approved by the ethics committee, a custom data set was created using videos taken from 20 male and 20 female participants while they simulated 7 different facial expressions (happy, sad, surprised, angry, disgusted, scared, and neutral). First, the obtained videos were divided into image frames, and then face regions were segmented from the frames using the Haar library. The custom data set obtained after this preprocessing contains more than 25,000 images. The proposed convolutional neural network (CNN) architecture, which mimics the LeNet architecture, was trained with this custom dataset. According to the experimental results for the proposed CNN architecture, the training loss was 0.0115, the training accuracy 99.62%, the validation loss 0.0109, and the validation accuracy 99.71%.
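A LeNet-style CNN for 7-class expression recognition can be written in a few lines of Keras; the 48x48 grayscale input size, filter counts, and optimizer below are assumptions for illustration, not the exact configuration reported in the study.

```python
from tensorflow.keras import layers, models

# LeNet-style stack: two conv/pool stages followed by dense layers and a
# 7-way softmax (happy, sad, surprised, angry, disgusted, scared, neutral).
model = models.Sequential([
    layers.Conv2D(6, kernel_size=5, activation="relu", input_shape=(48, 48, 1)),
    layers.AveragePooling2D(pool_size=2),
    layers.Conv2D(16, kernel_size=5, activation="relu"),
    layers.AveragePooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(120, activation="relu"),
    layers.Dense(84, activation="relu"),
    layers.Dense(7, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```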
Avoiding security vulnerabilities is very important for embedded systems. Dynamic Information Flow Tracking (DIFT) is a powerful technique for analyzing software with respect to security policies in order to protect the system against a broad range of security-related exploits. However, existing DIFT approaches are either unavailable for Virtual Prototypes (VPs) or fail to model complex hardware/software interactions. In this paper, we present a novel approach that enables early and accurate DIFT of binaries targeting embedded systems with custom peripherals. Leveraging the SystemC framework, our DIFT engine tracks accurate data flow information alongside the program execution to detect violations of security policies at run-time. We demonstrate the effectiveness and applicability of our approach through extensive experiments.
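A very small sketch of the taint-tracking principle a DIFT engine applies (here for a toy register machine, not the paper's SystemC integration): values loaded from an untrusted peripheral are tainted, taint propagates through arithmetic, and a policy violation is raised when tainted data reaches a sensitive sink such as a jump target.

```python
regs = {}        # register -> value
taint = {}       # register -> True if the value derives from untrusted input

def load_peripheral(dst, value):
    regs[dst], taint[dst] = value, True          # taint source: untrusted peripheral

def mov_imm(dst, value):
    regs[dst], taint[dst] = value, False

def add(dst, a, b):
    regs[dst] = regs[a] + regs[b]
    taint[dst] = taint[a] or taint[b]            # propagation rule through data flow

def jump_to(addr_reg):
    if taint[addr_reg]:                          # policy: no tainted control flow
        raise RuntimeError("DIFT violation: tainted value used as jump target")
    print(f"jump to {regs[addr_reg]:#x}")

mov_imm("r1", 0x4000)
load_peripheral("r2", 0x20)    # attacker-controlled offset
add("r3", "r1", "r2")
try:
    jump_to("r3")
except RuntimeError as err:
    print(err)                 # DIFT violation: tainted value used as jump target
```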