Biblio

Found 200 results

Filters: Keyword is Data analysis
2017-12-12
Diaz, J. S. B., Medeiros, C. B.  2017.  WorkflowHunt: Combining Keyword and Semantic Search in Scientific Workflow Repositories. 2017 IEEE 13th International Conference on e-Science (e-Science). :138–147.

Scientific datasets and the experiments that analyze them are growing in size and complexity, and scientists face difficulties in sharing such resources. Some initiatives have emerged to address this problem. One of them involves the use of scientific workflows to represent and enact experiment execution. An increasing number of workflows are potentially relevant to more than one scientific domain. However, it is hard to find workflows suitable for reuse given an experiment. Creating a workflow takes time and resources, and reuse helps scientists build new workflows faster and in a more reliable way. Search mechanisms in workflow repositories should provide different options for workflow discovery, but it is difficult for generic repositories to provide multiple mechanisms. This paper presents WorkflowHunt, a hybrid architecture for workflow search and discovery in generic repositories, which combines keyword and semantic search to allow finding relevant workflows using different search methods. We validated our architecture by building a prototype that uses real workflows and metadata from myExperiment, and compared search results obtained via WorkflowHunt with those from myExperiment's search interface.
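
The hybrid ranking idea can be illustrated with a brief sketch: a keyword score (here, TF-IDF cosine similarity) is blended with a semantic score (here, tag overlap standing in for ontology-based matching). The corpus, tags, and blending weight below are illustrative assumptions, not WorkflowHunt's actual implementation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical workflow metadata: free-text descriptions plus semantic tags.
workflows = [
    {"title": "BLAST sequence alignment", "desc": "align protein sequences against a database",
     "tags": {"bioinformatics", "alignment"}},
    {"title": "Tweet sentiment pipeline", "desc": "classify sentiment of social media text",
     "tags": {"nlp", "classification"}},
]

def hybrid_search(query, query_tags, alpha=0.5):
    """Blend keyword relevance with a simple semantic-overlap score."""
    corpus = [w["desc"] for w in workflows]
    vec = TfidfVectorizer().fit(corpus + [query])
    kw = cosine_similarity(vec.transform([query]), vec.transform(corpus))[0]
    sem = [len(query_tags & w["tags"]) / max(len(query_tags | w["tags"]), 1)
           for w in workflows]  # Jaccard overlap of semantic tags
    scored = [(alpha * k + (1 - alpha) * s, w["title"])
              for w, k, s in zip(workflows, kw, sem)]
    return sorted(scored, reverse=True)

print(hybrid_search("align dna sequences", {"bioinformatics"}))
```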

2018-01-10
Thaler, S., Menkovski, V., Petkovic, M.  2017.  Towards a neural language model for signature extraction from forensic logs. 2017 5th International Symposium on Digital Forensic and Security (ISDFS). :1–6.
Signature extraction is a critical preprocessing step in forensic log analysis because it enables sophisticated analysis techniques to be applied to logs. Currently, most signature extraction frameworks use either rule-based approaches or hand-crafted algorithms. Rule-based systems are error-prone and require high maintenance effort. Hand-crafted algorithms use heuristics and tend to work well only for specialized use cases. In this paper, we present a novel approach to extracting signatures from forensic logs that is based on a neural language model. This language model learns to identify mutable and non-mutable parts in a log message. We use this information to extract signatures. Neural language models have been shown to work extremely well for learning complex relationships in natural language text. We experimentally demonstrate that our model can detect which parts are mutable with an accuracy of 86.4%. We also show how extracted signatures can be used for clustering log lines.
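
However the mutable/non-mutable labels are produced (the paper trains a neural language model for this), the downstream signature-extraction step can be sketched as below; the wildcard convention, token-level labels, and example log line are illustrative assumptions.

```python
def extract_signature(log_line, mutable_flags, wildcard="*"):
    """Collapse tokens flagged as mutable into wildcards to form a signature.

    mutable_flags[i] is True when token i varies across log lines
    (timestamps, PIDs, IP addresses, ...); here the flags are assumed
    to come from a trained sequence-labelling model.
    """
    tokens = log_line.split()
    assert len(tokens) == len(mutable_flags)
    return " ".join(wildcard if m else t
                    for t, m in zip(tokens, mutable_flags))

line = "Jan 12 sshd[4242]: Failed password for root from 10.0.0.7"
flags = [True, True, True, False, False, False, False, False, True]
print(extract_signature(line, flags))
# -> "* * * Failed password for root from *"
```
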
2018-04-02
Al-Zobbi, M., Shahrestani, S., Ruan, C.  2017.  Implementing A Framework for Big Data Anonymity and Analytics Access Control. 2017 IEEE Trustcom/BigDataSE/ICESS. :873–880.

Analytics in big data is maturing and moving towards mass adoption. The emergence of analytics increases the need for innovative tools and methodologies to protect data against privacy violations. Many data anonymization methods have been proposed to provide some degree of privacy protection by applying data suppression and other distortion techniques. However, currently available methods suffer from poor scalability, poor performance, and a lack of framework standardization, and are unable to cope with the massive scale of data processing. Some of these methods were proposed specifically for the MapReduce framework to operate on Big Data, but they still rely on conventional data management approaches and therefore achieve no remarkable performance gains. We introduce a framework that operates in the MapReduce environment to benefit from its advantages, as well as from those of the Hadoop ecosystem. Our framework provides granular user access that can be tuned to different authorization levels. The proposed solution provides fine-grained alteration based on the user's authorization level to access the MapReduce domain for analytics. Using well-developed role-based access control approaches, this framework is capable of assigning roles to users and mapping them to relevant data attributes.
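
A minimal sketch of the role-to-attribute mapping idea: each role is mapped to the data attributes it may see, and records are masked accordingly before reaching the analytics layer. The role names, attributes, and masking token are hypothetical.

```python
# Hypothetical role -> visible-attribute mapping for fine-grained access.
ROLE_ATTRIBUTES = {
    "analyst":    {"age_range", "region", "diagnosis"},
    "researcher": {"age_range", "region"},
    "auditor":    {"record_id"},
}

def mask_record(record, role, mask="***"):
    """Suppress the attributes the given role is not authorized to access."""
    allowed = ROLE_ATTRIBUTES.get(role, set())
    return {k: (v if k in allowed else mask) for k, v in record.items()}

record = {"record_id": 17, "age_range": "30-39", "region": "NSW", "diagnosis": "flu"}
print(mask_record(record, "researcher"))
# -> {'record_id': '***', 'age_range': '30-39', 'region': 'NSW', 'diagnosis': '***'}
```

In a MapReduce setting, a mapper would apply such masking to every record before any user-level analytics job runs, so suppression happens inside the trusted layer.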

2017-12-12
Gamachchi, A., Boztas, S.  2017.  Insider Threat Detection Through Attributed Graph Clustering. 2017 IEEE Trustcom/BigDataSE/ICESS. :112–119.

While most organizations continue to invest in traditional network defences, a formidable security challenge has been brewing within their own boundaries. Malicious insiders with privileged access, in the guise of a trusted source, have carried out many attacks causing far-reaching damage to financial stability, national security and brand reputation for both public and private sector organizations. The growing exposure and impact of the whistleblower community and concerns about job security amid changing organizational dynamics have further aggravated this situation. The unpredictability of malicious attackers, as well as the complexity of malicious actions, necessitates the careful analysis of network, system and user parameters correlated with the insider threat problem. This creates a high-dimensional, heterogeneous data analysis problem for isolating suspicious users. This research work proposes an insider threat detection framework that utilizes attributed graph clustering techniques and an outlier ranking mechanism for enterprise users. Empirical results confirm the effectiveness of the method, achieving an area under the receiver operating characteristic curve of 0.7648.

Jiang, L., Kuhn, W., Yue, P.  2017.  An interoperable approach for Sensor Web provenance. 2017 6th International Conference on Agro-Geoinformatics. :1–6.

The Sensor Web is evolving into a complex information space, where large volumes of sensor observation data are often consumed by complex applications. Provenance has become an important issue in the Sensor Web, since it allows applications to answer “what”, “when”, “where”, “who”, “why”, and “how” queries related to observations and consumption processes, which helps determine the usability and reliability of data products. This paper investigates characteristics and requirements of provenance in the Sensor Web and proposes an interoperable approach to building a provenance model for the Sensor Web. Our provenance model extends the W3C PROV Data Model with Sensor Web domain vocabularies. It is developed using Semantic Web technologies and thus allows provenance information of sensor observations to be exposed in the Web of Data using the Linked Data approach. A use case illustrates the applicability of the approach.
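
The PROV extension idea can be sketched with rdflib: observations become PROV Entities, processing steps Activities, and sensors Agents, with domain terms added from a sensor vocabulary. The sweb namespace and observation details below are illustrative, not the paper's actual ontology.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

PROV = Namespace("http://www.w3.org/ns/prov#")
SWEB = Namespace("http://example.org/sensorweb#")  # hypothetical domain vocabulary

g = Graph()
g.bind("prov", PROV)
g.bind("sweb", SWEB)

obs, proc, sensor = SWEB.obs42, SWEB.interpolation1, SWEB.thermometer7

g.add((obs, RDF.type, PROV.Entity))        # a sensor observation is an Entity
g.add((proc, RDF.type, PROV.Activity))     # a processing step is an Activity
g.add((sensor, RDF.type, PROV.Agent))      # the sensor acts as an Agent
g.add((obs, PROV.wasGeneratedBy, proc))
g.add((proc, PROV.wasAssociatedWith, sensor))
g.add((obs, SWEB.observedProperty, Literal("air_temperature")))
g.add((obs, SWEB.resultValue, Literal(21.3, datatype=XSD.double)))

print(g.serialize(format="turtle"))        # expose as Linked Data
```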

Stephan, E., Raju, B., Elsethagen, T., Pouchard, L., Gamboa, C.  2017.  A scientific data provenance harvester for distributed applications. 2017 New York Scientific Data Summit (NYSDS). :1–9.

Data provenance provides a way for scientists to observe how experimental data originates, conveys process history, and explains influential factors such as experimental rationale and associated environmental factors from system metrics measured at runtime. The US Department of Energy Office of Science Integrated end-to-end Performance Prediction and Diagnosis for Extreme Scientific Workflows (IPPD) project has developed a provenance harvester that is capable of collecting observations from file-based evidence typically produced by distributed applications. To achieve this, file-based evidence is extracted and transformed into an intermediate data format, inspired in part by the W3C CSV on the Web recommendations, called the Harvester Provenance Application Interface (HAPI) syntax. This syntax provides a general means to pre-stage provenance into messages that are both human readable and capable of being written to a provenance store, Provenance Environment (ProvEn). HAPI is being applied to harvest provenance from climate ensemble runs for the Accelerated Climate Modeling for Energy (ACME) project, funded under the U.S. Department of Energy's Office of Biological and Environmental Research (BER) Earth System Modeling (ESM) program. ACME informally provides provenance in a native form through configuration files, directory structures, and log files that contain success/failure indicators, code traces, and performance measurements. Because of its generic format, HAPI is also being applied to harvest tabular job management provenance from Belle II DIRAC scheduler relational database tables as well as from other scientific applications that log provenance-related information.

2018-11-14
Teoh, T. T., Nguwi, Y. Y., Elovici, Y., Cheung, N. M., Ng, W. L.  2017.  Analyst Intuition Based Hidden Markov Model on High Speed, Temporal Cyber Security Big Data. 2017 13th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD). :2080–2083.
Hidden Markov Models (HMM) are probabilistic models that can be used for forecasting time series data. They have seen success in various domains such as finance [1-5], bioinformatics [6-8], healthcare [9-11], agriculture [12-14], and artificial intelligence [15-17]. However, documented uses of HMM in cyber security are few. We believe the properties of HMM, being predictive and probabilistic and able to model different naturally occurring states, form a good basis for modelling cyber security data. This motivates our work, which provides initial results of our attempts to predict security attacks using HMM. A large network dataset representing cyber security attacks has been used in this work to establish an expert system. The characteristics of attackers' IP addresses can be extracted from our integrated datasets to generate statistical data. A cyber security expert provides the weight of each attribute and forms a scoring system by annotating the log history. We apply HMM to distinguish between attack, unsure, and no attack by first breaking the data into 3 clusters using fuzzy K-means (FKM), then manually labelling a small portion of the data (analyst intuition), and finally applying an HMM state-based approach. Our results are very encouraging compared to generic anomaly detection on cyber security logs, which typically produces a huge amount of false detections.
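
The cluster-then-smooth pipeline described above can be sketched with a bare-bones fuzzy K-means followed by hmmlearn's GaussianHMM; the simulated feature matrix and the three-state interpretation are assumptions, not the authors' code.

```python
import numpy as np
from hmmlearn import hmm  # pip install hmmlearn

def fuzzy_kmeans(X, c=3, m=2.0, iters=100, seed=0):
    """Bare-bones fuzzy K-means; returns hard labels from the membership matrix."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** p * (d ** -p).sum(axis=1, keepdims=True))
    return U.argmax(axis=1)

# Hypothetical per-event scores derived from expert-weighted log attributes.
X = np.random.default_rng(1).normal(size=(500, 4))
coarse = fuzzy_kmeans(X)                  # attack / unsure / no-attack clusters
model = hmm.GaussianHMM(n_components=3, n_iter=50, random_state=0)
model.fit(X)                              # temporal model over the event sequence
states = model.predict(X)                 # HMM state per time step
```
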
Teoh, T. T., Zhang, Y., Nguwi, Y. Y., Elovici, Y., Ng, W. L.  2017.  Analyst Intuition Inspired High Velocity Big Data Analysis Using PCA Ranked Fuzzy K-Means Clustering with Multi-Layer Perceptron (MLP) to Obviate Cyber Security Risk. 2017 13th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD). :1790–1793.
The growing prevalence of cyber threats is affecting every network user. Numerous security monitoring systems are being employed to protect computer networks and resources from falling victim to cyber-attacks, and there is a pressing need for efficient security monitoring of the large network datasets generated in this process. A large network dataset representing malware attacks has been used in this work to establish an expert system. The characteristics of attackers' IP addresses can be extracted from our integrated datasets to generate statistical data. A cyber security expert provides the weight of each attribute and forms a scoring system by annotating the log history. We adopt a semi-supervised method to classify cyber security logs into attack, unsure, and no attack by first breaking the data into 3 clusters using fuzzy K-means (FKM), then manually labelling a small portion of the data (analyst intuition), and finally training a multilayer perceptron (MLP) neural network classifier on the manually labelled data. Our results are very encouraging compared to generic anomaly detection on cyber security logs, which typically produces a huge amount of false detections. The method of combining Artificial Intelligence (AI) and Analyst Intuition (AI) is also known as AI2. The classification results are encouraging in segregating the types of attacks.
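
The PCA-ranking and semi-supervised MLP steps map onto scikit-learn directly; in this sketch KMeans stands in for fuzzy K-means, and the small analyst-labelled subset is simulated.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))            # hypothetical expert-scored log features

Xp = PCA(n_components=5).fit_transform(X)  # rank/compress features by variance
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Xp)

# Analyst intuition: manually label a small sample (cluster ids as stand-ins).
idx = rng.choice(len(Xp), size=30, replace=False)
labels = clusters[idx]

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(Xp[idx], labels)                   # train only on the labelled subset
pred = clf.predict(Xp)                     # classify the full log
```
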
2017-12-12
Bhattacharjee, S. Das, Yuan, J., Jiaqi, Z., Tan, Y. P.  2017.  Context-aware graph-based analysis for detecting anomalous activities. 2017 IEEE International Conference on Multimedia and Expo (ICME). :1021–1026.

This paper proposes a context-aware, graph-based approach for identifying anomalous user activities via user profile analysis, which obtains a group of users maximally similar among themselves as well as to the query during test time. The main challenges for the anomaly detection task are: (1) rare occurrences of anomalies, making exhaustive identification with a reasonable false-alarm rate difficult, and (2) continuously evolving, context-dependent anomaly types, making it difficult to synthesize the activities a priori. Our proposed query-adaptive graph-based optimization approach, solvable using a maximum-flow algorithm, is designed to fully utilize both the mutual similarities among the user models and their respective similarities with the query to shortlist the user profiles for a more reliable aggregated detection. Each user activity is represented using inputs from several multi-modal resources, which helps to localize anomalies from time-dependent data efficiently. Experiments on public datasets of insider threats and gesture recognition show impressive results.

2018-02-06
Settanni, G., Shovgenya, Y., Skopik, F., Graf, R., Wurzenberger, M., Fiedler, R.  2017.  Acquiring Cyber Threat Intelligence through Security Information Correlation. 2017 3rd IEEE International Conference on Cybernetics (CYBCONF). :1–7.

Cyber Physical Systems (CPS) operating in modern critical infrastructures (CIs) are increasingly being targeted by highly sophisticated cyber attacks. Threat actors have quickly learned of the value and potential impact of targeting CPS, and numerous tailored multi-stage cyber-physical attack campaigns, such as Advanced Persistent Threats (APTs), have been perpetrated in recent years. They aim to stealthily compromise systems' operations and cause severe impact on daily business operations, such as shutdowns, equipment damage, reputation damage, financial loss, intellectual property theft, and health and safety risks. Protecting CIs against such threats has become as crucial as it is complicated. Novel distributed detection and reaction methodologies are necessary to effectively uncover these attacks and timely mitigate their effects. Correlating large amounts of data, collected from a multitude of relevant sources, is fundamental for Security Operation Centers (SOCs) to establish cyber situational awareness and promptly adopt suitable countermeasures in case of attacks. In our previous work we introduced three methods for security information correlation. In this paper we define metrics and benchmarks to evaluate these correlation methods, assess their accuracy, and compare their performance. We finally demonstrate how the presented techniques, implemented within our cyber threat intelligence analysis engine called CAESAIR, can be applied to support incident handling tasks performed by SOCs.

2017-12-12
Legg, P. A., Buckley, O., Goldsmith, M., Creese, S.  2017.  Automated Insider Threat Detection System Using User and Role-Based Profile Assessment. IEEE Systems Journal. 11:503–512.

Organizations are experiencing an ever-growing concern over how to identify and defend against insider threats. Those who have authorized access to sensitive organizational data are placed in a position of power that could well be abused and could cause significant damage to an organization. This could range from financial theft and intellectual property theft to the destruction of property and business reputation. Traditional intrusion detection systems are neither designed nor capable of identifying those who act maliciously within an organization. In this paper, we describe an automated system that is capable of detecting insider threats within an organization. We define a tree-structure profiling approach that incorporates the details of activities conducted by each user and each job role and then use this to obtain a consistent representation of features that provide a rich description of the user's behavior. Deviation can be assessed based on the amount of variance that each user exhibits across multiple attributes, compared against their peers. We have performed experimentation using ten synthetic data-driven scenarios and found that the system can identify anomalous behavior that may be indicative of a potential threat. We also show how our detection system can be combined with visual analytics tools to support further investigation by an analyst.
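
The peer-comparison step, assessing each user's variance across attributes against others in the same job role, can be sketched as below; the feature layout and the use of mean absolute z-scores are illustrative assumptions.

```python
import numpy as np

def peer_deviation_scores(features, roles):
    """Score users by how far their feature vectors sit from role peers.

    features: (n_users, n_attrs) array, e.g. daily counts of logins,
    emails sent, files accessed; roles: role label per user. Returns
    each user's mean absolute z-score within their own role.
    """
    scores = np.zeros(len(features))
    for role in np.unique(roles):
        peers = roles == role
        mu = features[peers].mean(axis=0)
        sd = features[peers].std(axis=0) + 1e-9
        scores[peers] = np.abs((features[peers] - mu) / sd).mean(axis=1)
    return scores

rng = np.random.default_rng(0)
F = rng.poisson(5, size=(100, 4)).astype(float)
F[7] += 20                                 # inject one unusually active user
R = np.array(["sales"] * 50 + ["eng"] * 50)
print(np.argsort(peer_deviation_scores(F, R))[-3:])   # highest-deviation users
```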

Ogiela, L., Ogiela, M. R.  2017.  Insider Threats and Cryptographic Techniques in Secure Information Management. IEEE Systems Journal. 11:405–414.

This publication presents techniques for countering insider threats and cryptographic protocols for secure processes. Those processes are dedicated to the management of strategically split data, covering enterprise management processes as well as methods of securely storing and managing this type of data. Because strategic data are usually not sufficiently secure and resistant to unauthorized leakage, we propose a new protocol that allows data to be protected in different management structures. The presented data splitting techniques concern cryptographic information splitting algorithms, as well as data sharing algorithms making use of cognitive data analysis techniques. The insider threat techniques concern data reconstruction methods and cognitive data analysis techniques. Systems for semantic analysis and secure information management are used to conceal strategic information about the condition of the enterprise. The new approach, based on cognitive systems, guarantees security features and makes the management processes more efficient.
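
One classical information-splitting primitive in the family the abstract refers to is n-of-n XOR splitting, where the secret is recoverable only when all shares are combined; this generic sketch is not the authors' protocol.

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_secret(data: bytes, n: int) -> list:
    """n-of-n split: n-1 random shares, plus one share XORing them with the data."""
    shares = [secrets.token_bytes(len(data)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, data))
    return shares

def reconstruct(shares) -> bytes:
    return reduce(xor_bytes, shares)

parts = split_secret(b"enterprise strategic record", 3)
assert reconstruct(parts) == b"enterprise strategic record"
# Any proper subset of shares is statistically independent of the secret.
```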

2018-01-23
Nakhla, N., Perrett, K., McKenzie, C.  2017.  Automated computer network defence using ARMOUR: Mission-oriented decision support and vulnerability mitigation. 2017 International Conference On Cyber Situational Awareness, Data Analytics And Assessment (Cyber SA). :1–8.

Mission assurance requires effective, near-real-time defensive cyber operations to appropriately respond to cyber attacks without having a significant impact on operations. The ability to rapidly compute, prioritize and execute network-based courses of action (CoAs) relies on accurate situational awareness and mission-context information. Although diverse solutions exist for automatically collecting and analysing infrastructure data, few deliver automated analysis and implementation of network-based CoAs in the context of the ongoing mission. In addition, such processes can be operator-intensive, and available tools tend to be specific to a set of common data sources and network responses. To address these issues, Defence Research and Development Canada (DRDC) is leading the development of the Automated Computer Network Defence (ARMOUR) technology demonstrator and cyber defence science and technology (S&T) platform. ARMOUR integrates new and existing off-the-shelf capabilities to provide enhanced decision support and to automate many of the tasks currently executed manually by network operators. This paper describes the cyber defence integration framework, situational awareness, and automated mission-oriented decision support that ARMOUR provides.

Erola, A., Agrafiotis, I., Happa, J., Goldsmith, M., Creese, S., Legg, P. A.  2017.  RicherPicture: Semi-automated cyber defence using context-aware data analytics. 2017 International Conference On Cyber Situational Awareness, Data Analytics And Assessment (Cyber SA). :1–8.

In a continually evolving cyber-threat landscape, the detection and prevention of cyber attacks has become a complex task. Technological developments have led organisations to digitise the majority of their operations. This practice, however, has its perils, since cyberspace offers a new attack surface. Institutions tasked with protecting organisations from these threats rely mainly on network data, and their incident response strategies remain oblivious to the needs of the organisation when it comes to protecting operational aspects. This paper presents a system able to combine threat intelligence data, attack-trend data and organisational data (along with other available data sources) in order to achieve automated network-defence actions. Our approach combines machine learning, visual analytics and information from business processes to guide the decision-making process in a Security Operation Centre environment. We test our system on two synthetic scenarios and show that correlating network data with non-network data for automated network defences is possible and worth investigating further.

2017-12-04
Al-Shomrani, A., Fathy, F., Jambi, K.  2017.  Policy enforcement for big data security. 2017 2nd International Conference on Anti-Cyber Crimes (ICACC). :70–74.

Security and privacy of big data become challenging as data grow and become accessible to more and more clients. Large-scale data storage is becoming a necessity for healthcare, business segments, government departments, scientific endeavors and individuals. Our research focuses on privacy and security, and on how to make sure that big data is secured. Managing security policy for big data is a challenge that our framework handles. Privacy policy needs to be integrated, flexible, context-aware and customizable. We build a framework that receives data from the customer, analyzes the received data, extracts the privacy policy, and identifies the sensitive data. In this paper we present the privacy policy techniques created for use in our framework.
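
The "identify the sensitive data" step can be sketched with simple pattern rules; the regexes and categories below are illustrative stand-ins for the policy the framework extracts.

```python
import re

# Hypothetical policy: pattern rules marking field values as sensitive.
SENSITIVE_PATTERNS = {
    "email":       re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone":       re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def classify_sensitive(record):
    """Return the sensitive categories detected in each field of a record."""
    findings = {}
    for field, value in record.items():
        hits = [name for name, pat in SENSITIVE_PATTERNS.items()
                if pat.search(str(value))]
        if hits:
            findings[field] = hits
    return findings

rec = {"name": "A. Customer", "contact": "a.customer@example.com", "note": "paid cash"}
print(classify_sensitive(rec))   # -> {'contact': ['email']}
```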

2017-12-28
Liu, H., Ditzler, G.  2017.  A fast information-theoretic approximation of joint mutual information feature selection. 2017 International Joint Conference on Neural Networks (IJCNN). :4610–4617.

Feature selection is an important step in data analysis to address the curse of dimensionality. Such dimensionality reduction techniques are particularly important when a classification model is required and the model scales polynomially with the number of features (applications include genomics, life sciences, cyber-security, etc.). Feature selection is the process of finding the minimum subset of features that allows for the maximum predictive power. Many of the state-of-the-art information-theoretic feature selection approaches use a greedy forward search; however, there are concerns about the efficiency and optimality of that search. A unified framework was recently presented for information-theoretic feature selection that tied together many of the works of the past twenty years. That work showed that joint mutual information maximization (JMI) is generally the best option; however, the greedy search for JMI scales quadratically, making it infeasible on high-dimensional datasets. In this contribution, we propose a fast approximation of JMI based on information theory. Our approach takes advantage of decomposing the calculations within JMI to speed up a typical greedy search. We benchmarked the proposed approach against JMI on several UCI datasets, and we demonstrate that the proposed approach returns feature sets that are highly consistent with JMI, while decreasing the run time required to perform feature selection.
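
For reference, the quadratic-cost greedy JMI search that the paper approximates looks roughly as follows on discretized data; the joint variable is encoded by pairing feature values, with sklearn's mutual_info_score supplying I(·;·).

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def joint_code(a, b):
    """Encode two discrete features as a single variable for joint MI terms."""
    return a * (b.max() + 1) + b

def greedy_jmi(X, y, k):
    """Greedy JMI: repeatedly add the feature maximizing sum_j I(X_i, X_j; y)."""
    n_feat = X.shape[1]
    selected = [max(range(n_feat), key=lambda i: mutual_info_score(X[:, i], y))]
    while len(selected) < k:
        remaining = [i for i in range(n_feat) if i not in selected]
        scores = [sum(mutual_info_score(joint_code(X[:, i], X[:, j]), y)
                      for j in selected) for i in remaining]
        selected.append(remaining[int(np.argmax(scores))])
    return selected

rng = np.random.default_rng(0)
X = rng.integers(0, 4, size=(300, 10))   # discretized features
y = (X[:, 0] + X[:, 3] > 3).astype(int)  # target depends on features 0 and 3
print(greedy_jmi(X, y, 3))
```

Each outer step scans all remaining features against all selected ones, which is the quadratic cost the paper's decomposition is designed to avoid.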

2018-03-19
Ditzler, G., Prater, A.  2017.  Fine Tuning Lasso in an Adversarial Environment against Gradient Attacks. 2017 IEEE Symposium Series on Computational Intelligence (SSCI). :1–7.

Machine learning and data mining algorithms typically assume that the training and testing data are sampled from the same fixed probability distribution; however, this assumption is often violated in practice. The field of domain adaptation addresses the situation where this assumption of a fixed probability between the two domains is violated; however, the difference between the two domains (training/source and testing/target) may not be known a priori. There has been a recent thrust in addressing the problem of learning in the presence of an adversary, which we formulate as a problem of domain adaptation to build a more robust classifier. This is because the overall security of classifiers and their preprocessing stages has been called into question with the recent findings of adversaries in a learning setting. Adversarial training (and testing) data pose a serious threat to scenarios where an attacker has the opportunity to "poison" the training data or "evade" on the testing data set(s) in order to achieve something that is not in the best interest of the classifier. Recent work has begun to show the impact of adversarial data on several classifiers; however, the impact of the adversary on aspects related to preprocessing of data (i.e., dimensionality reduction or feature selection) has been widely ignored in the resurgence of adversarial learning research. Furthermore, variable selection, which is a vital component of any data analysis, has been shown to be particularly susceptible to an attacker that has knowledge of the task. In this work, we explore avenues for learning resilient classification models in the adversarial learning setting by considering the effects of adversarial data and how to mitigate those effects through optimization. Our model forms a single convex optimization problem that uses the labeled training data from the source domain and known weaknesses of the model for an adversarial component. We benchmark the proposed approach on synthetic data and show the trade-off between classification accuracy and skew-insensitive statistics.
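
The kind of gradient-based evasion being defended against can be illustrated on a linear model: an attacker who knows the weights shifts each test point against its class direction. Logistic regression stands in for the Lasso-based model here; this is not the paper's optimization.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20))
y = (X @ rng.normal(size=20) > 0).astype(int)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)

# Fast-gradient-style evasion: perturb each point against its own class.
eps = 0.3
w = clf.coef_[0]
X_adv = Xte - eps * np.sign(w) * (2 * yte - 1)[:, None]

print("clean accuracy:      ", clf.score(Xte, yte))
print("adversarial accuracy:", clf.score(X_adv, yte))
```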

2018-05-24
Sallam, A., Bertino, E.  2017.  Detection of Temporal Insider Threats to Relational Databases. 2017 IEEE 3rd International Conference on Collaboration and Internet Computing (CIC). :406–415.

The mitigation of insider threats against databases is a challenging problem, as insiders often have legitimate access privileges to sensitive data. Therefore, conventional security mechanisms, such as authentication and access control, may be insufficient for the protection of databases against insider threats and need to be complemented with techniques that support real-time detection of access anomalies. Existing real-time anomaly detection techniques consider anomalies in references to the database entities and the amounts of accessed data, but they are unable to track access frequencies. According to recent security reports, an increase in access frequency by an insider is an indicator of potential data misuse and may be the result of malicious intent to steal or corrupt the data. In this paper, we propose techniques for tracking users' access frequencies and detecting related anomalous activities in real-time. We present detailed algorithms for constructing accurate profiles that describe the access patterns of the database users and for matching subsequent accesses by these users to the profiles. Our methods report and log mismatches as anomalies that may need further investigation. We evaluated our techniques on the OLTP-Benchmark. The results of the evaluation indicate that our techniques are very effective in the detection of anomalies.
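
The frequency-tracking idea reduces to maintaining a per-user profile of accesses per time window and flagging windows that deviate sharply from the learned baseline; the window size and threshold below are illustrative.

```python
from collections import defaultdict
import statistics

class AccessFrequencyProfile:
    """Track per-user access counts per time window; flag unusual spikes."""

    def __init__(self, window_seconds=3600, threshold_sigma=3.0):
        self.window = window_seconds
        self.sigma = threshold_sigma
        self.counts = defaultdict(lambda: defaultdict(int))  # user -> window -> count

    def record(self, user, timestamp):
        self.counts[user][int(timestamp // self.window)] += 1

    def is_anomalous(self, user, timestamp):
        """Compare the current window's count against the user's history."""
        history = list(self.counts[user].values())
        if len(history) < 5:              # not enough history for a profile yet
            return False
        current = self.counts[user][int(timestamp // self.window)]
        mu = statistics.mean(history)
        sd = statistics.pstdev(history) or 1.0
        return (current - mu) / sd > self.sigma
```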

2017-12-28
Henretty, T., Baskaran, M., Ezick, J., Bruns-Smith, D., Simon, T. A.  2017.  A quantitative and qualitative analysis of tensor decompositions on spatiotemporal data. 2017 IEEE High Performance Extreme Computing Conference (HPEC). :1–7.

Summary form only given. Strong light-matter coupling has recently been successfully explored in the GHz and THz [1] range with on-chip platforms. New and intriguing quantum optical phenomena have been predicted in the ultrastrong coupling regime [2], when the coupling strength Ω becomes comparable to the unperturbed frequency of the system ω. We recently proposed a new experimental platform where we couple the inter-Landau-level transition of a high-mobility 2DEG to the highly subwavelength photonic mode of an LC meta-atom [3], showing a very large Ω/ωc = 0.87. Our system benefits from the collective enhancement of the light-matter coupling, which comes from the scaling of the coupling Ω ∝ √n, where n is the number of optically active electrons. In our previous experiments [3] and in the literature [4] this number varies from 10⁴ to 10³ electrons per meta-atom. We now engineer a new cavity, resonant at 290 GHz, with an extremely reduced effective mode surface Seff = 4 × 10⁻¹⁴ m² (FE simulations, CST), yielding large field enhancements above 1500 and allowing us to enter the few-electron (<100) regime. It consists of a complementary metasurface with two very sharp metallic tips separated by a 60 nm gap (Fig. 1(a, b)) on top of a single triangular quantum well. THz-TDS transmission experiments as a function of the applied magnetic field reveal a strong anticrossing of the cavity mode with the linear cyclotron dispersion. Measurements for arrays of only 12 cavities are reported in Fig. 1(c). On the top horizontal axis we report the number of electrons occupying the topmost Landau level as a function of the magnetic field. At the anticrossing field of B = 0.73 T we measure approximately 60 electrons ultra-strongly coupled (Ω/ω ...

2017-11-27
Parate, M., Tajane, S., Indi, B.  2016.  Assessment of System Vulnerability for Smart Grid Applications. 2016 IEEE International Conference on Engineering and Technology (ICETECH). :1083–1088.

The smart grid is an electrical grid with duplex communication between the utility and the consumer. Digital systems, automation systems, computers and controls are among the various systems of the smart grid, and it finds applications in a wide variety of settings, some of which have been designed to reduce the risk of power system blackouts. Dynamic vulnerability assessment is done to identify, quantify, and prioritize the vulnerabilities in a system. This paper presents a novel approach for classifying data into one of two classes, vulnerable or non-vulnerable, by carrying out Dynamic Vulnerability Assessment (DVA) based on data mining techniques such as Multichannel Singular Spectrum Analysis (MSSA) and Principal Component Analysis (PCA), and a machine learning tool, the Support Vector Machine classifier (SVM-C), with learning algorithms that can analyze data. The developed methodology is tested on the IEEE 57-bus system, where the cause of vulnerability is transient instability. The results show that data mining tools can effectively analyze the patterns of the electric signals, and SVM-C can use those patterns to classify the system data as vulnerable or non-vulnerable and determine the system vulnerability status.
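
The PCA-plus-SVM classification stage maps onto a standard scikit-learn pipeline (MSSA preprocessing is omitted, and the data here are simulated stand-ins for bus signal features).

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 30))       # stand-in for MSSA features of bus signals
y = rng.integers(0, 2, size=400)     # 1 = vulnerable, 0 = non-vulnerable

model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
print(cross_val_score(model, X, y, cv=5).mean())
```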

2017-03-29
Kosek, A. M.  2016.  Contextual anomaly detection for cyber-physical security in Smart Grids based on an artificial neural network model. 2016 Joint Workshop on Cyber-Physical Security and Resilience in Smart Grids (CPSR-SG). :1–6.

This paper presents a contextual anomaly detection method and its use in the discovery of malicious voltage control actions in the low voltage distribution grid. The model-based anomaly detection uses an artificial neural network model to identify a distributed energy resource's behaviour under control. An intrusion detection system observes the distributed energy resource's behaviour, control actions and the power system impact, and is tested together with an ongoing voltage control attack in a co-simulation set-up. The simulation results, obtained with real photovoltaic rooftop power plant data, show that the contextual anomaly detection performs on average 55% better at control detection and over 56% better at malicious control detection than point anomaly detection.
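
Model-based contextual detection of this kind reduces to training a model of normal distributed-energy-resource behaviour, then flagging control actions whose observed response deviates from the prediction. A minimal residual-threshold sketch follows; the feature layout and threshold are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Context features (e.g. irradiance, grid voltage, setpoint) -> DER power output.
X_normal = rng.normal(size=(2000, 3))
y_normal = X_normal @ np.array([0.8, -0.2, 0.5]) + rng.normal(0, 0.05, 2000)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=1000,
                     random_state=0).fit(X_normal, y_normal)
resid_sd = np.std(y_normal - model.predict(X_normal))

def is_malicious_control(context, observed_output, k=4.0):
    """Flag when the observed response deviates k sigmas from the model."""
    expected = model.predict(context.reshape(1, -1))[0]
    return abs(observed_output - expected) > k * resid_sd

ctx = np.array([0.5, 0.1, -0.3])
print(is_malicious_control(ctx, observed_output=5.0))   # implausible -> True
```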

2017-10-10
Bassily, Raef, Nissim, Kobbi, Smith, Adam, Steinke, Thomas, Stemmer, Uri, Ullman, Jonathan.  2016.  Algorithmic Stability for Adaptive Data Analysis. Proceedings of the Forty-eighth Annual ACM Symposium on Theory of Computing. :1046–1059.

Adaptivity is an important feature of data analysis - the choice of questions to ask about a dataset often depends on previous interactions with the same dataset. However, statistical validity is typically studied in a nonadaptive model, where all questions are specified before the dataset is drawn. Recent work by Dwork et al. (STOC, 2015) and Hardt and Ullman (FOCS, 2014) initiated a general formal study of this problem, and gave the first upper and lower bounds on the achievable generalization error for adaptive data analysis. Specifically, suppose there is an unknown distribution P and a set of n independent samples x is drawn from P. We seek an algorithm that, given x as input, accurately answers a sequence of adaptively chosen "queries" about the unknown distribution P. How many samples n must we draw from the distribution, as a function of the type of queries, the number of queries, and the desired level of accuracy? In this work we make two new contributions towards resolving this question: We give upper bounds on the number of samples n that are needed to answer statistical queries. The bounds improve and simplify the work of Dwork et al. (STOC, 2015), and have been applied in subsequent work by those authors (Science, 2015; NIPS, 2015). We prove the first upper bounds on the number of samples required to answer more general families of queries. These include arbitrary low-sensitivity queries and an important class of optimization queries (alternatively, risk minimization queries). As in Dwork et al., our algorithms are based on a connection with algorithmic stability in the form of differential privacy. We extend their work by giving a quantitatively optimal, more general, and simpler proof of their main theorem that the stability notion guaranteed by differential privacy implies low generalization error. We also show that weaker stability guarantees such as bounded KL divergence and total variation distance lead to correspondingly weaker generalization guarantees.
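
The differential-privacy connection is concrete: a statistical query (the mean of a [0,1]-valued predicate) has sensitivity 1/n, so Laplace noise of scale 1/(εn) answers it with ε-differential privacy per query, and the paper's stability argument then bounds the generalization error under adaptivity. A minimal sketch:

```python
import numpy as np

def dp_statistical_query(data, predicate, epsilon, rng):
    """Answer a statistical query with the Laplace mechanism.

    The empirical mean of a [0,1]-valued predicate changes by at most
    1/n when one record changes, so noise of scale 1/(epsilon * n)
    gives epsilon-differential privacy for this single query.
    """
    n = len(data)
    true_answer = float(np.mean([predicate(x) for x in data]))
    return true_answer + rng.laplace(scale=1.0 / (epsilon * n))

rng = np.random.default_rng(0)
sample = rng.normal(size=5000)
# An adaptively chosen query: fraction of the sample above a threshold.
print(dp_statistical_query(sample, lambda x: x > 0.5, epsilon=0.1, rng=rng))
```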

2017-11-20
Deng, C., Qiao, H.  2016.  Network security intrusion detection system based on incremental improved convolutional neural network model. 2016 International Conference on Communication and Electronics Systems (ICCES). :1–5.

With the popularization and development of network knowledge, network intruders are increasing, and attack methods are constantly being updated. Intrusion detection technology is a kind of active defense technology that can extract key information from the network system and quickly judge and protect against internal or external network intrusions. As an active security technology, it provides real-time protection against internal attacks, external attacks and misuse, and plays an important role in ensuring network security. However, with the diversification of intrusion techniques, traditional intrusion detection systems cannot meet the requirements of current network security, so intrusion detection implementations need to diversify. In this context, we apply neural network technology to the network intrusion detection system to solve the problem. In this paper, building on intrusion detection methods, we analyze the development history and present state of intrusion detection technology, and summarize an overview and architecture of intrusion detection systems. The neural network intrusion detection process is divided into data acquisition, data analysis, preprocessing, intrusion behavior detection and testing.
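
A minimal sketch of the convolutional idea for record-level intrusion detection, in PyTorch; the layer sizes, 41-dimensional KDD-style input, and class count are illustrative, and the paper's incremental-improvement scheme is not reproduced.

```python
import torch
import torch.nn as nn

class ConvIDS(nn.Module):
    """1-D CNN over per-connection feature vectors (KDD-style records)."""

    def __init__(self, n_features=41, n_classes=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Flatten(),
            nn.Linear(32 * (n_features // 4), n_classes),
        )

    def forward(self, x):                  # x: (batch, n_features)
        return self.net(x.unsqueeze(1))    # add channel dim -> (batch, 1, n_features)

model = ConvIDS()
logits = model(torch.randn(8, 41))         # batch of 8 connection records
print(logits.shape)                        # torch.Size([8, 5])
```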

2017-03-08
Gupta, A., Mehrotra, A., Khan, P. M.  2015.  Challenges of Cloud Computing & Big Data Analytics. 2015 2nd International Conference on Computing for Sustainable Global Development (INDIACom). :1112–1115.

Nowadays, for most organizations across the globe, two important IT initiatives are Big Data Analytics and Cloud Computing. Big Data Analytics can provide valuable insights that create competitiveness and generate increased revenues. Cloud Computing can enhance productivity and efficiency, thus reducing cost. Cloud Computing offers groups of servers, storage and various networking resources, and it enables the Big Data environment to process voluminous, high-velocity and varied formats of Big Data.

Buda, A., Främling, K., Borgman, J., Madhikermi, M., Mirzaeifar, S., Kubler, S.  2015.  Data supply chain in Industrial Internet. 2015 IEEE World Conference on Factory Communication Systems (WFCS). :1–7.

The Industrial Internet promises to radically change and improve many industries' daily business activities, from simple data collection and processing to context-driven, intelligent and pro-active support of workers' everyday tasks and life. The present paper first provides insight into a typical Industrial Internet application architecture, then highlights one fundamental arising contradiction: "Who owns the data is often not capable of analyzing it". This statement is explained by imagining a visionary data supply chain that would realize some of the Industrial Internet promises. To concretely implement such a system, recent standards published by The Open Group are presented, and we highlight the characteristics that make them suitable for Industrial Internet applications. Finally, we discuss comparable solutions and conclude with new business use cases.