Bibliography
The growth of computing technologies and the web reduces the effort required for many processes. Among the sectors that benefit most are electronic (web-based) systems, banking, marketing, e-commerce, and so on. These systems mainly involve continuous information exchange from one host to another. During this transfer there are many points at which the confidentiality of the data and of the user can be lost. The areas with a higher probability of attack are known as vulnerable zones. A web-based system is one such place, where many users perform tasks according to the privileges assigned to them by the administrator. Here the attacker makes use of open areas, such as the login page or other entry points, to insert malicious scripts into the system. These scripts aim at compromising the security constraints designed for the system. Typical user-injected scripts targeting web interactions are SQL injection and cross-site scripting (XSS). Such attacks must be detected and removed before they affect the security and confidentiality of the data. Over the last few years various solutions have been integrated into systems to resolve such security issues in time. Input validation is one of the well-known approaches, but it suffers from performance drops and limited matching. Other mechanisms, such as sanitization and tainting, generate high false-positive reports containing misclassified patterns. At their core, both involve string evaluation and transformation analysis of untrusted sources to fully determine the impact and depth of an attack. This work proposes an improved rule-based attack detection scheme over selected message fields for effectively identifying malicious scripts. The work blocks normal access for malicious sources using robust rule matching against a centralized repository that is regularly updated. At the initial level of evaluation, the work appears to provide a solid basis for further research.
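To make the rule-matching idea concrete, the following is a minimal sketch (not the paper's implementation) of checking selected message fields against SQL-injection and XSS patterns held in a centrally maintained rule set. The patterns, field names, and repository class are illustrative assumptions.

```python
# Illustrative rule-based detection over selected message fields; patterns and
# the repository update mechanism are placeholders, not the paper's rules.
import re
from typing import Dict, List

class RuleRepository:
    """Holds detection rules; in a deployment these would be refreshed periodically."""
    def __init__(self) -> None:
        self.rules: List[re.Pattern] = [
            re.compile(r"(?i)\bunion\b.+\bselect\b"),      # classic SQL injection
            re.compile(r"(?i)['\"]\s*or\s+1\s*=\s*1"),      # tautology SQL injection
            re.compile(r"(?i)<\s*script[^>]*>"),            # reflected XSS
            re.compile(r"(?i)on(load|error|click)\s*="),    # event-handler XSS
        ]

    def update(self, new_patterns: List[str]) -> None:
        """Merge freshly fetched patterns into the local rule set."""
        self.rules.extend(re.compile(p) for p in new_patterns)

def is_malicious(request_fields: Dict[str, str],
                 repo: RuleRepository,
                 checked_fields=("username", "comment", "search")) -> bool:
    """Check only the selected message fields against all rules."""
    for field in checked_fields:
        value = request_fields.get(field, "")
        if any(rule.search(value) for rule in repo.rules):
            return True
    return False

if __name__ == "__main__":
    repo = RuleRepository()
    benign = {"username": "alice", "search": "network coding"}
    attack = {"username": "admin' OR 1=1 --", "search": "<script>alert(1)</script>"}
    print(is_malicious(benign, repo))   # False
    print(is_malicious(attack, repo))   # True
```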
When building large concurrent systems, one of the key difficulties lies in coordinating component behavior and, in particular, managing the access to shared resources of the execution platform. Components may interact through buses, message buffers, etc., leading to resource contention and potential deadlocks compromising safety-critical operations. The concurrent nature of such interactions is the root cause of the complexity of the resulting software. Thus, the complexity of software systems is exponential in the number of their components, making a-posteriori verification of their correctness practically infeasible. An alternative approach, taken by the BIP framework, consists in ensuring correctness-by-construction by applying automatic transformations to obtain executable code from formally defined models. Following this latter approach, we have designed and implemented a BIP design studio. We have studied extensions of the BIP language for specifying parameterized models and integrated them in the design studio to enhance scalability and reusability and to reduce model size. Additionally, we have studied and implemented a set of necessary and sufficient conditions for validating the consistency and encodability of BIP models at design time. We have developed code generation plugins from graphical BIP models to equivalent Java and BIP code. The generated BIP code can be verified for deadlock-freedom or safety properties using the compositional verification tools offered by the BIP framework.
Efficient management and control of modern and next-generation networks is of paramount importance, as networks have to maintain highly reliable service quality whilst supporting rapid growth in traffic demand and new application services. Rapid mitigation of network service degradations is a key factor in delivering high service quality. Automation is vital to achieving rapid mitigation of issues, particularly at the network edge where the scale and diversity are the greatest. This automation involves the rapid detection, localization and (where possible) repair of service-impacting faults and performance impairments. However, the most significant challenge here is knowing what events to detect, how to correlate events to localize an issue, and what mitigation actions should be performed in response to the identified issues. These are defined as policies in systems such as ECOMP. In this paper, we present AESOP, a data-driven intelligent system to facilitate automatic learning of policies and rules for triggering remedial actions in networks. AESOP combines best operational practices (domain knowledge) with a variety of measurement data to learn and validate operational policies to mitigate service issues in networks. AESOP's design addresses the following key challenges: (i) learning from high-dimensional noisy data, (ii) capturing multiple fault models, (iii) modeling the high service cost of false positives, and (iv) accounting for the evolving network infrastructure. We present the design of our system and show results from our ongoing experiments demonstrating the effectiveness of our policy learning framework.
Mass attacks on web services using leaked account information have occurred in recent years. The causes of such attacks are information leakage and reuse of the same password across multiple services. Available countermeasures mainly rely on an alternative authentication method such as two-factor authentication or one-time passwords. Such measures put an additional operational load or credential management burden on users, and may also impose additional management costs on users or service providers for dedicated devices. These issues limit the applicability of such measures to only parts of various services. Therefore, I propose an alternative countermeasure based on the concept of shutters in car garages. The proposed scheme is referred to as the "authentication shutter". In this scheme, a legitimate user can directly control the availability of user authentication. This means that, even if an attacker has a valid user ID and password, if the legitimate user has set user authentication as unavailable, the attacker cannot pass user authentication. I explain the basic idea and how to implement the scheme as a web system, and also discuss the usability and security of the scheme.
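The core mechanism can be sketched in a few lines: a per-account "shutter" flag, controlled by the legitimate user through a separate channel, is checked before the password is even verified. The data structures, hashing parameters, and control channel below are assumptions, not the paper's web-system design.

```python
# Minimal sketch of the "authentication shutter" idea: while the shutter is
# closed, even a valid password is rejected. Storage and channels are illustrative.
import hashlib, hmac, os

class Account:
    def __init__(self, password: str) -> None:
        self.salt = os.urandom(16)
        self.pw_hash = hashlib.pbkdf2_hmac("sha256", password.encode(), self.salt, 100_000)
        self.shutter_open = False   # authentication unavailable by default

accounts = {"alice": Account("correct-horse")}

def set_shutter(user: str, open_: bool) -> None:
    """Called through a separate, user-controlled channel (e.g., a mobile app)."""
    accounts[user].shutter_open = open_

def authenticate(user: str, password: str) -> bool:
    acct = accounts.get(user)
    if acct is None or not acct.shutter_open:
        return False   # shutter closed: reject regardless of the password
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), acct.salt, 100_000)
    return hmac.compare_digest(candidate, acct.pw_hash)

if __name__ == "__main__":
    print(authenticate("alice", "correct-horse"))  # False: shutter closed
    set_shutter("alice", True)
    print(authenticate("alice", "correct-horse"))  # True: shutter open
```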
Online controlled experiments (e.g., A/B tests) are now regularly used to guide product development and accelerate innovation in software. Product ideas are evaluated as scientific hypotheses, and tested in web sites, mobile applications, desktop applications, services, and operating systems. One of the key challenges for organizations that run controlled experiments is to come up with the right set of metrics [1] [2] [3]. Having good metrics, however, is not enough. In our experience of running thousands of experiments with many teams across Microsoft, we observed again and again how incorrect interpretations of metric movements may lead to wrong conclusions about the experiment's outcome, which if deployed could hurt the business by millions of dollars. Inspired by Steven Goodman's twelve p-value misconceptions [4], in this paper, we share twelve common metric interpretation pitfalls which we observed repeatedly in our experiments. We illustrate each pitfall with a puzzling example from a real experiment, and describe processes, metric design principles, and guidelines that can be used to detect and avoid the pitfall. With this paper, we aim to increase the experimenters' awareness of metric interpretation issues, leading to improved quality and trustworthiness of experiment results and better data-driven decisions.
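For context, the basic computation behind a "metric movement" is small; the interpretation is where the pitfalls lie. The following illustrative snippet (synthetic data, not from the paper) computes a relative delta and a two-sample t-test p-value for a per-user metric in an A/B test.

```python
# Illustrative only: a metric delta and p-value for an A/B comparison on
# synthetic per-user values; the paper's pitfalls concern interpreting such numbers.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control   = rng.normal(loc=10.0, scale=3.0, size=50_000)   # e.g., clicks per user
treatment = rng.normal(loc=10.05, scale=3.0, size=50_000)  # small true effect

delta_pct = 100 * (treatment.mean() - control.mean()) / control.mean()
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

print(f"relative delta: {delta_pct:+.2f}%  p-value: {p_value:.4f}")
# A small p-value alone does not validate the metric's interpretation.
```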
Fuzzy density is an important part of the fuzzy integral and is used to describe the reliability of classifiers in the fusion process. Most fuzzy density assignment methods are based on the classifier's prior knowledge from training and ignore differences among the testing samples themselves. To better describe the real-time reliability of a classifier in the fusion process, the dispersion of the classifier is calculated from the decision information output by the classifier. The divisibility of the classifier is then obtained from the information entropy of this dispersion. Finally, the divisibility and the prior knowledge are combined to obtain a fuzzy density that can be dynamically adjusted. Experiments on the JAFFE and CK databases show that, compared with traditional fuzzy integral methods, the proposed method can effectively improve the decision performance of the fuzzy integral and reduce the interference of unreliable output information on the decision, making it an effective multi-classifier fusion method.
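A minimal sketch of the adjustment idea follows: the entropy of each classifier's per-sample output measures its "divisibility", which is combined with the prior (training) accuracy. The exact combination rule in the paper may differ; multiplication is used here only as an assumed placeholder.

```python
# Illustrative dynamic fuzzy-density adjustment from per-sample output entropy;
# the combination rule (accuracy * divisibility) is an assumption.
import numpy as np

def dynamic_fuzzy_density(prior_accuracy: np.ndarray, outputs: list) -> np.ndarray:
    """prior_accuracy: shape (n_classifiers,); outputs: one class-probability vector per classifier."""
    densities = []
    for acc, p in zip(prior_accuracy, outputs):
        p = np.asarray(p, dtype=float)
        p = p / p.sum()
        entropy = -np.sum(p * np.log(p + 1e-12))
        max_entropy = np.log(len(p))
        divisibility = 1.0 - entropy / max_entropy   # 1 = confident, 0 = uniform output
        densities.append(acc * divisibility)         # assumed combination with prior knowledge
    return np.asarray(densities)

if __name__ == "__main__":
    prior = np.array([0.85, 0.78, 0.90])                           # training accuracies
    outs = [[0.7, 0.2, 0.1], [0.34, 0.33, 0.33], [0.05, 0.9, 0.05]]
    print(dynamic_fuzzy_density(prior, outs))
```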
Recently, emotion recognition has gained increasing attention in various applications related to Social Signal Processing (SSP) and human affect. Existing research mainly focuses on six basic emotions (happiness, sadness, fear, disgust, anger, and surprise). However, humans express many kinds of emotions, including mixed emotions, which have not been explored due to their complexity. We model the recognition of 12 types of mixed emotions from facial expressions in a sequence of images using two-stage learning that combines Support Vector Machines (SVM) and Conditional Random Fields (CRF) as sequence classifiers. The SVM classifies each image frame and produces an emotion label, which subsequently becomes the input to the CRF, which yields the mixed-emotion label of the corresponding observation sequence. We evaluate our proposed model on modified image frames of the Cohn-Kanade+ dataset and on our own mixed-emotion dataset. We also compare our model with the original CRF model, and our model shows superior performance.
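A rough sketch of such a two-stage pipeline is shown below on synthetic data: an SVM labels each frame, and a CRF over the sequence of per-frame labels produces the sequence-level tag. It assumes sklearn-crfsuite is installed; features, labels, and sequence lengths are placeholders, not the CK+ data or the paper's feature set.

```python
# Two-stage SVM + CRF sketch on synthetic data; not the paper's implementation.
import numpy as np
from sklearn.svm import SVC
import sklearn_crfsuite

rng = np.random.default_rng(1)

# Stage 1: frame-level SVM mapping (synthetic) facial features to basic emotions.
X_frames = rng.normal(size=(300, 10))
y_frames = rng.choice(["happy", "sad", "surprise"], size=300)
svm = SVC(kernel="rbf").fit(X_frames, y_frames)

# Stage 2: CRF over sequences of predicted frame labels.
def to_crf_features(frame_labels):
    return [{"svm_label": str(lab)} for lab in frame_labels]

sequences, seq_labels = [], []
for _ in range(40):
    seq_X = rng.normal(size=(12, 10))                  # one 12-frame image sequence
    sequences.append(to_crf_features(svm.predict(seq_X)))
    mixed = str(rng.choice(["happily_surprised", "sadly_angry"]))
    seq_labels.append([mixed] * 12)                    # sequence-level tag on every frame

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(sequences, seq_labels)
print(crf.predict(sequences[:1]))                      # predicted mixed-emotion tags
```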
The SDN (Software Defined Networking) paradigm brings flexibility to network management and is an enabler offering huge opportunities for network programmability. To solve the scalability issue raised by the centralized architecture of SDN, multi-controller deployment (or a distributed controller system) is envisioned. In this paper, we focus on increasing the diversity of the SDN control plane so as to enhance network security. Our goal is to limit the ability of a malicious controller to compromise its neighboring controllers and, by extension, the rest of the controllers. We investigate a heterogeneous Susceptible-Infectious-Susceptible (SIS) epidemic model to evaluate the security performance and propose a coloring algorithm based on community detection to increase diversity. The simulation results demonstrate that our algorithm can reduce the infection rate in the control plane, and our work shows that diversity must be introduced in network design for network security.
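The following simplified sketch (not the paper's own coloring algorithm) illustrates assigning software variants ("colors") to controllers based on detected communities, so that a compromise exploiting one variant is harder to propagate across the control plane. The community detector, variant pool, and one-variant-per-community policy are assumptions.

```python
# Community-based diversity coloring of an SDN controller topology (illustrative).
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
from itertools import cycle

def color_by_community(g: nx.Graph, variants=("v1", "v2", "v3")) -> dict:
    communities = greedy_modularity_communities(g)
    assignment, pool = {}, cycle(variants)
    for community in communities:
        variant = next(pool)                 # one variant per community (assumed policy)
        for controller in community:
            assignment[controller] = variant
    return assignment

if __name__ == "__main__":
    g = nx.karate_club_graph()               # stand-in for a controller topology
    colors = color_by_community(g)
    # fraction of links whose endpoints run different variants (higher = more diverse)
    diverse = sum(colors[u] != colors[v] for u, v in g.edges()) / g.number_of_edges()
    print(f"inter-variant edge fraction: {diverse:.2f}")
```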
Software Defined Networking (SDN) stands to transmute our modern networks and data centers, opening them up into highly agile frameworks that can be reconfigured depending on the requirement. Denial of Service (DoS) attacks are considered among the most destructive attacks. This paper is about DoS attack detection and mitigation using SDN. A DoS attack can exhaust the available bandwidth, leaving the network unavailable for legitimate traffic. To provide a solution to this problem, the concept of performance-aware Software Defined Networking is used, which involves real-time network monitoring using sFlow as a visibility protocol. OpenFlow, along with sFlow, is thus used as an application to fight DoS attacks. Our analysis and results demonstrate that using this technique, DoS attacks are successfully defended against, implying that SDN has promising potential to detect and mitigate DoS attacks.
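A schematic sketch of the monitoring-plus-mitigation loop follows: a threshold detector over flow-rate samples (as a sFlow collector might export) emits a block action for an OpenFlow controller to install. The sample format, threshold, and block_source() hook are hypothetical, not the sFlow or OpenFlow APIs.

```python
# Threshold-based DoS detection over (hypothetical) flow-rate samples.
from collections import defaultdict

PPS_THRESHOLD = 10_000   # packets per second considered abusive (assumed)

def detect_dos(samples, block_source):
    """samples: iterable of (src_ip, dst_ip, packets_per_second) tuples."""
    rate_by_src = defaultdict(float)
    for src, _dst, pps in samples:
        rate_by_src[src] += pps
    for src, total_pps in rate_by_src.items():
        if total_pps > PPS_THRESHOLD:
            block_source(src)   # e.g., push a drop flow rule via the controller

if __name__ == "__main__":
    observed = [("10.0.0.9", "10.0.0.1", 9_000.0),
                ("10.0.0.9", "10.0.0.2", 6_000.0),
                ("10.0.0.5", "10.0.0.1", 120.0)]
    detect_dos(observed, block_source=lambda ip: print(f"block {ip}"))
```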
Zero dynamics attack is lethal to cyber-physical systems in the sense that it is stealthy and there is no way to detect it. Fortunately, if the given continuous-time physical system is of minimum phase, the effect of the attack is negligible even if it is not detected. However, the situation becomes unfavorable again if one uses digital control by sampling the sensor measurement and using the zero-order hold for actuation, because of the "sampling zeros." When the continuous-time system has relative degree greater than two and the sampling period is small, the sampled-data system must have unstable zeros (even if the continuous-time system is of minimum phase), so that the cyber-physical system becomes vulnerable to the "sampling zero dynamics attack." In this paper, we begin with its demonstration by a few examples. Then, we present an idea to protect the system by relocating those discrete-time zeros to stable ones. This idea is realized by employing the so-called "generalized hold," which replaces the zero-order hold.
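The phenomenon the paper exploits can be checked numerically. The small demonstration below (not the paper's defense) discretizes a minimum-phase plant of relative degree three with a zero-order hold and shows that one of the resulting discrete-time zeros lies outside the unit circle; the plant G(s) = 1/(s+1)^3 and the sampling period are chosen only for illustration.

```python
# Unstable "sampling zero" under zero-order-hold discretization (illustrative).
import numpy as np
from scipy import signal

num = [1.0]
den = np.poly([-1.0, -1.0, -1.0])     # (s + 1)^3: minimum phase, relative degree 3
dt = 0.01                             # fast sampling period

numd, dend, _ = signal.cont2discrete((num, den), dt, method="zoh")
coeffs = np.squeeze(numd)
coeffs = coeffs[np.argmax(np.abs(coeffs) > 1e-12):]   # drop the leading zero coefficient
zeros = np.roots(coeffs)
print("discrete-time zeros:", zeros)
print("magnitudes:", np.abs(zeros))   # one magnitude is about 3.73 > 1: an unstable zero
```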
Network coding is a promising method that numerous researchers have advanced due to its significant advantages in enhancing the efficiency of data communication. In this work, we use simulations to assess the performance of various network topologies employing network coding. By comparing the results with and without network coding, we show that network coding can improve throughput, end-to-end delay, Packet Delivery Rate (PDR), and consistency. This paper presents a comparative performance analysis of network coding techniques, namely XOR, Linear Network Coding (LNC), and Random Linear Network Coding (RLNC). The results demonstrate that the XOR technique has attractive outcomes and can substantially improve the real-time performance metrics, i.e., throughput, end-to-end delay, and PDR. The analysis has been carried out based on packet size and also on the number of packets to be transmitted. Results also illustrate that network coding facilitates independence between networks.
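The simplest of the compared techniques, XOR coding, can be shown in a few lines: a relay broadcasts the XOR of two packets, and each receiver recovers the packet it is missing using the one it already holds. This is a generic sketch of XOR coding, not the paper's simulation setup.

```python
# Minimal XOR network-coding example for a two-flow relay exchange.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"packet-from-A---"
p2 = b"packet-from-B---"

coded = xor_bytes(p1, p2)               # single transmission by the relay

recovered_at_B = xor_bytes(coded, p2)   # B knows p2, recovers p1
recovered_at_A = xor_bytes(coded, p1)   # A knows p1, recovers p2
assert recovered_at_B == p1 and recovered_at_A == p2
print("both packets recovered from one coded transmission")
```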
Bitcoin, a peer-to-peer payment system and digital currency, is often involved in illicit activities such as scamming, ransomware attacks, illegal goods trading, and thievery. At the time of writing, the Bitcoin ecosystem has not yet been mapped, and as such there is no estimate of the share of illicit activities. This paper provides the first estimation of the portion of cyber-criminal entities in the Bitcoin ecosystem. Our dataset consists of 854 observations categorised into 12 classes (of which 5 are cybercrime-related) and a total of 100,000 uncategorised observations. The dataset was obtained from the data provider, who applied three types of clustering of Bitcoin transactions to categorise entities: co-spend, intelligence-based, and behaviour-based. Thirteen supervised learning classifiers were then tested, of which four prevailed with cross-validation accuracies of 77.38%, 76.47%, 78.46%, and 80.76%, respectively. From the top four classifiers, the Bagging and Gradient Boosting classifiers were selected based on their weighted average and per-class precision on the cybercrime-related categories. Both models were used to classify the 100,000 uncategorised entities, showing that the share of cybercrime-related entities is 29.81% according to Bagging and 10.95% according to Gradient Boosting, with number of entities as the metric. With regard to the number of addresses and current coins held by these entities, the results are 5.79% and 10.02% according to Bagging, and 3.16% and 1.45% according to Gradient Boosting.
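The model-selection step described above can be illustrated with a short scikit-learn sketch on synthetic data: Bagging and Gradient Boosting classifiers scored with cross-validation. Features, class structure, and scores here are placeholders, not the paper's dataset or results.

```python
# Illustrative cross-validated comparison of Bagging and Gradient Boosting.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=854, n_features=20, n_informative=10,
                           n_classes=12, n_clusters_per_class=1, random_state=0)

for name, clf in [("Bagging", BaggingClassifier(n_estimators=100, random_state=0)),
                  ("GradientBoosting", GradientBoostingClassifier(random_state=0))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```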
Security at the virtualization level has always been a major issue in cloud computing environments. The large number of virtual machines hosted on a single server for various customers/clients may face serious security threats due to internal/external network attacks. In this work, we have examined and evaluated these threats and their impact on an OpenStack private cloud. We have also discussed the well-known DoS (Denial-of-Service) attack on the DHCP server on this private cloud platform and evaluated the vulnerabilities in the OpenStack networking component, Neutron, due to which this attack can be performed through a rogue DHCP server. Finally, a solution, a game-theory-based cloud architecture that helps to detect and prevent DoS attacks in OpenStack, has been proposed.
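Separate from the paper's game-theoretic architecture, the rogue-DHCP-server scenario it discusses can be monitored in a straightforward way: sniff DHCP OFFERs on a tenant network (requires root) and alert when the offering server is not on an allow-list. The allow-list address and the monitoring placement are assumptions.

```python
# Schematic rogue DHCP server detection with scapy (illustrative).
from scapy.all import sniff, DHCP, IP

AUTHORIZED_DHCP_SERVERS = {"10.0.0.2"}   # e.g., the Neutron DHCP port IP (assumed)

def get_option(options, name):
    """Return the value of a named DHCP option, skipping 'end'/'pad' markers."""
    for opt in options:
        if isinstance(opt, tuple) and opt[0] == name:
            return opt[1]
    return None

def inspect(pkt):
    if pkt.haslayer(DHCP) and pkt.haslayer(IP):
        if get_option(pkt[DHCP].options, "message-type") == 2:   # DHCPOFFER
            server = pkt[IP].src
            if server not in AUTHORIZED_DHCP_SERVERS:
                print(f"ALERT: DHCP offer from unauthorized server {server}")

if __name__ == "__main__":
    sniff(filter="udp and (port 67 or port 68)", prn=inspect, store=False)
```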
In the age of Big Data, we are witnessing a huge proliferation of digital data capturing our lives and our surroundings. Data privacy is a critical barrier to data analytics, and privacy-preserving data disclosure becomes a key aspect of leveraging large-scale data analytics due to serious privacy risks. Traditional privacy-preserving data publishing solutions have focused on protecting individuals' private information while considering all aggregate information about individuals as safe for disclosure. This paper presents a new privacy-aware data disclosure scheme that considers group privacy requirements of individuals in bipartite association graph datasets (e.g., graphs that represent associations between entities such as customers and products bought from a pharmacy store), where even aggregate information about groups of individuals may be sensitive and need protection. We propose the notion of $ε_g$-Group Differential Privacy, which protects sensitive information of groups of individuals at various defined group protection levels, enabling data users to obtain the level of information entitled to them. Based on this notion of group privacy, we develop a suite of differentially private mechanisms that protect group privacy in bipartite association graphs at different group privacy levels based on specialization hierarchies. We evaluate our proposed techniques through extensive experiments on three real-world association graph datasets, and our results demonstrate that the proposed techniques are effective, efficient, and provide the required guarantees on group privacy.
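A minimal sketch of the building block behind such mechanisms follows: releasing a group-level aggregate (here, the number of associations for each product group in a bipartite customer-product graph) with Laplace noise calibrated to a group privacy budget. The grouping, sensitivity, and budget values are illustrative; the paper's mechanisms over specialization hierarchies are not reproduced.

```python
# Laplace-noised group aggregates over a bipartite association graph (illustrative).
import numpy as np

rng = np.random.default_rng(0)

def noisy_group_counts(group_counts: dict, epsilon_g: float, sensitivity: float = 1.0):
    """Add Laplace(sensitivity / epsilon_g) noise to each group's aggregate count."""
    scale = sensitivity / epsilon_g
    return {g: c + rng.laplace(0.0, scale) for g, c in group_counts.items()}

if __name__ == "__main__":
    # edges: (customer, product-group) associations from a bipartite graph
    edges = [("c1", "pain-relief"), ("c2", "pain-relief"), ("c3", "antibiotics"),
             ("c4", "antibiotics"), ("c5", "antibiotics"), ("c6", "vitamins")]
    counts = {}
    for _, group in edges:
        counts[group] = counts.get(group, 0) + 1
    print(noisy_group_counts(counts, epsilon_g=0.5))
```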
Location privacy has become a significant challenge of big data. In particular, given the availability of big data handling tools, huge amounts of location data can easily be managed and processed by an adversary to obtain private user information from Location-Based Services (LBS). So far, many methods have been proposed to preserve user location privacy for these services. Among them, dummy-based methods have various advantages in terms of implementation and low computation costs. However, they suffer from the spatiotemporal correlation issue when users submit consecutive requests. To solve this problem, a practical hybrid location privacy protection scheme is presented in this paper. The proposed method filters out correlated fake location data (dummies) before submission. Therefore, the adversary cannot identify the user's real location. Evaluations and experiments show that our proposed filtering technique significantly improves the performance of existing dummy-based methods and enables them to effectively protect the user's location privacy in the big data environment.
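One way to picture the filtering step, sketched below under simplifying assumptions, is to discard candidate dummies that imply an implausible travel speed from the dummies submitted in the previous request, so that consecutive requests stay spatiotemporally consistent. The speed bound and the flat-earth distance approximation are assumptions, not the paper's exact criterion.

```python
# Spatiotemporal filtering of dummy locations between consecutive requests (illustrative).
import math

MAX_SPEED_MPS = 30.0   # assumed upper bound on plausible movement speed

def distance_m(a, b):
    """Rough planar distance in metres between two (lat, lon) points."""
    dlat = (a[0] - b[0]) * 111_000
    dlon = (a[1] - b[1]) * 111_000 * math.cos(math.radians(a[0]))
    return math.hypot(dlat, dlon)

def filter_dummies(prev_dummies, candidates, dt_seconds):
    """Keep only candidates reachable from at least one previous dummy within dt."""
    kept = []
    for cand in candidates:
        if any(distance_m(p, cand) / dt_seconds <= MAX_SPEED_MPS for p in prev_dummies):
            kept.append(cand)
    return kept

if __name__ == "__main__":
    previous = [(48.8566, 2.3522), (48.8600, 2.3400)]
    proposed = [(48.8570, 2.3530), (48.9000, 2.5000)]   # the second is too far to be plausible
    print(filter_dummies(previous, proposed, dt_seconds=60))
```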
The increase in complexity and interconnectivity of Supervisory Control and Data Acquisition (SCADA) systems in recent years has exposed SCADA networks to numerous potential vulnerabilities. Several studies have shown that anomaly-based Intrusion Detection Systems (IDS) achieve improved performance in identifying unknown or zero-day attacks. In this paper, we propose a hybrid model for anomaly-based intrusion detection in SCADA networks using a machine learning approach. First, we present a robust hybrid model for anomaly-based intrusion detection in SCADA networks. Then, we present a feature selection model for anomaly-based intrusion detection in SCADA networks that removes redundant and irrelevant features. Irrelevant features in the dataset can affect modeling power and reduce predictive accuracy. These models were evaluated using an industrial control system dataset developed at the Distributed Analytics and Security Institute, Mississippi State University, Starkville, MS, USA. The experimental results show that our proposed model considerably reduces time and computational complexity and achieves improved accuracy and detection rate. The accuracy of our proposed model was measured as 99.5% for the specific-attack-labeled dataset.
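A generic sketch of the select-then-classify workflow described above is shown below on synthetic data; the paper's concrete selection criterion and hybrid model are not reproduced, and the feature scorer, feature count, and classifier are assumptions.

```python
# Feature selection followed by classification, as a generic IDS-style pipeline.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=5000, n_features=40, n_informative=12,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

selector = SelectKBest(mutual_info_classif, k=12).fit(X_tr, y_tr)   # drop weak features
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(selector.transform(X_tr), y_tr)

pred = clf.predict(selector.transform(X_te))
print(f"accuracy with {selector.k} selected features: {accuracy_score(y_te, pred):.3f}")
```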
Collaborative Filtering (CF) is a successful technique that has been implemented in recommender systems, and Privacy-Preserving Collaborative Filtering (PPCF) has attracted increasing concern in society. Current solutions mainly focus on cryptographic methods, obfuscation methods, perturbation methods, and differential privacy methods. However, these methods have shortcomings such as unnecessary computational cost, lower data quality, and difficulty in calibrating the magnitude of noise. This paper proposes a (k, p, l)-anonymity method that improves the existing k-anonymity method in PPCF. The method works as follows. First, it applies a Latent Factor Model (LFM) to reduce matrix sparsity. Then it improves the Maximum Distance to Average Vector (MDAV) microaggregation algorithm based on importance partitioning to increase homogeneity among the records in each group, which retains better data quality, and applies a (p, l)-diversity model, where p is the attacker's prior knowledge about users' ratings and l is the diversity among users in each group, to improve the level of privacy preservation. Theoretical and experimental analyses show that our approach ensures a higher level of privacy preservation with lower information loss.
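For reference, a sketch of plain MDAV microaggregation is given below (the paper improves on this with importance partitioning and (p, l)-diversity, which are not reproduced): records are grouped into clusters of at least k similar records and each group is replaced by its centroid. The data and parameters are illustrative.

```python
# Plain MDAV microaggregation on a small synthetic rating matrix (illustrative).
import numpy as np

def mdav(records: np.ndarray, k: int):
    remaining = list(range(len(records)))
    groups = []

    def nearest(idx, pool, count):
        d = np.linalg.norm(records[pool] - records[idx], axis=1)
        return [pool[i] for i in np.argsort(d)[:count]]

    while len(remaining) >= 3 * k:
        centroid = records[remaining].mean(axis=0)
        r = remaining[int(np.argmax(np.linalg.norm(records[remaining] - centroid, axis=1)))]
        s = remaining[int(np.argmax(np.linalg.norm(records[remaining] - records[r], axis=1)))]
        for seed in (r, s):
            group = nearest(seed, remaining, k)      # seed plus its k-1 nearest records
            groups.append(group)
            remaining = [i for i in remaining if i not in group]
    if remaining:
        groups.append(remaining)                     # leftover records form the final group

    anonymized = records.astype(float).copy()
    for group in groups:
        anonymized[group] = records[group].mean(axis=0)   # replace records by group centroid
    return anonymized, groups

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ratings = rng.integers(1, 6, size=(20, 5)).astype(float)   # 20 users x 5 items
    anon, groups = mdav(ratings, k=3)
    print(groups)
```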
In this paper, we address the problem of peer-grouping employees in an organization for identifying security risks. Our motivation for studying peer grouping is its importance for a clear understanding of user and entity behavior analytics (UEBA), which is the primary tool for identifying insider threats through detecting anomalies in network traffic. We show that using the Louvain method of community detection it is possible to automate peer group creation with feature-based weight assignments. Depending on the number of employees and their features, we show that it is also possible to give each group a meaningful description. We present three new algorithms: one that allows the addition of new employees to already generated peer groups, another that allows for incorporating user feedback, and lastly one that provides the user with recommended nodes to be reassigned. We use Niara's data to validate our claims. The novelty of our method lies in its robustness, simplicity, scalability, and ease of deployment in a production environment.
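The core step can be sketched briefly: connect employees by edges weighted with a feature-similarity score and run Louvain community detection (here networkx's implementation) to obtain the peer groups. The features, similarity rule, and sparsification cut-off below are illustrative, not Niara's data or the paper's weighting.

```python
# Feature-weighted peer grouping via Louvain community detection (illustrative).
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
employees = [f"user{i}" for i in range(12)]
features = {u: rng.normal(size=5) for u in employees}        # e.g., app-usage profile

g = nx.Graph()
for i, u in enumerate(employees):
    for v in employees[i + 1:]:
        similarity = 1.0 / (1.0 + np.linalg.norm(features[u] - features[v]))
        if similarity > 0.2:                                  # assumed sparsification cut-off
            g.add_edge(u, v, weight=similarity)

peer_groups = nx.community.louvain_communities(g, weight="weight", seed=0)
for idx, group in enumerate(peer_groups):
    print(f"peer group {idx}: {sorted(group)}")
```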