Bibliography
This paper proposes a new adaptively distributed packet filtering mechanism to mitigate DDoS attacks that target the victim's bandwidth. The mechanism employs IP traceback to distinguish attack traffic from legitimate traffic, and continuous action reinforcement learning automata, with an improved learning function, to compute effective filtering probabilities at filtering routers. The solution is evaluated through a number of experiments based on actual Internet data. The results show that the proposed solution achieves high throughput for surviving legitimate traffic owing to its fast convergence, and can preserve the victim's bandwidth even in the case of varying and intense attacks.
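The abstract does not spell out the improved learning function, so the following is only a minimal sketch of the general continuous action reinforcement learning automaton idea behind the mechanism: a router keeps a probability density over candidate filtering probabilities and reinforces the density around actions that yield good feedback. The class name, the Gaussian reinforcement rule, and the reward signal are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

class CARLA:
    """Minimal continuous action reinforcement learning automaton.

    Maintains a discretised probability density over the action range
    [0, 1] (here: a router's packet-filtering probability).  After each
    trial the density is reinforced around well-performing actions with
    a Gaussian bump and renormalised.
    """

    def __init__(self, n_points=101, spread=0.05, rate=0.2):
        self.actions = np.linspace(0.0, 1.0, n_points)   # candidate filtering probabilities
        self.density = np.ones(n_points) / n_points      # start from a uniform density
        self.spread = spread                              # width of the reinforcement bump
        self.rate = rate                                  # learning rate

    def choose(self, rng):
        """Sample a filtering probability from the current density."""
        p = self.density / self.density.sum()
        return rng.choice(self.actions, p=p)

    def update(self, action, reward):
        """Reinforce the density around `action` in proportion to `reward` in [0, 1]."""
        bump = np.exp(-0.5 * ((self.actions - action) / self.spread) ** 2)
        self.density = (1 - self.rate) * self.density + self.rate * reward * bump
        self.density /= self.density.sum()


# Toy loop: reward is higher when the chosen filtering probability is close to a
# (hypothetical) ideal value that drops attack traffic while sparing legitimate flows.
rng = np.random.default_rng(0)
automaton = CARLA()
ideal = 0.7
for _ in range(500):
    a = automaton.choose(rng)
    reward = max(0.0, 1.0 - abs(a - ideal))   # stand-in for measured legitimate throughput
    automaton.update(a, reward)
print("learned filtering probability near", automaton.actions[automaton.density.argmax()])
```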
The Extensible Markup Language (XML) is a complex language, and consequently, XML-based protocols are susceptible to entire classes of implicit and explicit security problems. Message formats in XML-based protocols are usually specified in XML Schema, and as a first line of defense, schema validation should reject malformed input. However, extension points in most protocol specifications break validation. Extension points are wildcards and considered best practice for loose composition, but they also enable an attacker to add unchecked content to a document, e.g., for a signature wrapping attack. This paper introduces datatyped XML visibly pushdown automata (dXVPAs) as a language representation for mixed-content XML and presents an incremental learner that infers a dXVPA from example documents. The learner generalizes XML types and datatypes in terms of automaton states and transitions, and an inferred dXVPA converges to a good-enough approximation of the true language. The automaton is free from extension points and capable of stream validation, e.g., as an anomaly detector for XML-based protocols. For dealing with adversarial training data, two poisoning scenarios are considered: a poisoning attack is either uncovered at a later time or remains hidden. Unlearning can therefore remove an identified poisoning attack from a dXVPA, and sanitization trims low-frequency states and transitions to remove hidden attacks. All algorithms have been evaluated in four scenarios, including a web service implemented with Apache Axis2 and Apache Rampart, where attacks were simulated. In all scenarios, the learned automaton had zero false positives and outperformed traditional schema validation.
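As a rough illustration of the stream-validation idea only (without the datatypes, the incremental learner, unlearning, or sanitization described in the paper), the sketch below checks a stream of start/end events against visibly-pushdown-style call and return transitions; the states, tag names, and transition tables are hypothetical.

```python
# Minimal sketch of visibly-pushdown-style stream validation: start tags
# (calls) push the current state, end tags (returns) must match the popped
# state, so element nesting is checked in a single streaming pass.
CALL = {                      # (state, start-tag) -> next state
    ("q0", "Envelope"): "q_env",
    ("q_env", "Body"): "q_body",
    ("q_body", "Order"): "q_order",
}
RETURN = {                    # (state, end-tag, popped state) -> next state
    ("q_order", "Order", "q_body"): "q_body",
    ("q_body", "Body", "q_env"): "q_env",
    ("q_env", "Envelope", "q0"): "q0",
}

def validate(events, start="q0", accept=("q0",)):
    """Validate a stream of ('start'|'end', tag) events against the tables."""
    state, stack = start, []
    for kind, tag in events:
        if kind == "start":
            if (state, tag) not in CALL:
                return False          # unexpected element, e.g. wrapped content
            stack.append(state)
            state = CALL[(state, tag)]
        else:  # 'end'
            if not stack:
                return False
            popped = stack.pop()
            if (state, tag, popped) not in RETURN:
                return False
            state = RETURN[(state, tag, popped)]
    return not stack and state in accept

doc = [("start", "Envelope"), ("start", "Body"),
       ("start", "Order"), ("end", "Order"),
       ("end", "Body"), ("end", "Envelope")]
print(validate(doc))          # True: well-nested, known structure
print(validate(doc[:-1]))     # False: unbalanced document
```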
Emerging computing relies heavily on secure back-end storage for the massive volumes of big data flowing from Internet of Things (IoT) smart devices to Cloud-hosted web applications. The Structured Query Language (SQL) Injection Attack (SQLIA) remains an intruder's exploit of choice for pilfering confidential data from back-end databases, with damaging ramifications. Existing approaches predate this emerging computing context of Internet-scale big data mining and therefore cannot cope with new attack signatures concealed in large volumes of web requests over time. Moreover, they are string-lookup approaches aimed at the on-premise application domain boundary, and do not apply to roaming Cloud-hosted services, from the edge Software-Defined Network (SDN) to application endpoints with large numbers of web request hits. A Machine Learning (ML) approach provides scalable big data mining for SQLIA detection and prevention. Unfortunately, the absence of a corpus for training a classifier is a well-known issue when applying Artificial Intelligence (AI) techniques in SQLIA research. This paper presents an application context pattern-driven corpus for training a supervised learning model. The model is trained with the ML algorithms Two-Class Support Vector Machine (TC SVM) and Two-Class Logistic Regression (TC LR), implemented in Microsoft Azure Machine Learning (MAML) studio, to mitigate SQLIA. The scheme presented here is then empirically evaluated using Receiver Operating Characteristic (ROC) curves.
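The paper trains its two classifiers in Microsoft Azure Machine Learning studio; as a rough offline analogue only, the sketch below trains a linear SVM and a logistic regression on labelled request strings with scikit-learn and compares them by ROC AUC. The toy corpus and the character n-gram features are placeholders, not the paper's application context pattern-driven corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Placeholder corpus of web-request payloads: 0 = benign, 1 = SQL injection.
requests = [
    "id=42&name=alice",
    "id=7&name=bob",
    "id=1 OR 1=1 --",
    "name=' UNION SELECT password FROM users --",
]
labels = [0, 0, 1, 1]

# Character n-grams tolerate the obfuscation tricks common in SQLIA payloads.
vec = TfidfVectorizer(analyzer="char", ngram_range=(2, 4))
X = vec.fit_transform(requests)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, labels, test_size=0.5, stratify=labels, random_state=0)

svm = LinearSVC().fit(X_tr, y_tr)          # two-class SVM
lr = LogisticRegression().fit(X_tr, y_tr)  # two-class logistic regression

print("SVM ROC AUC:", roc_auc_score(y_te, svm.decision_function(X_te)))
print("LR  ROC AUC:", roc_auc_score(y_te, lr.predict_proba(X_te)[:, 1]))
```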
This paper discusses two issues in multi-channel multi-radio Wireless Mesh Networks (WMNs): gateway placement and gateway selection. To address these issues, a method is proposed that places gateways at strategic locations to avoid congestion and adaptively learns, using learning automata, to select a more efficient gateway for each wireless router. This method, called N-queen Inspired Gateway Placement and Learning Automata-based Selection (NQ-GPLS), considers multiple metrics such as loss ratio, throughput, load at the gateways, and delay. Simulation results from the NS-2 simulator demonstrate that NQ-GPLS significantly improves overall network performance compared to a standard WMN.
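As a minimal sketch of the selection half only (not the N-queen placement, and not necessarily the update rule used in NQ-GPLS), the code below shows a linear reward-inaction learning automaton that a router could use to shift preference toward gateways with good observed delay or loss; the class, parameters, and feedback rule are illustrative assumptions.

```python
import random

class GatewaySelector:
    """Linear reward-inaction (L_R-I) learning automaton for one mesh router.

    The router keeps a probability vector over candidate gateways and shifts
    probability toward gateways that yield favourable observed performance.
    """

    def __init__(self, n_gateways, alpha=0.1):
        self.p = [1.0 / n_gateways] * n_gateways   # start with uniform preference
        self.alpha = alpha                          # reward step size

    def select(self):
        return random.choices(range(len(self.p)), weights=self.p)[0]

    def reward(self, chosen):
        """Favourable feedback (e.g. low delay and loss): reinforce `chosen`."""
        for i in range(len(self.p)):
            if i == chosen:
                self.p[i] += self.alpha * (1.0 - self.p[i])
            else:
                self.p[i] -= self.alpha * self.p[i]
        # On unfavourable feedback, L_R-I leaves the probabilities unchanged.


# Toy run: gateway 1 is (hypothetically) the least congested one.
random.seed(1)
router = GatewaySelector(n_gateways=3)
for _ in range(300):
    g = router.select()
    if g == 1:                 # stand-in for "delay/loss below threshold"
        router.reward(g)
print("selection probabilities:", [round(x, 3) for x in router.p])
```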
Game theory serves as a powerful tool for distributed optimization in multiagent systems across different applications. In this paper we consider multiagent systems that can be modeled as a potential game whose potential function coincides with a global objective function to be maximized. This approach renders the agents strategic decision makers and turns the corresponding optimization problem into the problem of learning an optimal equilibrium point in the designed game. In contrast to existing work on payoff-based learning, we deal here with systems in which agents have neither memory nor the ability to communicate, and base their decisions only on the currently played action and the experienced payoff. Because of these restrictions, we use the methods of reinforcement learning, stochastic approximation, and learning automata extensively reviewed and analyzed in [3], [9]. These methods allow us to set up agent dynamics that move the game out of inefficient Nash equilibria and lead it close to an optimal one for both discrete and continuous action sets.
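As a toy illustration only, assuming a simple common-interest game rather than the paper's setting, the sketch below shows one memoryless payoff-based rule in which each agent decides from nothing but its current action and the payoff it just experienced; the update rule and the game are hypothetical stand-ins, not the paper's dynamics.

```python
import random

# Toy common-interest game (a trivial potential game): the global objective,
# which doubles as every agent's payoff, is maximised at the joint action
# (1, 1), while (0, 0) is an inefficient Nash equilibrium.
def global_objective(actions):
    if actions == (1, 1):
        return 1.0
    if actions == (0, 0):
        return 0.5
    return 0.0

def payoff_based_step(actions, payoffs):
    """One round of a simple memoryless payoff-based rule: each agent repeats
    its current action with probability equal to the payoff it just received,
    and otherwise experiments with a uniformly random action."""
    new_actions = []
    for action, payoff in zip(actions, payoffs):
        if random.random() < payoff:
            new_actions.append(action)                  # satisfied: keep playing it
        else:
            new_actions.append(random.choice([0, 1]))   # dissatisfied: experiment
    return tuple(new_actions)

random.seed(0)
actions = (0, 0)                      # start in the inefficient equilibrium
for _ in range(1000):
    u = global_objective(actions)
    actions = payoff_based_step(actions, (u, u))
print("final joint action:", actions)  # typically escapes (0, 0) and settles at (1, 1)
```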