Bibliography
This paper presents a novel game-theoretic attack-defence decision-making framework for cyber-physical system (CPS) security. Game theory is a powerful tool for analysing the interaction between the attacker and the defender in such scenarios. In game formulations, participants are usually assumed to be rational: they always choose the action that maximises their payoff given their knowledge of the strategic situation they are in. In reality, however, rationality is bounded by intelligence, computational resources and the amount of available information. This paper incorporates the concept of bounded rationality into the decision-making process in order to optimise the defender's strategy when the defender and the attacker have incomplete information about each other and limited computational capacity. Under the proposed framework, the defender can often benefit from deviating from the minimax Nash equilibrium strategy, the theoretically expected outcome of rational play. Numerical results are presented and discussed to demonstrate the proposed technique.
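As an aside, a minimal sketch can make the bounded-rationality effect concrete. The snippet below is not the paper's model: it uses a hypothetical 2x2 zero-sum attack-defence matrix and a softmax ("quantal response") attacker whose rationality parameter `lam` bounds how sharply it best-responds. A grid search then shows the defender's best mixed strategy against such an attacker deviating from, and slightly outperforming, the minimax mix. All payoffs and parameter values are invented for illustration.

```python
import numpy as np

# Defender's loss matrix (= attacker's gain): rows are defence actions,
# columns are attack actions. Values are hypothetical.
L = np.array([[5.0, 1.0],
              [2.0, 3.0]])

def attacker_quantal_response(p_def, lam):
    """Boundedly rational attacker: softmax over expected gains.

    lam -> infinity recovers a fully rational best response;
    lam = 0 is uniformly random play.
    """
    gains = p_def @ L                        # expected gain of each attack vs the defence mix
    w = np.exp(lam * (gains - gains.max()))  # numerically stable softmax weights
    return w / w.sum()

def defender_loss(p_def, lam):
    """Expected damage when the attacker quantal-responds to p_def."""
    q = attacker_quantal_response(p_def, lam)
    return float(p_def @ L @ q)

# Minimax (Nash) mix of this 2x2 zero-sum game in closed form:
# the mix that equalises the defender's loss across both attacks.
p_star = (L[1, 1] - L[1, 0]) / (L[0, 0] - L[0, 1] - L[1, 0] + L[1, 1])
nash = np.array([p_star, 1.0 - p_star])

# Against a boundedly rational attacker (lam = 0.5 here), grid-search
# the defender's mix: the optimum deviates from the minimax strategy.
lam = 0.5
grid = np.linspace(0.0, 1.0, 1001)
losses = [defender_loss(np.array([p, 1.0 - p]), lam) for p in grid]
best = grid[int(np.argmin(losses))]

print(f"minimax mix p*={p_star:.2f}, loss {defender_loss(nash, lam):.3f}")
print(f"best mix vs bounded attacker p={best:.3f}, loss {min(losses):.3f}")
```

As `lam` grows, the grid-search optimum converges back to the minimax mix, consistent with minimax being optimal against a fully rational opponent.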
As chips become increasingly connected, they are more exposed to both network and physical attacks, so they must be given a sufficient level of protection. Security within chips is accordingly becoming a hot topic, and incident detection and reporting is one novel function expected from them. In this talk, we explain why it is worthwhile to resort to Artificial Intelligence (AI) for security event handling. The drivers are the need to aggregate multiple heterogeneous security sensors and the need to digest this information quickly into exploitable intelligence, all while maintaining a low false-positive detection rate. Key enablers are adequate learning procedures and fast, secure classification accelerated by hardware. A challenge is to embed such security-oriented AI logic without compromising the chip's power budget and silicon area. This talk surveys the opportunities opened up by the symbiotic encounter between chip security and AI.
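To make the kind of on-chip logic involved tangible, here is a deliberately tiny sketch, not from the talk: readings from several hypothetical on-chip detectors are aggregated by an integer-only weighted vote, the sort of arithmetic cheap enough to fit a tight power and area budget. All sensor names, weights and the alarm threshold are invented.

```python
from typing import Dict

# Hypothetical sensors with hypothetical integer weights.
SENSOR_WEIGHTS: Dict[str, int] = {
    "voltage_glitch": 5,   # supply-voltage anomaly detector
    "clock_glitch": 4,     # clock-frequency anomaly detector
    "laser_flag": 6,       # light/laser pulse detector
    "bus_parity_err": 3,   # parity errors on an internal bus
}

ALARM_THRESHOLD = 8  # hypothetical; trades detection rate vs. false positives

def security_score(readings: Dict[str, int]) -> int:
    """Weighted integer sum of sensor readings: cheap enough for hardware."""
    return sum(SENSOR_WEIGHTS[name] * readings.get(name, 0)
               for name in SENSOR_WEIGHTS)

def classify(readings: Dict[str, int]) -> bool:
    """True = report a security incident."""
    return security_score(readings) >= ALARM_THRESHOLD

# One noisy sensor alone stays below threshold, but corroborating
# sensors trip the alarm, keeping the false-positive rate low.
print(classify({"bus_parity_err": 1}))                     # False
print(classify({"voltage_glitch": 1, "clock_glitch": 1}))  # True
```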
Automated server parameter tuning is crucial to the performance and availability of Internet applications hosted in cloud environments. It is challenging due to the high dynamics and burstiness of workloads, multi-tier service architectures, and virtualized server infrastructures. In this paper, we investigate automated and agile server parameter tuning for maximizing the effective throughput of multi-tier Internet applications. A recent study proposed a reinforcement learning based server parameter tuning approach for minimizing the average response time of multi-tier applications. Reinforcement learning is a trial-and-error decision-making process that determines only the direction of parameter tuning, not the quantitative values needed for agile tuning; it relies on a predefined adjustment step for each tuning action, and it is nontrivial, or even infeasible, to find an optimal step size under highly dynamic and bursty workloads. We design a neural fuzzy control based approach that combines the strengths of the fast online learning and self-adaptiveness of neural networks with those of fuzzy control. Because the approach is model-independent, it is robust to highly dynamic and bursty workloads, and its quantitative control outputs make it agile in server parameter tuning. We implemented the new approach on a testbed of a virtualized data center hosting the RUBiS and WikiBench benchmark applications. Experimental results demonstrate that the new approach significantly outperforms the reinforcement learning based approach in both improving effective system throughput and minimizing average response time.
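For intuition about why quantitative control outputs matter, the sketch below implements only the fuzzy-inference half of such a controller; the paper's approach additionally adapts the rule parameters online with a neural network, which is omitted here. A normalised throughput error is mapped through hypothetical triangular membership functions to a thread-pool-size adjustment whose magnitude scales with the error, unlike a fixed-step trial-and-error tuner. The membership ranges, rule consequents and the tuned parameter are all invented.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_adjustment(error):
    """Map a normalised throughput error in [-1, 1] to a pool-size delta.

    error > 0: throughput below target (grow the pool);
    error < 0: above target / overloaded (shrink the pool).
    """
    # Rule base: (firing strength over the error, crisp consequent in threads).
    rules = [
        (tri(error, -1.5, -1.0, -0.3), -32),  # large negative -> big shrink
        (tri(error, -1.0, -0.4,  0.0),  -8),  # small negative -> small shrink
        (tri(error, -0.3,  0.0,  0.3),   0),  # near zero      -> hold
        (tri(error,  0.0,  0.4,  1.0),   8),  # small positive -> small grow
        (tri(error,  0.3,  1.0,  1.5),  32),  # large positive -> big grow
    ]
    # Weighted average of consequents (Takagi-Sugeno-style defuzzification).
    num = sum(w * delta for w, delta in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

# Unlike a fixed-step tuner, the output scales with the error:
print(fuzzy_adjustment(0.1))   # small correction
print(fuzzy_adjustment(0.9))   # near-maximal correction
```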