Biblio

Filters: Author is Siami Namin, Akbar
2020-02-18
Zheng, Jianjun, Siami Namin, Akbar.  2019.  Enforcing Optimal Moving Target Defense Policies. 2019 IEEE 43rd Annual Computer Software and Applications Conference (COMPSAC). 1:753–759.
This paper introduces a control-theoretic approach to modeling, analyzing, and selecting optimal security policies for Moving Target Defense (MTD) deployment strategies. A Markov Decision Process (MDP) scheme is presented to model the states of the system from the attacker's point of view. A value iteration method based on the Bellman optimality equation is employed to select the optimal policy for each state defined in the system. The model is then used to analyze the impact of various costs on the optimal policy, and it is applied to two case studies to evaluate its performance.
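To make the machinery in this abstract concrete, the sketch below runs value iteration on a deliberately toy MTD-style MDP. The two states ("exposed", "shifted"), the transition probabilities, and the reward/cost numbers are all invented for illustration and are not taken from the paper; only the Bellman update itself, V(s) <- max_a [R(s,a) + gamma * sum_s' P(s'|s,a) V(s')], is the standard construction the abstract refers to.

```python
# Value-iteration sketch for a toy MTD-style MDP.
# All states, probabilities, and rewards below are invented for
# illustration; they are NOT taken from Zheng and Siami Namin's paper.

# states: "exposed" (attacker is probing the current configuration)
#         "shifted" (configuration was recently moved)
# actions: "stay" (keep configuration), "shift" (pay a cost to move)
P = {  # P[s][a] = list of (next_state, probability)
    "exposed": {"stay": [("exposed", 1.0)],
                "shift": [("shifted", 0.9), ("exposed", 0.1)]},
    "shifted": {"stay": [("shifted", 0.7), ("exposed", 0.3)],
                "shift": [("shifted", 0.9), ("exposed", 0.1)]},
}
R = {  # R[s][a] = expected immediate reward (negative values are costs)
    "exposed": {"stay": -10.0, "shift": -2.0},  # staying exposed is expensive
    "shifted": {"stay": 0.0, "shift": -2.0},    # shifting always costs something
}
GAMMA = 0.95  # discount factor
EPS = 1e-6    # convergence threshold

def q_value(V, s, a):
    """One-step lookahead: R(s,a) + gamma * sum_s' P(s'|s,a) * V(s')."""
    return R[s][a] + GAMMA * sum(p * V[s2] for s2, p in P[s][a])

def value_iteration():
    V = {s: 0.0 for s in P}
    while True:
        V_new = {s: max(q_value(V, s, a) for a in P[s]) for s in P}
        if max(abs(V_new[s] - V[s]) for s in P) < EPS:  # Bellman residual
            break
        V = V_new
    # Read off the greedy policy from the converged value function.
    policy = {s: max(P[s], key=lambda a: q_value(V, s, a)) for s in P}
    return V, policy

if __name__ == "__main__":
    V, policy = value_iteration()
    for s in P:
        print(f"{s}: V*={V[s]:.2f}, optimal action={policy[s]}")
```

Running this prints the converged value and greedy action per state; with these invented numbers the policy shifts out of the exposed state and stays put once shifted.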
2019-06-17
Zheng, Jianjun, Siami Namin, Akbar.  2018.  A Markov Decision Process to Determine Optimal Policies in Moving Target. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :2321–2323.
Moving Target Defense (MTD) has been introduced as a game-changing strategy in cybersecurity that strengthens defenders and conversely weakens adversaries. The successful implementation of an MTD system depends on several factors, including the effectiveness of the employed technique, the deployment strategy, the cost of the MTD implementation, and the impact of the enforced security policies. Considerable effort has been devoted to introducing various forms of MTD techniques. However, insufficient research has been conducted on cost and policy analysis and, more importantly, on the selection of these policies in an MTD-based setting. This poster paper proposes a Markov Decision Process (MDP) modeling-based approach to analyze security policies and to select optimal policies for moving target defense implementation and deployment. The adapted value iteration method solves the Bellman optimality equation to select the optimal policy for each state of the system. Simulation results indicate that such modeling can be used to analyze how the costs of possible actions affect the optimal policies.
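Since the poster's emphasis is on how action costs shape the selected policy, the following self-contained sketch sweeps the per-shift cost on the same invented two-state model as above (again, none of the numbers come from the paper) and reports the greedy action in the exposed state, showing the cost level at which value iteration stops recommending a move.

```python
# Cost-sensitivity sketch: how the optimal MTD policy flips as the
# shift cost grows. The model and all numbers are invented for
# illustration; they are NOT taken from the poster.
GAMMA = 0.95

def solve(shift_cost, sweeps=500):
    """Value iteration on the toy exposed/shifted model; returns the
    greedy action in the 'exposed' state for the given shift cost."""
    P = {"exposed": {"stay": [("exposed", 1.0)],
                     "shift": [("shifted", 0.9), ("exposed", 0.1)]},
         "shifted": {"stay": [("shifted", 0.7), ("exposed", 0.3)],
                     "shift": [("shifted", 0.9), ("exposed", 0.1)]}}
    R = {"exposed": {"stay": -10.0, "shift": -shift_cost},
         "shifted": {"stay": 0.0, "shift": -shift_cost}}

    def q(V, s, a):
        return R[s][a] + GAMMA * sum(p * V[t] for t, p in P[s][a])

    V = {s: 0.0 for s in P}
    for _ in range(sweeps):  # enough sweeps for practical convergence
        V = {s: max(q(V, s, a) for a in P[s]) for s in P}
    return max(P["exposed"], key=lambda a: q(V, "exposed", a))

for cost in (1, 5, 20, 60, 120, 200):
    print(f"shift cost {cost:>3}: exposed -> {solve(cost)}")
```

With these toy numbers the recommendation flips from "shift" to "stay" between costs 20 and 60, once the price of moving outweighs the discounted cost of remaining exposed, which is the kind of cost-versus-policy trade-off the poster's analysis targets.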