Bibliography
With the emergence of computationally intensive and delay-sensitive applications, mobile edge computing (MEC) has become increasingly popular. At the same time, the MEC paradigm faces security challenges, the most harmful of which is the DDoS attack. In this paper, we focus on resource orchestration in the MEC scenario to mitigate DDoS attacks. Most existing work on resource orchestration barely takes DDoS attacks into account. Moreover, it assumes that MEC nodes are unselfish, whereas in practice MEC nodes are selfish and try to maximize only their individual utility, as they usually belong to different network operators. To address these problems, we propose a price-based resource orchestration algorithm (PROA) using game theory and convex optimization, which aims to mitigate DDoS attacks while maximizing the utility of each participant. Pricing resources simulates a market mechanism, under which all participants can make rational decisions. Finally, we conduct experiments using MATLAB and show that the proposed PROA effectively mitigates DDoS attacks on the attacked MEC node.
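To make the market-mechanism idea concrete, here is a minimal Python sketch of price-based allocation via a tatonnement loop: each selfish node demands the amount that maximizes its own utility w_i*log(x_i) - p*x_i, and the seller adjusts the price p until total demand matches its spare capacity. This is a generic illustration under assumed log utilities, weights, and step size, not the actual PROA formulation from the paper.

# Sketch of price-based resource allocation via a market-clearing loop.
# Utilities, weights, and the step size are illustrative assumptions.
def allocate(weights, capacity, price=1.0, step=0.01, iters=2000):
    """Each buyer i maximizes w_i*log(x_i) - price*x_i, giving demand
    x_i = w_i / price; the seller adjusts the price until total demand
    matches its spare capacity."""
    for _ in range(iters):
        demand = [w / price for w in weights]
        excess = sum(demand) - capacity
        price = max(1e-6, price + step * excess)  # raise price if over-demanded
    return [w / price for w in weights], price

# Example: three selfish MEC nodes bidding for 10 units of an attacked
# node's offloaded workload (hypothetical numbers).
alloc, p = allocate(weights=[3.0, 2.0, 1.0], capacity=10.0)
print(alloc, p)

At equilibrium the price settles where aggregate demand equals capacity (here p = 0.6), so no participant can gain by unilaterally changing its demand.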
This paper studies secure computation offloading for multi-user, multi-server mobile edge computing (MEC)-enabled Internet of Things (IoT). A novel jamming signal scheme is designed to interfere with the decoding process at the eavesdropper (Eve) without impairing the uplink task offloading from users to access points (APs). Considering offloading latency and secrecy constraints, the paper jointly optimizes communication and computation resource allocation, as well as the partial offloading ratio, to maximize the total secrecy offloading data (TSOD) over the whole offloading process. The resulting problem is nonconvex, and we resort to the block coordinate descent (BCD) method to decompose it into three subproblems. An efficient iterative algorithm is proposed to obtain a locally optimal solution to the power allocation subproblem, while the optimal computation resource allocation and offloading ratio are derived in closed form. Simulation results demonstrate that the proposed algorithm converges quickly and achieves a higher TSOD than several heuristic baselines.
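As an illustration of the BCD pattern described in the abstract, the following self-contained Python sketch cycles exact per-block updates on a toy objective that is concave in each variable when the others are fixed, stopping when the objective stalls. The toy objective and its closed-form updates are invented for illustration; the paper's TSOD objective, constraints, and subproblem solvers are far more involved.

def objective(x, y, z):
    # Toy nonconvex surrogate: concave in each variable with the others fixed.
    return -((x - y * z) ** 2) - (y - 1.0) ** 2 - (z - 2.0) ** 2

def bcd(x=0.0, y=0.0, z=0.0, tol=1e-8, max_iters=200):
    """Cycle exact block updates; each step maximizes the objective in
    one block with the other two held fixed, so the objective is
    monotonically nondecreasing and converges to a stationary point."""
    prev = objective(x, y, z)
    for it in range(max_iters):
        x = y * z                                 # exact update for block 1
        y = (z * x + 1.0) / (z * z + 1.0)         # exact update for block 2
        z = (y * x + 2.0) / (y * y + 1.0)         # exact update for block 3
        cur = objective(x, y, z)
        if abs(cur - prev) < tol:                 # improvement has stalled
            return (x, y, z), cur, it + 1
        prev = cur
    return (x, y, z), prev, max_iters

print(bcd())

In the paper's setting, "block 1" would be the iterative power allocation (a locally optimal inner solver), while the other two blocks admit closed-form solutions, which is what makes the outer BCD loop cheap per iteration.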
This work seeks to advance the state of the art in HPC I/O performance analysis and interpretation. In particular, we demonstrate effective techniques to (1) model output performance in the presence of I/O interference from production loads, (2) build features from write patterns and key parameters of the system architecture and configuration, and (3) apply suitable machine learning algorithms to improve model accuracy. We train models with five popular regression algorithms and conduct experiments on two distinct production HPC platforms. We find that the lasso and random forest models predict output performance with high accuracy on both target systems. We also explore using the models to guide adaptation in I/O middleware, and show potential for improvements of at least 15% from model-guided adaptation on 70% of samples, and up to 10x on some samples, on both target systems.
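As a hedged illustration of the modeling step, the sketch below trains the two regressors the abstract highlights, lasso and random forest, using scikit-learn. The feature names and synthetic data are invented stand-ins for the paper's write-pattern and system-configuration features extracted from real platform logs.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical features: aggregate write size, process count, stripe
# count, and a proxy for concurrent I/O load on the shared file system.
X = rng.uniform(size=(n, 4))
# Synthetic "output bandwidth" target with noise, for illustration only.
y = 50 * X[:, 0] + 20 * X[:, 2] - 30 * X[:, 3] + rng.normal(scale=2.0, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = [("lasso", Lasso(alpha=0.01)),
          ("random forest", RandomForestRegressor(n_estimators=200, random_state=0))]
for name, model in models:
    model.fit(X_tr, y_tr)
    print(name, "R^2:", round(r2_score(y_te, model.predict(X_te)), 3))

A model like either of these could then back the adaptation step: before a write phase, the middleware queries the predicted bandwidth for candidate configurations and picks the one with the best prediction.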