Bibliography
The security of energy data collection is the foundation of a reliable and secure smart grid. The newest approach to securing data-collection communication is Zero Trust communication. The Zero Trust strategy is to trust no device, whether outside or inside the network: only when a device authenticates successfully and its software and hardware are verified as secure does the intelligent energy power system allow it to enroll in the network; otherwise the device is denied. Even after a device has begun communicating with the energy system, Zero Trust continues to check its security posture and vulnerabilities; if any security or vulnerability issue is found, the device is removed from the network. This keeps the energy power system secure and lays a foundation for the security analysis of intelligent power units.
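To make the admission-and-revocation policy concrete, below is a minimal Python sketch of a Zero Trust gateway that enrolls a device only after successful authentication and a clean security check, and evicts any enrolled device that later fails re-verification. The class and attribute names (Device, ZeroTrustGateway, vulnerabilities) are hypothetical illustrations, not part of any specific energy-system implementation.

```python
# Minimal sketch of a Zero Trust admission and continuous-verification policy.
# All names here are hypothetical; this is an illustration of the strategy,
# not a real energy-system API.
from dataclasses import dataclass, field

@dataclass
class Device:
    device_id: str
    authenticated: bool = False
    vulnerabilities: list = field(default_factory=list)

class ZeroTrustGateway:
    def __init__(self):
        self.enrolled = {}  # devices currently admitted to the network

    def enroll(self, device: Device) -> bool:
        """Admit a device only if it authenticates and has no known issues."""
        if device.authenticated and not device.vulnerabilities:
            self.enrolled[device.device_id] = device
            return True
        return False  # deny by default: trust no device, inside or outside

    def periodic_check(self):
        """Re-verify every enrolled device; evict any that develop issues."""
        for dev_id, device in list(self.enrolled.items()):
            if not device.authenticated or device.vulnerabilities:
                del self.enrolled[dev_id]  # revoke network access immediately
```

The deny-by-default return in enroll and the eviction in periodic_check mirror the "trust nothing, verify continuously" strategy described above.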
Cyber Threat Intelligence (CTI) sharing facilitates a comprehensive understanding of adversary activity and enables enterprise networks to prioritize their cyber defense technologies. To that end, we introduce HogMap, a novel software-defined infrastructure that simplifies and incentivizes collaborative measurement and monitoring of cyber-threat activity. HogMap proposes to transform the cyber-threat monitoring landscape by integrating several novel SDN-enabled capabilities: (i) intelligent in-place filtering of malicious traffic, (ii) dynamic migration of interesting and extraordinary traffic and (iii) a software-defined marketplace where various parties can opportunistically subscribe to and publish cyber-threat intelligence services in a flexible manner. We present the architectural vision and summarize our preliminary experience in developing and operating an SDN-based HoneyGrid, which spans three enterprises and implements several of the enabling capabilities (e.g., traffic filtering, traffic forwarding and connection migration). We find that SDN technologies greatly simplify the design and deployment of such globally distributed and elastic HoneyGrids.
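As an illustration of the in-place filtering and traffic-migration capabilities described above, here is a hypothetical per-flow decision function in the style an SDN controller in a HoneyGrid-like deployment might apply. The Flow fields and the novelty_threshold parameter are assumptions made for the sketch, not HogMap's actual API.

```python
# Hypothetical sketch of a per-flow decision: drop known-bad traffic in place,
# migrate "interesting" flows to a deeper-inspection honeypot, forward the rest.
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    DROP = "drop"        # in-place filtering of malicious traffic
    MIGRATE = "migrate"  # redirect to a high-interaction honeypot
    FORWARD = "forward"  # pass through to the default sensor

@dataclass
class Flow:
    src_ip: str
    dst_port: int
    signature_hit: bool    # matched a known-malicious signature
    novelty_score: float   # 0.0 (routine) .. 1.0 (never seen before)

def classify_flow(flow: Flow, novelty_threshold: float = 0.8) -> Action:
    """Decide how the SDN data plane should handle a flow."""
    if flow.signature_hit:
        return Action.DROP
    if flow.novelty_score >= novelty_threshold:
        return Action.MIGRATE
    return Action.FORWARD
```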
Differential privacy has become the dominant standard in the research community for strong privacy protection. There has been a flood of research into query answering algorithms that meet this standard. Algorithms are becoming increasingly complex, and in particular, the performance of many emerging algorithms is data dependent, meaning the distribution of the noise added to query answers may change depending on the input data. Theoretical analysis typically only considers the worst case, making empirical study of average case performance increasingly important. In this paper we propose a set of evaluation principles which we argue are essential for sound evaluation. Based on these principles we propose DPBench, a novel evaluation framework for standardized evaluation of privacy algorithms. We then apply our benchmark to evaluate algorithms for answering 1- and 2-dimensional range queries. The result is a thorough empirical study of 15 published algorithms on a total of 27 datasets that offers new insights into algorithm behavior, in particular the influence of dataset scale and shape, and a more complete characterization of the state of the art. Our methodology is able to resolve inconsistencies in prior empirical studies and place algorithm performance in context through comparison to simple baselines. Finally, we pose open research questions which we hope will guide future algorithm design.
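As a concrete example of the kind of simple-baseline comparison the abstract advocates, the sketch below answers 1-dimensional range queries with the standard Laplace mechanism over a histogram (per-bin sensitivity 1, so Laplace noise with scale 1/ε gives ε-differential privacy) and measures mean absolute error. The synthetic dataset, ε value, and error metric are illustrative assumptions, not DPBench's exact evaluation protocol.

```python
# Laplace (identity) baseline for 1-D range queries over a histogram,
# with a simple average-error measurement on random queries.
import numpy as np

def laplace_histogram(hist: np.ndarray, epsilon: float) -> np.ndarray:
    """Add Laplace(1/epsilon) noise to each bin (sensitivity 1 per bin)."""
    return hist + np.random.laplace(scale=1.0 / epsilon, size=hist.shape)

def range_query(hist: np.ndarray, lo: int, hi: int) -> float:
    """Sum of counts over the inclusive bin range [lo, hi]."""
    return float(hist[lo:hi + 1].sum())

def mean_abs_error(true_hist, noisy_hist, queries):
    errs = [abs(range_query(true_hist, lo, hi) - range_query(noisy_hist, lo, hi))
            for lo, hi in queries]
    return sum(errs) / len(errs)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_hist = rng.poisson(lam=50, size=256).astype(float)  # synthetic data
    noisy_hist = laplace_histogram(true_hist, epsilon=0.1)
    queries = [(int(a), int(a + w)) for a, w in
               zip(rng.integers(0, 200, 100), rng.integers(1, 50, 100))]
    print("mean |error| over 100 random range queries:",
          mean_abs_error(true_hist, noisy_hist, queries))
```

Data-dependent algorithms would replace laplace_histogram with their own noise-addition strategy, which is exactly the comparison such a benchmark is meant to standardize.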