Biblio

Filters: Author is Chen, Shiping
Musleh, Ahmed S., Chen, Guo, Dong, Zhao Yang, Wang, Chen, Chen, Shiping.  2020.  Statistical Techniques-Based Characterization of FDIA in Smart Grids Considering Grid Contingencies. 2020 International Conference on Smart Grids and Energy Systems (SGES). :83–88.
False data injection attack (FDIA) is a real threat to smart grids due to its wide range of vulnerabilities and impacts. Designing a proper detection scheme for FDIA is the first critical step in defending against the attack in smart grids. In this paper, we investigate two main statistical-techniques-based approaches in this regard. The first is based on principal component analysis (PCA), and the second is based on canonical correlation analysis (CCA). The test cases illustrate a better characterization performance of FDIA using CCA compared to PCA. Further, CCA provides a better differentiation of FDIA from normal grid contingencies. On the other hand, PCA provides a significantly reduced false alarm rate.
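For readers unfamiliar with the two techniques named in the abstract, the sketch below shows, on synthetic data, the general idea of a PCA reconstruction-residual score and a CCA canonical-correlation score for flagging injected measurements. It is a minimal illustration under assumptions of our own (the sensor model, the random injection, and the scoring functions are invented for demonstration) and does not reproduce the paper's formulation, thresholds, or grid contingency cases.

# Minimal sketch of PCA- and CCA-based anomaly scoring on synthetic
# measurement data. Illustrative only; not the paper's method.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)

# Synthetic "normal" measurements: 500 samples of 20 correlated sensors.
latent = rng.normal(size=(500, 4))
mixing = rng.normal(size=(4, 20))
normal = latent @ mixing + 0.05 * rng.normal(size=(500, 20))

# A test window with random values injected on a few sensors
# (a crude stand-in for an FDIA).
attacked = normal[:50].copy()
attacked[:, 3:6] += rng.normal(scale=1.5, size=(50, 3))

# PCA reconstruction-residual (Q-statistic style) score.
pca = PCA(n_components=4).fit(normal)
def pca_residual(x):
    recon = pca.inverse_transform(pca.transform(x))
    return np.mean((x - recon) ** 2, axis=1)

# CCA correlation score between two halves of the sensor set:
# injected data should weaken the learned canonical correlations.
cca = CCA(n_components=2).fit(normal[:, :10], normal[:, 10:])
def cca_corr(x):
    u, v = cca.transform(x[:, :10], x[:, 10:])
    return np.mean([np.corrcoef(u[:, k], v[:, k])[0, 1] for k in range(u.shape[1])])

print("PCA residual (normal vs attacked):",
      pca_residual(normal[:50]).mean(), pca_residual(attacked).mean())
print("CCA correlation (normal vs attacked):",
      cca_corr(normal[:50]), cca_corr(attacked))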
Yan, Hua, Sui, Yulei, Chen, Shiping, Xue, Jingling.  2018.  Spatio-temporal Context Reduction: A Pointer-analysis-based Static Approach for Detecting Use-after-free Vulnerabilities. Proceedings of the 40th International Conference on Software Engineering. :327–337.
Zero-day Use-After-Free (UAF) vulnerabilities are increasingly common and highly dangerous, but few mitigations exist. We introduce a new pointer-analysis-based static analysis, CRed, for finding UAF bugs in multi-MLOC C source code efficiently and effectively. CRed achieves this by making three advances: (i) a spatio-temporal context reduction technique for soundly and precisely scaling down the exponential number of contexts that would otherwise be considered at a pair of free and use sites, (ii) a multi-stage analysis for filtering out false alarms efficiently, and (iii) a path-sensitive demand-driven approach for finding the required points-to information. We have implemented CRed in LLVM-3.8.0 and compared it with four state-of-the-art static tools: CBMC (model checking), Clang (abstract interpretation), Coccinelle (pattern matching), and Supa (pointer analysis), using all the C test cases in the Juliet Test Suite (JTS) and 10 open-source C applications. For the ground truth validated with JTS, CRed detects all 138 known UAF bugs, as CBMC and Supa do, while Clang and Coccinelle miss some bugs, with no false alarms from any tool. For practicality validated with the 10 applications (totaling 3+ MLOC), CRed reports 132 warnings, including 85 bugs, in 7.6 hours, whereas the existing tools are either unscalable (CBMC terminates within 3 days for only one application), impractical by finding virtually no bugs (Clang and Coccinelle), or prone to issuing an excessive number of false alarms (Supa).
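As a rough, hypothetical illustration of the free-site/use-site pairing that static UAF detectors such as CRed reason about, the toy sketch below checks whether a use of a pointer variable is reachable from a free of the same variable in a small hand-written control-flow graph. This is not CRed's algorithm: it has no pointer analysis, no calling contexts, and no path feasibility checking, and every block and variable name is made up for the example.

# Toy flow-sensitive free/use reachability check over a hand-written CFG.
# Illustrative only; all names and the CFG shape are hypothetical.
from collections import deque

# Each block is a list of (op, var) statements; edges give the CFG.
blocks = {
    "entry": [("malloc", "p")],
    "branch_free": [("free", "p")],
    "branch_keep": [("use", "p")],
    "exit": [("use", "p")],          # reachable after the free via branch_free
}
edges = {
    "entry": ["branch_free", "branch_keep"],
    "branch_free": ["exit"],
    "branch_keep": ["exit"],
    "exit": [],
}

def find_uaf(blocks, edges):
    """Report (free_block, use_block, var) triples where a use of var is
    reachable from a free of var with no intervening re-assignment."""
    reports = []
    for blk, stmts in blocks.items():
        for i, (op, var) in enumerate(stmts):
            if op != "free":
                continue
            worklist = deque([(blk, i + 1)])   # start right after the free
            seen = set()
            while worklist:
                cur, start = worklist.popleft()
                killed = False
                for op2, var2 in blocks[cur][start:]:
                    if var2 != var:
                        continue
                    if op2 == "use":
                        reports.append((blk, cur, var))
                    elif op2 in ("malloc", "assign"):  # pointer re-assigned: stop
                        killed = True
                        break
                if not killed:
                    for succ in edges[cur]:
                        if succ not in seen:
                            seen.add(succ)
                            worklist.append((succ, 0))
    return reports

print(find_uaf(blocks, edges))
# Flags the use in "exit" reached via "branch_free"; the use in
# "branch_keep" is not reachable from the free and is not reported.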