Biblio

Filters: Author is Wang, Xiaolin
2023-08-03
Chen, Wenlong, Wang, Xiaolin, Wang, Xiaoliang, Xu, Ke, Guo, Sushu.  2022.  LRVP: Lightweight Real-Time Verification of Intradomain Forwarding Paths. IEEE Systems Journal. 16:6309–6320.
The correctness of user traffic forwarding paths is an important goal of trusted transmission, and many network security issues, e.g., denial-of-service attacks and route hijacking, are related to it. Path-aware network architectures can effectively address this issue through path verification, but the main problems of existing path verification schemes are their high communication and computation overhead. To this end, this article proposes a lightweight real-time verification mechanism for intradomain forwarding paths in autonomous systems, achieving a path verification architecture with no communication overhead and low computation overhead. The problem addressed is that a packet reaches its destination, but its forwarding path is inconsistent with the expected path, i.e., the forwarding path determined by the interior gateway protocols. If the actual forwarding path differs from the expected one, it is regarded as an incorrect forwarding path. This article focuses on the most typical intradomain routing environment: a few routers are designated as verification routers to block traffic with incorrect forwarding paths and raise alerts. Experiments show that the proposed mechanism effectively solves the path verification problem while avoiding high communication and computation overhead.
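The abstract does not describe LRVP's packet format or marking scheme, so the following Python sketch only illustrates the general idea of intradomain path verification it builds on: a verification router derives the expected path from the IGP view of the topology (here, a plain Dijkstra shortest path) and blocks traffic whose actual forwarding path deviates from it. All function and variable names are illustrative assumptions, not LRVP's actual mechanism.

    # Hypothetical sketch of path verification at a verification router.
    import heapq

    def expected_igp_path(topology, src, dst):
        """Dijkstra over an IGP-style weighted adjacency map
        {router: {neighbor: cost}}; returns the expected router-level path."""
        dist, prev, visited = {src: 0}, {}, set()
        heap = [(0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if u in visited:
                continue
            visited.add(u)
            if u == dst:
                break
            for v, w in topology.get(u, {}).items():
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        if dst not in dist:
            return None
        path, node = [dst], dst
        while node != src:
            node = prev[node]
            path.append(node)
        return path[::-1]

    def verify_forwarding_path(topology, src, dst, traversed_path):
        """Block the traffic and raise an alert when the actual forwarding
        path deviates from the IGP-expected path."""
        expected = expected_igp_path(topology, src, dst)
        if traversed_path != expected:
            print(f"ALERT: {traversed_path} deviates from expected {expected}")
            return False   # block the flow
        return True        # forward normally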
2019-01-16
Pan, Cheng, Hu, Xiameng, Zhou, Lan, Luo, Yingwei, Wang, Xiaolin, Wang, Zhenlin.  2018.  PACE: Penalty Aware Cache Modeling with Enhanced AET. Proceedings of the 9th Asia-Pacific Workshop on Systems. :19:1–19:8.
Past cache modeling techniques are typically limited to a cache system with a fixed cache line/block size. This limitation is not a problem for a hardware cache, where the cache line size is uniform, but modern in-memory software caches, such as Memcached and Redis, cache variable-size data objects. A software cache also supports update and delete operations, in addition to the reads and writes of a hardware cache. Moreover, existing cache models often assume that the penalty for each cache miss is identical, which is not true, especially for software caches serving web services, so past cache management policies that aim only to improve cache hit rate are no longer sufficient. We propose a more general cache model that can handle varied cache block sizes, nonuniform miss penalties, and diverse cache operations. In this paper, we first extend a state-of-the-art cache model to accurately predict cache miss ratios for variable cache sizes when object sizes, updates, and deletions are considered. We then apply this model to drive cache management when the miss penalty is taken into consideration. Our approach delivers better results than a recent penalty-aware cache management scheme, Hyperbolic Caching, especially when the cache budget is tight. Another advantage of our approach is that it provides predictable and controllable cache space allocation, especially when multiple applications share the cache.
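PACE extends the AET (Average Eviction Time) model. As background only, the following Python sketch shows the classic unit-block-size AET approach that the paper builds on: build a reuse-time histogram from an access trace, derive P(t), the probability that a reference's reuse time exceeds t, and approximate the LRU miss ratio at cache size c as P(AET(c)), where AET(c) is the point at which the accumulated sum of P reaches c. PACE's extensions for variable object sizes, updates/deletions, and miss penalties are not reproduced here; names are illustrative.

    # Minimal sketch of the classic AET model for a unit-block-size LRU cache.
    from collections import Counter

    def reuse_time_histogram(trace):
        """Backward reuse-time histogram of an access trace (list of keys).
        Returns (histogram, number of cold accesses, total accesses)."""
        last_seen, hist, cold = {}, Counter(), 0
        for now, key in enumerate(trace):
            if key in last_seen:
                hist[now - last_seen[key]] += 1
            else:
                cold += 1
            last_seen[key] = now
        return hist, cold, len(trace)

    def aet_miss_ratio(hist, cold, n, cache_size):
        """Approximate LRU miss ratio at `cache_size` blocks:
        P(t) = Pr(reuse time > t); AET(c) is the smallest T with
        sum_{t=1..T} P(t) >= c; miss ratio ~= P(AET(c))."""
        greater = sum(hist.values()) + cold   # reuse times > 0 (cold = infinite)
        p, integral, t = greater / n, 0.0, 0
        while integral < cache_size and greater > cold:
            t += 1
            greater -= hist.get(t, 0)         # references with reuse time > t
            p = greater / n                   # P(t)
            integral += p                     # discretized integral of P(t)
        return p

    # Example: a toy trace and a cache of two blocks.
    trace = ["a", "b", "a", "c", "b", "a", "d", "c"]
    hist, cold, n = reuse_time_histogram(trace)
    print(aet_miss_ratio(hist, cold, n, cache_size=2))

The estimate is asymptotic and coarse on a toy trace; its value is that the whole miss ratio curve can be produced from one pass over the trace, which is what makes penalty-aware extensions like PACE practical online.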