Biblio

Filters: Keyword is Q-learning algorithm
2022-10-20
Li, Jian, Rong, Fei, Tang, Yu.  2020.  A Novel Q-Learning Algorithm Based on the Stochastic Environment Path Planning Problem. 2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :1977–1982.
In this paper, we propose a path planning algorithm based on a Q-learning model that simulates the environment, making it suitable for complex environments. A virtual simulation platform was built to carry out the experiments. The experimental results show that the proposed algorithm can be effectively applied to solving vehicle routing problems in complex environments.
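The abstract above does not specify the authors' environment model or hyperparameters, so the following is only a generic tabular Q-learning sketch on a hypothetical 4x4 grid path-planning task (the grid size, reward shaping, and learning parameters are all assumptions for illustration):

```python
import random

# Hypothetical 4x4 grid: start at (0, 0), goal at (3, 3). This is a
# generic tabular Q-learning sketch, not the paper's algorithm; the
# step penalty and goal reward are invented for illustration.
SIZE = 4
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2
GOAL = (SIZE - 1, SIZE - 1)

Q = {((r, c), a): 0.0 for r in range(SIZE) for c in range(SIZE)
     for a in range(len(ACTIONS))}

def step(state, action):
    """Move in the grid, clipping at the borders; small step penalty."""
    dr, dc = ACTIONS[action]
    nxt = (min(max(state[0] + dr, 0), SIZE - 1),
           min(max(state[1] + dc, 0), SIZE - 1))
    return nxt, (1.0 if nxt == GOAL else -0.04)

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    for _ in range(episodes):
        s = (0, 0)
        while s != GOAL:
            # epsilon-greedy action selection
            a = (rng.randrange(len(ACTIONS)) if rng.random() < EPS
                 else max(range(len(ACTIONS)), key=lambda x: Q[(s, x)]))
            s2, reward = step(s, a)
            best_next = max(Q[(s2, x)] for x in range(len(ACTIONS)))
            # Q-learning update: Q(s,a) += alpha * (TD target - Q(s,a))
            Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])
            s = s2

train()
```

After training, following the greedy policy from the start state traces a short path to the goal.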
2021-11-30
Shateri, Mohammadhadi, Messina, Francisco, Piantanida, Pablo, Labeau, Fabrice.  2020.  Privacy-Cost Management in Smart Meters Using Deep Reinforcement Learning. 2020 IEEE PES Innovative Smart Grid Technologies Europe (ISGT-Europe). :929–933.
Smart meters (SMs) play a pivotal role in the smart grid by reporting the electricity usage of consumers to the utility provider (UP) in near real-time. However, this could leak sensitive information about the consumers to the UP or a third party. Recent works have leveraged the availability of energy storage devices, e.g., a rechargeable battery (RB), in order to provide privacy to the consumers at minimal additional energy cost. In this paper, a privacy-cost management unit (PCMU) is proposed based on a model-free deep reinforcement learning algorithm, called deep double Q-learning (DDQL). Empirical results evaluated on actual SM data are presented to compare DDQL with the state of the art, i.e., classical Q-learning (CQL). Additionally, the performance of the method is investigated for two concrete cases where attackers aim to infer the actual demand load and the occupancy status of dwellings. Finally, an abstract information-theoretic characterization is provided.
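The paper's DDQL is a deep (neural-network) variant, and its battery/privacy environment is not described in the abstract; the tabular double Q-learning update below is only a sketch of the core idea that distinguishes double Q-learning from classical Q-learning, namely decoupling action selection from action evaluation across two value tables:

```python
import random

# Tabular double Q-learning update (Hasselt-style), shown to contrast
# with classical Q-learning; all states, actions, and rewards passed in
# are placeholders, not the paper's smart-meter environment.
def double_q_update(QA, QB, s, a, r, s2, actions, alpha=0.1, gamma=0.95,
                    rng=random):
    """Randomly pick one of two Q-tables to update: one table selects
    the greedy next action, the other evaluates it, which reduces the
    maximization bias of classical Q-learning."""
    if rng.random() < 0.5:
        QA, QB = QB, QA  # swap roles with probability 1/2
    a_star = max(actions, key=lambda x: QA[(s2, x)])  # select with QA
    target = r + gamma * QB[(s2, a_star)]             # evaluate with QB
    QA[(s, a)] += alpha * (target - QA[(s, a)])
```

A single call with reward 1.0 and zero-initialized tables moves exactly one table's entry for (s, a) toward the target by a factor of alpha.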
2020-04-13
Phan, Trung V., Islam, Syed Tasnimul, Nguyen, Tri Gia, Bauschert, Thomas.  2019.  Q-DATA: Enhanced Traffic Flow Monitoring in Software-Defined Networks applying Q-learning. 2019 15th International Conference on Network and Service Management (CNSM). :1–9.
Software-Defined Networking (SDN) introduces centralized network control and management by separating the data plane from the control plane, which facilitates traffic flow monitoring, security analysis and policy formulation. However, it is challenging to choose a proper degree of traffic flow handling granularity while proactively protecting forwarding devices from getting overloaded. In this paper, we propose a novel traffic flow matching control framework called Q-DATA that applies reinforcement learning in order to enhance the traffic flow monitoring performance in SDN-based networks and prevent traffic forwarding performance degradation. We first describe and analyse an SDN-based traffic flow matching control system that applies a reinforcement learning approach based on the Q-learning algorithm in order to maximize the traffic flow granularity. It also considers the forwarding performance status of the SDN switches derived from a Support Vector Machine based algorithm. Next, we outline the Q-DATA framework that incorporates the optimal traffic flow matching policy derived from the traffic flow matching control system to efficiently provide the most detailed traffic flow information that other mechanisms require. Our novel approach is realized as a REST SDN application and evaluated in an SDN environment. Through comprehensive experiments, the results show that, compared to the default behavior of common SDN controllers and to our previous DATA mechanism, the new Q-DATA framework yields a remarkable improvement in protecting SDN switches from traffic forwarding performance degradation while still providing the most detailed traffic flow information on demand.
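The abstract describes the Q-DATA idea of trading flow-matching granularity against switch load, but not its state space or reward function. The toy sketch below is an invented contextual-bandit simplification of that trade-off: the load states, granularity actions, and reward numbers are all assumptions, chosen only so that finer matching pays off under low load and is penalized under high load:

```python
import random

# Toy sketch of the Q-DATA idea: pick a flow-matching granularity given
# a discretized switch-load state. States, actions, and rewards are
# hypothetical; the real framework's design is not in the abstract.
STATES = ["low_load", "high_load"]
ACTIONS = ["coarse", "medium", "fine"]
REWARD = {
    ("low_load", "coarse"): 0.2, ("low_load", "medium"): 0.6,
    ("low_load", "fine"): 1.0,
    ("high_load", "coarse"): 0.8, ("high_load", "medium"): 0.3,
    ("high_load", "fine"): -1.0,  # fine matching overloads the switch
}

def train(steps=5000, alpha=0.1, eps=0.1, seed=1):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(steps):
        s = rng.choice(STATES)  # load state observed from monitoring
        a = (rng.choice(ACTIONS) if rng.random() < eps
             else max(ACTIONS, key=lambda x: Q[(s, x)]))
        # One-step episodes (contextual bandit), so no discounted
        # next-state term in the update.
        Q[(s, a)] += alpha * (REWARD[(s, a)] - Q[(s, a)])
    return Q

Q = train()
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
```

Under these invented rewards the learned policy matches fine-grained flows when load is low and falls back to coarse matching when the switch is loaded.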