
Proceedings of BSO Conference 2020: Fifth Conference of IBPSA-England


Integration and Evaluation of Deep Reinforcement Learning Controller in a Building Co-Simulation Environment

Ahmed Amrani, Rim Kaddah, Jean-Philippe Tavella, Mathieu Schumann

Abstract: Deep Reinforcement Learning (DRL) is a promising Artificial Intelligence (AI) approach for building heating control. However, DRL controllers require dynamic simulation involving heterogeneous physical models (building, power grid, etc.). Co-simulation allows heterogeneous components to interoperate when they are exported following the Functional Mock-up Interface (FMI) standard. Controllers based on DRL can be implemented in various languages but mostly rely on Python libraries such as TensorFlow. Their integration into co-simulation environments requires exporting them as Functional Mock-up Units (FMUs). Thus, language-specific FMI-compliant export tools are needed for every programming language, library, or platform used. This process is costly in effort and time and results in large FMUs. This paper proposes a novel method that simplifies the integration of AI-based controllers into a co-simulation environment regardless of language or platform, and applies this methodology to assess a DRL controller. For the first objective, we use existing FMI export tools to create an FMU having the same input and output parameters as the controller. In the proposed architecture, the FMU acts as a proxy whose role is to communicate with an external DRL controller deployed on a local or remote machine. We propose an application of this generic architecture for heating control in a house using the DACCOSIM NG co-simulation environment. Through this architecture, the deployed DRL-based controller is connected to a house energy model, which includes weather conditions and heating. We show that the proposed controller is capable of learning the system dynamics and keeps the temperature within 1 degree of the setpoint 93% of the time.
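The proxy pattern described in the abstract can be illustrated with a minimal sketch: the proxy FMU forwards each simulation step's measurements to an external process over a socket, and that process replies with the control action. This is an assumption-laden illustration, not the authors' implementation; the message format (JSON with `temperature`/`setpoint` fields), the port, and the placeholder `select_action` policy standing in for the trained DRL agent are all hypothetical.

```python
import json
import socket


def select_action(temperature, setpoint, deadband=0.5):
    """Placeholder policy standing in for the trained DRL agent:
    a simple rule that switches heating on below (setpoint - deadband)."""
    return 1.0 if temperature < setpoint - deadband else 0.0


def handle_request(payload: bytes) -> bytes:
    """Decode one message from the proxy FMU, compute the action,
    and encode the reply sent back to the co-simulation."""
    msg = json.loads(payload.decode("utf-8"))
    action = select_action(msg["temperature"], msg["setpoint"])
    return json.dumps({"heating_power": action}).encode("utf-8")


def serve(host="127.0.0.1", port=5000):
    """Answer one request per simulation step from the proxy FMU.
    The host/port here are illustrative, not from the paper."""
    with socket.create_server((host, port)) as srv:
        while True:
            conn, _ = srv.accept()
            with conn:
                data = conn.recv(4096)
                if data:
                    conn.sendall(handle_request(data))


if __name__ == "__main__":
    serve()
```

Because the controller runs as an ordinary external process, any Python DRL library can be used without a language-specific FMU export tool; only the lightweight proxy FMU needs to be FMI-compliant.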
Pages: 80 - 87