
Daily Papers

by AK and the research community

Apr 28

Go Beyond Black-box Policies: Rethinking the Design of Learning Agent for Interpretable and Verifiable HVAC Control

Recent research has shown the potential of Model-based Reinforcement Learning (MBRL) to enhance energy efficiency of Heating, Ventilation, and Air Conditioning (HVAC) systems. However, existing methods rely on black-box thermal dynamics models and stochastic optimizers, lacking reliability guarantees and posing risks to occupant health. In this work, we overcome the reliability bottleneck by redesigning HVAC controllers using decision trees extracted from existing thermal dynamics models and historical data. Our decision tree-based policies are deterministic, verifiable, interpretable, and more energy-efficient than current MBRL methods. First, we introduce a novel verification criterion for RL agents in HVAC control based on domain knowledge. Second, we develop a policy extraction procedure that produces a verifiable decision tree policy. We found that the high dimensionality of the thermal dynamics model input hinders the efficiency of policy extraction. To tackle the dimensionality challenge, we leverage importance sampling conditioned on historical data distributions, significantly improving policy extraction efficiency. Lastly, we present an offline verification algorithm that guarantees the reliability of a control policy. Extensive experiments show that our method saves 68.4% more energy and increases human comfort gain by 14.8% compared to the state-of-the-art method, in addition to a 1127x reduction in computation overhead. Our code and data are available at https://github.com/ryeii/Veri_HVAC
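The extraction-and-verification pipeline the abstract describes can be sketched in miniature. Everything below is illustrative, not the paper's implementation: the teacher policy, the Gaussian fit to historical temperatures, the 24 degC setpoint, and the 0.5 degC tolerance are all assumptions. The sketch samples states from a historical distribution (the importance-sampling idea), distils a teacher policy into a one-split decision stump, and then verifies the stump offline against a domain-knowledge criterion.

```python
import random

SETPOINT = 24.0  # degC comfort bound -- illustrative assumption

def teacher_policy(temp):
    """Hypothetical stand-in for the black-box MBRL controller."""
    return "cool" if temp > SETPOINT else "hold"

def sample_states(n, seed=0):
    """Sample zone temperatures from a distribution fitted to historical
    data (mean/std assumed) rather than sweeping the full state space."""
    rng = random.Random(seed)
    return [rng.gauss(23.0, 2.0) for _ in range(n)]

def extract_stump(states):
    """Distil the teacher into a one-split decision stump: scan candidate
    thresholds and keep the one that best matches the teacher's labels."""
    labels = [teacher_policy(s) for s in states]
    best_thr, best_acc = None, -1.0
    for thr in sorted(set(states)):
        preds = ["cool" if s > thr else "hold" for s in states]
        acc = sum(p == l for p, l in zip(preds, labels)) / len(states)
        if acc > best_acc:
            best_thr, best_acc = thr, acc
    return best_thr

def verify_offline(thr, lo=10.0, hi=35.0, step=0.1):
    """Offline check of a domain-knowledge criterion: the extracted policy
    must command cooling whenever temperature exceeds the comfort bound
    (0.5 degC tolerance, assumed)."""
    t = lo
    while t <= hi:
        action = "cool" if t > thr else "hold"
        if t > SETPOINT + 0.5 and action != "cool":
            return False
        t += step
    return True

states = sample_states(500)
thr = extract_stump(states)
```

Because the policy is a fixed tree (here a single split), the verification loop can exhaustively check the safety criterion once, offline, instead of at every control step.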

  • 3 authors
·
Feb 28, 2024

A Safe and Data-efficient Model-based Reinforcement Learning System for HVAC Control

Model-Based Reinforcement Learning (MBRL) has been widely studied for Heating, Ventilation, and Air Conditioning (HVAC) control in buildings. One of the critical challenges is the large amount of data required to effectively train neural networks for modeling building dynamics. This paper presents CLUE, an MBRL system for HVAC control in buildings. CLUE optimizes HVAC operations using a Gaussian Process (GP) model of building dynamics with uncertainty awareness. CLUE utilizes GP to predict state transitions as Gaussian distributions, effectively capturing prediction uncertainty and enhancing decision-making under sparse data conditions. Our approach employs a meta-kernel learning technique to efficiently set GP kernel hyperparameters using domain knowledge from diverse buildings. This drastically reduces the data requirements typically associated with GP models in HVAC applications. Additionally, CLUE incorporates these uncertainty estimates into a Model Predictive Path Integral (MPPI) algorithm, enabling the selection of safe, energy-efficient control actions. This uncertainty-aware control strategy evaluates and selects action trajectories based on their predicted impact on energy consumption and human comfort, optimizing operations even under uncertain conditions. Extensive simulations in a five-zone office building demonstrate that CLUE reduces the required training data from hundreds of days to just seven while maintaining robust control performance. It reduces comfort violations by an average of 12.07% compared to existing MBRL methods, without compromising on energy efficiency.
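The uncertainty-aware MPPI step can be sketched as follows. This is a toy, not CLUE: the one-zone dynamics, the uncertainty proxy standing in for GP predictive variance, and all constants (drift, cooling gain, penalty weights) are assumptions. The core idea it demonstrates is MPPI's exponential weighting of sampled action sequences, with the rollout cost inflated by a predicted-uncertainty term so that high-uncertainty trajectories are discounted.

```python
import math, random

def rollout_cost(actions, temp0=26.0, setpoint=24.0):
    """Hypothetical one-zone rollout: each action in [0,1] is cooling power.
    Returns (energy + comfort cost, uncertainty proxy). The dynamics and the
    uncertainty model are illustrative stand-ins for a GP's mean/variance."""
    temp, cost, unc = temp0, 0.0, 0.0
    for a in actions:
        temp += 0.5 - 1.5 * a                     # assumed drift vs. cooling effect
        cost += 2.0 * a + abs(temp - setpoint)    # energy + comfort penalty
        unc += 0.2 * abs(temp - setpoint)         # variance grows off-manifold (assumed)
    return cost, unc

def mppi(horizon=5, samples=256, lam=1.0, beta=2.0, seed=0):
    """MPPI: sample action sequences, weight each by exp(-cost / lambda),
    where cost is inflated by beta * predicted uncertainty, and return the
    weighted average sequence as the control plan."""
    rng = random.Random(seed)
    trajs = [[rng.random() for _ in range(horizon)] for _ in range(samples)]
    costs = [c + beta * u for c, u in (rollout_cost(t) for t in trajs)]
    cmin = min(costs)                              # shift for numerical stability
    w = [math.exp(-(c - cmin) / lam) for c in costs]
    z = sum(w)
    return [sum(w[i] * trajs[i][t] for i in range(samples)) / z
            for t in range(horizon)]

plan = mppi()
```

In a receding-horizon loop only `plan[0]` would be executed before re-planning; the beta term is what makes the selection uncertainty-aware rather than purely cost-greedy.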

  • 4 authors
·
Nov 4, 2024

Physicochemical-Neural Fusion for Semi-Closed-Circuit Respiratory Autonomy in Extreme Environments

This paper introduces Galactic Bioware's Life Support System, a semi-closed-circuit breathing apparatus designed for integration into a positive-pressure firefighting suit and governed by an AI control system. The breathing loop incorporates a soda lime CO2 scrubber, a silica gel dehumidifier, and pure O2 replenishment with finite consumables. One-way exhaust valves maintain positive pressure while creating a semi-closed system in which outward venting gradually depletes the gas inventory. Part I develops the physicochemical foundations from first principles, including state-consistent thermochemistry, stoichiometric capacity limits, adsorption isotherms, and oxygen-management constraints arising from both fire safety and toxicity. Part II introduces an AI control architecture that fuses three sensor tiers: external environmental sensing, internal suit atmosphere sensing (with triple-redundant O2 cells and median voting), and firefighter biometrics. The controller combines receding-horizon model-predictive control (MPC) with a learned metabolic model and a reinforcement learning (RL) policy advisor, with all candidate actuator commands passing through a final control-barrier-function safety filter before reaching the hardware. This architecture is intended to optimize performance under unknown mission duration and exertion profiles. In this paper we introduce an 18-state, 3-control nonlinear state-space formulation using only sensors viable in structural firefighting, with triple-redundant O2 sensing and median voting. Finally, we introduce an MPC framework with a dynamic resource scarcity multiplier, an RL policy advisor for warm-starting, and a final control-barrier-function safety filter through which all actuator commands must pass, demonstrating 18-34% endurance improvement in simulation over PID baselines while maintaining tighter physiological and fire-safety margins.
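The final control-barrier-function safety filter can be illustrated with a scalar example. This is a minimal sketch under assumed values, not the paper's 18-state formulation: the linear O2 dynamics, the leak and valve-gain constants, and the 19.5% lower bound are all illustrative. It shows the discrete-time CBF condition h(x+) >= (1 - gamma) * h(x), which lets the barrier margin shrink at most geometrically, and how the filter raises an unsafe requested command to the smallest safe one.

```python
def cbf_filter(x_o2, u_cmd, o2_min=0.195, gamma=0.5, gain=0.01, leak=0.002):
    """Discrete-time control-barrier-function filter (illustrative).
    State x_o2: suit O2 fraction; u_cmd in [0, 1]: requested O2 valve duty.
    Assumed model: x+ = x - leak + gain * u. Barrier: h(x) = x - o2_min.
    Enforce h(x+) >= (1 - gamma) * h(x) by raising u when necessary."""
    h = x_o2 - o2_min
    # Smallest u satisfying  h - leak + gain*u >= (1 - gamma)*h :
    u_min = (leak - gamma * h) / gain
    return min(1.0, max(u_cmd, u_min, 0.0))

# Far from the barrier, the advisor's command passes through unchanged;
# near the 19.5% bound, the filter overrides a "save O2" command upward.
u_far = cbf_filter(0.25, 0.0)    # ample margin -> command untouched
u_near = cbf_filter(0.196, 0.0)  # near the bound -> filter raises u
```

Because every candidate command (from MPC or the RL advisor) passes through this one function, safety is enforced at the actuator boundary regardless of which upstream policy produced the command.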

  • 2 authors
·
Mar 15