Rongjian Liang, Hua Xiang, et al.
ISPD 2020
Although the Markov Decision Process (MDP) has wide applications in autonomous systems as a core model in Reinforcement Learning, a key bottleneck is the large memory footprint of its state transition probability matrices. This is particularly problematic on computational platforms with limited memory, and for Bayesian MDP, which requires dozens of such matrices. To mitigate this difficulty, we propose a highly memory-efficient representation of probability matrices using Binary Decision Diagram (BDD) based sampling, and we develop a corresponding (Bayesian/classical) MDP solver on a CPU-GPU platform. Simulation results indicate that our approach reduces memory usage by one and two orders of magnitude for Bayesian and classical MDP, respectively.
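The core idea, as far as it can be inferred from the abstract, is to replace a dense probability row with a compact decision-diagram structure that is traversed with random bits to draw next-state samples. Below is a minimal, hypothetical Python sketch of one such structure, a Knuth-Yao style sampling tree, which is a close relative of BDD-based sampling but not the authors' actual construction; the names build_ddg and sample_next_state and the fixed-point depth parameter are illustrative assumptions.

```python
import random

def build_ddg(probs, depth=30):
    """Build a Knuth-Yao style sampling tree for one row of a transition
    probability matrix: each level consumes one fair random bit, and each
    leaf holds a next-state index. (Hypothetical sketch; a reduced BDD
    would additionally merge identical subtrees to save more memory.)"""
    scale = 1 << depth
    weights = [int(round(p * scale)) for p in probs]
    # Push the rounding error onto the largest weight so the total is
    # exactly 2**depth; the level-by-level construction relies on this.
    weights[max(range(len(weights)), key=weights.__getitem__)] += scale - sum(weights)
    assert max(weights) < scale, "degenerate row: a single state has probability 1"

    root = {}
    frontier = [root]                       # internal nodes awaiting children
    for k in range(depth):                  # bit positions, most significant first
        slots = [(node, bit) for node in frontier for bit in ("0", "1")]
        leaves = [i for i, w in enumerate(weights) if (w >> (depth - 1 - k)) & 1]
        for (node, bit), state in zip(slots, leaves):
            node[bit] = state               # terminal: emit this next state
        frontier = []
        for node, bit in slots[len(leaves):]:
            child = {}
            node[bit] = child
            frontier.append(child)
        if not frontier:                    # all probability mass resolved
            break
    return root

def sample_next_state(root, rng=random):
    """Draw a next state by walking the tree with one random bit per level."""
    node = root
    while isinstance(node, dict):
        node = node[str(rng.getrandbits(1))]
    return node
```

For instance, sample_next_state(build_ddg([0.5, 0.25, 0.25])) returns 0, 1, or 2 with those probabilities while never materializing a dense row; in a reduced BDD, isomorphic subgraphs are shared across rows, which is presumably the source of the reported one-to-two-order-of-magnitude memory savings.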
Eric Cheng, Daniel Mueller-Gritschneder, et al.
DAC 2019
Zhuo Li, C.N. Sze, et al.
ASP-DAC 2005
C.N. Sze, Jiang Hu, et al.
ASP-DAC 2004