Poster
Transfer Q-Learning with Composite MDP Structures
Jinhang Chai · Elynn Chen · Lin Yang
West Exhibition Hall B2-B3 #W-904
When a computer learns a new task, it typically starts from scratch, requiring lots of time and data. Imagine if, instead, it could remember what it learned before and adapt quickly to new challenges, even when conditions change. Our work makes this possible in a specific type of artificial intelligence known as reinforcement learning, where machines learn through trial and error to make good decisions.

We designed a new learning method that allows computers to effectively transfer their experience from past tasks to solve new, related ones faster and more accurately. Our key idea was to separate what remains common across tasks from what changes, much like identifying common rules in different board games while noting specific rule differences.

By structuring the learning process in this way, our approach helps machines use their experience more wisely. This not only makes learning faster and smarter but also lays the groundwork for practical applications, ranging from robots adapting to new environments to better decision-making systems in healthcare or business.
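To make the "shared structure plus task-specific difference" idea concrete, here is a minimal Python sketch, purely illustrative and not the algorithm from the paper: it warm-starts tabular Q-learning on a synthetic target task from a Q-table assumed to come from a related source task, so only the part that changed needs to be learned from new data. The environment, the names Q_source, Q_target, and q_update, and all numbers are hypothetical.

import numpy as np

# Illustrative sketch only (not the paper's method): reuse a source-task
# Q-table as the starting point for a target task, so the agent mainly
# needs new data for the task-specific difference.

n_states, n_actions = 10, 4
rng = np.random.default_rng(0)

# Stand-in for a Q-table learned on a related source task.
Q_source = rng.normal(size=(n_states, n_actions))

# Shared structure: initialize the target estimate from the source estimate.
Q_target = Q_source.copy()

alpha, gamma = 0.1, 0.95  # learning rate and discount factor

def q_update(Q, s, a, r, s_next):
    # One standard Q-learning update toward the Bellman target.
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])

# Toy interaction loop with a randomly generated "environment".
for _ in range(1000):
    s = rng.integers(n_states)
    a = rng.integers(n_actions)
    r = rng.normal()                    # stand-in for the target-task reward
    s_next = rng.integers(n_states)
    q_update(Q_target, s, a, r, s_next)

# The part that changed across tasks: the learned task-specific correction.
delta = Q_target - Q_source

The intended takeaway of the sketch is only the general principle behind transfer: when source and target tasks genuinely share most of their structure, starting from the shared part means far fewer target-task samples are needed than learning from scratch.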