Task-Agnostic Online Reinforcement Learning with an Infinite Mixture of Gaussian Processes
Handle unknown tasks:
- Meta-learning achieves quick adaptation and generalization with a learned inductive bias. However, it assumes that training and testing tasks are independently sampled from the same, accessible distribution.
- Continual learning aims to solve a sequence of tasks with clear task delineations while avoiding catastrophic forgetting.
In real-world settings, the assumptions of both methods are often violated:
- Mutual knowledge transfer in meta-learning may degrade generalization performance.
- A task distribution that models those task interactions is complex to determine.
This work addresses nonstationary online problems in which task boundaries and the number of tasks are unknown, by proposing a model-based reinforcement learning (RL) method that requires no pre-trained model. The method:
- either creates a new model for unseen dynamics or recalls an old model for previously encountered dynamics;
- performs task recognition, then updates the model parameters via conjugate gradient (see the sketch below).
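Below is a minimal Python sketch of the create-or-recall idea, not the authors' implementation. Assumptions: each dynamics mode is modeled by its own scikit-learn GP instead of the paper's full infinite mixture; task recognition is approximated by a predictive log-likelihood threshold `alpha` standing in for the mixture assignment; and GP hyperparameters are refit by maximizing the marginal likelihood with SciPy's conjugate-gradient optimizer, as the note mentions. The names `CreateOrRecall`, `cg_optimizer`, and `alpha` are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel


def cg_optimizer(obj_func, initial_theta, bounds):
    """Refit GP hyperparameters by conjugate gradient on the (negative) log marginal likelihood.

    Matches scikit-learn's custom-optimizer signature; CG is unconstrained, so `bounds` is ignored.
    """
    res = minimize(lambda th: obj_func(th, eval_gradient=True),
                   initial_theta, jac=True, method="CG")
    return res.x, res.fun


def new_gp_model():
    """One GP dynamics model (single output dimension for simplicity)."""
    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
    return GaussianProcessRegressor(kernel=kernel, optimizer=cg_optimizer,
                                    normalize_y=True)


class CreateOrRecall:
    """Keeps one GP per discovered dynamics mode; creates or recalls a model for new data."""

    def __init__(self, alpha=-2.0):
        self.alpha = alpha   # novelty threshold on mean log-likelihood per transition
        self.models = []     # list of (gp, X, Y) tuples, one per dynamics mode

    def step(self, X_new, Y_new):
        """Assign a batch of transitions (X_new -> Y_new) to a model and update it."""
        # Task recognition: score the incoming batch under every existing model.
        scores = []
        for gp, _, _ in self.models:
            mu, std = gp.predict(X_new, return_std=True)
            ll = -0.5 * np.mean(((Y_new - mu) / std) ** 2
                                + np.log(2.0 * np.pi * std ** 2))
            scores.append(ll)

        if not scores or max(scores) < self.alpha:
            # Unseen dynamics: create a new model.
            gp, X, Y = new_gp_model(), X_new, Y_new
            self.models.append((gp, X, Y))
            k = len(self.models) - 1
        else:
            # Encountered dynamics: recall the best-matching model and grow its data.
            k = int(np.argmax(scores))
            gp, X, Y = self.models[k]
            X, Y = np.vstack([X, X_new]), np.concatenate([Y, Y_new])
            self.models[k] = (gp, X, Y)

        # Model update: refit hyperparameters via conjugate gradient (cg_optimizer).
        gp.fit(X, Y)
        return k


if __name__ == "__main__":
    # Hypothetical usage on a toy transition stream.
    mixer = CreateOrRecall(alpha=-2.0)
    X1 = np.random.rand(30, 2)
    Y1 = np.sin(X1[:, 0]) + 0.1 * np.random.randn(30)
    print(mixer.step(X1, Y1))  # first batch always creates model 0
```

Each call to `step` returns the index of the model the batch was assigned to; the thresholded assignment here is only a stand-in for the paper's infinite-mixture task recognition.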