Is Learning the n-th Task Easier Than the Previous Ones?


Sebastian Thrun

Abstract

This paper investigates learning in a lifelong context. Lifelong learning addresses situations in which a learner faces a whole stream of learning tasks. Such scenarios provide the opportunity to transfer knowledge across multiple learning tasks, in order to generalize more accurately from less training data. In this paper, several different approaches to lifelong learning are described and applied in an object recognition domain. It is shown that across the board, lifelong learning approaches generalize consistently more accurately from less training data, by their ability to transfer knowledge across learning tasks.

Introduction

People do not learn from the provided training data alone; they also draw on past experience. When you learn to drive, for example, you may have practiced for only a few days, but you learned to recognize road signs long ago and you already have some basic mechanical knowledge, and all of this helps you learn to drive.

The lifelong learning framework assumes that the tasks you face are all the tasks across your whole life, and that learning these tasks can be mutually reinforcing: experience extracted from earlier tasks benefits the learning of new ones.

We can treat each new task as a concept, and each concept corresponds to a function f that decides whether an example belongs to it. So when we encounter a task, we first need to know which concept it is and which function to learn. When learning the n-th task, the data of the previous n - 1 tasks are also useful; these data sets are called support sets.
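
Below is a minimal Python sketch of this setup. It only fixes the data layout; all names (SupportSet, support_sets, nth_task_data) are illustrative and not from the paper.

    # Each support set holds labeled examples of one earlier concept:
    # pairs (x, y) with y = 1 if x is a member of that concept, else 0.
    from typing import List, Tuple
    import numpy as np

    SupportSet = List[Tuple[np.ndarray, int]]

    support_sets: List[SupportSet] = []   # data of concepts 1 .. n-1
    nth_task_data: SupportSet = []        # (usually small) training set of concept n

    # Goal: exploit support_sets so that the classifier for concept n
    # generalizes well even when nth_task_data is tiny.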

Memory-Based Learning Approaches

Memory-based methods: the training examples themselves are stored, and a new example is classified by comparing it with the stored ones.

KNN and Shepard's Method

Shepard's method adds a weight to each point used by KNN: the farther a point is from the query, the smaller its weight, i.e. inverse-distance weighting.
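
A minimal sketch of the inverse-distance weighting idea, assuming Euclidean distance and binary labels; the function name and the eps constant are illustrative, not from the paper.

    import numpy as np

    def shepard_predict(X_train, y_train, x_query, eps=1e-8):
        # Weight every training point by the inverse of its distance to the query.
        dists = np.linalg.norm(X_train - x_query, axis=1)
        weights = 1.0 / (dists + eps)
        # Weighted vote over the binary labels.
        score = np.dot(weights, y_train) / weights.sum()
        return 1 if score >= 0.5 else 0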

Learning a New Representation

We consider a representation good if it places samples of the same class close together and samples of different classes far apart. Such a representation can be learned from the support sets and then reused when learning the n-th concept.
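
The following is a rough sketch of one way to express "same class close, different class far" as a pairwise loss on learned embeddings; it is a contrastive-style objective for illustration, not the exact formulation used in the paper.

    import numpy as np

    def pair_loss(z1, z2, same_class, margin=1.0):
        # z1, z2: embeddings g(x1), g(x2) produced by a representation network g.
        d = np.linalg.norm(z1 - z2)
        if same_class:
            return d ** 2                      # pull same-class pairs together
        return max(0.0, margin - d) ** 2       # push different-class pairs apart

    # g would be trained on pairs drawn from the support sets; at task n,
    # a memory-based learner (e.g. KNN) then operates in the new space.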

Learning a Distance Function

A neural network can be used to learn a distance function over pairs of examples, trained on the support sets; with a threshold on its output, we can decide whether a sample belongs to a concept.
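
A minimal sketch, assuming a learned pairwise function d(x1, x2) in [0, 1] (for example a small neural network) trained so that its output is near 1 when both examples belong to the same concept; d itself and all names here are illustrative, not the paper's implementation.

    def make_pairs(support_sets):
        # Build (x1, x2, target) training pairs from the support sets:
        # target = 1 if both examples are positives of the same concept, else 0.
        pairs = []
        for examples in support_sets:
            pos = [x for x, y in examples if y == 1]
            neg = [x for x, y in examples if y == 0]
            pairs += [(a, b, 1) for a in pos for b in pos]
            pairs += [(a, b, 0) for a in pos for b in neg]
        return pairs

    def belongs_to_concept(x, new_concept_positives, d, threshold=0.5):
        # Compare x with the known positives of the n-th concept and threshold.
        scores = [d(x, p) for p in new_concept_positives]
        return max(scores) > threshold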

Neural Network Approaches

Backpropagation

A standard network trained with plain backpropagation, using only the n-th task's own training data, serves as the baseline without knowledge transfer.

Learning with Hints

It looks like an early version of multi-task learning: a single network learns all the concepts at once, with one output unit per concept sharing the same hidden layer.
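
A minimal sketch of the shared-hidden-layer idea: one hidden representation feeding one output unit per concept. Shapes, activation functions, and names are illustrative, not the architecture used in the paper's experiments.

    import numpy as np

    def forward(x, W_hidden, b_hidden, W_out, b_out):
        # Hidden layer shared by all concepts.
        h = np.tanh(W_hidden @ x + b_hidden)
        # One sigmoid output per concept (one row of W_out per concept).
        return 1.0 / (1.0 + np.exp(-(W_out @ h + b_out)))

    # Training on the support sets' concepts shapes the shared hidden layer,
    # which the n-th concept then reuses.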

EBNN

EBNN comes from an earlier paper by the author. It estimates the slopes (tangents) of the target function at the training examples, using knowledge learned from the support sets, and fits both the target values and the estimated slopes with the Tangent-Prop algorithm.
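
A minimal sketch of a Tangent-Prop-style objective, assuming the slope at each training example has already been estimated (in EBNN this estimate comes from knowledge learned on the support sets); the function and parameter names are illustrative.

    import numpy as np

    def ebnn_loss(f, grad_f, x, y_target, slope_target, mu=0.1):
        # f(x): network output at x; grad_f(x): gradient of the output w.r.t. x.
        value_err = (f(x) - y_target) ** 2                    # fit the target value
        slope_err = np.sum((grad_f(x) - slope_target) ** 2)   # fit the estimated slope
        return value_err + mu * slope_err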

Experimental Results

EBNN performs best, showing a clear knowledge-transfer effect.

Discussion

Learning becomes easier when embedded in a lifelong learning context.

References

Y. S. Abu-Mostafa. Learning from hints in neural networks. Journal of Complexity, 6:192-198, 1990.

W.-K. Ahn and W. F. Brewer. Psychological studies of explanation-based learning. In G. DeJong, editor, Investigating Explanation-Based Learning. Kluwer Academic Publishers, Boston/Dordrecht/London, 1993.

T. M. Mitchell and S. Thrun. Explanation-based neural network learning for robot control. In S. J. Hanson, J. Cowan, and C. L. Giles, editors, Advances in Neural Information Processing Systems 5, pages 287-294, San Mateo, CA, 1993. Morgan Kaufmann.

J. O’Sullivan, T. M. Mitchell, and S. Thrun. Explanation-based neural network learning from mobile robot perception. In K. Ikeuchi and M. Veloso, editors, Symbolic Visual Learning. Oxford University Press, 1995.

D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by error propagation. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing, Vol. I + II. MIT Press, 1986.

S. Thrun. Explanation-Based Neural Network Learning: A Lifelong Learning Approach. Kluwer Academic Publishers, Boston, MA, 1996. To appear.
