Molter Colin, Salihoglu Utku, Bersini Hugues
Laboratory for Dynamics of Emergent Intelligence, RIKEN Brain Science Institute, Wako, Saitama, 351-0198, Japan.
Neural Comput. 2007 Jan;19(1):80-110. doi: 10.1162/neco.2007.19.1.80.
This letter studies the impact of iterative Hebbian learning algorithms on the underlying dynamics of recurrent neural networks. First, an iterative supervised learning algorithm is discussed. An essential improvement of this algorithm is to index the stored attractor items by means of external stimuli rather than by initial conditions alone, as Hopfield originally proposed. Modifying the stimuli changes the entire internal dynamics, enlarging the set of attractors and potential memory bags. The impact of learning on the network's dynamics is as follows: the more information is stored as limit cycle attractors of the neural network, the more chaos prevails as the background dynamical regime of the network. In fact, the background chaos spreads widely and adopts a very unstructured shape similar to white noise. Next, we introduce a new form of supervised learning that is more plausible from a biological point of view: the network has to learn to react to an external stimulus by cycling through a sequence that is no longer specified a priori. Based on its spontaneous dynamics, the network decides "on its own" the dynamical patterns to be associated with the stimuli. Compared with classical supervised learning, this yields substantial gains in storage capacity and large reductions in computational cost. Moreover, by being more "respectful" of the network's intrinsic dynamics, this new form of supervised learning preserves much more structure in the resulting chaos: traces of the learned attractors remain observable in the chaotic regime. This complex but still highly informative regime is referred to as "frustrated chaos."
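As a concrete point of reference for the storage scheme sketched in the abstract, the following NumPy snippet stores a single limit cycle in a binary recurrent network with a one-shot asymmetric Hebbian rule and lets an external stimulus bias the dynamics. The binary (+/-1) units, the one-shot rule, the network size N, the gain beta, and the single stored cycle are simplifying assumptions made here for illustration; the authors' algorithm is iterative, uses its own network model, and associates a distinct stimulus with each stored cycle.

import numpy as np

rng = np.random.default_rng(0)
N = 100   # number of binary (+/-1) units (illustrative size)
T = 4     # length of the limit cycle to store

# One random cyclic sequence of patterns, plus an external stimulus vector.
cycle = rng.choice([-1.0, 1.0], size=(T, N))
stimulus = rng.choice([-1.0, 1.0], size=N)

# Asymmetric Hebbian rule: wire each pattern to its successor, so that a
# network state near one pattern is driven toward the next one in the cycle.
W = np.zeros((N, N))
for t in range(T):
    W += np.outer(cycle[(t + 1) % T], cycle[t]) / N

def step(state, stim, beta=0.5):
    # Synchronous update; the external stimulus biases every unit's input.
    return np.sign(W @ state + beta * stim)

# Start from a noisy version of the first pattern and iterate.
state = np.sign(cycle[0] + 0.3 * rng.standard_normal(N))
for _ in range(20):
    state = step(state, stimulus)

# After the transient, the state should coincide with one cycle pattern
# (overlap near 1) and be roughly orthogonal to the others (near 0).
overlaps = cycle @ state / N
print(np.round(overlaps, 2))

Printing the overlaps after the transient shows the state locking onto the stored cycle; storing many such cycles, each tied to its own stimulus, is where the background chaos described above emerges.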