3. Visualization of massive neural nets

Suppose we are given a massive neural net, one whose size, N, may be on the order of tens or hundreds of thousands of nodes. How may we observe its instantaneous state, or a sequence of states, so as to understand its evolution? In this paper we present only one of many possible strategies, one already inherent in the neural net approach: viewing the matrix of connection strengths as a two-dimensional image. This may be done in shades of gray, or through translation by a color lookup table. There are two serious problems with this approach; nevertheless, we advocate it here, and plan to pursue it in further work.
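The matrix-as-image view may be sketched as follows. This is a minimal illustration, not the paper's own code: the hypothetical helper `to_grayscale` linearly rescales connection strengths to 8-bit gray levels, and a color lookup table would simply map these levels to colors instead.

```python
import numpy as np

def to_grayscale(weights):
    """Map a connection-strength matrix to 8-bit gray levels by
    linearly rescaling its values to the range 0..255."""
    w = np.asarray(weights, dtype=float)
    lo, hi = w.min(), w.max()
    if hi == lo:
        # a constant matrix maps to a uniform gray field
        return np.zeros(w.shape, dtype=np.uint8)
    return ((w - lo) / (hi - lo) * 255).astype(np.uint8)
```

The resulting array can be displayed directly as an image of N by N pixels, one pixel per matrix entry.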

The first problem is the massive size of the image. As computer screens and printed pages are generally limited to a resolution of a thousand or so pixels on a side, the literal image of a matrix of size N as conceived here must cover many computer screens, or many pages of print. The obvious solution to this problem of massive size is an intentional reduction of resolution, by pixel averaging for example.
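Pixel averaging in this sense can be sketched as block averaging: partition the matrix into non-overlapping tiles and replace each tile by its mean. The function name and block size below are illustrative choices, not from the paper.

```python
import numpy as np

def downsample(matrix, block):
    """Reduce resolution by averaging non-overlapping block x block tiles,
    trimming any ragged edge that does not fill a whole tile."""
    n = (matrix.shape[0] // block) * block
    m = (matrix.shape[1] // block) * block
    trimmed = matrix[:n, :m]
    # reshape into a grid of tiles, then average within each tile
    tiles = trimmed.reshape(n // block, block, m // block, block)
    return tiles.mean(axis=(1, 3))
```

A matrix of size 100,000 with a block of 100, for instance, reduces to a 1,000-pixel image that fits on one screen.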

The second problem is the fictitious representation of the nodes in linear order, that is, as a one-dimensional geographic space, when in fact the ordering given by the index, I, is arbitrary, or logical, or anything but geographical. When there is a geometric or geographical map for the nodes of the neural net, its dimension is usually greater than one, and so the representation within a one-dimensional space is forced and artificial. (Note: Complex dynamical systems with geometric reference spaces have been discussed in the literature. For example, with a two-dimensional reference space, the connection matrix may be embedded in four dimensions, giving rise to a four-dimensional image.)

Worse yet, these two problems aggravate each other: averaging neighboring pixels, when the proximity of nodes has no natural significance, may destroy all significance in the image, yielding a very foggy (that is, fractal) visualization of the net.

Nevertheless, we feel this approach has a certain promise, as fractal geometry provides tools for studying foggy (fractal) images. And here we propose just one of these tools: the pointwise fractal dimension. By computing the fractal dimension of the large matrix at each point, we obtain another matrix of the same size. This derived matrix may be viewed as a topography of complexity, a parameter of considerable significance in the context of morphogenesis, even of foggy images. Furthermore, the derived image, representing the complexity of the original image, may be expected to behave well under pixel averaging, or other resolution-reducing transformations, for this invariance under scaling is a characteristic of fractals.

In summary, here is our proposal for viewing the morphogenetic process of a massive neural net:

Given a time series of connection matrices, compute the derived matrices D and E for each, and view the time series of matrices E as a time-lapse movie of the morphogenesis of the net.
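The proposal can be sketched as a simple pipeline. The derived matrices D and E are defined earlier in the paper; here the transform is passed in as a stand-in parameter `derive` (for example, the pointwise fractal dimension), and each derived frame is reduced by block averaging before assembly into the movie. Function names and the block size are illustrative assumptions.

```python
import numpy as np

def morphogenesis_movie(connection_matrices, derive, block=4):
    """Given a time series of square connection matrices, apply the
    derived-matrix transform `derive` to each, reduce resolution by
    block averaging, and return the list of movie frames."""
    frames = []
    for W in connection_matrices:
        E = derive(W)                       # stand-in for the D -> E construction
        n = (E.shape[0] // block) * block
        tiles = E[:n, :n].reshape(n // block, block, n // block, block)
        frames.append(tiles.mean(axis=(1, 3)))
    return frames
```

Playing the frames in sequence gives the time-lapse view of the net's morphogenesis.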