## Contents of Volume 1 (1991)

### 6/1991

**Whitcomb M.J., Augusteijn M.F.: Hierarchical Learning in Neural Networks: A New Paradigm with Possible Applications to Automated Data Processing, p.321**

Abstract: The application of neural network learning to automated data processing is explored. The requirements for learning methods in this domain are discussed. Two new learning methods, satisfying these requirements, are introduced. Both methods dynamically allocate the necessary number of hidden nodes. The first method is called the Flat Learning Procedure because it builds a single layer of hidden nodes. Performance of this procedure is compared with that of the Generalized Delta Rule. The Flat Learning Procedure requires less training but uses more hidden nodes. The generalization properties of this procedure are unsatisfactory. The second method, the Hierarchical Learning Procedure, builds a hierarchical structure of hidden nodes. This method is capable of learning a hierarchy of concepts. Due to this property it is able to generalize well while maintaining the favorable characteristics of the flat procedure with respect to fast learning. The introduction of these learning procedures is a first step towards the application of neural nets to automated programming.

**Adamatzky A.I.: Neural Algorithm for Constructing Minimum Spanning Tree of a Finite Planar Set, p.335**

Abstract: We present a local parallel algorithm for constructing the minimum spanning tree of a finite planar set. The algorithm is based on the mechanisms of dendritic tree growth during neuronal ontogenesis. We assume that some neuron wishes to make synaptic terminals on all points of a given set. The neuron solves this problem by growing and sprouting its dendritic tree in the plane. We implement the algorithm on a cellular automata processor whose architecture is similar to a neural one. The cellular automata processor is a promising specimen among massively parallel computers. The offered algorithm runs in O(h) time and requires O(n) processors, where h is the number of given points in the longest branch of the minimum spanning tree and n is the number of given planar points.

**Vornberger O., Zeppenfeld K.: GRAVIDAL: A Graphical Visualization Tool for Transputer Networks, p.341**

Abstract: Large distributed algorithms are hard to design and to implement. One difficulty is the potential complexity of the interactions among the large number of parallel processes. One way to keep these problems as small as possible is visualization of the dynamic behavior of such distributed algorithms. This article describes GRAVIDAL, a graphical visualization environment for occam programs running on arbitrary transputer networks. It provides animated, user-defined views of algorithms during their runtime. The user only has to place some macro calls in his source code, and GRAVIDAL then generates a visualized version of his algorithm. Therefore, a modular and specially adapted visualization of any distributed algorithm can be easily achieved. Time measurements show that the overhead for the animated version is in an acceptable range. Besides visualization and debugging, GRAVIDAL can be used to improve teaching and learning of the science of distributed algorithms.

**Günham A.E.: Pattern Classifier, An Alternative Method of Unsupervised Learning, p.349**

Abstract: This model and its learning algorithm are based on the neurophysiological activity of real neurons. Natural neural networks have an extremely powerful self-organizing property. In the present model, this self-organization property emerges not only as a consequence of mutual inhibition of neurons, but also as a result of a very simple and plausible learning principle governing the individual neurons.

**Frank O.: Statistical Models and Tests of Intrafascicular Nerve Fiber Arrangements, p.355**

Abstract: In order to gain insight into the structure and organization of human peripheral sensory nerve fascicles, it is possible to use a microneurographic method described in a recent article by Hallin, Ekedahl and Frank. This method uses a specially devised concentric needle electrode for obtaining intrafascicular recordings of nerve activity. The statistical analysis of such recordings is discussed here, and stochastic models are developed for testing various hypotheses on nerve fiber arrangements. The general approach is illustrated by analyzing experimental data collected for studying particular segregation and clustering phenomena of fibers in human sensory nerve fascicles.

**Olej V., Chmúrny J.: Analysis of Decision-Making Processes in Discrete Systems by Fuzzy Petri Nets, p.361**

Abstract: This paper presents a possibility of treating uncertainty in the analysis of decision-making processes in discrete systems by fuzzy Petri nets (FPN).

**Kraaijveld M.A., Duin R.P.W.: An Optimal Stopping Criterion for Backpropagation Learning, p.365**

Abstract: A common problem of iterative learning procedures like the backpropagation algorithm is the lack of insight into the learning phase. Therefore, it is difficult to decide at what moment the learning phase should be terminated. For many ad hoc criteria that are used in practice, it can be shown that they suffer from serious defects, especially for recognition problems in which the distributions of the classes have some overlap. By the application of a technique from the statistical pattern recognition literature, the editing algorithm [3], a learning set can be transformed into a data set in which the overlap of the classes is effectively removed. This results in an optimal stopping criterion for iterative learning procedures, and a number of experiments indicate a moderate improvement in learning speed for the backpropagation algorithm. Moreover, because it can be proven that an edited data set yields a performance which is close to Bayes-optimal for the nearest-neighbor classifier, it is very likely that a classifier which is based on an iterative learning procedure and which classifies all samples in the edited learning set correctly is also close to Bayes-optimal.

**Samsonovich A.V.: Molecular-Level Neuroelectronics, p.371**

Abstract: New ideas for the molecular-level (ML) implementation of modern neural network models, based on Coulomb-correlated electron tunneling in molecular media, are proposed; estimates of attainable parameters have been obtained. The neuronet approach proves to be more relevant for molecular-level computers than the von Neumann one.

**Tutorial: Hořejš J.: A View on Neural Network Paradigms Development (Part 6), p.383**

### 5/1991

**Hornik K.: Functional Approximation and Learning in Artificial Neural Networks, p.257**

Abstract: We discuss the potential of using artificial neural networks in problems of functional approximation and learning (estimation). It is argued that an analysis of this potential should be based on a rigorous theory rather than on the findings of particular simulation experiments. We survey some of the results which have already been established, discuss their relevance and indicate directions in which we think further research will be necessary.

**Kufudaki O., Hořejš J.: PAB: Parameters Adapting Back-Propagation, p.267**

Abstract: A new method for back-propagation is suggested. It uses the parameterized transfer sigmoid function S(L,L,r) =
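The abstract above is cut off before the definition of the parameterized sigmoid S. Purely as an illustration of what a transfer function with adaptable parameters can look like (not the paper's actual S(L,L,r), which is not recoverable from this listing), here is a sigmoid with tunable output limits `lo`, `hi` and slope `r`:

```python
import math

def param_sigmoid(x, lo=-1.0, hi=1.0, r=1.0):
    """A generic parameterized sigmoid with adjustable limits and slope.

    Illustrative only: the paper's exact S(L, L, r) definition is
    truncated in this listing and may differ.
    """
    return lo + (hi - lo) / (1.0 + math.exp(-r * x))
```

Making `lo`, `hi` and `r` adaptable quantities, rather than fixed constants, is the general idea behind "parameters adapting" transfer functions.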

**Herrmann M., Englisch H.: Neural Nets as Heteroassociative Memories, p.275**

The ability of neural networks to store information as heteroassociative memory devices is discussed.

**Kainen P.C.: On Parallel Heuristics, p.281**

Abstract: We consider parallel heuristics based on mathematical knowledge. Several examples are surveyed from optimization and problem solving with emphasis on how an abstract insight leads to efficient calculation. In some cases the resulting methods are error-tolerant and problem-size independent. We give instances in visual perception in artificial, mathematically defined environments which suggest that heuristics may be utilized in some types of biological computation, and an experiment to test this idea is proposed.

**Hlaváčková K.: Basic Algorithms for Maps, p.287**

Abstract: Several learning variants of self-organization in a map-type neural network are demonstrated. Algorithms based on terms of neighbourhood are discussed. A new conception of neighbourhood - the ordering neighbourhood - is introduced.

**Beran H.: Time Delay Neural Networks, p.295**

Summary: The Time Delay Neural Network (TDNN) is a perceptron-like structure with several layers of neurons. Each neuron receives not only the instantaneous part of information from all the neurons of the previous layer, but also the past part of information from the predefined delays. The full equivalence between the Time Delay and Back Propagation (BP) networks is proved, where the TDNN structure is equivalent to the classical BP structure with some empty connections. The original approach to the adaptation algorithm based upon this equivalence is derived. Simple examples of the network performance are demonstrated.

**Goltsev A.D.: The Neuronlike Network for Brightness Picture Segmentation, p.303**

Abstract: The algorithm of brightness picture segmentation is described. The algorithm is realized as a computer model of a neural network. The results of the model’s runs are presented in photographs.

**Fatton D.: On the Uniform Training, p.309**

New strategies of backpropagation training avoiding the danger of local minimum convergence are discussed.

**Tutorial: Hořejš J.: A View on Neural Network Paradigms Development (Part 5), p.313**

### 4/1991

**Editorial, p. 193**

**Gorse D., Taylor G.J.: Universal Associative Stochastic Learning Automata, p.193**

Abstract: A generalisation of the concept of binary-input stochastic learning automata is given which incorporates non-linearity and stochasticity to a maximal degree. This universal automaton is identified with the ‘probabilistic random access memory’ (pRAM), a hardware-realisable neural model previously proposed by the authors. A reinforcement training rule is presented for such automata, and convergence theorems are proved. The nature of the invariant measure is explored for a 1-input automaton with a two-dimensional state space. The reinforcement rule is then simulated in the context of a particular classification task, and the results compare favourably with those obtained by Barto and Anandan using a less general training rule.

**Personnaz L., Nerrand O., Roussel-Ragot P., Dreyfus G.: Training Discrete-Time Feedback Networks for Filtering and Control, p.205**

A general framework for training discrete-time neural networks by gradient methods, applicable to any network architecture, is described.

**Bitzan P.: Neural Networks Simulator, p. 215**

Abstract: Programmed models of neural nets sometimes exhibit slow training and uncertain testing, which can fail if the system converges into improper minima. Subsequently, they face strong competition from standard algorithms. When one is interested only in applications, neural nets can be simulated highly effectively on the basis of nonadaptive algorithms. We describe here a so-called neural networks simulator, which provides simulation of neural nets by means of a pseudo-metric constructed in the space of training samples.

**Bulsari A.B., Saxén H.: System Identification Using the Symmetric Logarithmoid as an Activation Function in a Feed-Forward Neural Network, p.221**

Abstract: In most applications, the sigmoidal activation function is used without questioning its limitations. The sigmoid restricts the outputs from feed-forward neural networks to between -1 and 1, or 0 and 1. However, there are systems whose outputs are not constrained within -1 and 1, or 0 and 1, and for reasons of loss in sensitivity, it is not desirable to map the output range to 0 to 1. In such cases, the symmetric logarithmoid provides a viable alternative to the sigmoid, while preserving many characteristics of the sigmoid. This paper illustrates the applicability of the symmetric logarithmoid activation function in a feed-forward neural network, exemplified by a system identification problem of a biochemical reactor. The inputs to the networks were the three state variables at a time, and the process input variables (control variables and disturbances) from that time to the time for which the state variables are to be predicted. This duration was 0.1 hour, and the characteristic time for the process was 2.9 hours under normal circumstances. The Levenberg-Marquardt method was used to train the neural networks by minimising the sum of squares of the residuals. In most cases, the symmetric logarithmoid resulted in lower error square sum values than the sigmoid. The predictions were quite accurate. The symmetric logarithmoid is continuous, first-order differentiable and a simple, monotonically increasing algebraic function. Convergence is generally faster compared to the sigmoidal activation function. Extremely large weights are not commonly generated by the training process, but this is a usual feature with the sigmoid.

**Chudý V., Chudý L., Hapák V.: Invariant Speech Perception and Recognition by Neural Nets, p.227**

Abstract: The paper describes one of the possible approaches to speech recognition in the family of modern Indo-European languages, especially Slovak. The model describes the processes of perception by an invariant-feature neural net and the learning and recognition by a probabilistic neural net. The model requires at most 256 bits for a pattern of any word, which corresponds to compression of the information rate from some 2^16 bits per second to 2^8 bits per second for an isolated word. The aim of the model is recognition which is independent (invariant) of the speaker and of the redundant physical and phonetic parameters. An underlying group-symmetry approach is implicitly involved in the model.

**Rybak I.A., Golovan A.V., Gusakova V.I., Shevtsova N.A., Podladchikova L.N.: A Neural Network System for Active Visual Perception and Recognition, p.245**

Abstract: A method for parallel-sequential processing of greylevel images and their representation which is invariant to position, rotation, and scale is developed. The method is based on the idea that an image is memorized and recognized by way of consecutive fixations of moving eyes on the most informative image fragments. The method provides the invariant representation of the image in each fixation point and of spatial relations between the features extracted in neighboring fixations. The applications of the method to recognition of greylevel images are considered.

**Tutorial: Hořejš J.: A View on Neural Network Paradigms (Part 4), p. 253**

### 3/1991

**Editorial, p. 129**

**Kerckhoffs E.J.H.: View on Problem-Solving Paradigms Including Neurocomputing, p. 129**

Connectionism is considered as a problem-solving paradigm among other methodologies such as (numeric) simulation and (symbolic) reasoning. In order to create still more powerful and useful problem-solving tools, simulation systems, knowledge-based expert systems and connectionist systems can, at least in principle, be coupled. The spectrum of these so-called "coupled systems" (or "hybrid systems") is surveyed with respect to methodological aspects, functionalities and practical applications. The emerging role of parallel processing when dealing with the more complex systems in either domain is discussed. Finally, some neural-network application projects currently running at Delft University of Technology (the Netherlands) are briefly dealt with; they might illustrate some of the issues considered.

**Cimagalli V., Balsi M., De Carolis A.: Information Storage in Neurocomputing, p.155**

Abstract: In this paper we introduce the concept of "relational information" as a peculiar property of neurocomputers. In fact, due to the distributed way of storing and processing information, the organized structure of a neurocomputer adds significant relations to the data fed to it, and this is the reason why it is able to generalize from a limited number of training inputs. After having summarized the most significant results related to our problem available in the literature, we suggest methods for measuring the said relational information both in a dynamic system such as a chaotic map and in a neural network of any kind.

**Jiřina M.: Binary Neural Net, p.163**

Summary: The paper deals with the possibility of using the principle of ordinary digital logical elements for the design of a model of a neural net. The two-layer structure of AND and OR logical gates, similar to a minimal disjoint form, is introduced. Active as well as adaptive dynamics is described, and it is shown that the net can serve as an adaptive classifier and decoder and can recognize "blurred" patterns, but it is not noise resistant.

**Vítková G., Míček J.: Knowledge Processing by Neural Networks, p.171**

Abstract: The paper outlines the basic properties of a neural network associative memory extrapolation model and describes the main abilities of fuzzy cognitive maps. A revised equation for associative memory model performance is proposed. Modeling the logical function "and" for inferring knowledge by a neural network is discussed. The results of an investigation of an associative memory extrapolation based knowledge system used for diagnostics are presented. Outlined is the frame of the QUEST system for knowledge processing. The system functions and its other facilities are also presented.

**Tutorial: Hořejš J.: A View on Neural Network Paradigms Development (Part 3), p.185**

### 2/1991

**Gupta M.M.: Uncertainty and Information: the Emerging Paradigms, p.65**

In this paper, we describe some aspects of information and its cognate, uncertainty, from the perspective of the design of intelligent systems. The discussion is centered around statistical uncertainty and cognitive uncertainty, an important class of uncertainty that arises from human thinking and cognition processes. Also, we discuss how these two uncertainties can help us in the design of a new class of sensors and intelligent systems.

**Marko H.: Pattern Recognition with Homogeneous and Space-Variant Neural Layers, p.71**

In the present article an attempt is made to understand pattern recognition of simple symbols (e.g. alphanumerical letters) by use of a system of homogeneous layers constructed in accordance with known properties of the visual system.**Růžička P.: Neural Network Learning with Respect to Sensitivity to Weight Errors, p.81**

This paper deals with the problem of neural network learning to get the most convenient "configuration". By the configuration is meant the vector of synaptic weights and thresholds of the formal neurons creating the network. In the configuration design, we respect the complexity of the technical realization of the network, and we consider both the possible errors in keeping the designed configuration precise during the realization and fluctuations of the configuration during the net exploitation. To achieve this we introduce a cumulative loss function of the network which expresses the loss evoked by imprecise learning. The network learns through the optimization of the sensitivity of the cumulative loss to large changes of configuration, the sensitivity to large changes being constructed on the basis of differentiating linear integral parametric operators of derivative estimation. The possibilities of such an approach are demonstrated by an example.

**Frolov A.A.: Limiting Informational Characteristic of Neural Networks Performing Associative Learning, p.97**

The capability of associative learning is one of the main properties of the brain. We share the idea (Palm, 1982; Kohonen, 1984) that the design of devices modelling the behavior of some biological organism as a whole can be based on the associative memory mechanism. This idea is related to the one of Pavlov: that adapted animal behavior is based on the conditioning ability. A lot of experimental data on the neurophysiology of associative learning has been accumulated since Pavlov. Associative memory models have been developed simultaneously to generalize experimental data and to create the basis for further experiments (Rosenblatt, 1959; Konorski, 1970; Hebb, 1949; Steinbuch, 1961; Willshaw et al., 1969; Brindley, 1969; Marr, 1969, 1970, 1971; Palm, 1981, 1982; Kohonen, 1980, 1984; Hopfield, 1982, 1984, etc.). As a result of experimental and theoretical research, the following common understanding of learning and memory problems in the nervous system has been reached.

**Sandler Yu.M., Artyushkin V.F.: The Model of a Neural Network with Selective Memorization and Chaotic Behaviour, p.105**

In the present paper a generalization of the Hopfield model is shown, associated with a break of the specific invariance of the equations of motion (2). Unlike the Hopfield model, the present model can exhibit selectivity in the process of learning (that is, "memorizing" only patterns of a certain kind) and has quasi-stochastic attractors.
**Eldridge W.: Record of the Panel Discussion on NEURONET ’90**

**Tutorial: Hořejš J.: A View on Neural Network Paradigms Development (Part 2), p.121**

Here we continue the tutorial paper concerning the neural network paradigm, the first part of which was published in Neural Network World, No. 1, 1991.


### 1/1991

Scanning the Issue:

**Editorial - THE INAUGURATION OF A NEW JOURNAL, p. 1**

Papers:

**Taylor J.G.: Can Neural Networks ever be made to Think?, p. 4**

An outline is given of neural network modules and their modes, such that a machine operating with such a structure can be said to be thinking. The approach is based on a relational theory of meaning, in which the relations are determined by developing episodic memory in the net. This latter form of memory is itself based on temporal sequences and their storage, as is the possibility of the machine developing "trains of thought".

**Faber J.: Associative Interneuronal Biological Mechanisms, p. 13**

The neurologist finds analogies between the Farley and Clark automatic self-organizing model and the brain highly intriguing. The signal generator suggests comparison with the thalamus, which also has a rhythm-making function and, likewise, sends many variables - impulses - into the cortex. The complex with its elements randomly connected at the start of the experiment is reminiscent of the cortex which, in the newborn, is in a naive, poorly organized state. The discrimination unit designed to determine the state values of the cortex is like the limbic system, which monitors the body's metabolic equilibrium by means of internal environment receptors in the hypothalamus, and which adjusts the emotive equilibrium of mental functions by means of endocrine and nervous mechanisms. Stimuli from the discrimination unit travel on to the signal generator and to the formator. The formator can be likened to the modulatory humorergic centres: it similarly regulates the thresholds of elements and connections in the cortex and other parts of the brain. In the model there is one formator; in the brain there are more, for each state there is a centre of formator action: the reticular formation for the state of vigilance, the nuclei raphe for synchronous sleep, the locus caeruleus for paradoxical sleep. Each nucleus operates in its own way, generally perhaps by setting the threshold and, consequently, by changing the programmes of the target neuronal circuits and networks. Under pathological circumstances, even a cortical lesion, e.g. an epileptic focus, can become a formator. This focus then competes with physiological formators for control of the cortex. This power struggle then results in an epileptic attack or acute psychosis. For the most part, physiological formators act as inhibitors. During epileptogenesis, prior to manifest paroxysms, there is gradual loss of sleep, especially paradoxical sleep.

**Koruga D.: Neurocomputing and Consciousness, p.32**

This article deals with the problem of the interrelation between neurocomputing and consciousness. Neurocomputing is approached from the aspect of space-time structures, while consciousness is perceived as a link between states of mind and images of these structures in the brain. This approach leads to a relativistic model of information theory, and opens up the possibilities of linking information with mass and energy. By considering neurocomputing and consciousness, a new field of science emerges which can be named informational physics. In the final discussion, one extra problem is considered: Can a machine, as a form of artificial life, possess consciousness?

**Kuan C.-M., Hornik K.: Learning in Partially Hard-Wired Recurrent Networks, p.39**

In this paper we propose a partially hard-wired Elman network. A distinct feature of our approach is that only minor modifications of existing on-line and off-line learning algorithms are necessary in order to implement the proposed network. This allows researchers to adapt easily to trainable recurrent networks. Given this network architecture, we show that in a general dynamic environment the standard back-propagation estimates for the learnable connection weights can converge to a mean square error minimizer with probability one and are asymptotically normally distributed.
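The Elman architecture named in the abstract feeds the hidden activations back as "context" inputs at the next time step; in a partially hard-wired variant the copy-back connections are fixed rather than learned. A minimal sketch of one forward step, with illustrative names that are not taken from the paper:

```python
import math

def elman_step(x, context, W_in, W_ctx, W_out):
    """One time step of a simple Elman network.

    x       -- current input vector
    context -- hidden activations from the previous step (the hard-wired
               one-to-one copy connections of the context layer)
    W_in[j][i], W_ctx[j][k] -- input-to-hidden and context-to-hidden weights
    W_out[o][j]             -- hidden-to-output weights
    Illustrative sketch only, not the authors' exact architecture.
    """
    hidden = []
    for j in range(len(W_in)):
        s = sum(w * xi for w, xi in zip(W_in[j], x))
        s += sum(w * ck for w, ck in zip(W_ctx[j], context))
        hidden.append(1.0 / (1.0 + math.exp(-s)))  # logistic hidden unit
    output = [sum(w * hj for w, hj in zip(W_out[o], hidden))
              for o in range(len(W_out))]
    return output, hidden  # hidden becomes the next step's context
```

Because the copy-back step is just "reuse `hidden` as `context`", only the ordinary feed-forward weights need training, which is why standard back-propagation requires few modifications.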

**Nordbotten S.: Teaching Strategies for Artificial Neural Network Learning, p. 46**

This paper presents an evaluation of the effects of variation of training set size, ordering of examples in the training set, adjustment (learning) rate, and reinforcement on pattern recognition in artificial single-layer neural networks (ANNs) which use a learning algorithm based on the Widrow-Hoff principle. These parameters can be considered as alternative teaching strategies for ANNs. The evaluation has been carried out as a set of simulation experiments on synthetic sets of patterns. The results indicate that for the type of pattern identification considered, learning in ANNs is sensitive to the teaching strategy chosen.
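The Widrow-Hoff principle mentioned above adjusts each weight of a linear unit in proportion to the prediction error times the corresponding input. A minimal sketch; the data and learning rate here are made up for illustration and are not the paper's experimental settings:

```python
def widrow_hoff_step(weights, x, target, rate):
    """One Widrow-Hoff (LMS) update on a linear unit: w += rate*(t - y)*x."""
    y = sum(w * xi for w, xi in zip(weights, x))  # linear output
    err = target - y
    return [w + rate * err * xi for w, xi in zip(weights, x)], err

# teach y = 2*x1 (second input component is a fixed bias input of 1.0)
w = [0.0, 0.0]
for _ in range(200):                               # 200 passes over the data
    for x1, t in [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0)]:
        w, _ = widrow_hoff_step(w, [x1, 1.0], t, rate=0.1)
```

Training-set size, presentation order and the rate argument are exactly the "teaching strategy" knobs the abstract evaluates.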

**Ezhov A.A., Khromov A.G., Knizhnikova L.A., Vvedensky V.L.: Self-Reproducible Networks: Classification, Antagonistic Rules and Generalization, p. 52**

Self-reproducible neural networks with synchronously changing neuron thresholds are interesting objects for theoretical investigations and computer modeling. The networks with anti-Hebbian bonds are described.
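An anti-Hebbian bond is one whose weight decreases when pre- and postsynaptic activity are correlated, i.e. the Hebbian rule with the sign flipped. In its simplest textbook form (the paper's antagonistic rules are more elaborate):

```python
def anti_hebbian_update(w, pre, post, rate=0.1):
    """Anti-Hebbian update: correlated activity *decreases* the weight.

    w    -- current connection weight
    pre  -- presynaptic activity
    post -- postsynaptic activity
    Simplest textbook form, shown for orientation only; the paper's
    antagonistic rules are richer than this.
    """
    return w - rate * pre * post
```

With this sign convention, anticorrelated activity (`pre` and `post` of opposite sign) strengthens the bond instead of weakening it.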

**Frank O.: Statistical Models of Intraneural Topography, p.58**

**Gavrilov A. V.: An Architecture of Neurocomputer for Image Recognition, p.59**

**Tutorial: Hořejš J.: A View on Neural Network Paradigms Development, p.61**