Contents of Volume 10 (2000)

6/2000

  • [1] Editorial, 907.
  • [2] Kioutsioukis I. (Italy), Melas D. (Greece), Ziomas I.C. (Greece), Skouloudis A. (Italy): Predicting peak photochemical pollutant concentrations with a combination of neural network models, 909-916.

    The paper describes an attempt at 24-hour prediction of photochemical pollutant levels using neural network models. Two models are developed for this purpose that relate peak pollutant concentrations to meteorological and emission variables. The analysis is based on measurements of O3 and NO2 from the city of Athens. The selected input meteorological variables fulfil two criteria: (a) they cover the atmospheric processes that determine the dispersion and diffusion of the airborne pollutants, and (b) they are available from routine observations or forecasts. The comparison between model predictions and actual observations shows very good agreement.

  • [3] Frolov A.A. (Russia), Húsek D. (Czech R.), Snášel V. (Czech R.), Combe P. (France): Recall time in densely encoded Hopfield network, 917-928.

    Recall time in a densely encoded Hopfield neural network with parallel dynamics is investigated analytically and by computer simulation. The method of recall time estimation is based on the calculation of overlaps between successive patterns of the network dynamics. Recall time is estimated as the time when the overlap reaches the value 1 - m, where m is the minimal increment of overlap for a network of a given size. It is shown, first, that this time actually gives a rather accurate estimate of the recall time and, second, that the overlap between successive patterns of the network dynamics can be estimated rather accurately by the recently developed theory. It is shown that the recall process has three very different phases: the search for the recalled prototype in large steps with a low convergence rate, fast convergence to the attractor in the vicinity of the recalled prototype, and again slow convergence to the attractor when it is almost reached. If the recall process ends during the first two phases, point attractors dominate. If it ends during the third phase, cyclic attractors of length 2 dominate. The transition to the third phase can be revealed by computer simulation of networks of extremely large size (up to a number of neurons on the order of 10^5). A special algorithm is used to avoid storing both the connection matrix and the set of stored prototypes in computer memory.
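
    A minimal sketch of the overlap-based estimate described above (not the authors' code; the network size, noise level and Hebbian storage rule are illustrative assumptions): store random prototypes, run synchronous dynamics from a noisy cue, and report the first step at which the overlap with the recalled prototype reaches 1 - m.

      import numpy as np

      rng = np.random.default_rng(0)
      N, P = 500, 10                      # neurons and stored prototypes (illustrative sizes)
      xi = rng.choice([-1, 1], size=(P, N))
      W = (xi.T @ xi) / N                 # Hebbian connection matrix
      np.fill_diagonal(W, 0.0)

      target = xi[0]
      state = np.where(rng.random(N) < 0.85, target, -target)   # noisy cue of prototype 0

      m = 2.0 / N                         # minimal possible increment of the overlap
      overlaps = [state @ target / N]
      for step in range(1, 101):
          state = np.sign(W @ state)      # parallel (synchronous) dynamics
          state[state == 0] = 1
          overlaps.append(state @ target / N)
          if overlaps[-1] >= 1.0 - m:     # recall time: overlap has reached 1 - m
              break
      print("estimated recall time (steps):", step, "final overlap:", overlaps[-1])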

  • [4] Levendovszky J. (Hungary), van der Meulen E.C. (Belgium), Elek Zs. (Hungary): Nonparametric Bayesian estimation by feedforward neural networks, 929-957.

    The paper is concerned with developing novel nonparametric detectors implemented by neural networks. These detection algorithms are of great importance in point-to-point microwave digital communication, and in mobile communication as well. It is proven that asymptotically optimal detection performance can be achieved by the proposed methods. The complexity of the newly developed algorithms is minimized by different coding techniques. Extensive numerical results demonstrate the optimal performance of the new detection schemes in the case of different channel models. This optimized algorithm enables practical real-time detection in digital communication systems.

  • [5] Mokriš I. (Slovakia), Turčaník M. (Slovakia): A comment to the invariant pattern recognition by multilayer perceptron, 959-967.

    The paper deals with the analysis of a multilayer perceptron with a sigmoidal activation function used for invariant pattern recognition. The analysed invariance of the multilayer perceptron is oriented towards the recognition of translated, rotated, dilated, destroyed and incomplete patterns. The parameters of the analysis are the number of hidden layers, the number of neurons in the hidden layers and the number of learning cycles of the back-propagation learning algorithm. The results of the analysis can be used to evaluate the quality of invariant pattern recognition by a multilayer perceptron.

  • [6] Mokriš I. (Slovakia), Turčaník M. (Slovakia): Contribution to the analysis of multilayer perceptrons for pattern recognition, 969-982.

    The paper deals with the analysis of a multilayer perceptron with a sigmoidal transfer function used for pattern recognition. The optimal numbers of hidden layers, neurons in hidden layers, and synapses of the multilayer perceptron were analysed. The number of learning cycles was also analysed using the back-propagation algorithm for a multilayer perceptron.

  • [7] Nunnari G. (Italy): Simplified fuzzy modelling of pollutant time series, 983-1000.

    The paper presents a new strategy to reduce the complexity of NARX (Non-linear AutoRegressive with eXogenous inputs) fuzzy singleton models, which are suitable for modelling pollutant time series. The proposed strategy is based on a mixed clustering-statistical approach, which greatly reduces the number of fuzzy rules. The paper describes in detail the procedure for choosing the structure of the fuzzy model and identifying its parameters. An application to modelling an ozone time series demonstrates the benefit of this approach for both approximation and analysis purposes, and the results are compared with commercially available tools.

  • [8] Peruš M. (Slovenia): Neural networks as a basis for quantum associative networks, 1001-1013.

    We have a great deal of experience with computer simulations of Hopfield and holographic neural net models. Taking these models as starting points, this paper presents an analogous quantum information processing system called a quantum associative network. It was obtained by translating an associative neural net model into the mathematical formalism of quantum theory in order to enable microphysical implementation of associative memory and pattern recognition. It is expected that successful quantum implementation of the model would yield many benefits, including significant increases in speed, miniaturization, efficiency of performance, and in memory capacity. These benefits would accrue through the additional exploitation of quantum-phase encoding.

5/2000

  • [1] Editorial, 775-776.
  • [2] Amato P. (Italy), Porto M. (Italy): An algorithm for the automatic generation of a logical formula representing a control law, 777-786.

    Given a continuous control function f we present an approximation algorithm for f, based on McNaughton's representation of propositions in the infinite-valued calculus of Łukasiewicz. Our algorithm outputs a formula belonging to a fragment of the Esteva-Godo-Montagna logic ŁΠ½; this formula, on the one hand, represents the human expert's approximate description of f, and on the other hand describes a function telling us, for every pair x, y in the phase space of the system, how much the output value y is appropriate for the input value x. We discuss the relevance of the algorithm for the general problems of fuzzy control.

  • [3] Andrejková G. (Slovakia): Applications of the approximation theory by neural networks, 787-795.

    The results of Kolmogorov's theorem and Jones's theorem are used for approximations of continuous functions. We present incremental algorithms operating on one- and two-hidden-layer neural networks with linear output units in such a way that in each iteration, a new hidden unit is put in the first or in the second hidden layer. The weight parameters of the new units are determined and the output weights of all units are recalculated. We apply the algorithms to a special class of functions (for the prediction of geomagnetic storms).

  • [4] Coufal D. (Czech R.): Initialization of possibilistic clustering via mountain clustering method, 797-809.

    In this paper a combination of the mountain clustering method with the fuzzy c-means and possibilistic c-means algorithms is investigated. The idea behind this combination is to automate the setting of the input parameters of the c-means algorithms. The proposed solutions are demonstrated on several experiments.
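
    A rough sketch of the idea, under illustrative assumptions (the grid resolution, the kernel widths sigma and beta, and the toy data are not the paper's settings): evaluate the mountain function on a grid, take its peaks as initial centres, and pass them to fuzzy c-means.

      import numpy as np

      def mountain_centres(X, c, grid_pts=20, sigma=0.4, beta=0.6):
          lo, hi = X.min(axis=0), X.max(axis=0)
          axes = [np.linspace(l, h, grid_pts) for l, h in zip(lo, hi)]
          grid = np.stack(np.meshgrid(*axes), axis=-1).reshape(-1, X.shape[1])
          d2 = ((grid[:, None, :] - X[None, :, :]) ** 2).sum(-1)
          M = np.exp(-d2 / (2 * sigma**2)).sum(axis=1)      # mountain function on the grid
          centres = []
          for _ in range(c):
              k = np.argmax(M)
              centres.append(grid[k])
              # destruction step: subtract the influence of the chosen peak
              M -= M[k] * np.exp(-((grid - grid[k]) ** 2).sum(-1) / (2 * beta**2))
          return np.array(centres)

      def fuzzy_cmeans(X, V, m=2.0, iters=50):
          for _ in range(iters):
              d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=-1) + 1e-12
              U = 1.0 / (d ** (2 / (m - 1)))
              U /= U.sum(axis=1, keepdims=True)              # fuzzy membership matrix
              V = (U.T ** m @ X) / (U.T ** m).sum(axis=1, keepdims=True)
          return V, U

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(c, 0.15, size=(100, 2)) for c in [(0, 0), (1, 1), (0, 1)]])
      V0 = mountain_centres(X, c=3)                          # automated initialisation
      V, U = fuzzy_cmeans(X, V0)
      print("initial centres:\n", V0.round(2), "\nrefined centres:\n", V.round(2))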

  • [5] Godo L. (Spain), Esteva F. (Spain), Hájek P. (Czech R.): Reasoning about probability using fuzzy logic, 811-824.

    In this paper we deal with an approach to reasoning about numerical beliefs in a logical framework. Among the different models of numerical belief, probability theory is the most relevant. Nearly all logics of probability that have been proposed in the literature are based on classical two-valued logic. After making clear the differences between fuzzy logic and probability theory, that apply also to uncertainty measures in general, here we propose two different theories in a fuzzy logic to cope with probability and belief functions respectively. Completeness results are provided for them. The main idea behind this approach is that uncertainty measures of crisp propositions can be understood as truth-values of some suitable fuzzy propositions associated to the crisp ones.

  • [6] Kramosil I. (Czech R.): Elements of Boolean-valued Dempster-Shafer Theory, 825-835.

    There are several reasons why non-numerical degrees of belief also deserve attention when proposing mathematical models for uncertainty quantification and processing based on the theory of belief functions (Dempster-Shafer theory, in other terms). Here we introduce and briefly survey a Boolean-valued modification of this theory and we show that for almost all elementary notions and constructions of this theory Boolean analogies can be found, and analogous assertions stated and proved, in a more or less routine way following the deep formal analogies between possibilistic and probabilistic measures analyzed in detail.

  • [7] Montagna F. (Italy): The free BL-algebra on one generator, 837-844.

    Free algebras are always important in algebra. In particular, in Fuzzy Logic free algebras allow us to regard propositions as functions, and are relevant in many contexts, for instance in the Proof Theory of Fuzzy Logic. Recently, Hájek introduced a fuzzy logic, named Basic Logic (BL for short), which is a very natural fragment common to all the most important known fuzzy logics. As far as I know, only free algebras of very particular subvarieties of BL-algebras are known. The present paper contains a complete description of the free BL-algebra on one generator in terms of McNaughton functions.

  • [8] Navara M. (Czech R.): Satisfiability in fuzzy logics, 845-858.

    The notion of validation set of a formula in a fuzzy logic was introduced by Butnariu, Klement and Zafrany. It is the set of all evaluations of the formula for all possible evaluations of its atomic symbols. We generalize this notion to sets of formulas. This enables us to formulate and prove generalized theorems on satisfiability and compactness of various fuzzy logics. We also propose and study new types of satisfiability and consistency degree of a set of formulas.

  • [9] Neruda R. (Czech R.), Krušina P. (Czech R.), Petrová Z. (Czech R.): Towards soft computing agents, 859-867.

    This paper shows the concept of combining several modern artificial intelligence methods that fall within the soft computing area - namely neural networks, genetic algorithms, and fuzzy logic controllers - embodied as agents. We have created a unified software platform, a system called Bang, that allows for easy combination of these agents, their cooperation and concurrent multi-threaded work. The ideas behind the design of this open software tool are presented together with the brief description of how the system works. Several experiments with hybrid methods show the possible use of the system in rapid prototype model design. Finally, ideas for future work are sketched.

  • [10] Novák V. (Czech R.): On functions in fuzzy logic with evaluated syntax, 869-875.

    The paper is a contribution to the theory of fuzzy logic in the narrow sense with evaluated syntax (FLn). We discuss the possibility to introduce functional symbols and fuzzy equality in it. We show that completeness with fuzzy equality is not harmed and that extension by functional symbols is conservative.

  • [11] Perfilieva I. (Czech R.): Fuzzy relations, functions, and their representation by formulas, 877-890.

    In this paper, we present the representation theorem for fuzzy relations, introduce the disjunctive and conjunctive normal forms and formulate what is understood by an approximation of continuous functions by appropriate fuzzy relations via defuzzification. The best approximation property for a certain choice of defuzzification is established.

  • [12] Vinař J. (Slovakia), Vojtáš P. (Czech R.): A formal model for fuzzy knowledge based systems with similarities, 891-905.

    The paper studies the problem of valid reasoning from data in a situation when attribute values are crisp but there is uncertainty concerning the identity of objects possessing these attributes. In this situation similarity between object names must be taken into consideration. A motivating example is used to illustrate a formal mathematical model.

4/2000

  • [1] Franklin S. (USA): Deliberation and voluntary action in "conscious" software agents, 505-521.

    Here we describe briefly the architecture and mechanisms of a "conscious" software agent named IDA, who is to act as a type of employment agent. We go on to show how this architecture and these mechanisms allow IDA to deliberate much as we humans do and to make voluntary choices of actions. The technology described, if successful, should allow the automation of the work of many human information agents. The IDA model also makes hypotheses, hopefully testable, about how human deliberation and volition operate.

  • [2] Alexandru M. (Romania): Building on-line diagnosis system upon artificial neural network techniques, 523-534.

    In this article a distributed system architecture for process monitoring, fault diagnosis and assisted maintenance is proposed. The diagnosis system aims at identifying failures as and when they happen in normal operation. A neural network classifier is developed to diagnose sensor and actuator faults. The implementation of dynamic diagnostic decisions is interesting in the case of closed-loop systems, for which the residual responses may be transient. The stated aim of the suggested neural technique is to obtain persistent fault isolation in the presence of unknown inputs (torque disturbances), even if some residuals go down to zero after compensation of the fault effect by the control law. The fuzzified residuals are evaluated with the aim of generating an alarm shortly after fault occurrence and isolating the different possible faults. Simulations were performed under different significant operating conditions, in the presence of load torque variations and changes of electrical parameters, in order to ensure that the training and test sets cover the whole behavior of the system. The supervisory intelligent system currently under development has to:

    • detect and interpret the abnormal conditions that will cause an incident
    • find the reasons for equipment malfunctions and determine what kind of action should be taken to return the process to normal conditions.
  • [3] Bullinaria J. A. (UK), Riddell P. M. (UK): Learning and evolution of control systems, 535-544.

    The oculomotor control system, like many other systems that are required to respond appropriately under varying conditions to a range of different cues, would be rather difficult to program by hand. A natural solution to modelling such systems, and formulating artificial control systems more generally, is to allow them to learn for themselves how they can perform most effectively. We present results from an extensive series of explicit simulations of neural network models of the development of human oculomotor control, and conclude that lifetime learning alone is not enough. Control systems that learn also benefit from constraints due to evolutionary type factors.

  • [4] Cheng B.-B. (ROC): Multi-response optimization based on a neuro-fuzzy system, 545-551.

    Multi-response optimization involves the identification of a system under study and the optimization of this system's responses based on the identified system model. Traditionally, system identification is done by statistical regression analysis in which an exact functional form is assumed. With the consideration of unknown and nonlinear systems frequently being encountered in practice, we use a neuro-fuzzy system, which falls into the category of non-parametric regression analysis, to identify the system. An algorithm is also proposed in this paper to solve the response optimization problem based on the neuro-fuzzy system.

  • [5] Dermatas E. (Greece): Polynomial extension of the generalized radial-basis function networks in density estimation and classification problems, 553-564.

    A new family of generalized radial-basis function networks is presented and evaluated on a two-dimensional, two-class pattern classification problem. More specifically, the linear layer of the RBF network is replaced by sigma-pi neurons, and a novel training algorithm based on moments is used to estimate the output layer weights. The experimental results show a faster learning rate and better generalization capabilities compared to the classic RBF network, the mixture of Gaussian pdfs and the multilayer perceptron.

  • [6] Georgopoulos E. F. (Greece), Likothanassis S.D. (Greece), Adamopoulos A.V. (Greece): Evolving artificial neural networks using genetic algorithms, 565-574.

    In this paper a specially designed Genetic Algorithm for the simultaneous training and evolution of the topology of Multi-Layered Perceptrons is proposed. The algorithm is general and does not make use of any classical learning rule for the network training. In order to examine the performance of the proposed algorithm, it was applied to two real-world classification problems, the Iris Plant and the Breast Cancer datasets. The results obtained are very satisfactory.

  • [7] Hernández-Espinosa C. (Spain), Fernández-Redondo M. (Spain): On the effect of weight-decay in input selection, 575-588.

    In this paper we present research on the effect of using weight-decay in combination with input selection methods based on the analysis of a trained Multilayer Feedforward neural network. In order to apply this type of input selection method, we should first train a neural network with all the potential inputs that we want to evaluate. Some authors have used and proposed weight-decay as a better alternative to Backpropagation for training this initial network. At first, the proposal seems interesting because of the pruning capabilities of weight-decay. So, we have applied a methodology that allows experimentally evaluating and comparing feature selection methods to 17 reviewed input selection methods in combination with Backpropagation and weight-decay as training algorithms. For the comparison we have used a total of 8 different problems from the UCI repository. The results are that weight-decay diminishes the performance differences among input selection methods by improving the performance of the worst ones and decreasing the performance of the best methods. For this reason, we do not recommend using weight-decay for this task. Instead, one should select one of the best input selection methods and use it with a neural network trained by Backpropagation.
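
    As a toy illustration of this family of methods (not one of the paper's 17 methods; the network size, decay strength lam and the weight-magnitude relevance score are assumptions), one can train a small network with weight decay and rank the candidate inputs from the trained first-layer weights:

      import numpy as np

      rng = np.random.default_rng(1)
      n, d, h, lam, lr = 400, 8, 10, 1e-3, 0.05
      X = rng.normal(size=(n, d))
      y = (np.sin(X[:, 0]) + 0.5 * X[:, 1] > 0).astype(float)  # only inputs 0 and 1 matter

      W1 = rng.normal(scale=0.1, size=(d, h)); b1 = np.zeros(h)
      W2 = rng.normal(scale=0.1, size=h);      b2 = 0.0

      for _ in range(2000):
          H = np.tanh(X @ W1 + b1)                       # hidden layer
          p = 1.0 / (1.0 + np.exp(-(H @ W2 + b2)))       # output probability
          g = (p - y) / n                                # cross-entropy gradient w.r.t. logit
          gW2 = H.T @ g + lam * W2                       # weight decay adds lam * W
          gb2 = g.sum()
          gH = np.outer(g, W2) * (1 - H**2)
          gW1 = X.T @ gH + lam * W1
          gb1 = gH.sum(axis=0)
          W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

      relevance = np.abs(W1).sum(axis=1)                 # per-input weight magnitude
      print("input ranking (most to least relevant):", np.argsort(-relevance))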

  • [8] Husmeier D. (UK): Bayesian regularization of hidden Markov models with an application to bioinformatics, 589-595.

    This paper discusses a Bayesian approach to regularizing hidden Markov models and demonstrates an application of this scheme to Bioinformatics.

  • [9] Hussen M.P.B. (UK), Althoefer K.A. (UK), Seneviratne L.D. (UK): Sewer defect detection and classification using a neural network, 597-605.

    The standard sewer inspection technique, based on closed-circuit television systems, is limited in its performance. It only acquires data from the gaseous part of sewers while the bottom area of the pipe where the sewage is transported is difficult to inspect with existing techniques. Relying on camera data only is not sufficient and may lead to catastrophic events such as sewer collapse. In order to improve the knowledge about the sewer status, a hybrid approach is suggested which complements the existing inspection technique for the acquisition of additional data from the pipe wall below sewage. This paper proposes an inspection system, which makes use of an ultrasonic sensor and a neural network to interpret the acquired sensor signals. The use of the proposed network will remove the need for human operators and minimise the impact of human error and concentration lapses that are common during manual inspection. The network's task is not only to detect sewer defects but also to determine their location in the inner wall of the inspected sewer segment. Experimental set-up, network structure and training as well as initial results using this underwater ultrasonic inspection system in a sewer environment are presented.

  • [10] Kamimura R. (Japan): Conditional information control for feature detection and pattern classification, 607-618.

    In this paper, we propose a new method to selectively maximize or minimize conditional information according to the importance or characteristics of input patterns. For conditional information control, we introduce α-information, used to distort the ordinary Shannon information function. Information must then be maximized or minimized to eliminate this distortion. Thus the distortion elimination can be used as a basic mechanism of conditional information control. The information control was applied to alphabet character recognition problems and medical data analysis. Experimental results confirmed that conditional information is flexibly maximized or minimized, depending upon the input patterns. We could also see that conditional information is a good measure to distinguish between different classes, and that the strength of conditional information can be used to classify input patterns.

  • [11] Koutras A. (Greece), Dermatas E. (Greece), Kokkinakis G. (Greece): Blind separation of speakers in noisy reverberant environments: a neural network approach, 619-630.

    In this paper we present neural network solutions to the Blind Signal Separation problem of simultaneous speech signals in reverberant noisy rooms. The separation networks used are feedforward and recurrent neural networks, along with a proposed hybrid network. These networks perform separation of convolutive speech mixtures in the time domain, without any prior knowledge of the propagation media, based on the Maximum Likelihood Estimation (MLE) criterion. The proposed separation networks improve the Signal-to-Interference Ratio by more than 30 dB in a two-simultaneous-speaker environment, even in the presence of a noise source (more than 15 dB improvement in a 0 dB SNR noisy environment). In addition, the recognition accuracy of a continuous phoneme-based speech recognition system was improved by more than 20% in all adverse mixing situations with high interference from competing speakers and noise. Therefore, the proposed separation networks can be used as a front-end processor for continuous speech recognition of simultaneous speakers in real reverberant rooms.

  • [12] Kusumoputro B. (Indonesia), Sulita A. (Indonesia): Genetic algorithms in optimization of cylindrical-hidden layer neural network for 3-D object recognition system, 631-639.

    Recognition systems for three-dimensional (3-D) objects have recently been developed due to their importance in multimedia systems. However, high recognition performance has not yet been achieved. Many difficulties occur; for example, even for a simple 3-D object, a rather large number of two-dimensional images observed from various viewpoints must be processed. As a consequence, when a 3-D object is to be memorized for recognition, a large memory may be required. In this paper, an alternative technique using a neural network system is developed. Neural networks have been successfully applied in the fields of pattern recognition and pattern classification, and to other important problems such as function approximation. We have modified the conventional multi-layer perceptron (MLP) neural network by substituting each neuron in its hidden layer with a circular structure of neurons, constructing a cylindrical hidden layer (CHL-NN). Thus the hidden layer of the CHL-NN consists of stacked circular structures of hidden neurons. It is experimentally shown that increasing the number of rings of neurons in each circular structure increased the recognition capability of the system to 97.20%. However, since the neural system consumes high computational cost and memory due to the many connections of neurons in its hidden layer, optimizing these neurons without sacrificing the recognition rate is important, especially for applications on low-cost computers. The authors developed an optimization of the cylindrical hidden layer neural system using genetic algorithms and used it as a 3-D pattern recognition system. Experimental results show that the optimized neural system has a higher capability to recognize the 3-D objects even with fewer hidden neurons. When a multiplied mode of neuron configuration is used, a 100% recognition rate could be achieved even when 46.30% of the total neurons of the CHL-NN were deleted by the genetic algorithm. The overall improvement in classification accuracy achieved by the GA-based algorithm shows that a globally optimal strategy for determining the relation between the number of neurons and the network error has a positive impact on performance.

  • [13] Lara B. (UK), Seneviratne L.D. (UK), Althoefer K. (UK): Screw insertion error classification using radial basis artificial neural networks, 641-652.

    The screw insertion process can be monitored using the torque signature signals during insertions. It has been shown that artificial neural networks provide an effective means of monitoring screw fastenings. However, the research to date provides only a binary successful/unsuccessful type of classification. In practice, when a fault occurs, it is useful to know the cause, as several events such as jamming and cross-threading can lead to failure. A radial basis neural network is used to classify insertion signals, differentiating successful insertions from failed insertions and categorising different types of insertion failures. A normalised representation of the insertion signal is used as the input to the network. It is shown that the approach is a reliable and robust tool for failure classification. This paper describes the network classification experiments conducted and presents the results obtained.

  • [14] Moschitti A. (Italy): A novel neural approximation technique for integer formulation of the asymmetric travelling salesman problem, 653-664.

    In this paper a novel approximation algorithm for the integer formulation of the Asymmetric Travelling Salesman Problem, based on a sequence of Hopfield networks, is described. The sequence converges in polynomial time to a state representing a feasible solution for a given instance. The results, obtained with two different evaluation methods for the designed neural algorithm, show that this technique helps make neural network approximation approaches competitive with the more classical ones.

  • [15] Musílek P. (Canada): Neurocomputing with fuzzy numbers, 665-674.

    Neural networks and fuzzy systems are two important tools for solving a variety of problems in many areas of engineering and science. The products of their fusion, fuzzy neural networks, exploit favorable properties of both constituents. In this paper we concentrate on the combination of neurocomputing and fuzzy arithmetic: fuzzy arithmetic neural networks. As opposed to their fuzzy logic counterparts, fuzzy arithmetic neural networks perform computations in a way similar to standard neural networks, but on uncertain quantities. When dealing with fuzzy quantities, one has to be equipped with appropriate tools. While studying the properties of fuzzy arithmetic operations and their suitability for fuzzy computations in neural networks, we found the tools of standard fuzzy arithmetic inappropriate. Standard fuzzy arithmetic operations generally cause a great increase in fuzziness, rendering the results unusable for the purposes of fuzzy neurocomputing. To overcome these shortcomings, we have introduced a set of modified fuzzy arithmetic operations more appropriate for the given purposes. The description and analysis of these operations constitute the major part of this paper.
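
    A small illustration of the problem mentioned above, using standard triangular fuzzy arithmetic only (the paper's modified operations are not reproduced here): propagating mildly fuzzy inputs through an ordinary weighted sum makes the spread of the result grow with every term.

      import numpy as np

      # triangular fuzzy numbers as (left, mode, right); standard fuzzy arithmetic
      def scale(a, w):
          lo, m, hi = a
          return (w * lo, w * m, w * hi) if w >= 0 else (w * hi, w * m, w * lo)

      def add(a, b):
          return tuple(x + y for x, y in zip(a, b))

      rng = np.random.default_rng(0)
      inputs = [(0.9, 1.0, 1.1)] * 10                 # ten mildly fuzzy inputs
      weights = rng.uniform(-1, 1, size=10)

      acc = (0.0, 0.0, 0.0)
      for a, w in zip(inputs, weights):
          acc = add(acc, scale(a, w))                 # standard fuzzy weighted sum

      spread_in = inputs[0][2] - inputs[0][0]
      spread_out = acc[2] - acc[0]
      print(f"input spread: {spread_in:.2f}, output spread: {spread_out:.2f}")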

  • [16] Neagu C.-D. (Romania), Palade V. (Romania): An interactive fuzzy operator used in rule extraction from neural networks, 675-684.

    In the joint effort of building shells for fuzzy rule-based systems with a homogeneous architecture based on neural networks, a difficult task is to exhibit and explain the results of neural computation as a parallel inference process. This paper focuses on a strictly fuzzy approach to neural networks, and proposes fuzzy operators in order to extract connectionist knowledge on the basis of the concept of f-duality. The methodology is tested using two known benchmarks: the iris problem and the portfolio problem.

  • [17] Park T.S. (Korea), Lee C.H. (Korea), Choi M.R. (Korea): Cooperative parasite genetic algorithm for intrinsic circuit evolution, 685-694.

    In this paper, a new genetic algorithm, which leads to effective solutions in complicated solution spaces, such as intrinsic digital circuit synthesis, is introduced. The idea comes from the host-parasite model and the invasion operator. The host genome is a genotype, which is designed to describe a circuit and its evolution process is governed by a traditional GA. The parasite genome, which is to replace a part of the host genome, is refreshed successively, independent from the process of GA. The continuous invasions of genomes from the parasite gene pool to host genomes rescue the host chromosomes from the local minima even in the case of problems with extremely complex schema. The cooperative parasite genetic algorithm is evaluated for a few intrinsic circuit evolution cases.

  • [18] Schwenker F. (Germany), Bohnenstengel G. (Germany): Predicting the capacity of telephone fixed Internet lines using radial basis function networks, 695-701.

    Short-term forecasting of the capacity of a telephone fixed Internet line (here called a leased line) using artificial neural networks is the topic of this paper. The data sets used for network training and testing have been sampled online in the internet service center (ICS) of eXtension World Wide Connections. For this task, a special software system (the data gathering system, DGS) has been implemented on the gateway of the ICS to the Internet. For the prediction task a radial basis function (RBF) network was integrated into the DGS.

  • [19] Shamsuddin A. (Australia), Cross J. (Australia), Bouzerdoum A. (Australia): Performance analysis of a new multi-directional training algorithm for feed-forward neural networks, 703-712.

    A derivative-free training method that uses multi-directional search is developed. It uses a restricted momentum search, which provides a smooth descent to a local minimum of the error function. The momentum search is performed using past and current search directions. The different search directions are derived from rectilinear and Euclidean moves. A constrained univariate interpolation search is employed to determine the momentum parameter so that the direction of search does not fail to locate a local minimum. The XOR problem is used to compare the performance of the algorithm with standard back-propagation training and other training methods reported in the literature. The proposed algorithm improves on the standard first-order back-propagation training method in the number of epochs, speed of convergence, and number of function evaluations.

  • [20] da Silva I.N. (Brazil), de Arruda L.V.R. (Brazil), do Amaral W.C. (Brazil): A modified Hopfield architecture for solving nonlinear programming problems with bounded variables, 713-722.

    A neural network model for solving nonlinear programming problems with bounded variables is presented in this paper. More specifically, a modified Hopfield network is developed and its internal parameters are computed using the valid-subspace technique. These parameters guarantee the convergence of the network to the equilibrium points. The network is shown to be completely stable and globally convergent to the solutions of nonlinear programming problems. Simulation results are presented to validate the proposed approach.

  • [21] da Silva I.N. (Brazil), de Souza A.N. (Brazil), Bordon M.E. (Brazil): A method for solving dynamic programming problems using artificial neural networks and fuzzy sets, 723-737.

    Systems based on artificial neural networks have high computational rates due to the use of a massive number of simple processing elements and the high degree of connectivity between these elements. Neural networks with feedback connections provide a computing model capable of solving a large class of optimization problems. This paper presents a novel approach for solving dynamic programming problems using artificial neural networks. More specifically, a modified Hopfield network is developed and its internal parameters are computed using the valid-subspace technique. These parameters guarantee the convergence of the network to the equilibrium points. A fuzzy logic controller is incorporated in the network to minimize convergence time. Simulated examples are presented and compared with other neural networks. The results demonstrate that the proposed method gives a significant improvement.

  • [22] Villmann T. (Germany): Controlling strategies for the magnification factor in the neural gas network, 739-750.

    In the present article we give several strategies for controlling the magnification factor in the Neural Gas Network (NG). As a consequence we are able to achieve a network which realizes an optimal information transfer. The strategies are theoretically derived and validated by numerical simulations.

  • [23] Villmann T. (Germany), Hermann W. (Germany), Geyer M. (Germany): Variants of self-organizing maps for data mining and data visualization in medicine, 751-762.

    In the present contribution the authors show the application of Self-Organizing Maps (SOMs) for the visualization of data in the medical area. For this purpose, extensions of the usual SOM are discussed in order to obtain an adequate visualization result for easy assessment by medical experts.

  • [24] Weitzenfeld A. (Mexico): A multi-level approach to biologically inspired robotic systems, 763-774.

    The study of biological systems has inspired the development of a large number of neural network architectures and robotic implementations. Through both experimentation and simulation, biological systems provide a means to understand the underlying mechanisms in living organisms while inspiring the development of robotic applications. Experimentation, in the form of data gathering (ethological, physiological and anatomical), provides the underlying data for simulation, generating predictions to be validated by theoretical models. These models provide an understanding of the underlying neural dynamics, and serve as a basis for simulation and robotic experimentation. Due to the inherent complexity of these systems, a multi-level analysis approach is required, where biological, theoretical and robotic systems are studied at different levels of granularity. The work presented here overviews our existing modeling approach and describes current simulation results.

3/2000

  • [1] Editorial, 299.
  • [2] Hájek P. (Czech R.): Logics for data mining (GUHA rediviva), 301-311.

    The logic of monadic observational calculi (as a logic for data mining) is surveyed. Two approaches to making it fuzzy are discussed and several open problems are posed.

  • [3] Kainen P.C. (USA), Vogt A. (USA), Kůrková V. (Czech R.): An integral formula for Heaviside neural networks, 313-319.

    A connection is investigated between integral formulas and neural networks based on the Heaviside function. The integral formula developed by Kůrková, Kainen and Kreinovich is derived in a new way for odd dimensions and extended to even dimensions. In particular, it is shown that well-behaved functions of d variables can be represented by integral combinations of Heavisides with weights depending on higher derivatives.
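
    Schematically, and only as a sketch (the exact dimension-dependent constant and the regularity assumptions are given in the cited Kůrková-Kainen-Kreinovich work), the representation has the form of an integral combination of Heaviside plane waves:

      \[
        f(x) = \int_{S^{d-1}} \int_{\mathbb{R}} w_f(e,b)\, \vartheta(e \cdot x + b)\, \mathrm{d}b\, \mathrm{d}e,
        \qquad
        w_f(e,b) = a_d \int_{\{y \,:\, e \cdot y + b = 0\}} \big(D_e^{(d)} f\big)(y)\, \mathrm{d}y,
      \]

    where $\vartheta$ is the Heaviside function, $D_e^{(d)}$ denotes the $d$-th directional derivative in the direction $e$, and $a_d$ is a constant depending only on the dimension $d$.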

  • [4] Kerckhoffs E.J.H. (The Netherlands), Water P.R. (The Netherlands): The "group method of data handling" applied to dynamic systems modeling and simulation, 321-332.

    In this paper we give results of applying two different variants of the Group Method of Data Handling (GMDH), a slightly modified basic GMDH and the so-called heuristic-free GMDH, to black-box modeling of dynamic systems. On the basis of observed input-output data, the GMDH net (a kind of neural network) is trained to reveal the "relevant" inputs with their time lags; besides this "data mining" (or more precisely "dependency modeling") aspect, the trained GMDH provides an output value for any concrete "relevant" input. The approach is tested on a number of application examples; a synthetic one and two real-world applications are considered in this paper. The (very computation-intensive) heuristic-free approach shows the better performance, which justifies the employed parallel and/or distributed processing. Our parallel GMDH implementation allows flexible experimentation with various experiment parameters, such as, among others, different selection criteria. This facilitates finding the GMDH configurations with optimal performance.

  • [5] Arulampalam G. (Australia), Bouzerdoum A. (Australia): Training shunting inhibitory artificial neural networks as classifiers, 333-350.

    Shunting Inhibitory Artificial Neural Networks (SIANNs) are biologically inspired networks in which the neurons interact with each other via a nonlinear mechanism called shunting inhibition. Cellular neural networks based on shunting inhibition have been used successfully in vision and image processing applications. In this article, we apply SIANNs to classification problems. Since they are high-order networks, SIANNs are capable of producing complex, nonlinear decision boundaries. A network structure for feedforward SIANNs is presented, and training algorithms based on gradient descent and Levenberg-Marquardt have been developed for them. They have been applied to some standard classification problems such as the XOR, 3-bit and 5-bit parity problems. Some improvements to the network structure and training procedure have also been proposed and tested. SIANNs have shown the capability of being trained to handle nonlinear classification problems, such as the parity problem, with high success rates using relatively small and simple network structures.

  • [6] Barrera-Cortés J. (Mexico), Baruch I. (Mexico): A recurrent neural network for identification and prediction of B.t. fermentation process, 351-359.

    A new Recurrent Neural Network Model (RNNM) to identify and predict the production of Bacillus thuringiensis in a Fed Batch Fermentation (FBF) process was developed. The fermentation kinetic data (concentrations of bacteria, spores, glucose and nitrogen) are considered both as input and output data of the RNNM. The Total Solids Initial Concentration (TSIC) of the culture is taken as an additional input. So, the multi-input multi-output RNNM has five inputs, four outputs, nine neurones in the hidden layer, and also main and local feedbacks. The weight update learning algorithm is a version of backpropagation through time, specially designed for this RNN topology. The mean square error obtained for the last epoch of learning is 2.3% and the total learning time is 51 epochs, where the epoch size is 75 iterations. The learning process is applied simultaneously to four fermentation kinetics data sets of different TSIC (60, 105, 150 and 200 g/l) and good results have been obtained.

  • [7] Castellano G. (Italy), Fanelli A.M. (Italy): Fuzzy inference and rule extraction using a neural network, 361-371.

    This paper proposes a neural network for building and optimizing fuzzy models. The network can be regarded both as an adaptive fuzzy inference system with the capability of learning fuzzy rules from data, and as a connectionist architecture provided with linguistic meaning. Fuzzy rules are extracted from training examples by a hybrid learning scheme comprised of two phases: a rule generation phase from data using modified competitive learning, and a rule parameter tuning phase using gradient descent learning. This allows simultaneous definition of the structure and the parameters of the fuzzy rule base. After learning, the network encodes in its topology the essential design parameters of a fuzzy inference system. A well-known classification benchmark is used to illustrate the applicability of the proposed neuro-fuzzy hybrid network.

  • [8] Donnelly G.M. (Northern Ireland), Ojha P.C. (Northern Ireland), Bell D.A. (Northern Ireland): Tiling algorithm for datasets with exceptional cases, 373-379.

    We report further work with variants of the tiling algorithm. Our aim is to optimise the algorithm to produce compact networks, in the hope that these would generalise better. To this end, we have now extended the algorithm to allow for exceptional cases in the training set which the network is not expected to classify correctly. Experiments with one of the benchmark datasets (the sonar data of Gorman and Sejnowski) suggest that one closely related group of variants consistently produces more compact networks than others and, when exceptional training data are allowed for, the networks become even more compact. The performance of this group of variants on the test set is, however, the poorest. We analyse this in terms of the computation carried out by the first hidden layer.

  • [9] Eaton M. (Ireland), McQuade E. (Ireland): Evolutionary selection of neural network weights for plant control, 381-387.

    In this paper we investigate the control of an unstable second-order linear system using a neural network whose weights are chosen by a genetic algorithm. The genetic algorithm, originally proposed by Holland [1], is a powerful search and optimisation technique based on natural evolution and natural genetics. The genetic algorithm is used to evolve the weights of a population of neural-like linear controllers, each of whose individual performances is evaluated using the ITAE (Integral of Time by Absolute value of Error) criterion applied to the response of the controlled plant to a step input of fixed magnitude.
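
    A compact sketch of this evaluate-by-ITAE loop (illustrative only: the plant, the controller structure u = w1*(r - x) - w2*x' and the GA settings below are assumptions, not those of the paper): each candidate weight vector acts as a linear feedback controller, its step response is simulated, and ITAE, accumulated as the sum of t*|e(t)|*dt, is the fitness to be minimised.

      import numpy as np

      rng = np.random.default_rng(0)
      dt, T = 0.01, 5.0
      steps = int(T / dt)

      def itae(weights):
          # hypothetical unstable second-order plant x'' = 0.5*x + u with controller
          # u = w1*(r - x) - w2*x' (proportional on the error, damping on the velocity)
          x, v = 0.0, 0.0
          r = 1.0                               # step reference of fixed magnitude
          score = 0.0
          for k in range(steps):
              e = r - x
              u = weights[0] * e - weights[1] * v
              a = 0.5 * x + u                   # unstable open-loop dynamics
              v += a * dt
              x += v * dt
              score += (k * dt) * abs(e) * dt   # ITAE accumulation
              if abs(x) > 1e3:                  # diverged: heavy penalty
                  return 1e6
          return score

      # simple generational GA over the controller weights
      pop = rng.uniform(-5, 5, size=(40, 2))
      for gen in range(60):
          fit = np.array([itae(w) for w in pop])
          parents = pop[np.argsort(fit)[:10]]                    # truncation selection
          children = []
          while len(children) < len(pop) - len(parents):
              p1, p2 = parents[rng.integers(10, size=2)]
              child = np.where(rng.random(2) < 0.5, p1, p2)      # uniform crossover
              children.append(child + rng.normal(scale=0.3, size=2))  # mutation
          pop = np.vstack([parents, children])

      best = pop[np.argmin([itae(w) for w in pop])]
      print("best weights:", best.round(2), "ITAE:", round(itae(best), 4))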

  • [10] Fernández-Redondo M. (Spain), Hernández-Espinosa C. (Spain): Analysis of input selection methods for multilayer feedforward network, 389-406.

    The first step in solving a pattern recognition problem is to select the appropriate features or inputs. It is an important and difficult problem. In this paper, we review two very different types of input selection methods: the first one is based on the analysis of a trained multilayer feedforward neural network (MFNN) and the second one is based on an analysis of the training set. We also present a methodology that allows experimentally evaluating and comparing feature selection methods. This methodology is applied to the 26 reviewed methods and we evaluate the usefulness of these methods for selecting the appropriate inputs in the case of using a multilayer feedforward (MF) network as a pattern recognition method. We have used a total of 15 different real-world classification problems in our experiments. As a result, we present a ranking of methods according to their performance. It is concluded that the performance of input selection methods based on the analysis of the training set is in general worse, except for one of them, gd-distance, which is based on information theory concepts. This method seems to be one of the most promising because of its performance and low computational cost.

  • [11] Grim J. (Czech R.): Self-organizing maps and probabilistic neural networks, 407-415.

    The self-organizing map algorithm for training artificial neural networks is shown to be closely related to a sequential modification of the EM algorithm for maximum-likelihood estimation of finite mixtures. The established correspondence provides a helpful theoretical basis for the interpretation of the properties of the SOM algorithm and for the choice of the involved parameters.

  • [12] Jiřina M., jr. (Czech R.), Jiřina M. (Czech R.): Neural network classifier based on growing hyperspheres, 417-428.

    This paper discusses a new approach to the classification of high-dimensional data using a special neural network classifier, the Growing Hyperspheres neural network (GHS net), based on the representation of data by means of hyperspheres. The process of learning and recalling is precisely described. Features of the new network are discussed and compared with other neural classifiers. A task classifying data from a gamma telescope is presented to show the capabilities of the network. Finally, some values describing the nature of the data in the two-class case are introduced, and a method is outlined for estimating them using the GHS net without prior knowledge.

  • [13] Kawamoto T. (Japan), Hotta K. (Japan), Mishima T. (Japan), Fujiki J. (Japan), Tanaka M. (Japan), Kurita T. (Japan): Estimation of single tones from chord sounds using non-negative matrix factorization, 429-436.

    Non-negative Matrix Factorization (NMF) is a method to estimate a representation basis from a sample data set of vectors. This method has been used for face recognition and for semantic analysis of a corpus of encyclopedia articles. NMF gives good results in both cases; for instance, for face recognition it chooses facial parts as a representation basis. Following this success, in this paper we apply NMF to musical scene analysis in order to extract single tones. Two types of experiments are given to show whether NMF can estimate the single tones or not. Both results show that NMF is a useful tool for sound data analysis.
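
    A minimal sketch of the idea (not the paper's experiments; the toy two-tone "chord", the STFT settings and the rank r = 2 are assumptions): factor a nonnegative magnitude spectrogram V as approximately W*H with multiplicative updates, so that the columns of W act as spectral bases for the individual tones.

      import numpy as np

      rng = np.random.default_rng(0)
      sr, T = 8000, 2.0
      t = np.arange(int(sr * T)) / sr
      # toy "chord": two tones switched on in overlapping intervals
      x = np.sin(2*np.pi*440*t) * (t < 1.5) + np.sin(2*np.pi*660*t) * (t > 0.5)

      # simple magnitude spectrogram via a short-time FFT
      win, hop = 512, 256
      frames = np.array([x[i:i+win] * np.hanning(win)
                         for i in range(0, len(x) - win, hop)])
      V = np.abs(np.fft.rfft(frames, axis=1)).T + 1e-9    # (freq bins, time frames)

      r = 2                                               # number of tones assumed
      W = rng.random((V.shape[0], r)) + 1e-3
      H = rng.random((r, V.shape[1])) + 1e-3
      for _ in range(200):                                # multiplicative updates (Euclidean cost)
          H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
          W *= (V @ H.T) / (W @ H @ H.T + 1e-9)

      for k in range(r):
          peak_bin = np.argmax(W[:, k])
          print(f"basis {k}: dominant frequency ~ {peak_bin * sr / win:.0f} Hz")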

  • [14] Kramosil I. (Czech R.): Fuzzified belief functions and their approximations, 437-444.

    The standard combinatoric definition of belief functions is translated into the terms of random sets generated by a compatibility relation. This formulation is then generalized by fuzzification, i.e., the compatibility relation in question is replaced by a fuzzy relation. The resulting fuzzified belief functions are approximated by their upper bounds, obtained either by a weakening of their defining conditions or by a second-order randomization. An interesting relation between these approximations and the so-called product implication in fuzzy logic is demonstrated.

  • [15] Rothkrantz L.J.M. (The Netherlands), Nollen D. (The Netherlands): Automatic speech recognition using recurrent neural networks, 445-453.

    The main topic of this paper is the use of recurrent neural networks for automatic speech processing. The starting point was a modified version of RECNET as developed by Robinson. This phoneme recognizer is based on an RNN, while post-processing is based on Hidden Markov Models. A parallel version of RECNET was implemented on a parallel computer (nCUBE2) and an Elman RNN was used as a postprocessor. Word segmentation is also realized using an Elman RNN. The network models and the results of testing are reported in this paper.

  • [16] Rothkrantz L.J.M. (The Netherlands), Wojdel J.C. (The Netherlands), Wojdel A. (The Netherlands), Knibbe H. (The Netherlands): Ant based routing algorithms, 455-462.

  • [17] Řízek S. (Czech R.), Frolov A. (Russia), Dufossé M. (France): Adaptive neurocontrol of anthropomorphic systems, 463-471.

    The proposed neural model of control of multijoint anthropomorphic systems imitates the visual-motor transformations performed in living creatures. It involves three subtasks: design of the central neurocontroller; modelling of the neuromuscular apparatus of living creatures; development of the model of the human arm biomechanics. The Equilibrium Point theory simplifies the task (reaching movement) performed by the central neurocontroller to the inverse static problem. The proposed complex model may provide a scientific base for the design of anthropomorphic robots and manipulators.

  • [18] Schwenker F. (Germany), Dietrich C. (Germany): Initialisation of radial basis function networks using classification trees, 473-482.

    Learning in radial basis function (RBF) networks is the topic of this paper. In particular, we address the problem of initialising the centers and scaling parameters in RBF networks utilizing classification tree algorithms. This method was introduced by Kubat in 1998. Algorithms for the calculation of the centers and scaling parameters in an RBF network are presented and numerical results for these algorithms are shown for two different data sets.
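
    A rough sketch of this kind of initialisation (assuming scikit-learn's CART trees, Gaussian basis functions, and per-leaf means and spreads as centres and scales; this is not necessarily Kubat's exact recipe): grow a classification tree, place one RBF unit per leaf, and fit the output weights by linear least squares.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(0)
      X = rng.normal(size=(300, 2))
      y = (X[:, 0] * X[:, 1] > 0).astype(int)          # toy two-class problem (XOR-like)

      # 1) a classification tree partitions the input space
      tree = DecisionTreeClassifier(max_leaf_nodes=8, random_state=0).fit(X, y)
      leaf_id = tree.apply(X)                          # leaf index for every sample

      # 2) one RBF unit per leaf: centre = leaf mean, scale = leaf spread
      centres, scales = [], []
      for leaf in np.unique(leaf_id):
          pts = X[leaf_id == leaf]
          centres.append(pts.mean(axis=0))
          scales.append(pts.std(axis=0).mean() + 1e-3)
      C, S = np.array(centres), np.array(scales)

      def design(X):
          d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
          return np.exp(-d2 / (2 * S**2))              # Gaussian activations

      # 3) output weights by linear least squares on the RBF activations
      Phi = np.hstack([design(X), np.ones((len(X), 1))])
      w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
      pred = (Phi @ w > 0.5).astype(int)
      print("training accuracy:", (pred == y).mean())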

  • [19] Yeo S.W. (Korea), Lee C.H. (Korea): A new neural network model for associative learning of spatiotemporal patterns, 483-494.

    This paper proposes a new neural network model and its associative learning rule for storing and recalling spatiotemporal patterns. This network model consists of a series of two-dimensional layers, each of which retains the spatial information while the progression of layers holds temporal aspects of the pattern. Between layers are the connection synapses which transmit the signals. Each time the input pattern is provided, the synapses grow and redirect successively by the proposed associative learning rule. The network model reduces complexity burdens and performs on-line learning by adopting a new neuron model and activation function. For the application of this model, a few nursery songs are transformed into spatiotemporal patterns and applied to the network. The results show that the network recalls the stored patterns effectively. Other preferable features such as adaptive tuning, content-addressing and error-tolerance are also achieved.

  • [20] Žerovnik J. (Slovenia): On temperature schedules for generalized Boltzmann machine, 495-503.

    This paper compares several cooling schedules for the version of simulated annealing which is used in Boltzmann machines. Experimental results on a selection of graph coloring instances, obtained with the generalized Boltzmann machine, are reported.
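
    For orientation, a toy comparison of two common cooling schedules on a random graph-coloring instance (the graph, the conflict-count energy and all parameters are illustrative assumptions, not the paper's benchmark set):

      import numpy as np

      rng = np.random.default_rng(0)
      n, p, k = 60, 0.1, 3                               # random graph G(n, p), k colors
      adj = np.triu(rng.random((n, n)) < p, 1)
      edges = np.argwhere(adj)

      def energy(col):
          return int((col[edges[:, 0]] == col[edges[:, 1]]).sum())   # conflicting edges

      def anneal(schedule, steps=20000, T0=2.0):
          col = rng.integers(k, size=n)
          E = energy(col)
          for t in range(steps):
              T = schedule(t, steps, T0)
              v, c = rng.integers(n), rng.integers(k)
              old = col[v]; col[v] = c
              dE = energy(col) - E
              if dE <= 0 or rng.random() < np.exp(-dE / max(T, 1e-9)):
                  E += dE                                # accept the move
              else:
                  col[v] = old                           # reject and restore
          return E

      geometric = lambda t, steps, T0: T0 * (0.999 ** t)
      linear    = lambda t, steps, T0: T0 * (1 - t / steps)
      print("conflicts, geometric cooling:", anneal(geometric))
      print("conflicts, linear cooling:   ", anneal(linear))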

1-2/2000

  • [1] Preface, 1.
  • [2] Deboeck G.J. (USA): Modeling non-linear market dynamics for intra-day trading, 3-27.

    At the end of the 20th century speculation on short-term price movements of stocks is on the rise. Electronic direct access trading, often simply called day trading, has exploded in popularity. It has become the subject of intense interest, particularly on the part of security exchange officials, Wall Street brokerage firms, financial journalists and many individuals. This paper focuses on short-term trading. The main objectives are to review what caused the increased interest in short-term trading, demonstrate various approaches, discuss intra-day trading strategies, show how they can be deployed in practice, and illustrate what kinds of results can be achieved by day trading. As a concrete example we choose to apply various intra-day trading strategies to an Internet stock, in particular CMGI Inc. CMGI is a venture capital fund that invests in attractive Internet companies. As a venture capital fund investing in Internet stocks, CMGI is sometimes regarded as a proxy for the entire Internet sector. This paper starts by discussing the major changes that have occurred in the past couple of years in the functioning of markets and in the access to market information for individuals. It then compares strategies for short-term trading based on different frequencies - from fixed time periods to trading in variable time horizons - and various technical indicators - e.g. moving averages to provide the buy and sell signals. After reviewing some technical analysis approaches this study shows how NASDAQ Level II information can be used for intra-day trading. To detect patterns in the price formation process the study applied the self-organizing map method to hourly samples of Level II NASDAQ information. This produced two-dimensional representations of market maker behaviors. The same self-organizing map method is also used to detect turns in price movements based on a digital codification of the old Japanese candlestick approach. The underlying rationale for the price movements is explored on the basis of tick-by-tick data using correlation analysis for detecting chaos and predictability, and rescaled range analysis for computing the Hurst coefficient. Finally, recent discoveries in regard to the valuation of Internet stocks are discussed. From these, practical rules are derived for intra-day trading of Internet stocks. The main contribution of this paper is to demonstrate a variety of modeling approaches for capturing the non-linear dynamics of price movements and to derive from them practical tips for profitable day trading.
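
    As a pointer to one of the techniques mentioned above, a minimal rescaled range (R/S) estimate of the Hurst coefficient (the window sizes and the synthetic i.i.d. return series are assumptions; the paper applies this kind of analysis to tick-by-tick data):

      import numpy as np

      def rs(series):
          # rescaled range of one window: range of cumulative deviations over std
          x = series - series.mean()
          z = np.cumsum(x)
          s = series.std(ddof=0)
          return (z.max() - z.min()) / s if s > 0 else np.nan

      def hurst(returns, window_sizes):
          logs_n, logs_rs = [], []
          for n in window_sizes:
              chunks = [returns[i:i+n] for i in range(0, len(returns) - n + 1, n)]
              vals = [v for v in (rs(c) for c in chunks) if np.isfinite(v)]
              if vals:
                  logs_n.append(np.log(n))
                  logs_rs.append(np.log(np.mean(vals)))
          return np.polyfit(logs_n, logs_rs, 1)[0]       # slope of log E[R/S] vs log n

      rng = np.random.default_rng(0)
      returns = rng.normal(size=20000)                   # i.i.d. noise: expect H near 0.5
      print("estimated Hurst coefficient:", round(hurst(returns, [16, 32, 64, 128, 256, 512]), 2))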

  • [3] Suykens J.A.K. (Belgium): Least squares support vector machines for classification and nonlinear modelling, 29-47.

    Support vector machines (SVMs), as recently introduced by Vapnik, are a new method for solving classification and static nonlinear function estimation problems. A typical property of SVMs is that, up to a small number of hyperparameters, the solution is characterized by a convex optimization problem, more specifically a quadratic programming (QP) problem. Moreover, the model complexity (e.g. the number of hidden units) also follows from this QP problem. Recently, we have introduced a modified version of SVMs, so-called least squares SVMs (LS-SVMs). In LS-SVMs the solution is given by a linear system instead of a QP problem. The aim of this paper is to give an introduction to the theory and methods of LS-SVMs for classification and nonlinear function estimation. The LS-SVM formulation turns out to be surprisingly simple and at the same time very powerful.
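
    To make the "linear system instead of a QP" point concrete, a small sketch of an LS-SVM classifier with an RBF kernel (the kernel width, the regularization constant gamma and the toy data are assumptions; the dual system follows the standard LS-SVM formulation):

      import numpy as np

      def rbf(A, B, sigma=1.0):
          d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
          return np.exp(-d2 / (2 * sigma ** 2))

      def lssvm_train(X, y, gamma=10.0, sigma=1.0):
          n = len(y)
          Omega = np.outer(y, y) * rbf(X, X, sigma)
          A = np.zeros((n + 1, n + 1))
          A[0, 1:] = y
          A[1:, 0] = y
          A[1:, 1:] = Omega + np.eye(n) / gamma
          rhs = np.concatenate([[0.0], np.ones(n)])
          sol = np.linalg.solve(A, rhs)              # one linear system instead of a QP
          return sol[0], sol[1:]                     # bias b, support values alpha

      def lssvm_predict(X, y, alpha, b, Xnew, sigma=1.0):
          return np.sign(rbf(Xnew, X, sigma) @ (alpha * y) + b)

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 2))
      y = np.where(X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0, 1.0, -1.0)   # nonlinearly separable toy data
      b, alpha = lssvm_train(X, y)
      print("training accuracy:", (lssvm_predict(X, y, alpha, b, X) == y).mean())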

  • [4] Möller I. (Germany), Petersmeier K. (Germany), Poddig Th. (Germany): The lambda-test for nonlinear dependencies, 49-57.

    The presupposition when developing an econometric model is knowledge of the relevant parameters and influential quantities. So a testing procedure to find these quantities is needed. A useful test should allow testing for multivariate dependencies. Additionally, it should also detect even nonlinear dependencies (otherwise a simple correlation analysis would be sufficient) and provide a statement about a significance level, as statistical tests usually do. We present a test procedure derived from an idea by Pi and Petersen that matches these requirements. Further, we present a simulation study that shows the potential of the test method, although it still indicates some conceptual problems, and we give an outlook on an apparent strategy to solve them. Despite these problems it seems that the test is even now a worthwhile instrument in the context of variable selection, dependency testing and neural network development.

  • [5] van den Bergh W.-M. (The Netherlands), van den Berg J. (The Netherlands): Competitive exception learning using fuzzy frequency distributions, 59-71.

    A competitive exception learning algorithm for finding a non-linear mapping is proposed which puts the emphasis on the discovery of the important exceptions rather than the main rules. To do so, we first cluster the output space using a competitive fuzzy clustering algorithm and derive a fuzzy frequency distribution describing the general, average system output behavior. Next, we look for a fuzzy partitioning of the input space in such a way that the corresponding fuzzy output frequency distributions 'deviate at most' from the average one as found in the first step. In this way, the most important 'exceptional regions' in the input-output relation are determined. Using the joint input-output fuzzy frequency distributions, the complete input-output function, as extracted from the data, can be expressed mathematically. In addition, the exceptions encountered can be collected and described as a set of fuzzy if-then-else rules. Besides presenting a theoretical description of the new exception learning algorithm, we report on the outcomes of certain practical simulations.

  • [6] Tambakis D.N. (UK): Information-theoretic sample size selection for linear prediction, 73-79.

    What is the appropriate number of past observations to use in forecasting univariate linear processes? A non-parametric statistic useful for sample size selection is proposed involving the data's average information content (AIC). It is shown that the asymptotic predictability of a process is increasing in its AIC. Monte Carlo simulations of stationary pdf's indicate that AIC increases with sample size, suggesting that "more is better", while for stock market returns over a large number of sample sizes the AIC and mean squared forecast error are significantly negatively correlated.

  • [7] Popescu T.D. (Romania): Change detection in systems: time and frequency approaches, 81-87.

    The problem of change detection in dynamical systems has received considerable attention during the last two decades in a research context and appears to be the central issue in various application domains. A change detection algorithm essentially consists of two stages: residual generation and decision making. The residuals are measurements generated by analytical redundancy, representing the difference between the observed and the expected system behaviour. Basic tools for residual generation are filters and estimators. In the decision-making stage, the residuals are processed and examined under certain decision rules to determine the system change status. The paper presents some algorithms for change detection in dynamical systems, in the time and frequency domains. All the discussed algorithms were evaluated in a multiple-simulation study and have been used for change detection in the modal characteristics of structural systems during strong seismic motion.
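
    A minimal sketch of the two-stage scheme described above: residuals are generated as one-step prediction errors of a nominal AR model, and a two-sided CUSUM decision rule raises an alarm. The model order, drift and threshold values are illustrative assumptions, not those of the paper.

    ```python
    import numpy as np

    def ar_fit(x, order=2):
        """Fit a least-squares AR(order) model (no intercept) to a 1-D series."""
        X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
        return np.linalg.lstsq(X, x[order:], rcond=None)[0]

    def ar_residuals(x, coef):
        """Residual generation: one-step prediction errors under the nominal model."""
        p = len(coef)
        X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
        return x[p:] - X @ coef

    def cusum_alarm(res, ref_len, drift=0.5, threshold=5.0):
        """Decision making: two-sided CUSUM on residuals standardized by a reference segment."""
        z = (res - res[:ref_len].mean()) / res[:ref_len].std(ddof=1)
        g_pos = g_neg = 0.0
        for t, zt in enumerate(z):
            g_pos = max(0.0, g_pos + zt - drift)
            g_neg = max(0.0, g_neg - zt - drift)
            if g_pos > threshold or g_neg > threshold:
                return t                      # index of the first alarm
        return None

    # Nominal behaviour for 300 samples, then a shift in the observed signal
    rng = np.random.default_rng(1)
    x = np.concatenate([rng.normal(0, 1, 300), rng.normal(3, 1, 300)])
    coef = ar_fit(x[:300])                    # nominal (fault-free) model
    print(cusum_alarm(ar_residuals(x, coef), ref_len=290))
    ```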

  • [8] Vallois P. (France), Tapiero C.S. (France): The range inter-event process in a symmetric birth death random walk and the detection of chaos, 89-99.

    This paper provides new results regarding the range inter-event process of a birth-death random walk. Motivations for determining and using the inter-range event distribution have two sources. First, the analytical results we obtain are simpler than those for the range process and therefore make it easier to use statistics based on the inter-range event process. Further, most of the results regarding the range process are based on long-run statistical properties, which limits their practical usefulness, while inter-range events are by their nature "short term" statistics. Second, in many cases, data regarding amplitude change is easier to obtain and calculate than range and standard deviation processes. As a result, the predicted statistical properties of the inter-range event process can provide an analytical foundation for the development of statistical tests that may be used practically. Applications to outlier detection, volatility and time series analysis are discussed. Finally, results are summarized, with proofs provided in another, more extensive paper.

  • [9] Pascual B. (Spain): The effect of multinational firms' activity on the intraday patterns of stock return volatility: the case of the Spanish stock exchange, 101-116.

    This paper studies the effect that multinational firms' activity in a foreign country could have on stock prices in the short run. One explanation of this potential effect is that news about daily business activity matters for stock pricing. To get empirical evidence we used the Spanish Stock Exchange (SSE), which is especially well suited because most of its firms' international activity is concentrated in South America. In this market, under the hypothesis that daily business activity news affects stock prices in the short run, we expect firms with higher real activity in the Americas to have a higher proportion of their daily volatility concentrated at the opening of the SSE and during the daytime in the Americas. These are indeed the results we found. Werner and Kleidon found that UK stocks dually listed on the New York Stock Exchange (NYSE) have more volatility during the overlapping trading period. We repeated the analysis both without the Spanish stocks listed on the NYSE and with just those dually listed stocks, and the results are the same. Our contribution is the finding of empirical evidence supporting the hypothesis that the geographical distribution of firms' real activity affects stock prices in the short run.

  • [10] Ratsimalahelo Z. (France): Identification tests for multivariate time series models, 117-130.

    In this paper we consider the problem of model specification and parameter estimation of multivariate time series models. The linear systems theory (LST) estimator is used to estimate the parameters of a multivariate linear system. Four procedures are considered for model specification: the AIC criterion, the singular value decomposition of a block Hankel matrix (HSVD), a test based on the columns of the observability matrix (the M(j) test), and a test based on the ratio of the determinants of the innovations covariance matrices of the process from models of two different sizes (the innovations covariance test). A numerical example using a Monte Carlo experiment is presented for comparison of results. The results show promise for the latter two methods, which are based on asymptotic likelihood ratio tests.
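
    The Hankel-SVD specification device can be sketched for a scalar series as follows: the singular values of a Hankel matrix of sample autocovariances are inspected for a gap, whose position suggests the model order. The matrix dimensions and the simulated AR(2) process are illustrative assumptions only, not the paper's setup.

    ```python
    import numpy as np

    def autocov(x, max_lag):
        """Sample autocovariances at lags 0..max_lag."""
        x = x - x.mean()
        n = len(x)
        return np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])

    def hankel_singular_values(x, rows=10, cols=10):
        """Singular values of a Hankel matrix of sample autocovariances.

        A clear gap after the first r singular values suggests a state dimension r."""
        c = autocov(x, rows + cols)
        H = np.array([[c[i + j + 1] for j in range(cols)] for i in range(rows)])
        return np.linalg.svd(H, compute_uv=False)

    # AR(2) process: its minimal state-space realisation has dimension 2,
    # so roughly two dominant singular values are expected.
    rng = np.random.default_rng(0)
    e = rng.normal(size=5000)
    x = np.zeros(5000)
    for t in range(2, 5000):
        x[t] = 1.4 * x[t - 1] - 0.6 * x[t - 2] + e[t]
    print(hankel_singular_values(x).round(3))
    ```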

  • [11] Antoniou A. (UK), Vorlow C. (UK): Recurrence plots and financial time series analysis, 131-145.

    Recurrence plots are a visual tool for the detection of high-dimensional dynamics in time series. We explore their applicability in the graphical investigation of financial time series and their volatility. We examine the recurrence plots of various stock market indices at different frequencies and search for indications of deterministic nonlinearities. We provide a comparison of recurrence plots of observed financial time series with theoretically relevant sequences such as Brownian motion, white noise and ARCH processes. Our conclusion is that recurrence plots can at least be used as a very accurate visual test of weak-form efficiency.
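
    A minimal sketch of how a thresholded recurrence plot is computed from a scalar series via time-delay embedding; the embedding dimension, delay and threshold rule are illustrative assumptions rather than the authors' settings.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    def embed(x, dim=3, delay=1):
        """Time-delay embedding of a scalar series into R^dim."""
        n = len(x) - (dim - 1) * delay
        return np.column_stack([x[i * delay:i * delay + n] for i in range(dim)])

    def recurrence_plot(x, dim=3, delay=1, eps=None):
        """Binary recurrence matrix R[i, j] = 1 if states i and j are closer than eps."""
        X = embed(x, dim, delay)
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        if eps is None:
            eps = 0.1 * d.max()               # a common rule of thumb for the threshold
        return (d < eps).astype(int)

    # Compare a deterministic signal with white noise
    t = np.linspace(0, 20 * np.pi, 500)
    R_sine = recurrence_plot(np.sin(t))
    R_noise = recurrence_plot(np.random.randn(500))
    plt.subplot(1, 2, 1); plt.imshow(R_sine, cmap="binary"); plt.title("sine")
    plt.subplot(1, 2, 2); plt.imshow(R_noise, cmap="binary"); plt.title("white noise")
    plt.show()
    ```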

  • [12] Marček D. (Slovakia): Forecasting of economic quantities using fuzzy autoregressive models and fuzzy neural networks, 147-155.

    Most models for the time series of stock prices have centered on autoregressive (AR) processes. Traditionally, fundamental Box-Jenkins analysis has been the mainstream methodology used to develop time series models. We first briefly describe the development of a classical AR model for stock price forecasting. A fuzzy regression model is then introduced. Following this description, an artificial fuzzy neural network based on B-spline membership functions is presented as an alternative to the stock prediction method based on AR models. Finally, we present our preliminary results and some further experiments that we performed.
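
    The classical AR step can be sketched as a least-squares fit on log prices followed by a one-step-ahead forecast; the lag order and price data below are hypothetical.

    ```python
    import numpy as np

    def fit_ar(prices, p=3):
        """Least-squares AR(p) model with intercept, fitted to log prices."""
        x = np.log(np.asarray(prices, dtype=float))
        X = np.column_stack([np.ones(len(x) - p)] +
                            [x[p - k - 1:len(x) - k - 1] for k in range(p)])
        return np.linalg.lstsq(X, x[p:], rcond=None)[0]

    def forecast_next(prices, beta):
        """One-step-ahead price forecast from the fitted AR coefficients."""
        p = len(beta) - 1
        x = np.log(np.asarray(prices, dtype=float))
        regressors = np.concatenate(([1.0], x[-1:-p - 1:-1]))   # most recent lags first
        return np.exp(regressors @ beta)

    # Hypothetical daily closing prices
    prices = [100, 101.2, 100.8, 102.1, 103.0, 102.5, 103.8, 104.4]
    beta = fit_ar(prices, p=3)
    print(forecast_next(prices, beta))
    ```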

  • [13] Hunter J. (UK), Serguieva A. (UK): Project risk evaluation using an alternative to the standard present value criteria, 157-172.

    The article presents a method of modelling the restricted information set relevant to an investment project. The benefit of the approach is that it allows one to make fewer assumptions and to handle uncertainty in a more fundamental way. The decision-maker is able to consider a project and take decisions based on various levels of uncertainty, and at each level one is provided with an exact set of net present values corresponding to a family of possible future cash flows and discount rates. Each project is characterised by a critical level of uncertainty, under which the project is definitely profitable, and at or above which there is a chance of it being unprofitable. If one takes the risk at higher levels of uncertainty, there is a possibility of getting an even more profitable project. The results in the example show that the critical level of uncertainty for the same project is lower in the case of time-varying discount rates than in the constant-discount-rate case.
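
    The net-present-value computation underlying the criterion can be sketched as follows, evaluating a set of NPVs over a small family of cash-flow and discount-rate scenarios at one level of uncertainty; all figures are hypothetical.

    ```python
    import numpy as np

    def npv(cash_flows, discount_rates):
        """Net present value with (possibly time-varying) discount rates.

        cash_flows[0] is the initial outlay at t = 0; discount_rates[t] applies to period t+1."""
        factors = np.cumprod(1.0 + np.asarray(discount_rates, dtype=float))
        cf = np.asarray(cash_flows, dtype=float)
        return cf[0] + np.sum(cf[1:] / factors[:len(cf) - 1])

    # A family of scenarios at one "level of uncertainty": cash flows and rates
    # are each allowed to deviate by +/- 10% from a base case (hypothetical numbers).
    base_cf = np.array([-1000.0, 400.0, 450.0, 500.0])
    base_r = np.array([0.08, 0.09, 0.10])
    npvs = [npv(base_cf * (1 + s), base_r * (1 + u))
            for s in (-0.1, 0.0, 0.1) for u in (-0.1, 0.0, 0.1)]
    print(min(npvs), max(npvs))   # the project is definitely profitable if min(npvs) > 0
    ```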

  • [14] Resta M. (Italy): TRN: picking up the challenge of non linearity testing by means of topology representing networks, 173-186.

    In this work, I describe a new approach to time series nonlinearity testing by means of neural networks and extend it to financial data. The novelty of this approach lies primarily in the kind of artificial agents chosen for the simulations: Topology Representing Networks (TRN), i.e. competitive learning algorithms. In this context, a TRN ensemble is used to analyse signals generated by different processes: periodic and deterministic, uniformly distributed, and multi-scaling L-stable processes. The performance obtained with this technique is compared to more conventional tools in time series analysis, with particular attention to recurrence quantification analysis. Furthermore, real-world data are examined and the results obtained by TRN are closely linked with economic interpretations.

  • [15] Dunis C.L. (UK), Laws J. (UK), Chauvin S. (Spain): FX volatility forecasts: a fusion-optimisation approach, 187-202.

    In this paper, we examine the medium-term forecasting ability of several alternative models of currency volatility. The data period covers more than eight years of daily observations, January 1991-March 1999, for the spot exchange rate and the 1- and 3-month volatility of the DEM/JPY, GBP/DEM, GBP/USD, USD/CHF, USD/DEM and USD/JPY. By comparison with the results of 'pure' time series models, we investigate whether market implied volatility data can add value in terms of medium-term forecasting accuracy. We do this using data directly available from the marketplace in order to avoid the potential biases arising from 'backing out' volatility from a specific option pricing model. On the basis of the over 34000 out-of-sample forecasts produced, evidence tends to indicate that, although no single volatility model emerges as an overall winner in terms of forecasting accuracy, the 'mixed' models incorporating market data for currency volatility perform best most of the time.

  • [16] Deboeck G.J. (UK), Ultsch A. (Germany): Picking stocks with emergent self-organizing value maps, 203-216.

    Picking stocks that are suitable for portfolio management is a complex task. The most common criteria are the price-earnings ratio, the price-book ratio, the price-sales ratio, the price-cash flow ratio, and market capitalization. Another approach, called CAN SLIM, relies on the earnings growth (quarterly and annual) of companies, the relative strength of the stock prices, institutional sponsorship, the debt-capital ratio, the shares outstanding, market capitalization, and the market direction. The main issue with the traditional approaches is the proper weighting of criteria to obtain a list of stocks that are suitable for portfolio management. This paper proposes an improved method for stock picking using the CAN SLIM system in conjunction with emergent self-organizing value maps to assemble a portfolio of stocks that outperforms a relevant benchmark. The neural network approach discussed in this paper finds structures in sets of stocks that fulfill the CAN SLIM criteria. These structures are visualized using the U-Matrix and used to construct portfolios. Portfolios constructed in this way perform better the more the CAN SLIM criteria are fulfilled. The best of the portfolios constructed by emergent self-organizing value maps outperformed the S&P500 Index by about 12% based on two months of out-of-sample testing.
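
    A minimal numpy sketch of a self-organizing map and its U-Matrix, of the kind used to visualise structure among stocks; the map size, training schedule and the random stand-in for the CAN SLIM criteria are illustrative assumptions, not the paper's data or settings.

    ```python
    import numpy as np

    def train_som(data, rows=10, cols=10, epochs=2000, lr0=0.5, sigma0=3.0, seed=0):
        """Train a rectangular SOM with Gaussian neighbourhood and decaying rates."""
        rng = np.random.default_rng(seed)
        W = rng.uniform(data.min(0), data.max(0), size=(rows, cols, data.shape[1]))
        grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), -1)
        for t in range(epochs):
            x = data[rng.integers(len(data))]
            bmu = np.unravel_index(np.argmin(((W - x) ** 2).sum(-1)), (rows, cols))
            lr = lr0 * np.exp(-t / epochs)
            sigma = sigma0 * np.exp(-t / epochs)
            h = np.exp(-((grid - np.array(bmu)) ** 2).sum(-1) / (2 * sigma ** 2))
            W += lr * h[..., None] * (x - W)
        return W

    def u_matrix(W):
        """Average distance of each unit's weight vector to its 4-neighbours."""
        rows, cols, _ = W.shape
        U = np.zeros((rows, cols))
        for i in range(rows):
            for j in range(cols):
                nbrs = [W[a, b] for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                        if 0 <= a < rows and 0 <= b < cols]
                U[i, j] = np.mean([np.linalg.norm(W[i, j] - n) for n in nbrs])
        return U

    # Hypothetical standardized per-stock criteria (earnings growth, relative strength, ...)
    data = np.random.randn(200, 5)
    W = train_som(data)
    print(u_matrix(W).round(2))   # high values mark cluster borders on the map
    ```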

  • [17] Obu-Cann K. (Japan), Fujimura K. (Japan), Tokutaka H. (Japan), Yoshihara K. (Japan): Data mining from chemical spectra data using self-organising maps, 217-230.

    The Self-Organising Map (SOM), being one of the most widely used ANNs, is a powerful tool for data mining or knowledge discovery and for the visualisation of high-dimensional data. It simultaneously performs topology preservation of the data space while quantizing the data space formed by the input data. Data is useless to mankind if no meaningful information can be derived from it. In this work, the SOM is applied to chemical spectral data from Auger Electron Spectroscopy (AES), X-ray Photoelectron Spectroscopy (XPS) and a combination of data from both AES and XPS. This paper also attempts to build a SOM of elements in the periodic table. By use of this map, any element can be analysed. In topology preservation, similar input patterns that are close to each other in the input data space are correspondingly located close to each other on the map. This paper also looks at clustering using the Minimal Spanning Tree (MST).

  • [18] Moshou D. (Belgium), Ramon H. (Belgium): Wavelets and self-organizing maps in financial time series analysis, 231-238.

    A methodology for combining wavelets with Self-Organizing Maps for financial time-series visualisation and interpretation is presented. Current volatility modelling is introduced and compared with the advantages that wavelet-based analysis offers over conventional moving-average-based methods such as Bollinger bands. The immunity of wavelet-based de-noising to recording errors and transient shocks offers important help in analysing the long- and short-term behaviour of financial data. The visualisation of transient shocks, such as crashes, in higher-order wavelet coefficients is presented. The Self-Organising Map neural network is introduced to aid the visualisation of the behaviour of indicator data, specifically the Dow-Jones Industrial Average. The features that are used for the visualisation are the approximation coefficients of 32-day trading periods with daily sampling of the closing value. The trajectory formed at different component levels shows the evolution of the indicator data from its beginning until September 1999.
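
    The comparison can be illustrated with a short sketch that computes conventional Bollinger bands and a wavelet-based de-noising of the same series (using the third-party PyWavelets package); the wavelet, thresholding rule and simulated data are illustrative assumptions, not the authors' choices.

    ```python
    import numpy as np
    import pywt   # PyWavelets, assumed available

    def bollinger_bands(close, window=20, k=2.0):
        """Conventional moving-average bands: mean +/- k rolling standard deviations."""
        close = np.asarray(close, dtype=float)
        mid = np.convolve(close, np.ones(window) / window, mode="valid")
        std = np.array([close[i:i + window].std(ddof=1)
                        for i in range(len(close) - window + 1)])
        return mid - k * std, mid, mid + k * std

    def wavelet_denoise(close, wavelet="db4", level=4):
        """Soft-threshold the detail coefficients; transient shocks show up in the details."""
        coeffs = pywt.wavedec(close, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise scale estimate
        thr = sigma * np.sqrt(2 * np.log(len(close)))
        coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[:len(close)]

    # Hypothetical index closes: trend plus noise plus one transient shock
    t = np.arange(512)
    close = 100 + 0.05 * t + np.random.randn(512)
    close[300] += 15
    lower, mid, upper = bollinger_bands(close)
    smooth = wavelet_denoise(close)
    ```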

  • [19] Deboeck G. (USA): Self-organizing patterns in world poverty using multiple indicators of poverty repression and corruption, 239-254.

    This paper maps world poverty based on multiple dimensions of poverty. These global maps are based on a well-established neural network algorithm. They show world poverty based on similarity and dissimilarity in poverty structures. The data used for this study were extracted from the World Development Indicators (WDI) published by the World Bank, the 2000 Index of Freedom published by the Heritage Foundation and the Wall Street Journal, and the Corruption Perception Index produced by Transparency International. Ten poverty indicators relating to quality of life, health, education, and sanitation were selected out of 96 in the WDI. The Index of Freedom reflects ten broad factors of economic freedom; the Corruption Perception Index is based on multiple surveys conducted in countries. The maps in this study, obtained through self-organization, cluster poverty in 145 countries into 5 groups. Two-dimensional representations of the data are presented showing the countries in each group. The distributions of the indicators and indices are shown, and the non-linear relationships between various factors are discussed. From this we find that (i) GNP per capita is an imperfect proxy for representing poverty around the globe; (ii) geographically based displays of poverty indicators are inadequate for discovering poverty structure (and hence should be supplemented or replaced by self-organizing maps); (iii) countries which suffer the most severe levels of poverty also tend to be the ones that are most repressed and where the perceived corruption on the part of government officials tends to be high. The maps in this paper demonstrate how new knowledge can be detected via neural networks for the design and reshaping of strategies for fighting the war against poverty. This new knowledge may challenge traditional ways of organizing development assistance for poverty reduction. The maps also suggest new avenues for future research on the non-linear relationships among poverty indicators. While Poverty Networks and participatory approaches to defining poverty are a major step forward, actual patterns in poverty structures can be detected from the available data using neural network-based approaches and data mining techniques.

  • [20] Hunter J. (UK), Ioannidis C. (UK), Monoyios M. (UK): Transaction costs and nonlinear adjustment in option prices, 255-269.

    We develop a model of optimal valuation and hedging of options in the presence of proportional transaction costs. The model places bounds on option prices by requiring that a trader suffers no marginal loss of utility when diverting a fraction of his initial wealth into the purchase or sale of options. The resulting singular stochastic optimal control problem for the option price is subject to "super contact" (or "smooth pasting") conditions applied at the boundaries of the investor's no-transaction region. These lead to possible nonlinear adjustment of the option price if the optimal pricing bands are violated, causing corrective trades by option traders. Empirical analysis of FTSE 100 index options lends support to the theoretical hypothesis.

  • [21] Ghaziri H. (Lebanon), Elfakhani S. (Lebanon), Assi J. (Lebanon): Neural networks approach to pricing options, 271-277.

    The highly complex structure of options has led many researchers to develop sophisticated mathematical models that best describe their behavior and price them. The most powerful and most popular model is the Black-Scholes model. In this paper we compare the performance of this model with neural networks. Although widely used in the finance community, the Black-Scholes model suffers from a number of limitations. Among these limitations is the assumption that the underlying probability distribution is lognormal. This issue is controversial, and many studies have shown that the probability distribution of stock prices deviates from the log-normal distribution. Another limitation is the assumption that variance is constant. As a result of these limitations, the Black-Scholes results show some deviation from the actual values of option prices. For these reasons we have decided to explore another approach, namely the neural network approach. Neural networks are known for their capability to capture patterns in non-linear structures. We have used a multi-layer feedforward neural network and a neuro-fuzzy network to price S&P 500 index call options. The data set used covers 70 in-the-money, at-the-money and out-of-the-money call options traded between February 26 and February 27 of 1997. We have also compared the results with the Black-Scholes model and we found that the neural network approach outperforms the Black-Scholes model provided that a sufficient number of patterns is presented.
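
    For reference, the Black-Scholes benchmark used in such comparisons prices a European call as sketched below; the input values in the example are hypothetical, not the options of the study.

    ```python
    import numpy as np
    from scipy.stats import norm

    def black_scholes_call(S, K, T, r, sigma):
        """European call price under the Black-Scholes model.

        S: spot, K: strike, T: time to expiry in years, r: risk-free rate, sigma: volatility."""
        d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

    # Hypothetical index call: spot 800, strike 810, 30 days to expiry, 5% rate, 18% vol
    print(black_scholes_call(S=800.0, K=810.0, T=30 / 365, r=0.05, sigma=0.18))
    ```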

  • [22] Rasson J.P., Pircon J.Y., Poulain I.P.: Kernel estimation of density for credit scoring, 279-286.

    Credit scoring is a decision-making process in a credit institution. Its principle consists of classifying applicants according to their default risk. Common techniques are used for modeling the default risk on a multivariate basis. They have to combine the manipulation of qualitative and quantitative risk factors. Among those techniques, we employed logistic regression and a factorial approach associating a homogeneity analysis procedure, Fisher's linear discriminant analysis and a scoring function. The predictive power of those methods is 70%, which is a high level by reference to the ad hoc literature. The outcomes being probative in terms of accuracy, we currently apply those methods for deciding the granting of individual loans such as installment loans, universal credits ... in the Fortis Group. Wishing to improve the predictive power of our scoring functions, we surveyed the efficiency of more sophisticated techniques using a non-parametric estimation of density. We therefore experimented with the k-nearest-neighbour method and with kernel estimators. The combination of both qualitative and quantitative information led us to define a generalized density function estimate derived from a product of uniform kernels and a smoothing vector. This methodology led us to choose an appropriate kernel and one smoothing parameter per kind of data (uniform, binary, 3-categorical, 4-categorical ...). The technical complexity of manipulating a product of kernels also led us to study the application of a single kernel. We concentrated on the Epanechnikov and the normal kernels. Instead of Mahalanobis or Euclidean distances, we opted for a distance combining qualitative and quantitative information. For evaluating the efficiency of these parametric (logistic regression, factorial approach) and non-parametric (k-nearest-neighbour, product of kernels, Epanechnikov and normal kernels) classification rules, we calculated several misclassification errors such as the apparent error rate, the leave-one-out error rate and the bootstrap error rate. The outcomes uniformly favour the application of the non-parametric approach based on the product of kernels. Indeed, the error rate can be reduced by two or three percent and the predictive power grows to a level of 72%. Hereafter we present the principles and the application of the non-parametric smoothing method.
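
    A minimal sketch of the kernel idea for a single quantitative risk factor: class-conditional densities are estimated with the Epanechnikov kernel and compared, weighted by class priors. The bandwidth, priors and simulated samples are illustrative assumptions, not the study's data.

    ```python
    import numpy as np

    def epanechnikov(u):
        """Epanechnikov kernel, zero outside [-1, 1]."""
        return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

    def kde(sample, grid, h):
        """Univariate kernel density estimate with bandwidth (smoothing parameter) h."""
        sample = np.asarray(sample, dtype=float)[None, :]
        grid = np.asarray(grid, dtype=float)[:, None]
        return epanechnikov((grid - sample) / h).mean(axis=1) / h

    def classify(x, good_sample, bad_sample, prior_bad, h=0.5):
        """Score an applicant feature x by comparing class-conditional density estimates."""
        f_good = kde(good_sample, np.array([x]), h)[0]
        f_bad = kde(bad_sample, np.array([x]), h)[0]
        return f_bad * prior_bad > f_good * (1.0 - prior_bad)   # True = predict default

    # Hypothetical one-dimensional risk factor for good and defaulted clients
    rng = np.random.default_rng(2)
    good = rng.normal(0.0, 1.0, 500)
    bad = rng.normal(2.0, 1.0, 60)
    print(classify(1.5, good, bad, prior_bad=60 / 560))
    ```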

  • [23] Van Gestel T. (Belgium), Suykens J. (Belgium), De Moor B. (Belgium), Baestaens D.E.(Belgium): Volatility tube support vector machines, 287-297.

    In Support Vector Machines (SVM's), a non-linear model is estimated by solving a Quadratic Programming (QP) problem. The quadratic cost function consists of a maximum likelihood cost term with constant variance and a regularization term. By specifying a difference inclusion on the noise variance model, the maximum likelihood term is adopted for the case of heteroskedastic noise, which arises in financial time series. The resulting Volatility Tube SVM's are applied to the 1-day-ahead prediction of the DAX30 stock index. The influence of today's closing prices of the New York Stock Exchange on the prediction of tomorrow's DAX30 closing price is analyzed.


