Contents of Volume 11 (2001)
-  Editorial, 537.
-  Amato P., Manara C. (Italy): Global defuzzification methods, 539-545.
Defuzzification is a methodology to single out a numerical value v_x from a set V_x of possibilities, where x is an input parameter ranging over a domain U ⊆ R^n. In most approaches found in the literature, defuzzification is local, in the sense that v_x only depends on V_x. We present a global defuzzification method, where the map x → v_x depends on the whole family {V_x | x ∈ U}. Under suitable assumptions, we show that the minlink algorithm, originating from robot vision problems, yields an example of our defuzzification procedure.
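To make the contrast concrete, the following sketch shows the standard local centroid (centre-of-gravity) defuzzification that the paper's global method generalizes. The membership function and grid are illustrative assumptions, not taken from the paper.

```python
# Local (pointwise) centroid defuzzification: a minimal baseline sketch.
# The membership function mu and the sampling grid are illustrative
# assumptions; the paper itself proposes a *global* alternative.

def centroid_defuzzify(mu, grid):
    """Return the centre of gravity of membership function mu over grid."""
    num = sum(x * mu(x) for x in grid)
    den = sum(mu(x) for x in grid)
    if den == 0:
        raise ValueError("empty fuzzy set")
    return num / den

# Triangular fuzzy set peaked at 2 on [0, 4]
tri = lambda x: max(0.0, 1.0 - abs(x - 2.0) / 2.0)
grid = [i / 10.0 for i in range(0, 41)]
value = centroid_defuzzify(tri, grid)  # symmetric set, centroid near 2.0
```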
-  Andrejková G., Jirásek J. (Slovakia): Neural network topologies and evolutionary design, 547-560.
In this paper, we suggest evolutionary algorithms (EA) for the design of neural network topologies in order to find optimal solutions to some problems. Topologies are modified in feed-forward neural networks and in special cases of recurrent neural networks.
We applied two approaches to the tuning of neural networks. One is classical, using evolution principles only. In the other approach, the adaptation (training) phase of the neural network is carried out in two steps. In the first step we use the genetic algorithm to find better-than-random starting weights (nearly optimal values); in the second step we use the backpropagation algorithm to finish the adaptation phase. This means that the starting weights for the backpropagation algorithm are not random values, but approximately optimal ones. In this context, the fitness of a chromosome (neural network) is a function of its estimated test error (its estimated generalization ability).
Some results obtained by these methods are demonstrated in a prediction of GeoMagnetic Storms (GMS) and Handwriting Recognition (HWR).
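The two-step scheme described above can be sketched in miniature: a genetic search supplies a better-than-random starting point, and gradient descent finishes the adaptation. The one-weight linear model, population sizes and rates below are illustrative assumptions, not the paper's actual setup.

```python
import random

# Hedged toy sketch of the two-step tuning scheme: genetic search for a
# near-optimal starting weight, then gradient descent ("backpropagation"
# in this degenerate one-weight case) finishes the adaptation.
random.seed(0)
data = [(x, 2.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]

def error(w):                       # mean squared error; low error = high fitness
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

# Step 1: genetic search for a better-than-random starting weight
pop = [random.uniform(-5, 5) for _ in range(20)]
for _ in range(30):
    pop.sort(key=error)             # select the fittest half
    parents = pop[:10]
    children = [0.5 * (random.choice(parents) + random.choice(parents))
                + random.gauss(0, 0.1) for _ in range(10)]
    pop = parents + children
w = min(pop, key=error)             # near-optimal starting weight

# Step 2: gradient descent from the GA-supplied starting point
for _ in range(100):
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad

final_error = error(w)              # w should end close to the true slope 2
```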
-  Cintula P. (Czech Republic): An alternative approach to the ŁΠ logic, 561-571.
The ŁΠ and ŁΠ½ logics were introduced by Godo, Esteva and Montagna and further developed in the author's earlier work. These logics unite many other known propositional and predicate logics, including the three most intensively investigated ones (Gödel, product and Łukasiewicz logic).
The aim of this paper is to show a tight connection between the ŁΠ logic and the product involutive logic, which was introduced by Esteva, Godo, Hájek and Navara.
We will see that all the connectives of the ŁΠ logic are definable from the connectives of this logic. In addition, we show that the ŁΠ logic is a schematic extension of this logic by a single axiom. We also simplify the axiomatic system of this logic.
-  Deschrijver G., Kerre E. E. (Belgium): On the cartesian product of intuitionistic fuzzy sets, 573-578.
Cartesian products of intuitionistic fuzzy sets have been defined using the min-max and the product-probabilistic sum operations. In this paper we introduce and analyse the properties of a generalized cartesian product of intuitionistic fuzzy sets using a general triangular norm and conorm. In particular we investigate the emptiness, the commutativity, the distributivity, the interaction with respect to generalized unions and intersections, the distributivity with respect to the difference, the monotonicity and the cutting in terms of level-sets.
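A minimal sketch of the generalized product described above: memberships combined by a triangular norm T and non-memberships by a triangular conorm S. The concrete sets and the choice T = min, S = max are illustrative assumptions, not the paper's examples.

```python
# Hedged sketch: generalized cartesian product of intuitionistic fuzzy
# sets. Each element maps to (membership, non-membership) with
# membership + non-membership <= 1. T = min and S = max are one
# admissible (t-norm, t-conorm) pair; the paper treats the general case.

t_norm = min      # T(a, b)
t_conorm = max    # S(a, b), the dual conorm of min

A = {"x1": (0.7, 0.2), "x2": (0.4, 0.5)}
B = {"y1": (0.6, 0.3)}

def cartesian_product(A, B, T=t_norm, S=t_conorm):
    return {(a, b): (T(A[a][0], B[b][0]), S(A[a][1], B[b][1]))
            for a in A for b in B}

prod = cartesian_product(A, B)
# ("x1", "y1") -> (min(0.7, 0.6), max(0.2, 0.3)) = (0.6, 0.3)
```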
-  Gerla B. (Italy): Rational Łukasiewicz logic and DMV-algebras, 579-594.
In this paper we present some results concerning the variety of divisible MV-algebras. Any free divisible MV-algebra is an algebra of continuous piecewise linear functions with rational coefficients. Correspondingly, the Rational Łukasiewicz logic is defined and its tautology problem is shown to be co-NP-complete.
-  Holeňa M. (Czech Republic): A fuzzy-logic generalization of a data mining approach, 595-610.
Data mining nowadays belongs to the most prominent information technologies, experiencing a boom of interest from users and software producers. Traditionally, extracting knowledge from data has been a domain of statisticians, and the largest variety of methods encountered in commercial data mining systems are actually methods for statistical data analysis tasks. One of the most important among them is testing hypotheses about the probability distribution underlying the data. Basically, it consists in checking the null hypothesis that the probability distribution, a priori assumed to belong to a broad set of distributions, actually belongs to one of its narrow subsets, which must be precisely delimited in advance. However, in the situations in which data mining is performed, there are seldom enough clues for such a precise delimitation. That is why generalizations of statistical hypotheses testing to vague hypotheses have been investigated for more than a decade, so far following the most straightforward way: replacing the set defining the null hypothesis by a fuzzy set. In this paper, a principally different generalization is proposed, based on the observational-logic approach to data mining, and in particular to hypotheses testing. Its key idea is to view statistical testing of a fuzzy hypothesis as an application of an appropriate generalized quantifier of a fuzzy predicate calculus to predicates describing the data. The theoretical principles of the approach are elaborated for both crisp and fuzzy significance levels, and illustrated on the quantifier lower critical implication, well known from the data mining system GUHA. Finally, the implementation of the approach is briefly sketched.
-  Jiroušek R., Vejnarová J. (Czech Republic): Perfect sequences for belief networks representation, 611-626.
In contrast to most other approaches used to represent multidimensional probability distributions, which are based on graphical Markov modelling (i.e. dependence structure of distributions is represented by graphs), the described method is rather procedural. Here, we describe a process by which a multidimensional distribution can be composed from a "generating sequence" – a sequence of low-dimensional distributions. The main advantage of this approach is that the same apparatus based on operators of composition can be applied for description of both probabilistic and possibilistic models.
-  Perfilieva I. (Czech Republic): Neural nets and normal forms from fuzzy logic point of view, 627-638.
The paper addresses the problem of efficient and adequate representation of functions using two soft computing techniques: fuzzy logic and neural networks. The principal approach to the construction of approximating formulas is discussed. We suggest a generalized definition of the normal forms in predicate BL and ŁΠ logic and prove conditional equivalence between a formula and each of its normal forms. Some mutual relations between the normal forms are also established.
-  Pokorny D. (Germany): Implication, equivalence and agreement. Kappa coefficient as a measure of degree of equivalence, 639-649.
Cohen's kappa coefficient is a widely accepted measure of agreement on categorical variables and has replaced some older, simpler measures. Observational and statistical properties of the kappa coefficient in 2 × 2 tables are investigated. The asymmetrical measure "Cohenized implication" is proposed. The decomposition of the symmetrical measure kappa into two asymmetrical components is shown. These statistically motivated measures are discussed as weakened forms of the strict logical notions of equivalence and implication. Applications of kappa and "Cohenized implication" are recommended: on the one hand in medical research, as a supplement to the traditional measures of sensitivity and specificity, and on the other hand as quantifiers in the GUHA procedure ASSOC, as a statistically contemporary operationalization of the weakened equivalence.
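For reference, the standard kappa computation on a 2 × 2 agreement table can be sketched as follows; the cell counts are illustrative, and the paper's "Cohenized implication" is a separate, asymmetric measure not reproduced here.

```python
# Cohen's kappa for a 2x2 agreement table: (observed agreement minus
# chance agreement) normalized by (1 minus chance agreement).
# Example counts are illustrative only.

def kappa_2x2(a, b, c, d):
    """a, d: agreement cells of the table; b, c: disagreement cells."""
    n = a + b + c + d
    p_obs = (a + d) / n                                       # observed agreement
    p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2  # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

k = kappa_2x2(40, 10, 5, 45)   # p_obs = 0.85, p_exp = 0.5, kappa = 0.7
```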
-  Slobodová A. (Slovakia): A new approach to decision making in transferable belief model, 651-659.
The expected utility model can be derived not only in probability theory, but also in other models proposed to quantify someone's belief. We deal with the transferable belief model and use pignistic probabilities when a decision is required. We introduce a new class of graphical representations, expected utility networks with pignistic probabilities, and define conditional expected utility independence to decompose the expected utility function.
-  Vojtáš P. (Slovakia): Annotated and fuzzy logic programs – relationship and comparison of expressive power, 661-674.
The aim of this paper is to show relationships between different formalisms for uncertainty in artificial intelligence and their applications. We introduce a model of fuzzy logic programming (FLP). We propose a solution to the problem of discontinuous restricted semantics of annotated logic programs by introducing annotated logic programs with left-continuous annotation terms (ALPLCA). We show that FLP and ALPLCA have the same expressive power and both have continuous semantics. We have soundness and completeness results. This enables us to introduce a new relational algebra. Our procedural semantics enables us to estimate the truth values of answers during the computation. Using this, we introduce several search strategies. Consequences of many-valued logic abduction and many-valued resolution are also discussed.
-  Wiedermann J. (Czech Republic): Fuzzy neuroidal nets and recurrent fuzzy computations, 675-686.
We define fuzzy neuroidal nets in a way that makes it possible to relate their computations to computations of fuzzy Turing machines. Namely, we show that the polynomially space-bounded computations of fuzzy Turing machines with a polynomial advice function are equivalent to the computations of a polynomially-sized family of fuzzy neuroidal nets. The same holds for fuzzy neural nets, which are a special case of fuzzy neuroidal nets. This result ranks discrete fuzzy neural nets among the most powerful computational devices known in computational complexity theory.
-  Lieskovský M. (Czech Republic): Reasoning strategies for best answer with tolerance, 687-702.
In this article we present several new approaches to the implementation of a fuzzy modified Warren Abstract Machine. Fuzzy logic programming is very time-consuming, which is why we need search strategies. New techniques for searching for the best answer and for the best answer up to an allowed tolerance are presented. They are based on thresholds and on the estimation of the truth values of the answers.
-  Kramosil I. (Czech Republic): Inner and outer possibilistic measures, 413-422.
The standard techniques of lower and upper approximations, used in order to define the inner and outer measures given a σ-additive measure, perhaps a probabilistic one, are applied to possibilistic measures. The conditions under which this approach can be reasonable and useful are investigated and the most elementary properties of the resulting inner and outer possibilistic measures are briefly sketched.
-  Schwarz J., Očenášek J. (Czech Republic): Multiobjective Bayesian optimization algorithm for combinatorial problems: theory and practice, 423-441.
This paper deals with the use of the Bayesian optimization algorithm (BOA) for the multiobjective optimization of combinatorial problems. Three probabilistic models used in estimation of distribution algorithms (EDA), namely UMDA, BMDA and BOA, which allow one to search effectively in the promising areas of the combinatorial search space, are discussed. The main attention is focused on the incorporation of the Pareto optimality concept into the classical structure of the BOA algorithm. We have modified the standard single-criterion BOA algorithm, utilizing known niching techniques to find the Pareto-optimal set. The experiments are focused on three classes of combinatorial problems: an artificial problem with a known Pareto set, the multiple 0/1 knapsack problem, and the bisectioning of hypergraphs.
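The Pareto-optimality concept invoked above reduces, at its core, to filtering a population down to its nondominated set; the following sketch shows that filter on illustrative candidate solutions (it is not the BOA algorithm itself).

```python
# Hedged sketch of Pareto dominance filtering, the concept incorporated
# into the multiobjective BOA. Objectives are maximized; candidate
# solutions are illustrative two-objective points.

def dominates(u, v):
    """u dominates v if it is at least as good in every objective
    and strictly better in at least one."""
    return (all(a >= b for a, b in zip(u, v))
            and any(a > b for a, b in zip(u, v)))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

population = [(3, 5), (4, 4), (2, 6), (3, 3), (1, 1)]
front = pareto_front(population)   # (3, 3) and (1, 1) are dominated
```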
-  Votruba Z. (Czech Republic): Modeling of synaptic information function – proposal of the multilingual translation approach, 443-456.
The topic of the presented paper is the discussion of possible approaches to the homogenization of synaptic information functions from the system-engineering point of view. Homogenization is a significant step to the construction of effective models that should enable understanding synaptic information functions. An attempt of a pragmatic language translation within the multilingual environment is proposed and briefly discussed.
-  Svítek M., Týc F. (Czech Republic): Processing of GPS signals by non-traditional information technologies, 457-472.
The article presents a new methodology for GPS signal processing and shows the influence of signal pre-processing on the quality of the prediction error. A further parameter qualifying the model quality is the exponential forgetting factor. For slowly time-varying models the exponential forgetting factor is approximately 0.98 - 0.99; a lower forgetting value indicates a rapidly time-varying model, which is not usable for our modelling application. At the end of the article we obtain a model for GPS signals with appropriate prediction errors and an adequate exponential forgetting factor. All theoretical results are applied to real GPS signals, and the achieved accuracy is much better than that of the raw measured data.
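Exponential forgetting as quoted above appears in recursive least-squares estimation, where a factor close to 1 down-weights old samples geometrically. The scalar model and synthetic data below are illustrative assumptions, not the article's GPS model.

```python
# Hedged sketch of recursive least squares with exponential forgetting
# for a scalar model y = theta * x. A forgetting factor of 0.98, in the
# 0.98-0.99 range quoted for slowly varying models, weights past samples
# down geometrically. Data are synthetic and noiseless for illustration.

lam = 0.98                 # exponential forgetting factor
theta, P = 0.0, 1000.0     # initial estimate and (scalar) covariance

data = [(x, 3.0 * x) for x in [1.0, 2.0, 1.5, 0.5, 2.5] * 10]
for x, y in data:
    K = P * x / (lam + P * x * x)      # gain
    theta = theta + K * (y - theta * x)
    P = (P - K * x * P) / lam

# theta converges to the true slope 3.0
```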
-  Kvasnička V. (Slovakia): An evolutionary simulation of modularity emergence of genotype-phenotype mappings, 473-491.
A novel method that allows us to study the emergence of modularity in genotype-phenotype mappings in the course of Darwinian evolution is described. The evolutionary method used is based on composite chromosomes with two parts: one is a binary genotype, whereas the other corresponds to the mapping of genes onto phenotype characters. For such generalized chromosomes, modularity is determined in the following intuitive way: the genes are divided into two subgroups; simultaneously with this decomposition, an accompanying decomposition of the set of phenotype characters is defined. We expect that for chromosomes with modular structures the genes from one group are mapped onto characters from the respective group, while the appearance of "crosslink" mappings is maximally suppressed. A fundamental question for the whole of evolutionary biology (and also for evolutionary algorithms and connectionist cognitive science) is the nature of the mechanism of the evolutionary emergence of modular structures. An idea of effective fitness is used in the presented explanatory simulations. It is based on the metaphor of Hinton and Nowlan's theory of the Baldwin effect, and was used as an effective idea for the generalization of evolutionary algorithms. The effective fitness reflects not only a static concept of the phenotype, but also its ability to be adapted (learned) within a neighborhood of the respective chromosome. The chromosomes considered in the presented paper may thus be understood as objects endowed with a type of plasticity. The metaphor of the Baldwin effect (or effective fitness) applied to evolutionary algorithms offers an evolutionary tool that is potentially able to produce the emergence of modularity.
-  Rohlík O., Mautner P., Matoušek V. (Czech Republic), Kempf J., Weinzierl K. (Germany): A new approach to signature verification: Digital data acquisition pen, 493-501.
This paper presents our experience with a completely new approach to handwritten text recognition. A brief description of a new type of input devices is followed by a more detailed explanation of recognition methods used. The results achieved are discussed and ideas for further research are suggested.
-  Čermák P., Pokorný M. (Czech Republic): An improvement of non-linear neuro-fuzzy model properties, 503-523.
In this paper we propose a fuzzy neural network model which can embody a fuzzy Takagi-Sugeno model, carry out fuzzy inference and support the structure of fuzzy rules. The algorithm for improving the model properties consists of several new procedures, namely input-space partition, extension of the numbers of fuzzy terms and rules, extraction of low-effectiveness fuzzy terms and rules, and consequent structure identification. A fuzzy neural network is constructed on the basis of the fuzzy model. By training the neural network we can tune the embedded initial fuzzy model. To show the applicability of the new method and to enable the modelling of real systems, we designed the fuzzy-neural network program tool FUZNET. Finally, we performed numerical experiments on fuzzy modelling of an artificial time series and of a real non-linear complex system.
-  Vysoký P. (Czech Republic): Central fatigue identification of human operator, 525-535.
The aim of the research described in this work is to find an estimator of the fatigue level of a human operator, a driver. The demand for a non-intrusive approach constrains the existing possibilities. It is demonstrated that the small steering wheel movements compensating the heading error carry information on driver fatigue. The biological origin of fatigue is described and a simple method for the calibration of fatigue based on the fuzzy approach is presented. With the help of one type of fatigue indicator, the possibility to identify fatigue caused by sleep deprivation is demonstrated.
-  Grebeníček F.: Sparse distributed memory - modifications of initialization, 317-336.
This paper concentrates on Kanerva's Sparse Distributed Memory (SDM) as a kind of artificial neural net and associative memory. SDM captures some basic properties of human long-term memory. SDM may be regarded as a three-layered feed-forward neural net: input-layer neurons only copy input vectors, hidden-layer neurons have radial basis functions and output-layer neurons have linear basis functions. The hidden layer is initialized randomly in the basic SDM algorithm. The aim of the paper is to study the behaviour of Kanerva's model for real input data (large input vectors, correlated data). A modification of the basic model is introduced and tested.
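The basic SDM write/read cycle described above can be sketched in a few lines: randomly initialized hard locations, counter vectors updated within a Hamming radius. The dimensions, radius and seed below are illustrative assumptions, far smaller than real SDM configurations.

```python
import random

# Minimal hedged sketch of Kanerva's SDM: hard locations with random
# binary addresses; writing adds a bipolar copy of the data to every
# location within Hamming radius R; reading sums those counters and
# thresholds. Parameters are toy-sized, for illustration only.
random.seed(1)
N, M, R = 32, 200, 14           # address bits, hard locations, radius

addresses = [[random.randint(0, 1) for _ in range(N)] for _ in range(M)]
counters = [[0] * N for _ in range(M)]

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def write(addr, data):
    for loc, cnt in zip(addresses, counters):
        if hamming(loc, addr) <= R:
            for i, bit in enumerate(data):
                cnt[i] += 1 if bit else -1      # bipolar counter update

def read(addr):
    sums = [0] * N
    for loc, cnt in zip(addresses, counters):
        if hamming(loc, addr) <= R:
            for i, c in enumerate(cnt):
                sums[i] += c
    return [1 if s > 0 else 0 for s in sums]

pattern = [random.randint(0, 1) for _ in range(N)]
write(pattern, pattern)          # autoassociative storage
recalled = read(pattern)         # recovers the stored pattern
```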
-  Sagiroglu S., Besdok E., Erler M.: Principal component analysis for control chart pattern recognition using artificial neural networks, 337-347.
Control chart pattern (CCP) recognition is important for monitoring process environments to achieve appropriate control precisely and quickly and to produce high quality products. CCPs are represented by a large number of inputs. The principal component analysis (PCA) is an effective procedure for reducing a large input vector to a small vector.
This paper describes an efficient approach to reducing the inputs of the networks for CCP recognition with the use of PCA. The reason for applying PCA to CCP recognition is to simplify the networks and to speed up their training. Multilayered perceptrons (MLP) are used and trained with the resilient-propagation (RP) and the backpropagation (BP) learning algorithms. The results show that PCA provides a less complex neural network structure for accurate and faster training. This helps to achieve CCP recognition precisely and accurately, and might even make it easy to implement the recognition within VLSI technologies for this application.
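The role PCA plays here, projecting large pattern vectors onto a few principal components before training, can be sketched with power iteration on the covariance matrix. The two-dimensional synthetic data are an illustrative assumption; real CCP vectors are much larger.

```python
import math

# Hedged sketch: reduce pattern vectors to their leading principal
# component before feeding a network. Power iteration on the sample
# covariance matrix finds that component. Data are synthetic points
# lying near the line y = 2x, so the leading direction is roughly (1, 2).

data = [[x, 2.0 * x + 0.1 * ((i % 3) - 1)]
        for i, x in enumerate([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])]

d = len(data[0])
mean = [sum(row[j] for row in data) / len(data) for j in range(d)]
centered = [[row[j] - mean[j] for j in range(d)] for row in data]
cov = [[sum(r[i] * r[j] for r in centered) / len(data) for j in range(d)]
       for i in range(d)]

v = [1.0] * d                       # power iteration for the leading eigenvector
for _ in range(100):
    w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
    norm = math.sqrt(sum(x * x for x in w))
    v = [x / norm for x in w]

reduced = [sum(r[j] * v[j] for j in range(d)) for r in centered]
# each 2-D pattern is now a single scalar input for the network
```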
-  Cavalli E.: Experimenting neural networks to forecast business insolvency, 349-361.
This research investigates the way in which Neural Networks (NNs) can be used to forecast insolvency. The research aimed at forecasting, one to three years in advance, a clear disclosure of the legal condition of insolvency in a joint-stock company based in Italy and operating in the textile industry, through the analysis of the official balance sheet. The results refer to experiments which have been carried out for three years, concerning a sample of about 500 companies in the textile sector, their balance sheets for a three-year period (1990-1992) and the 'state of the art' in the following two years (1993-1994). The research has pointed out how the problem of insolvency can be dealt with by using NNs in a satisfactory way, and has proved useful in showing how important the method of collecting data for further analysis through NNs can be.
-  Kramosil I.: Degrees of belief in partially ordered sets, 363-389.
Belief functions can be taken as an alternative to the classical probability theory, as a generalization of this theory, but also as a non-traditional and sophisticated application of the probability theory. In this paper we abandon the idea of numerically quantified degrees of belief in favour of the case when belief functions take their values in partially ordered sets, perhaps enriched to lower or upper semilattices. Such structures seem to be the most general ones to which reasonable and nontrivial parts of the theory of belief functions can be extended and generalized.
-  Gorban A. N., Gorbunova K. O., Wunsch II D. C.: Liquid brain: the proof of algorithmic universality of quasichemical model of fine-grained parallelism, 391-412.
A new formal model of parallel computations, the Kirdin kinetic machine, was suggested by Kirdin. It is expected that this model will play a role for parallel computations similar to that of Markov normal algorithms, the Kolmogorov and Turing machines, or Post schemes for sequential computations. The basic ways in which computations are realized are described, and the basic properties of the elementary programs for the Kirdin kinetic machine are investigated. It has been proved that the deterministic Kirdin kinetic machine is an effective computer. A simple application of the Kirdin kinetic machine, heap encoding, is suggested. Subprograms, similar to those of usual programming, enlarge the Kirdin kinetic machine.
-  Editorial, 205.
-  Kodogiannis V.S., Tomtsis D.: Neural network adaptive controller for unmanned underwater vehicles, 207-221.
Underwater robotic vehicles have become an important tool for various underwater tasks because they have greater speed, endurance, depth capability, and safety than human divers. The problem of controlling a remotely operated underwater vehicle in 6 degrees of freedom (DOF) is addressed in this paper, as an example of a system containing severe non-linearities. Neural networks are used in a closed loop to approximate the nonlinear vehicle dynamics. No prior off-line training phase and no explicit knowledge of the structure of the vehicle are required, and the proposed scheme exploits the advantages of both neural network control and adaptive control. A control law and a stable on-line adaptive law are derived using Lyapunov theory, and the convergence of the tracking error to zero and the boundedness of signals are guaranteed by applying Barbalat's Lyapunov-like lemma. In this paper, a neural network architecture based on radial basis functions has been used to evaluate the performance of the proposed adaptive controller for the motion of the Norwegian Experimental Remotely Operated Vehicle (NEROV).
-  Sadegheih A., Drake P.R.: Network optimisation using linear programming and genetic algorithm, 223-233.
The network optimisation problem is formulated as linear programming and as a genetic algorithm in a spreadsheet model. GAs are conceptually based on natural genetic and evolutionary mechanisms working on populations of solutions, in contrast to other search techniques that work on a single solution. An example application is presented. An empirical analysis of the effects of the algorithm's parameters is also presented in the context of this novel application.
-  Jarabo Amores M. P., Rosa Zurera M., Lopez Ferreras F., Lopez Espi P.: Neural network based detection scheme for slow fluctuating radar targets in low Pfa conditions, 235-247.
Slow fluctuating radar targets have been shown to be very difficult to classify by means of neural networks. This paper deals with the application of time-frequency decompositions to improving the performance of neural networks for this kind of target. Several topics, such as the dimensionality reduction of the time-frequency representations and the optimum value of SNR for training, are discussed. The proposed detector is compared with a single neural network for radar detection, showing that the performance is improved for slow fluctuating radar targets, especially for low values of the probability of false alarm.
-  Leung W. K.: Solving application problems involving large real type data sets by single layered backpropagation networks, 249-257.
It is generally accepted that most benchmark problems known today can be solved by artificial neural networks with one single hidden layer. Networks with more than one hidden layer normally slow down learning dramatically. Furthermore, generalisation to new input patterns is generally better in small networks. However, most benchmark problems only involve a small training data set which is normally discrete (such as binary values 0 and 1) in nature. The ability of single-hidden-layer supervised networks to solve problems with large and continuous types of data (e.g. most engineering problems) is virtually unknown. A fast learning method for solving continuous-type problems has been proposed by Evans et al. However, the method is based on the Kohonen competitive and ART unsupervised network models. In addition, almost every benchmark problem has a training set containing all possible input patterns, so there is no study of the generalisation behaviour of the network. This study attempts to show that single-hidden-layer supervised networks can be used to solve large and continuous-type problems within measurable algorithmic complexities.
-  Leung W. K.: The performance of backpropagation networks which use gradient descent on sigmoidal steepness, 259-266.
Backpropagation which uses gradient descent on the steepness of the sigmoid function (BPSA) has been widely studied (e.g. by Kruschke et al.). However, most of these studies only analysed the BPSA empirically, and no adequate measurements of the network's quality characteristics (e.g. efficiency and complexity) were given. This paper attempts to show that the BPSA is more efficient than the standard BPA by quantitatively comparing the convergence performance of both algorithms on several benchmark application problems. The convergence performance is measured by the values of the neural metrics evaluated in the training process.
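The core idea behind the BPSA, treating the sigmoid's steepness as a trainable parameter alongside the weights, can be sketched for a single neuron with squared error; the training pair, rates and fixed weight are illustrative assumptions, not the paper's benchmarks.

```python
import math

# Hedged sketch of gradient descent on sigmoid steepness: for a single
# neuron with output sigmoid(lam * net) and squared error E, the
# chain rule gives dE/dlam = (out - target) * out * (1 - out) * net.
# Only the steepness lam is adapted here; the weight is held fixed.

def sigmoid(x, lam):
    return 1.0 / (1.0 + math.exp(-lam * x))

x, target, w = 1.0, 0.9, 1.0      # one training pair, fixed weight
lam, eta = 1.0, 2.0               # initial steepness, learning rate

for _ in range(500):
    net = w * x
    out = sigmoid(net, lam)
    grad_lam = (out - target) * out * (1.0 - out) * net
    lam -= eta * grad_lam

final_out = sigmoid(w * x, lam)   # approaches the target 0.9
```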
-  Neruda R.: Neural network weight space symmetries can speed up genetic learning, 267-275.
A functional equivalence of feed-forward networks has been proposed to reduce the search space of learning algorithms. A novel genetic learning algorithm for RBF networks and perceptrons with one hidden layer that makes use of this theoretical property is proposed. Experimental results show that our procedure outperforms the standard genetic learning.
-  Rivas-Echeverría F., Ríos-Bolívar A., Casales-Echeverría J.: Neural network-based auto-tuning for PID controllers, 277-284.
PID controllers have become the most popular control strategy in industrial processes due to their versatility and tuning capabilities. The incorporation of auto-tuning tools has increased the use of this kind of controller. In this paper we propose a neural network-based self-tuning scheme for the on-line updating of PID parameters, which is based on integral error criteria (IAE, ISE, ITAE, ITSE).
-  Joghataie A.: Active control of trusses under heavy static loads, 285-292.
Trusses are suitable load-bearing structural systems for heavy concentrated loads. In this paper, it is shown that it is possible to use active control mechanisms to enhance the load-bearing capacity of trusses. Under heavy loading, some elements of a truss might experience high stresses and show non-linear behavior, resulting in large deformations in the truss. Under such a condition, some elements of the truss might be damaged, which can lead to the collapse of the truss. The application of control forces on some of the degrees of freedom of the truss can help the truss tolerate larger forces before its collapse. A neural network can then be trained to learn the relationship between the information about the external loads on the truss, as input, and the required control forces, as output, and act as a neuro-controller for the truss. This method is explained and then tested on a small truss to show its capabilities.
-  Mladenov V. M., Maratos N. G., Tsakoumis A. C., Tashev T. A., Mastorakis N. E.: On solving nonlinear programming problems via neural networks, 293-304.
In this paper we consider several neural network architectures for solving nonlinear programming problems with inequality constraints. This is an extension of the authors' previous work, and here we present a new architecture for convex programming problems. The architecture is based on an alternative pseudo-cost function which does not require large penalty parameter values. Simulation results based on SIMULINK models are given and compared.
-  Zhang Z., Manikopoulos C.: Neural networks in statistical anomaly intrusion detection, 305-316.
In this paper, we report on experiments in which we used neural networks for statistical anomaly intrusion detection systems. The five types of neural networks that we studied were: Perceptron; Backpropagation; Perceptron-Backpropagation-Hybrid; Fuzzy ARTMAP; and Radial Basis Function. We collected four separate data sets from different simulation scenarios, and these data sets were used to test various neural networks with different numbers of hidden neurons. Our results showed that the classification capabilities of BP and PBH outperform those of the other neural networks.
-  Harrison S. A., Rayward-Smith V. J.: The limitations of node pair features for bisection problems, 101-107.
A key stage in the design of an effective and efficient genetic algorithm is the utilisation of domain specific knowledge. Once appropriate features have been identified, genetic operators can then be designed which manipulate these features in well defined ways. In particular, the crossover operator is designed so as to preserve in any offspring features common to both parental solutions and to guarantee that only features that appear in the parents appear in the offspring. Forma analysis provides a well-defined framework for such a design process.
In this paper we consider the class of bisection problems. Features proposed for set recombination are shown to be redundant when applied to bisection problems. Despite this inherent redundancy, approaches based on such features have been successfully applied to graph bisection problems.
In order to overcome this redundancy and to obtain performance gains over previous genetic-algorithm-based approaches to graph bisection, a natural choice of features is one based on node pairs. However, such features result in a crossover operator that displays degenerative behaviour and is of no practical use.
-  Yii H. K., Morad N., Hitam M. S.: Optimisation of a solder paste printing process parameters using a hybrid intelligent approach, 109-127.
This paper describes a method of modelling and optimising the solder paste printing process using an artificial intelligence approach. A hybrid approach combining the backpropagation neural network and genetic algorithm to model and subsequently optimise the process is developed using actual data collected from a manufacturing plant. Results obtained showed that the neural network developed was able to model the process successfully and the genetic algorithm developed was able to optimise the process parameters using various optimisation criteria.
-  Češpiva L.: Scalable prime generator benchmark code, 129-143.
This paper is intended to open a serial of articles concerning the development of the simple parallel scalable benchmark code. The prime algorithms have been chosen as the first set of procedures in order to test in practice benchmarking ideas concerning the code properties. The prime routines were selected because of the simplicity of the problem formulation, an easy programming effort as well as an extremely simple data operation. The code has been implemented as seven routines of significant performance. Four routines belong to number generators and three procedures are prime selectors. Moreover two of the generators are single parametrical and the values of remaining two procedures are generated by means of twoparametrical mathematical formula.
The performance of the different generators/selectors is measured for benchmark sizes scaled according to powers of two. A single curve represents the dependence of the elapsed time of a selected routine on the benchmark size, and the performance curves of several routines are collected into one figure. A single time window in which two curves are compared with each other represents the basic unit of the benchmark analysis of a hardware facility.
The paper concentrates on the properties of the benchmark code itself; this first paper of the series does not analyze any particular hardware configuration or special hardware features. The code should ultimately be used to test the capabilities of hardware facilities as well as to correlate the performance of practical software packages with the performance characteristics of the present code.
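The benchmarking idea itself — timing a prime routine at sizes scaled by powers of two to obtain one performance curve — can be sketched roughly as follows. This is a generic illustration, not one of the paper's seven routines:

```python
import time

def sieve_primes(n):
    """A simple prime selector: all primes below n (sieve of Eratosthenes)."""
    flags = bytearray([1]) * n
    flags[0:2] = b"\x00\x00"  # 0 and 1 are not prime
    for p in range(2, int(n ** 0.5) + 1):
        if flags[p]:
            # Cross out all multiples of p starting from p*p.
            flags[p * p :: p] = bytearray(len(flags[p * p :: p]))
    return [i for i, f in enumerate(flags) if f]

def benchmark(routine, sizes):
    """Return (size, elapsed seconds) pairs - one performance curve."""
    timings = []
    for n in sizes:
        t0 = time.perf_counter()
        routine(n)
        timings.append((n, time.perf_counter() - t0))
    return timings

sizes = [2 ** k for k in range(10, 15)]  # benchmark size scaled by powers of two
results = benchmark(sieve_primes, sizes)
```

Curves for several routines, collected over the same sizes, would then be plotted in one figure and compared window by window, as the abstract describes.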
-  Okhonin S., Okhonin V., Ils A., Ilegems M.: Neural network based approach to the evaluation of degradation lifetime, 145-151.
The applicability of neural networks to the evaluation of the lifetime of semiconductor devices is demonstrated. The neural-network-based method can be used as a general modeling tool; the commonly used main-accelerating-parameter models can be obtained by reducing the neural net. The method is also attractive because of the neural net's ability to process "noisy" data, and it should find wide application in degradation modeling.
-  Tay F. E. H., Cao L. J.: Saliency analysis of support vector machines for feature selection, 153-166.
This paper deals with the application of saliency analysis to Support Vector Machines (SVMs) for feature selection. The importance of each feature is ranked by evaluating the sensitivity of the network output to that feature's input in terms of the partial derivative. A systematic approach to removing irrelevant features based on this sensitivity is developed. Two simulated non-linear time series and five real financial time series are examined in the experiments. Based on the simulation results, it is shown that saliency analysis is effective in SVMs for identifying important features.
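The sensitivity ranking described above can be illustrated schematically. In this sketch (an illustration, not the authors' code) the partial derivatives of a decision function are approximated by central finite differences and features are ranked by their mean absolute sensitivity; the `decision` function is an invented stand-in for a trained SVM:

```python
import math
import random

# Stand-in for a trained SVM decision function f(x); by construction the
# output depends strongly on feature 0 and only weakly on feature 2.
def decision(x):
    return math.tanh(3.0 * x[0] + 0.5 * x[1] + 0.01 * x[2])

def saliency(f, samples, eps=1e-4):
    """Mean absolute partial derivative of f per feature (central differences)."""
    dim = len(samples[0])
    scores = [0.0] * dim
    for x in samples:
        for i in range(dim):
            hi = list(x); hi[i] += eps
            lo = list(x); lo[i] -= eps
            scores[i] += abs(f(hi) - f(lo)) / (2 * eps)
    return [s / len(samples) for s in scores]

rng = random.Random(1)
data = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
scores = saliency(decision, data)
# Features ranked from most to least important.
ranking = sorted(range(3), key=lambda i: scores[i], reverse=True)
```

Features whose score falls below a threshold would then be candidates for removal, which is the systematic pruning step the abstract refers to.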
-  Dündar P.: Accessibility number and the neighbour-integrity of generalised Petersen graphs, 167-174.
When the nodes or links of a communication network are destroyed, its effectiveness decreases, so we must design communication networks to be as stable as possible, not only with respect to the initial disruption but also with respect to the possible reconstruction of the network. Taking a graph as a model of the network, many graph-theoretic parameters have been used to describe the stability of communication networks, including connectivity, integrity and tenacity. Several of these deal with two fundamental questions about the resulting graph: how many vertices can still communicate, and how difficult is it to reconnect the graph? Stability numbers of a graph measure its durability with respect to breakdown. The neighbour-integrity of a graph is a measure of graph vulnerability; it takes into account that the failure of any vertex also affects its neighbouring vertices. In this work, we define accessible sets and the accessibility number, and we consider the neighbour-integrity of Generalised Petersen graphs and its relation to their accessibility number.
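For a concrete reading of neighbour-integrity, a common definition is NI(G) = min over vertex sets S of |S| plus the order of the largest component that survives after the closed neighbourhood N[S] is removed. The following brute-force sketch over the generalised Petersen graph GP(5,2) (the Petersen graph) is an illustration under that assumed definition, not the paper's method:

```python
from itertools import combinations

def petersen(n, k):
    """Generalised Petersen graph GP(n, k) as an adjacency dict."""
    adj = {v: set() for v in range(2 * n)}
    def link(a, b):
        adj[a].add(b); adj[b].add(a)
    for i in range(n):
        link(i, (i + 1) % n)          # outer cycle
        link(n + i, n + (i + k) % n)  # inner star polygon
        link(i, n + i)                # spokes
    return adj

def largest_component(adj, removed):
    """Order of the largest component of the graph minus `removed`."""
    seen, best = set(removed), 0
    for v in adj:
        if v in seen:
            continue
        stack, size = [v], 0
        seen.add(v)
        while stack:
            u = stack.pop(); size += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w); stack.append(w)
        best = max(best, size)
    return best

def neighbour_integrity(adj):
    verts = list(adj)
    best = len(verts)
    for r in range(1, len(verts) + 1):
        for S in combinations(verts, r):
            closed = set(S)               # closed neighbourhood N[S]
            for v in S:
                closed |= adj[v]
            best = min(best, len(S) + largest_component(adj, closed))
    return best

ni = neighbour_integrity(petersen(5, 2))
```

Exhaustive search is only feasible for small graphs; the paper's contribution is to relate such values to the accessibility number analytically for the whole GP family.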
-  Kalkat M., Yildirim S., Uzmay I.: A neural network for analysis of vibration in mechanical systems arising from unbalance, 175-188.
The paper presents an investigation of vibrations of mechanical systems arising from unbalanced masses. At the experimental stage, a power transmission shaft is driven at different operating speeds, and parameters of the vertical body vibrations, such as displacement, velocity and acceleration, are measured at various points on the frame before and after balancing. Balancing provides a definite decrease in the amplitudes of the vibration parameters.
In addition to the studies mentioned above, a Neural Network (NN) is used for the vibration analysis of a frame excited by an unbalanced transmission shaft. The results show that the NN approach closely reproduces the experimental results, which demonstrates the value of the non-linear modelling capabilities of NNs for vibration problems of mechanical systems.
-  Book review, 189-190.
-  Editorial, 1-2.
-  Sarkar D.: Empirical estimation of generalization ability of neural networks, 3-15.
This work concentrates on a novel method for the empirical estimation of the generalization ability of neural networks. Given a set of training (and testing) data, one can choose a network architecture (number of layers, number of neurons in each layer, etc.), an initialization method, and a learning algorithm to obtain a network. One measure of the performance of a trained network is how closely its actual output approximates the desired output for an input that it has never seen before. Current methods provide a single "number" that estimates the generalization ability of the network; however, this number gives no further information for understanding the contributing factors when the generalization ability is poor. The proposed method instead uses a number of parameters to define the generalization ability, and a set of values of these parameters provides the estimate. In addition, the value of each parameter indicates the contribution of factors such as the network architecture, initialization method and training data set. Furthermore, a method has been developed to verify the validity of the estimated parameter values.
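As a point of reference, the single "number" that current methods provide is typically a held-out error estimate, which can be sketched as follows; the `model` here is an invented stand-in for a trained network, not the paper's multi-parameter method:

```python
import math
import random

def holdout_error(model, data, test_fraction=0.25, seed=0):
    """Mean squared error of `model` on a randomly held-out test split."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    test = shuffled[:n_test]  # inputs the model has "never seen before"
    return sum((model(x) - y) ** 2 for x, y in test) / n_test

# Hypothetical trained "network": approximates sin on [0, pi] with a
# constant bias of 0.01, so the held-out MSE is exactly 0.01 squared.
model = lambda x: math.sin(x) + 0.01
data = [(x / 100 * math.pi, math.sin(x / 100 * math.pi)) for x in range(100)]
err = holdout_error(model, data)
```

The paper's point is that such a scalar hides *why* generalization is poor; its parameter set is designed to attribute the error to architecture, initialization and data choices.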
-  Svítek M., Prchal J., Paclík P., Kárný M.: Parameter reduction method for transport network control, 17-26.
The main goal of this paper is to use traffic data measured automatically by inductive loops, to reduce the dimensionality of the measured data vector, and to apply the reduced data vector to the imitation of the traffic operator's behaviour. The feature vector's dimensionality is reduced both by the Fisher criterion and by truncated SVD (singular value decomposition). The Laplace classifier is applied for the operator imitation.
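The truncated-SVD reduction step can be sketched as follows. This is a generic illustration with synthetic data standing in for the loop-detector measurements; it is not the paper's pipeline and omits the Fisher criterion and the Laplace classifier:

```python
import numpy as np

def truncated_svd_reduce(X, k):
    """Project the rows of X onto the top-k right singular vectors."""
    # Centre the data so the SVD captures variance rather than the mean offset.
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]

rng = np.random.default_rng(0)
# Toy stand-in for the measured data: 100 samples of 8 raw features that
# really live on a 2-dimensional subspace plus a little noise.
latent = rng.normal(size=(100, 2))
mixing = rng.normal(size=(2, 8))
X = latent @ mixing + 0.01 * rng.normal(size=(100, 8))
reduced, components = truncated_svd_reduce(X, 2)
```

The reduced vectors would then be fed to the classifier in place of the raw 8-dimensional measurements.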
-  Bob P., Faber J.: Neural complexity and dissociation within the framework of quantum theory, 27-31.
In this paper, connections among dissociation, neural complexity and EEG complexity are presented. They implicate an EEG correlate of dissociated mental representations of the neural assemblies that are actually active in the brain-mind system. As a consequence of dissociation among these mental representations, burst EEG activity is present. Burst activity is explained as a consequence of deterministic chaos, which leads to the emergence of an underlying order of attractors in brain physiology. This chaos is comparable to the world of possibilities and their collapse in quantum theory; it may thus serve to link quantum events to global brain dynamics, and may be connected to the quantum superposition of brain states and its collapse.
-  Mihalík J., Labovský R.: Neural network approaches for predictive vector quantization of an image, 33-48.
The paper deals with predictive vector quantization of images based on neural network architectures. The vector predictor is implemented by a three-layer neural network with various numbers of hidden nodes, bias units and a sigmoid nonlinearity, while the vector quantizer is implemented by Kohonen self-organizing feature maps, i.e., the codebook is obtained by a neural network clustering algorithm. We tested the influence of the number of hidden nodes, of various convergence rates of the learning algorithm and of the presence of the sigmoid function on the mean square prediction error. We then studied the influence of the codebook size on the mean square quantization error, i.e., the performance of the predictive vector quantization system at various bit rates. The Lena image of size 512 x 512 pels was coded at various bit rates, using one-dimensional and two-dimensional vector prediction of blocks of pels.
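The vector-quantization side can be illustrated with a minimal competitive-learning codebook, a simplified stand-in for the Kohonen self-organizing feature map used in the paper (the data and parameters here are invented):

```python
import random

def train_codebook(vectors, size, epochs=20, lr=0.3, seed=0):
    """Winner-take-all competitive learning (a simplified SOM without a
    neighbourhood function): the nearest codeword moves towards each input."""
    rng = random.Random(seed)
    book = [list(rng.choice(vectors)) for _ in range(size)]
    for epoch in range(epochs):
        rate = lr * (1 - epoch / epochs)  # decaying learning rate
        for v in vectors:
            w = min(book, key=lambda c: sum((a - b) ** 2 for a, b in zip(c, v)))
            for i in range(len(w)):       # move the winning codeword
                w[i] += rate * (v[i] - w[i])
    return book

def quantise(v, book):
    """Index of the nearest codeword (minimum squared error)."""
    return min(range(len(book)),
               key=lambda j: sum((a - b) ** 2 for a, b in zip(book[j], v)))

rng = random.Random(1)
# Toy 2-D "prediction error" vectors clustered around two centres.
data = [(rng.gauss(0, 0.1), rng.gauss(0, 0.1)) for _ in range(50)] + \
       [(rng.gauss(3, 0.1), rng.gauss(3, 0.1)) for _ in range(50)]
book = train_codebook(data, 2)
```

In a predictive VQ coder the quantised index, not the vector itself, is transmitted, so the codebook size directly sets the bit rate studied in the paper.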
-  Hernández-Espinosa C., Fernández-Redondo M., Gómez-Vilda P.: Neural network based input selection and diagnosis of pathologic voices, 49-63.
We present a neural network application to the diagnosis of vocal and voice disorders. These disorders normally cause changes in the voice signal, so we use acoustic parameters extracted from the voice as inputs to the neural network. The selected neural network structure is the Multilayer Feedforward network. In this paper, we focus our application on the classification between pathological and non-pathological voices. The performance of the neural network is very good: 100% correct on the test set. Furthermore, having used neural network techniques to reduce the initial number of inputs (35), we conclude that only two acoustic parameters are needed for the classification between normal and pathological voices. The application can be a very useful diagnostic tool because it is non-invasive, makes it possible to develop an automatic computer-based diagnosis system, reduces the cost and time of diagnosis, is objective, and can also be useful for the evaluation of surgical, pharmacological and rehabilitation processes. Finally, we discuss the limitations of our work and possible future research.
-  Balkarey Yu.I., Nagoutchev V.O., Evtikhov M.G., Elinson M.I.: Nonresonance parametric signal amplification in neural networks and brain rhythms, 65-72.
It is shown that gigantic nonresonance parametric amplification of weak external signals is possible in neural networks. The amplification mechanism is periodic modulation of the neuron threshold or other parameters, and brain rhythms can play the role of this periodic modulation. The paper develops Hopfield's hypothesis about the connection between some brain rhythms and signal amplification. In artificial networks, a special central generator is necessary for the parametric modulation.
-  Bückle M., Strey A.: Specification and simulation of neural networks using epsiloNN, 73-89.
The language EpsiloNN allows a high-level specification of arbitrary neural network structures. It is especially designed for the automatic generation of simulation code that can run efficiently on different parallel computer architectures. In this paper, some applications of EpsiloNN are presented. First, the basic syntactic and semantic aspects of the language are described briefly. Then the EpsiloNN specifications of a popular multilayer perceptron (MLP) and of a more complex hybrid LVQ/RBF neural network architecture are presented. Further features of the language are explained by example.
-  Contents volume 10 (2000), 91-94.
-  Author's index volume 10 (2000), 95-100.