Contents of Volume 16 (2006)
-  Combe-Nencka H., Combe P. (France): Riemannian geometry on non-parametric probability space, 459-473.
The family Pλ of probabilities absolutely continuous w.r.t. the σ-finite measure λ is equipped with the structure of an infinite-dimensional Riemannian manifold modeled on a real Hilbert space. First, the relation between the Hellinger distance and the Fisher metric is analysed on the positive cone Mλ+ of bounded measures absolutely continuous w.r.t. λ, which appears as a flat Riemannian manifold. Second, the statistical manifold Pλ is viewed as a submanifold of Mλ+ and the Amari-Chentsov α-connections are derived. Some α-self-parallel curves are explicitly exhibited.
-  Špánek R., Tůma M. (Czech Republic): Secure grid-based computing with social-network based trust management in the semantic web, 475-488.
The paper describes a new approach to treating security issues in reconfigurable grids used for computing or communication, in particular in the semantic web environment. The proposed strategy combines a convenient mathematical model, efficient combinatorial algorithms that are robust with respect to changes in the grid structure, and an efficient implementation. The mathematical model uses properties of weighted hypergraphs. Its flexibility makes it possible to describe basic security relations between the nodes such that these relations are preserved under frequent changes in the connections of the hypergraph nodes. The algorithms support construction of a grid with embedded security concepts on a given set of nodes. The proposed implementation makes use of techniques developed for time- and space-critical applications in numerical linear algebra. The combination of these building blocks is targeted at the emerging field of the semantic web, where security is particularly important. Nevertheless, the ideas can be generalized to other concepts describable by weighted hypergraphs. The paper concentrates on explaining the model and the algorithms for the chosen application. The consistency of the proposed ideas for security management in a changing grid was verified in a couple of tests with our pilot implementation SECGRID.
-  Guney K., Akdagli A., Babayigit B. (Turkey): Shaped-beam pattern synthesis of linear antenna arrays with the use of a clonal selection algorithm, 489-501.
In this paper, position-only, amplitude-only, and amplitude-phase syntheses of the shaped-beam patterns of linear antenna arrays are achieved using the clonal selection algorithm (CLONALG). The CLONALG is a relatively novel evolutionary optimization method based on the clonal selection principle of the human immune system. Numerical examples of pencil, flat-topped, and cosecant patterns are given. The results show that the CLONALG is capable of synthesizing array patterns with good performance both in the shaped region and in the sidelobe region.
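The clone/hypermutate/select loop of CLONALG can be sketched as follows. This is an illustrative stand-in, not the authors' implementation: it assumes a real-valued encoding, Gaussian hypermutation scaled by affinity rank, and a toy sphere cost in place of the array-pattern objective.

```python
import random

def clonalg(cost, dim, pop_size=20, n_gen=100, clones_per=5, seed=0):
    """Minimal clonal-selection (CLONALG-style) optimizer: clone the best
    candidates, hypermutate the clones (larger mutations for lower affinity),
    and keep improvements.  `cost` is the objective to minimize."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=cost)
        new_pop = pop[:pop_size // 2]            # keep the fittest half
        for rank, ab in enumerate(new_pop):
            scale = 0.1 * (rank + 1)             # worse rank -> larger mutations
            clones = [[x + rng.gauss(0, scale) for x in ab]
                      for _ in range(clones_per)]
            best_clone = min(clones, key=cost)
            if cost(best_clone) < cost(ab):
                new_pop[rank] = best_clone
        # refill with random newcomers to preserve diversity
        while len(new_pop) < pop_size:
            new_pop.append([rng.uniform(-1, 1) for _ in range(dim)])
        pop = new_pop
    return min(pop, key=cost)

# toy stand-in for an array-pattern cost: the sphere function
best = clonalg(lambda v: sum(x * x for x in v), dim=3)
```

In the paper the cost would instead measure the deviation of the array factor from the desired shaped-beam mask.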
-  Tian Haiting, Jin Jing, Song Ningfang, Zhang Chunxi (China): Satellite borne Fiber Optical Gyroscope thermal bias compensation network based on neuro-fuzzy logic, 503-512.
The Fiber Optical Gyroscope (FOG) is affected by a Thermal Bias Error (TBE) in space applications, and thermal bias compensation is needed to maintain the accuracy of the FOG. A Thermal Bias Compensating Network (TBCN) based on neuro-fuzzy logic is proposed in this paper. The network uses an Adaptive Neuro-Fuzzy Inference System (ANFIS) to compensate the TBE. The parameters of the ANFIS can be initialized by prior training on the ground and updated during operation in orbit. Algorithm simulations and ground tests have shown that the TBCN has a high capability of tracking and compensating the thermal bias error.
-  Mishra D., Yadav A., Kalra P. K. (India): A learning algorithm for a novel neural network architecture motivated by integrate-and-fire neuron model, 513-532.
In this paper, a learning algorithm for a novel neural network architecture motivated by the Integrate-and-Fire Neuron Model (IFN) is proposed and tested on various applications where a multilayer perceptron (MLP) neural network is conventionally used. It is observed that including a few more biological phenomena in the formulation of artificial neural networks makes them more powerful. Several benchmark and real-life problems of classification and function approximation are illustrated.
-  Tučková J., Zetocha P. (Czech Republic): Speech analysis of children with developmental dysphasia by Supervised SOM, 533-545.
This study is supported by a joint project of the Department of Circuit Theory, FEE-CTU in Prague, and the Department of Paediatric Neurology, 2nd Faculty of Medicine, Charles University in Prague. One of the interests in paediatric neurology is research on electroclinical syndromes combined with speech disorders. The aim of our project is, among others, to find a connection between the children's neurological disorder called developmental dysphasia and the assessment of the degree of perception and impairment of speech. From the point of view of language characterisation, it is very complicated to determine which information about speech is relevant and to connect it with the search target. That is why part of the project is solved by artificial neural networks (ANNs) using knowledge of phonetics.
First, the analysis of vowels was performed using the ANN. An initial hypothesis is that developmental dysphasia can cause a shift of formant frequencies in the spectral characteristics compared with the formant frequencies of healthy children.
A comparative voice analysis of healthy children is necessary for evaluating the degree of these modifications. Our team created healthy and ill children's speech databases with a comparative corpus. The healthy children's speech was recorded at kindergartens and at the first level of elementary school; the ill children's speech was recorded at a hospital. The children were from 4 to 10 years old. The comparative corpus, which includes isolated vowels, monosyllables and polysyllables, was compiled by neurological specialists in relation to the medical therapy. The same corpus was used for the comparative analysis of healthy children. Our aim is vowel recognition and visualisation by Supervised Self-Organizing Maps (Supervised SOMs), a type of ANN based on the Kohonen map with better cluster separation. Better cluster separation is useful for visual analysis, which is easy for the end user. The Recognition Rate (RR) also depends on knowledge of how children's voices evolve with age and gender. Our main objective is not the highest RR, but to observe its trend. We assume that wrongly mapped vowels could be one of the indicators of developmental dysphasia.
The application of the Supervised SOM should prove the ability not only to discriminate between healthy and ill children, but also to describe the trend of the neurological disorder, with the assistance of recordings repeated every three months during medical therapy.
-  Contents of volume 16 (2006), 591-593.
-  Authors' index, volume 16 (2006), 595-597.
-  Selaimia Y., Moussaoui A., Abbassi H. A. (Algeria): Multi neural networks based approach for fault detection and diagnosis of a DC-motor, 369-397
Recently, neural networks have emerged as potential tools in the area of fault detection and diagnosis. This paper deals with a multi-neural-network-based fault detection and diagnosis approach. The architecture adopted is a radial basis function (RBF) neural network. The approach is applied to the detection and diagnosis of parameter failures in a DC motor based on the patterns of parameter changes. The simulation results show that, after training the neural networks, the system is able to detect the different motor failures.
-  Cao L. J., Zhang Jingqing (China): A mixture of support vector machines for time series forecasting, 381-397
A mixture of support vector machines (SVMs) is proposed for time series forecasting. The SVMs mixture is composed of a two-stage architecture. In the first stage, a self-organizing feature map (SOM) is used as a clustering algorithm to partition the whole input space into several disjoint regions. A tree-structured architecture is adopted in the partition to avoid the problem of predetermining the number of partitioned regions. Then, in the second stage, multiple SVMs, also called the SVM mixture, that best fit the partitioned regions are constructed by finding the most appropriate kernel function and the optimal free parameters of the SVMs. The experiments show that the SVMs mixture achieves significant improvement in generalization performance in comparison with the single SVM model. In addition, the SVMs mixture also converges faster and uses fewer support vectors.
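The two-stage idea (partition the input space, then fit one expert per region) can be sketched as follows. This is an illustrative stand-in, not the authors' code: plain 1-D k-means replaces the tree-structured SOM, and a least-squares line replaces each SVM expert.

```python
def kmeans_1d(xs, k, iters=20):
    """Stage 1 stand-in: partition the input space into k disjoint regions
    (the paper grows a tree-structured SOM; 1-D k-means keeps this short)."""
    centers = [min(xs), max(xs)] if k == 2 else xs[:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            groups[min(range(k), key=lambda i: abs(x - centers[i]))].append(x)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

def fit_line(pairs):
    """Stage 2 stand-in: one expert per region.  The paper fits an SVM with
    region-specific kernel and parameters; a least-squares line suffices here."""
    n = len(pairs)
    sx = sum(x for x, _ in pairs); sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs); sxy = sum(x * y for x, y in pairs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

# piecewise-linear data (y = |x|): hard for one global linear model,
# easy for two local experts
data = [(x / 10, abs(x) / 10) for x in range(-50, 51)]
centers = kmeans_1d([x for x, _ in data], k=2)
nearest = lambda x: min(range(len(centers)), key=lambda j: abs(x - centers[j]))
experts = [fit_line([(x, y) for x, y in data if nearest(x) == i])
           for i in range(len(centers))]

def predict(x):
    a, b = experts[nearest(x)]
    return a * x + b
```

The gain the abstract reports comes from exactly this effect: each region is simpler than the whole input space, so each expert fits it better.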
-  Prentis P. (Czech Republic): Multi-resolution visualisation of data with self-organizing maps, 399-410
This paper discusses the self-organizing map (SOM) as a cluster analysis and visualisation tool. The pros and cons of different network sizes are discussed, in particular how they are suited to direct data browsing and to cluster analysis with U-matrices. The tree-structured SOM (TS-SOM) [4, 5] is proposed as a method of acquiring multi-resolution/multi-purpose mappings of a given input space. The TS-SOM is discussed in detail, and a novel modification to the algorithm that improves its reliability as a multi-resolution visualization method is presented.
-  Dundar P., Kilic E. (Turkey): Two measures for the stability of Extended Fibonacci Cubes, 411-419
The Fibonacci Cube is an interconnection network with many desirable properties that are important in network design, network stability and applications. The Extended Fibonacci Cube is a new network topology. The vulnerability value of a communication network indicates how well the network resists the disruption of some centres or connection lines before communication breaks down. In a network, as the number of centres belonging to sub-networks changes, the vulnerability of the network also changes, requiring greater stability or lower vulnerability. If the communication network is modelled by a graph G, deterministic measures tend to provide a worst-case analysis of some aspects of the overall disconnection process. Many graph-theoretical parameters have been used in the past to describe the stability of communication networks. A few parameters, such as integrity, neighbour-integrity and the tenacity number, quantify the vulnerability. In neighbour-integrity, if a station is destroyed, the adjacent stations are betrayed, so that the betrayed stations become useless to the network as a whole.
In this paper we study the stability of the Extended Fibonacci Cube using integrity and neighbour-integrity. We compare the obtained results with those of other network topologies. We show that, for two graphs G1 and G2 with the same number of vertices, if k(G1) > k(G2), then I(G1) > I(G2) and NI(G1) < NI(G2).
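The integrity measure used above is I(G) = min over vertex subsets S of |S| + m(G - S), where m(G - S) is the order of the largest component left after deleting S. A brute-force sketch (illustrative only; feasible for small graphs, not for Fibonacci Cubes of realistic size):

```python
from itertools import combinations

def integrity(vertices, edges):
    """Integrity I(G) = min over S of |S| + m(G - S), where m(G - S) is the
    order of the largest remaining component.  Brute force over all subsets,
    so only suitable for small graphs."""
    def largest_component(removed):
        left = set(vertices) - removed
        seen, best = set(), 0
        for v in left:
            if v in seen:
                continue
            comp, stack = {v}, [v]      # DFS over the surviving subgraph
            while stack:
                u = stack.pop()
                for a, b in edges:
                    w = b if a == u else a if b == u else None
                    if w is not None and w in left and w not in comp:
                        comp.add(w)
                        stack.append(w)
            seen |= comp
            best = max(best, len(comp))
        return best

    return min(len(S) + largest_component(set(S))
               for r in range(len(vertices) + 1)
               for S in combinations(vertices, r))

# path P4 (1-2-3-4): removing vertex 2 leaves components {1} and {3,4},
# so I(P4) = 1 + 2 = 3
print(integrity([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)]))  # 3
```

The paper's contribution is closed-form bounds for Extended Fibonacci Cubes, which avoid this exponential enumeration.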
-  Übeyli E. D. (Turkey): Fuzzy similarity index employing Lyapunov exponents for discrimination of EEG signals, 421-431
In this study, a new approach based on the computation of a fuzzy similarity index is presented for the discrimination of electroencephalogram (EEG) signals. The EEG, a highly complex signal, is one of the most common sources of information used to study brain function and neurological disorders. The analyzed EEG signals consisted of five sets (set A - healthy volunteers, eyes open; set B - healthy volunteers, eyes closed; set C - seizure-free intervals of five patients, recorded from the hippocampal formation of the opposite hemisphere; set D - seizure-free intervals of five patients, recorded from the epileptogenic zone; set E - epileptic seizure segments). The EEG signals were considered chaotic, and this assumption was tested successfully by the computation of Lyapunov exponents. The computed Lyapunov exponents were used to represent the EEG signals. The aim of the study is to discriminate the EEG signals by combining Lyapunov exponents and the fuzzy similarity index. Toward this aim, fuzzy sets were obtained from the feature sets (Lyapunov exponents) of the signals under study. The results demonstrated that the similarity between the fuzzy sets of the studied signals indicated the variabilities in the EEG signals. Thus, the fuzzy similarity index could discriminate between the healthy EEG segments (sets A and B) and the other three types of segments (sets C, D, and E) recorded from epileptic patients.
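The chaos test mentioned above rests on the sign of the largest Lyapunov exponent. A minimal numerical sketch, using the logistic map as a stand-in for EEG data (for a 1-D map the exponent is simply the orbit average of ln|f'(x)|; for the r = 4 map the exact value is ln 2):

```python
import math

def lyapunov_logistic(r, x0=0.2, n=50000, discard=1000):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the orbit average of ln|f'(x)| with f'(x) = r*(1-2x).
    A positive exponent is the usual numerical evidence of chaos."""
    x = x0
    for _ in range(discard):          # let transients die out
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / n

lam = lyapunov_logistic(4.0)          # chaotic regime; exact value is ln 2
```

For experimental signals such as EEG the exponent is instead estimated from a delay-embedded reconstruction of the attractor, which the abstract's method presupposes.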
-  Cao L. J., Zhang Jingqing (China): An empirical study of feature selection in support vector machines, 433-453
Recently, the support vector machine (SVM) has been receiving increasing attention in the field of regression estimation due to its remarkable characteristics such as good generalization performance, the absence of local minima and a sparse representation of the solution. However, within the SVM framework, there are very few established approaches for identifying important features. Selecting significant features from all candidate features is the first step in regression estimation, and this procedure can improve the network performance, reduce the network complexity, and speed up the training of the network.
This paper investigates the use of saliency analysis (SA) and a genetic algorithm (GA) in SVMs for selecting important features in the context of regression estimation. The SA measures the importance of features by evaluating the sensitivity of the network output with respect to the feature input. The derivation of this sensitivity in terms of the partial derivative in SVMs is presented, and a systematic approach to removing irrelevant features based on the sensitivity is developed. The GA is an efficient search method based on the mechanics of natural selection and population genetics. A simple GA is used where all features are mapped into binary chromosomes, with a bit "1" representing the inclusion of a feature and a bit "0" representing its absence. The performances of SA and GA are tested using two simulated non-linear time series and five real financial time series. The experiments show that with the simulated data, GA and SA detect the same true feature set from the redundant feature set, and the SA method is also insensitive to the kernel function selection. With the real financial data, GA and SA select different subsets of the features. Both selected feature sets achieve higher generalization performance in SVMs than the full feature set. In addition, the generalization performance of the feature sets selected by GA and SA is similar. All the results demonstrate that both SA and GA are effective in SVMs for identifying important features.
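The GA side of the comparison can be sketched as follows. This is an illustrative stand-in: the fitness here is a toy proxy that rewards covering two known informative features and penalizes extras, whereas the paper's fitness is the SVM validation performance on the selected subset.

```python
import random

RELEVANT = {0, 2}          # ground-truth informative features (toy stand-in)
N_FEATURES = 6

def fitness(mask):
    """Toy stand-in for SVM validation performance: reward covering the
    relevant features, mildly penalize every extra feature (complexity)."""
    chosen = {i for i, bit in enumerate(mask) if bit}
    missing = len(RELEVANT - chosen)
    extras = len(chosen - RELEVANT)
    return -(missing + 0.1 * extras)

def ga_select(pop_size=20, n_gen=40, p_mut=0.1, seed=1):
    """Simple GA over binary chromosomes: bit 1 = include feature i."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, N_FEATURES)   # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best_mask = ga_select()
selected = [i for i, bit in enumerate(best_mask) if bit]
```

With this toy fitness the search should recover (approximately) the relevant feature indices 0 and 2.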
-  Book review, 455-456
-  Book review, 457-458
-  Editorial, 275.
-  Marhon S. A., Al-Aghar D. N. U. (Libya): Speaker identification based on neural networks, 277-290.
Speaker identification is becoming an increasingly popular technology in today's society. Besides being cost effective and producing a strong return on investment in all the defined business cases, speaker identification lends itself well to a variety of uses and implementations. These implementations can range from corridor security to safer driving to increased productivity. By focusing on the technology and companies that drive today's voice recognition and identification systems, we can learn current implementations and predict future trends.
In this paper, a one-dimensional discrete cosine transform (DCT) is used as a feature extractor to reduce signal information redundancy and to transfer the sampled human speech signal from the time domain to the frequency domain. Only a subset of the coefficients, those with large magnitude, is selected. These coefficients carry the most important information of the speech signal and are enough to recognize the original speech signal; they are then normalized globally. The normalized coefficients are fed to a multilayer momentum backpropagation neural network for classification. A very high recognition rate can be achieved using a very small number of coefficients, which suffice to reflect the characteristics of the speaker's voice.
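The feature-extraction step (DCT, then keep the largest-magnitude coefficients) can be sketched in a few lines. This is a minimal illustration with a smooth toy frame, not the paper's pipeline: frame length, the number of kept coefficients, and the test signal are all arbitrary choices here.

```python
import math

def dct(signal):
    """1-D DCT-II: concentrates most of a smooth signal's energy
    in a few low-order coefficients."""
    N = len(signal)
    return [sum(x * math.cos(math.pi * (n + 0.5) * k / N)
                for n, x in enumerate(signal))
            for k in range(N)]

def idct(coeffs):
    """Inverse transform (DCT-III with 2/N scaling and halved DC term)."""
    N = len(coeffs)
    return [(coeffs[0] / 2
             + sum(coeffs[k] * math.cos(math.pi * (n + 0.5) * k / N)
                   for k in range(1, N))) * 2 / N
            for n in range(N)]

# smooth toy "speech frame"
frame = [math.sin(2 * math.pi * n / 32) for n in range(32)]
c = dct(frame)

keep = 8                               # keep the 8 largest-magnitude coeffs
top = sorted(range(32), key=lambda k: -abs(c[k]))[:keep]
pruned = [c[k] if k in top else 0.0 for k in range(32)]

# reconstruction from a quarter of the coefficients stays close to the frame
err = max(abs(a - b) for a, b in zip(frame, idct(pruned)))
```

The same energy-compaction property is what lets the paper feed only a small coefficient subset to the classifier.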
An artificial neural network (ANN) is trained to classify the voices of eight speakers, with five voice samples per speaker used in the learning phase. The network is tested using five other samples from the same speakers. During the learning phase several parameters are varied: the number of selected coefficients, the number of hidden nodes, and the value of the momentum parameter. In the testing phase the identification performance is computed for each value of these parameters.
-  Otair M. A., Salameh W. A. (Jordan): Efficient training of backpropagation neural networks, 291-311.
This paper focuses on gradient-based backpropagation algorithms that use either a common adaptive learning rate for all weights or a separate adaptive learning rate for each weight. The learning-rate adaptation is based on descent techniques and estimates of the local constants that are obtained without additional error function and gradient evaluations. This paper proposes three algorithms to improve the different versions of backpropagation training in terms of both convergence rate and convergence characteristics, such as stable learning and robustness to oscillations. The new modification consists of a simple change in the error signal function. Experiments are conducted to compare and evaluate the convergence behavior of these gradient-based training algorithms with three training problems: XOR, encoding problem and character recognition, which are popular training problems.
-  El-Qawasmeh E., Kattan A. (Jordan): Development and investigation of a novel compression technique using Boolean minimization, 313-326.
This paper suggests a new algorithm for data compression based on Boolean minimization of binary data. On the compression side, the input bit-stream is chopped into 16-bit chunks, and a "sum of products" function is found for each chunk using the Quine-McCluskey algorithm. The minimized "sum of products" function is stored in a file, to which Huffman coding is then applied. The obtained Huffman code is used to convert the original file into a compressed one. On the decompression side, the Huffman tree is used to retrieve the original file. Experimental results show that the saving ratio of the proposed algorithm is around 50% on average. In addition, the worst case is investigated and a remedy for it is suggested. The proposed technique can be used for various file formats, including images and videos.
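The second stage of the pipeline, Huffman coding of the minimized terms, can be sketched with `heapq`. The Quine-McCluskey stage is omitted for brevity, and the symbols here stand in for the "sum of products" terms of the real algorithm.

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a Huffman code book from symbol frequencies: repeatedly merge
    the two lightest subtrees, prefixing '0'/'1' to their codes."""
    freq = Counter(symbols)
    if len(freq) == 1:                         # degenerate one-symbol input
        return {next(iter(freq)): "0"}
    # heap entries: (weight, tiebreaker, {symbol: code-so-far})
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, counter, merged))
        counter += 1
    return heap[0][2]

# frequent symbols get shorter codes: 'a' (4x) ends up with a 1-bit code
book = huffman_codes("aaaabbc")
encoded = "".join(book[s] for s in "aaaabbc")
```

For "aaaabbc" the book assigns 1 bit to `a` and 2 bits to `b` and `c`, so the 7 symbols encode into 10 bits.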
-  Souici L., Meslati D. (Algeria): Toward a generic hybrid neural system for handwriting recognition: an application to arabic words, 327-340.
In this article, we propose an automated construction of knowledge based artificial neural networks (KBANN) for the recognition of restricted sets of handwritten words or characters. The features that better describe the chosen vocabulary are first selected, according to the characteristics of the used script, language and lexicon. Then, ideal samples of lexicon elements (words or characters) are submitted to a feature extraction module to derive their description using the chosen primitives. The analysis of these descriptions generates a symbolic knowledge base reflecting a hierarchical classification of the words (or characters). The rules are then translated into a multilayer neural network by determining precisely its architecture and initializing its connections with specific values. This construction approach reduces the training stage, which enables the network to reach its final topology and to generalize. The proposed method has been tested on the automated construction of neuro-symbolic classifiers for two Arabic word lexicons.
-  Karlik B. (Turkey): Medical image compression by using vector quantization neural network (VQNN), 341-348.
This paper presents a lossy compression scheme for biomedical images using a new method. Image data compression using Vector Quantization (VQ) has received a lot of attention because of its simplicity and adaptability. VQ requires the input image to be processed as vectors or blocks of image pixels. Finite-state vector quantization (FSVQ) is known to give better performance than memoryless vector quantization (VQ). This paper presents a novel combined technique for image compression based on Hierarchical Finite State Vector Quantization (HFSVQ) and a neural network. The algorithm performs nonlinear restoration of diffraction-limited images concurrently with quantization. The neural network is trained on image pairs produced by a lossless compression scheme named hierarchical vector quantization. Simulation results are presented that demonstrate improvements in the visual quality and peak signal-to-noise ratio of the restored images.
-  Al-Jawfi R. A. (Yemen): Nonlinear iterated function system coding using neural networks, 349-355.
In this paper we attempt to form a neural network to code a nonlinear iterated function system (NLIFS). Our approach consists of finding an error function that is minimized when the network-coded attractor equals the desired attractor. First, we start with a given iterated function system attractor and a random set of network weights. Second, we compare the images generated by this neural network with the original image. On the basis of this comparison, we update the weight functions and the code of the NLIFS. A common metric, or error function, used to compare two fractal attractors is the Hausdorff distance. This error function gives a good means of measuring the difference between the two images.
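The Hausdorff distance between two finite point sets (the discretized attractors) can be computed directly from its definition. A minimal sketch for 2-D points; real attractor comparisons would use spatial indexing rather than this O(|A||B|) scan:

```python
def hausdorff(A, B):
    """Symmetric Hausdorff distance between two finite 2-D point sets:
    the largest distance from any point of one set to its nearest
    neighbour in the other set."""
    def d(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    def directed(X, Y):
        return max(min(d(p, q) for q in Y) for p in X)

    return max(directed(A, B), directed(B, A))

# a unit square and the same square shifted by 0.1 along x
square = [(0, 0), (0, 1), (1, 0), (1, 1)]
shifted = [(0.1, 0), (0.1, 1), (1.1, 0), (1.1, 1)]
print(hausdorff(square, shifted))  # 0.1 (up to float rounding)
```

Used as the training error, this value is what the weight updates described above drive toward zero.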
-  Raghavendra K. T., Tripathy A. K. (India): An efficient channel equalizer using artificial neural networks, 357-368.
When digital signals are transmitted through frequency-selective communication channels, one of the problems that arise is inter-symbol interference (ISI). To compensate for the corruption caused by ISI and to recover the original transmitted information, an equalization process is performed at the receiver. Since communication channels are time-varying and random in nature, adaptive equalizers must be used to learn and subsequently track the time-varying characteristics of the channel. Traditional equalizers are based on finding the inverse of the channel and compensating for the channel's influence using an inverse-filter technique; no such equalizer exists for non-invertible channels. Artificial Neural Networks (ANNs) can be applied here to achieve better performance than conventional methods. We propose a neural equalizer based on a multilayer perceptron (MLP), which minimizes the mean square error and eliminates the effects of ISI. Empirically, we have found this neural equalizer to be more efficient than conventional adaptive equalizers.
-  Cao L. J., Zhang JingQing, Cai Zongwu, Lim Kian Guan (China, USA, Singapore): An empirical study of dimensionality reduction in support vector machine, 177-192.
Recently, the support vector machine (SVM) has become a popular tool in time series forecasting. In developing a successful SVM forecaster, the first step is feature extraction. This paper proposes applying principal component analysis (PCA), kernel principal component analysis (KPCA) and independent component analysis (ICA) to SVMs for feature extraction. PCA linearly transforms the original inputs into new, uncorrelated features. KPCA is a nonlinear PCA developed using the kernel method. In ICA, the original inputs are linearly transformed into features that are mutually statistically independent. Experiments on the sunspot data, the Santa Fe data set A and five real futures contracts show that an SVM with feature extraction by PCA, KPCA or ICA performs better than one without feature extraction. Among the three methods, KPCA feature extraction gives the best performance, followed by ICA.
-  Dunis C. L., Laws J., Evans B. (United Kingdom): Modelling and trading the soybean-oil crush spread with recurrent and higher order networks: A comparative analysis, 193-213.
This paper investigates the soybean-oil "crush" spread, that is, the profit margin gained by processing soybeans into soyoil. Soybeans form a large proportion (over one fifth) of the agricultural output of US farmers, and the profit margins gained therefore have a wide impact on the US economy in general.
The paper uses a number of techniques to forecast and trade the soybean crush spread. A traditional regression analysis is used as a benchmark against more sophisticated models such as a multilayer perceptron (MLP), recurrent neural networks and higher-order neural networks. These models are then used to trade the spread, and a number of filtering techniques from the literature are applied to further refine their trading statistics.
The results show that the best model before transaction costs, both in- and out-of-sample, is the recurrent network, which generates a superior risk-adjusted return to all the other models investigated. However, for most of the models investigated, the cost of trading the spread all but eliminates any profit potential.
-  Holota R. (Czech Republic): Software and hardware realisation of neural network based on Min/Max Nodes for image recognition, 215-225.
This article deals with a neural network based on Min/Max nodes and its utilisation for image recognition purposes. The general concepts of the Min/Max nodes and the single-layer neural networks are outlined. The software systems developed for simulation are briefly introduced, and the results of simulations with various settings of the neural net are presented. The subject of the simulations was the recognition of human faces. Finally, the hardware design of the neural network in VHDL is shown; the design demonstrates the ease of system realisation and the achievement of high performance.
-  Iskandarani M. Z. (Jordan): Design, modeling, and implementation of a re-programmable neural switch (RNS), 227-238.
The design, construction and testing of a re-programmable neural switch (RNS) are carried out. The switch operates as a synaptic processor that behaves adaptively and is suitable for use as a compact programmable device alongside other artificial neural network hardware. The interaction between the constituent materials forming the switch is discussed, and carrier interaction during the programming cycles is explained. The programmability of the switch is shown to be bi-directional and reversible, with a hysteresis effect due to excess charge storage.
-  Kramosil I. (Czech Republic): Convergence of sequences of sets with respect to lattice-valued possibilistic measures, 239-255.
Convergence in, or with respect to, a σ-additive measure, in particular convergence in probability, is an important notion of standard measure and probability theory, and a powerful tool when analyzing and processing sequences of subsets of the universe of discourse and, more generally, sequences of real-valued measurable functions defined on this universe. Our aim is to propose an alternative to this notion of convergence, supposing that the measure under consideration is a (complete) non-numerical and, in particular, lattice-valued possibilistic measure, i.e., a set function obeying the demand of (complete) maxitivity instead of σ-additivity. Focusing our attention on sequences of sets converging in a lattice-valued possibilistic measure, some more or less elementary properties of such sequences are stated and proved.
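The maxitivity property that replaces σ-additivity above is easy to illustrate on a finite universe with a [0,1]-valued possibility distribution (a special, totally ordered case of the lattice-valued setting). The universe and the distribution values here are arbitrary illustrative choices.

```python
def possibility(dist, event):
    """Possibility of an event under distribution `dist` ({outcome: value}):
    the supremum (here a max) of the distribution over the event."""
    return max((dist[x] for x in event), default=0.0)

# a possibility distribution on a 4-element universe
pi = {"a": 1.0, "b": 0.7, "c": 0.4, "d": 0.1}

A, B = {"b", "c"}, {"c", "d"}

# maxitivity: Pi(A u B) = max(Pi(A), Pi(B)) -- even for overlapping events,
# unlike a sigma-additive measure, which would subtract the intersection
assert possibility(pi, A | B) == max(possibility(pi, A), possibility(pi, B))
```

In the paper this max is taken in a complete lattice rather than in [0,1], but the defining identity is the same.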
-  Übeyli E. D. (Turkey): Analysis of EEG signals using Lyapunov exponents, 257-273.
In this study, a new approach based on the consideration that electroencephalogram (EEG) signals are chaotic was presented for the automated diagnosis of electroencephalographic changes. This consideration was tested successfully using nonlinear dynamics tools, such as the computation of Lyapunov exponents. Multilayer perceptron neural network (MLPNN) architectures were formulated and used as the basis for detecting electroencephalographic changes. Three types of EEG signals (EEG signals recorded from healthy volunteers with eyes open, from epilepsy patients in the epileptogenic zone during a seizure-free interval, and from epilepsy patients during epileptic seizures) were classified. The computed Lyapunov exponents of the EEG signals were used as inputs to MLPNNs trained with the backpropagation, delta-bar-delta, extended delta-bar-delta, quick propagation, and Levenberg-Marquardt algorithms. The performances of the MLPNN classifiers were evaluated in terms of training performance and classification accuracy. Receiver operating characteristic (ROC) curves were used to assess the performance of the detection process. The results confirmed that the proposed MLPNN trained with the Levenberg-Marquardt algorithm has potential for detecting electroencephalographic changes.
-  Venkatalakshmi K., Sridhar S., MercyShalinie S. (India): Neuro-statistical classification of multispectral images based on decision fusion, 97-107.
Artificial neural networks have gained increasing popularity as an alternative to statistical methods for the classification of remotely sensed images. The advantage of neural networks is that, if they are trained with representative samples, they improve on statistical methods in terms of overall accuracy. However, if the distribution functions of the information classes are known, statistical classification algorithms work very well. To retain the advantages of both classifiers, decision fusion is used to integrate their decisions. In this paper a new unsupervised neural network is proposed for the classification of multispectral images. Classification is initially achieved using maximum-likelihood and minimum-distance-to-means classifiers, followed by a neural network classifier, and the decisions of these classifiers are fused in a decision fusion center implemented using a majority-voting technique. The results show that the scheme is effective in terms of increased classification accuracy (98%) compared to the conventional methods.
-  De Leone R., Marchitto E., Quaranta A. G. (Italy): Autoregression and artificial neural networks for financial market forecast, 109-128.
In recent years, investor interest in efficient methods for forecasting the price trend of a share in financial markets has grown steadily. The aim is to accurately forecast the future behavior of the market in order to identify the so-called "correct timing".
In this paper we analyze three different approaches for forecasting financial data: autoregression, artificial neural networks and support vector machines, and we determine the potentials and limits of these methods. An application to the Italian financial market is also presented.
-  Jafar M. H. Ali, Aboul Ella Hassanien (Kuwait): PCNN for detection of masses in digital mammogram, 129-141.
The pulse-coupled neural network (PCNN) is a neural network with the ability to extract edges, image segments and texture information from images. Only a few changes to the PCNN parameters are necessary for effective operation on different types of data. This is an advantage over published image segmentation algorithms, which generally require information about the target before they are effective.
This paper introduces the PCNN algorithm to provide an accurate segmentation of potential masses in mammogram images to assist radiologists in making their decisions. The fuzzy histogram hyperbolization algorithm is first applied to increase the contrast of the mammogram image before segmentation. It is followed by the PCNN algorithm, which extracts the region of interest to arrive at the final result. To test the effectiveness of the introduced algorithm on high-quality images, a set of mammogram images was obtained from the Mammographic Image Analysis Society (MIAS) digital mammogram database. Four measures for quantifying enhancement have been adopted in this work. Each measure is based on the statistical information obtained from the labelled region of interest and a border area surrounding it. A comparison with the fuzzy c-means clustering algorithm has been made.
-  Xiaolin Zhou, Jie Xu, Yongbo Zhao (China): Artificial neural networks allow the prediction of anxiety in Alzheimer's patients, 143-149.
Objective: Anxiety in Alzheimer's disease (AD) contributes significantly to decreased quality of life, increased morbidity, higher levels of caregiver distress, and the decision to institutionalize a patient. However, the incidence of anxiety in AD patients has rarely been discussed. In this study, artificial neural networks were used to predict the incidence of anxiety in AD patients.
Methods: A large randomized controlled clinical trial was analyzed in this study, which involved AD patients and caregivers from 6 different sites in the United States. The incidence of anxiety in AD patients was predicted by backpropagation artificial neural networks with one and two hidden layers. After cross-validation, the Predictive Accuracy (PA) of the models was measured to select the best structure of the artificial neural networks.
Results: Among all models for predicting the incidence of anxiety in AD patients, the artificial neural network with respectively 6 and 3 neurons in the first and second hidden layers achieved the highest predictive accuracy of 85.56%.
Conclusions: The incidence of anxiety in AD patients can be predicted with an accuracy of over 80%. When used for anxiety prediction, neural networks with two hidden layers perform better than those with one hidden layer. These findings will benefit the prevention and early intervention of anxiety in Alzheimer's patients.
-  ter Borg R. W., Rothkrantz L. J. M. (The Netherlands): Short-term wind power prediction with radial basis networks, 151-161.
In this paper, we propose a method to predict wind power production with radial basis function networks. In this case, the power production is the aggregated production of all wind farms of one electricity company. The method uses wind speed predictions supplied by a meteorological agency, and predicts up to several days ahead. The coarse resolution of one meter per second is overcome by combining the weather data from several meteorological stations. The wind direction is mapped onto a circle so that it is more compatible with a radial basis function. These ingredients have been combined with a kernel machine, which has been implemented and tested. Test results are presented in the paper.
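The fixed-centre RBF regression underlying such a predictor can be sketched as follows (a toy fit on a synthetic power curve; the helper names, the cubic power curve and the direction encoding are our assumptions, not the paper's data):

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian radial basis design matrix: Phi[i, j] = exp(-||x_i - c_j||^2 / 2w^2)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbf(X, y, centers, width):
    """Least-squares output weights for an RBF network with fixed centres."""
    Phi = rbf_design(X, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def predict_rbf(X, centers, width, w):
    return rbf_design(X, centers, width) @ w

def encode_direction(deg):
    """Wind direction mapped onto a circle, as in the paper: theta -> (cos, sin)."""
    rad = np.radians(deg)
    return np.column_stack([np.cos(rad), np.sin(rad)])

# toy aggregated power curve: power roughly cubic in predicted wind speed
speeds = np.linspace(0.0, 10.0, 50)[:, None]
power = (speeds[:, 0] / 10.0) ** 3
centers = np.linspace(0.0, 10.0, 8)[:, None]
w = fit_rbf(speeds, power, centers, width=1.5)
err = np.abs(predict_rbf(speeds, centers, 1.5, w) - power).max()
print(err)
```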
-  Hainc L., Kukal J. (Czech Republic): Role of robust processing in ANN de-noising of 2D image, 163-176.
Image de-noising is a traditional task related to linear and non-linear 2D filtering methods. An artificial neural network (ANN) can also be used as a kind of sophisticated non-linear filter on a local pixel neighborhood (3x3). The disadvantage of linear systems and neural networks is their sensitivity to impulse (isolated) noise; that is why the median and other rank-based filters are better in this case. The opposite holds in the case of Gaussian noise, where mean filtering yields a higher signal-to-noise ratio (SNR) than the median filter. The first aim of our paper is to define the k-robustness of local de-noising. It is then easy to build up a new class of k-robust de-noising systems consisting of an input frame, robust preprocessing, an ANN and robust postprocessing. Implementation details related to signal processing and learning are also included. The final aim of our paper is to train 1-robust and 2-robust systems to have the maximum possible SNR for Gaussian noise on a real MR image of a human brain.
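The mean-versus-median behaviour on impulse and Gaussian noise that motivates k-robustness can be checked with a naive 3x3 filter (an illustrative experiment on synthetic images, not the authors' code; all names are ours):

```python
import numpy as np

def filter3x3(img, reducer):
    """Slide a 3x3 window over the image interior and apply `reducer`
    (np.mean for the mean filter, np.median for the rank-based median filter)."""
    out = img.astype(float).copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = reducer(img[i - 1:i + 2, j - 1:j + 2])
    return out

def snr_db(clean, noisy):
    """Signal-to-noise ratio in dB over the image interior."""
    c, n = clean[1:-1, 1:-1], noisy[1:-1, 1:-1]
    err = ((c - n) ** 2).sum()
    return np.inf if err == 0 else 10.0 * np.log10((c ** 2).sum() / err)

rng = np.random.default_rng(0)
clean = np.full((64, 64), 100.0)

impulse = clean.copy()                         # isolated (salt-and-pepper) noise
mask = rng.random(clean.shape) < 0.05
impulse[mask] = rng.choice([0.0, 255.0], size=mask.sum())

gauss = clean + 10.0 * rng.standard_normal(clean.shape)

med_imp = snr_db(clean, filter3x3(impulse, np.median))  # median removes impulses
avg_imp = snr_db(clean, filter3x3(impulse, np.mean))
avg_gau = snr_db(clean, filter3x3(gauss, np.mean))      # mean wins on Gaussian noise
med_gau = snr_db(clean, filter3x3(gauss, np.median))
print(med_imp, avg_imp, avg_gau, med_gau)
```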
-  Editorial, 1-4.
-  Dündar P., Aytaç A. (Turkey): On the neighbour vulnerability of recursive graphs, 5-14.
The vulnerability of a communication network measures the resistance of the network to disruption of operation after the failure of certain stations or communication links. Cable cuts, node interruptions, software errors or hardware failures and transmission failures at various points can interrupt service for long periods of time. High levels of service dependability have traditionally characterised communication services, so communication networks require greater degrees of stability, or less vulnerability. If we think of a graph G as modelling a network, the neighbour-integrity and edge-neighbour-integrity of a graph, together considered as the neighbour vulnerability, are two measures of graph vulnerability. In the neighbour-integrity, it is assumed that any failed vertex affects its neighbouring vertices; in the edge-neighbour-integrity, that any failed edge affects its neighbouring edges.
In this paper we study classes of recursive graphs that are used to design communication networks and to represent molecular structures, and we compute the neighbour-integrity (vertex and edge) of these recursive graphs.
-  Ding Gang, Zhong Shisheng (China): Aircraft engine lubricating oil monitoring by process neural network, 15-24.
Aircraft engine lubricating oil monitoring is essential for flight safety and also for reducing maintenance costs. The concentration of metal elements in the lubricating oil carries a large amount of information about the health condition of the aircraft engine. By monitoring the lubricating oil, maintenance engineers can judge the performance deterioration of the aircraft engine and can find latent mechanical faults in advance. But it is difficult for traditional methods to predict the tendency of the metal element concentrations in the lubricating oil. In this paper, a time series prediction method based on a process neural network (PNN) is proposed to solve this problem. The inputs and the connection weights of the PNN are time-varying functions. A corresponding learning algorithm is developed. To simplify the learning algorithm, a set of appropriate orthogonal basis functions is introduced to expand the input functions and the connection weight functions of the PNN. The effectiveness of the proposed method is demonstrated on the Mackey-Glass time series prediction. Finally, the proposed method is used to predict the Fe concentration in aircraft engine lubricating oil, and the test results indicate that the proposed model performs well and appears suitable for use as a predictive maintenance tool.
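The orthogonal-basis trick can be illustrated with Legendre polynomials: once the input and weight functions are expanded, the functional inner product inside a PNN neuron reduces to a weighted dot product of coefficients (a sketch under our own choice of basis; the paper does not specify this particular one):

```python
import numpy as np
from numpy.polynomial import legendre as L

def expand(samples, x, deg=6):
    """Least-squares coefficients of a sampled function in the Legendre basis on [-1, 1]."""
    return L.legfit(x, samples, deg)

def func_inner(c, d):
    """Functional inner product via coefficients: the Legendre polynomials are
    orthogonal with norm 2/(2n+1), so <f, g> = sum_n c_n d_n 2/(2n+1)."""
    n = np.arange(len(c))
    return float(np.sum(c * d * 2.0 / (2 * n + 1)))

x = np.linspace(-1.0, 1.0, 201)
f = np.sin(np.pi * x)   # a time-varying input function
w = x                   # a time-varying connection weight function
approx = func_inner(expand(f, x), expand(w, x))
exact = 2.0 / np.pi     # analytic value of the integral of x*sin(pi*x) over [-1, 1]
print(approx, exact)
```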
-  Faber J., Novák M., Tichý T., Svoboda P., Tatarinov V. (Czech Republic): Driver psychic state analysis based on EEG signals, 25-39.
EEG activities with open eyes in a quiet state (OA), during a pseudo-Raven's test (PRA), in a hypnagogic state (HYP) and in REM sleep (REM) are marked by similar, nearly flat curves. Further, we observed states with eyes closed (OC), with hyperventilation (HV), with calculation (CAL) and in NONREM 1 sleep (NR 1). During OA, the EEG spectrum contains some delta but only rudimentary alpha activity, while during PRA and HYP there is an increase in delta-theta and a significant decrease in alpha activity. Hence, not even the Fast Fourier Transform (FFT) can differentiate between the states with flat curves. This made us introduce another EEG curve analysis based on the coherence function (CF). We investigated 24 healthy volunteers aged 22-55 years, 19 men and 5 women, in the above-mentioned eight states with simultaneous EEG recording.
Vigilance was controlled by means of acoustic stimulation; reactivity was expressed as reaction time (ReT), i.e. the latency of response in milliseconds (ms). An imitation Raven's test (pseudo-Raven's = PRA) was used for psychic testing. Recorded in the afternoon hours after partial sleep deprivation, the EEG curve was evaluated optically as well as by FFT and CF. The FFT results have already been mentioned above. CF showed lower values during OA with up to 400 ms of ReT, a diffuse increase during HYP with ReT of 800-1200 ms, and a multifocal rise of delta activity in the EEG curve during PRA.
Consequently, EEG analysis can help differentiate between the above eight states, otherwise barely distinguishable with the naked eye, especially in cases with flat EEG curves. Using similar analyses, it is possible to discriminate all stages of NONREM and REM sleep without polysomnography.
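A Welch-style estimate of the coherence function used here can be sketched as follows (synthetic two-channel data with a shared 10 Hz "alpha" rhythm; an illustration, not the study's pipeline):

```python
import numpy as np

def coherence(x, y, fs, nperseg=256):
    """Welch-averaged magnitude-squared coherence of two equal-length signals."""
    win = np.hanning(nperseg)
    segs = len(x) // nperseg
    Pxx = np.zeros(nperseg // 2 + 1)
    Pyy = np.zeros(nperseg // 2 + 1)
    Pxy = np.zeros(nperseg // 2 + 1, dtype=complex)
    for k in range(segs):                  # average spectra over segments
        xs = x[k * nperseg:(k + 1) * nperseg] * win
        ys = y[k * nperseg:(k + 1) * nperseg] * win
        X, Y = np.fft.rfft(xs), np.fft.rfft(ys)
        Pxx += np.abs(X) ** 2
        Pyy += np.abs(Y) ** 2
        Pxy += X * np.conj(Y)
    f = np.fft.rfftfreq(nperseg, 1.0 / fs)
    return f, np.abs(Pxy) ** 2 / (Pxx * Pyy)

rng = np.random.default_rng(0)
fs = 128.0
t = np.arange(0, 8, 1 / fs)
alpha = np.sin(2 * np.pi * 10 * t)                # shared 10 Hz rhythm
ch1 = alpha + 0.5 * rng.standard_normal(t.size)   # two "electrodes" with
ch2 = alpha + 0.5 * rng.standard_normal(t.size)   # independent noise
f, cxy = coherence(ch1, ch2, fs)
c_alpha = cxy[(f >= 9.5) & (f <= 10.5)].mean()    # high: shared rhythm
c_beta = cxy[(f >= 20) & (f <= 40)].mean()        # low: only independent noise
print(c_alpha, c_beta)
```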
-  Maršálek T., Matoušek V., Mautner T., Merta M., Mouček R. (Czech Republic): Coherence of EEG signals and biometric signals of handwriting under influence of nicotine, alcohol, and light drugs,
The subject matter of the investigations carried out at the University of West Bohemia in Pilsen and described in this chapter is the objective evaluation of the possible coherence of EEG signals and signals of handwriting generated by a specially developed BiSP pen. The influence of nicotine, alcohol and light drugs on the vigilance and activity of human operators was investigated and evaluated; the results of the experiments carried out during the last five months are summarized in the last paragraph.
-  Svoboda P. (Czech Republic): Detection of vigilance changes by linear and nonlinear EEG signal analysis, 61-75.
This paper presents an advanced methodology for the analysis of the electroencephalographic (EEG) activity of the brain, aimed at monitoring the cognitive states of an operator. The methodology of EEG analysis is based on two main approaches: linear methods based on the Fourier transform, linear stochastic models and multi-covariance analysis, and nonlinear methods based on estimation of the state space attractor, state space dimension, D2 dimension and the Largest Lyapunov Exponent (LLE). The correct application of these methods is supported by a study of the stability, dynamics and spatial distribution of the EEG signal. The uncertainty of adopting a new methodology, such as the presented chaos theory, for EEG signal analysis is minimized by an adequate setup of the experiments and by evaluating the results against well-established power spectral estimates calculated by the Fourier transform. For a better understanding of the underlying processes behind EEG, basic mental states such as relaxation, simple and complex number counting, and the Raven test are analyzed and compared with the vigilance states. The averaged behavior of the computed markers of the EEG signal is studied with respect to a reaction time scale by evaluating a set of experiments. Thanks to this complex approach, the presented methodology is able to track the ongoing changes in EEG activity during the process of falling asleep. The automatic detection of vigilance changes is a consequent step of this work. The usability of such a device in various fields of everyday life is of high importance.
-  Tatarinov V. (Czech Republic): Classification of vigilance based on EEG signal analysis by use of neural network and statistical pattern recognition, 77-92.
A decrease of attention and an eventual microsleep of an operator of an artificial system is very dangerous, and its early detection can prevent great losses. This chapter deals with the classification of states of vigilance based on analysis of the electroencephalographic activity of the brain. Preprocessing of the data is done by the discrete Fourier transform. For the recognition, radial basis function (RBF) networks, learning vector quantization (LVQ), multi-layer perceptron networks, k-nearest neighbors and a method based on Bayesian theory are used. The coefficients of the Bayes classifier are found using maximum likelihood estimation. The experiments deal with the analysis of human vigilance while the subjects' eyes are open; the reaction to visual stimuli is then investigated. For this experiment, 10 volunteers were repeatedly measured. The chapter shows that it is possible to classify vigilance under such conditions.
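The DFT-preprocessing plus k-nearest-neighbour stage can be sketched on synthetic epochs (the band limits, rhythm frequencies and all names below are our illustrative choices, not the chapter's setup):

```python
import numpy as np

def band_powers(epoch, fs, bands):
    """Relative spectral power of an epoch in each frequency band — the
    DFT-based features computed before classification."""
    spec = np.abs(np.fft.rfft(epoch)) ** 2
    f = np.fft.rfftfreq(epoch.size, 1.0 / fs)
    total = spec.sum()
    return np.array([spec[(f >= lo) & (f < hi)].sum() / total for lo, hi in bands])

def knn_classify(train_X, train_y, x, k=3):
    """Plain k-nearest-neighbour vote in feature space."""
    d = np.linalg.norm(train_X - x, axis=1)
    votes = train_y[np.argsort(d)[:k]]
    labels, counts = np.unique(votes, return_counts=True)
    return int(labels[np.argmax(counts)])

fs = 128.0
t = np.arange(0, 2, 1 / fs)
bands = [(1, 4), (4, 8), (8, 13), (13, 30)]   # delta, theta, alpha, beta
rng = np.random.default_rng(1)

def epoch(freq):
    """Synthetic EEG epoch dominated by a single rhythm plus noise."""
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(t.size)

X = np.array([band_powers(epoch(10), fs, bands) for _ in range(5)]    # "alert": alpha
             + [band_powers(epoch(4), fs, bands) for _ in range(5)])  # "drowsy": theta
y = np.array([1] * 5 + [0] * 5)
print(knn_classify(X, y, band_powers(epoch(10), fs, bands)))
```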
-  Book review, 93-94.
-  Book review, 95-96.