Contents of Volume 21 (2011)
-  Coskun Ozkan, Celal Ozturk, Filiz Sunar, Dervis Karaboga (Turkey):
The Artificial Bee Colony algorithm in training Artificial Neural Network for oil spill detection, 473-492.
Full text DOI: 10.14311/NNW.2011.21.028
Nowadays, remote sensing technology is an essential tool for monitoring and detecting oil spills, allowing precautions to be taken and damage to the marine environment to be prevented. As an important branch of remote sensing, satellite-based synthetic aperture radar (SAR) imagery is the most effective way to accomplish these tasks. Since a marine surface covered by an oil spill appears as a dark object because of its much lower backscattered energy, the main problem is to recognize the dark objects caused by oil spills and to differentiate them from those formed by oceanographic and atmospheric conditions. In this study, Radarsat-1 images covering the Lebanese coast were employed for oil spill detection. For this purpose, a powerful classifier, the Artificial Neural Network Multilayer Perceptron (ANN MLP), was used. As the original contribution of the paper, the network was trained by a novel heuristic optimization algorithm known as the Artificial Bee Colony (ABC) method, besides the conventional Backpropagation (BP) and Levenberg-Marquardt (LM) learning algorithms. A comparison and evaluation of the different training algorithms with regard to reliability of detection and robustness show that for this problem the best result is achieved with the ABC algorithm.
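As a rough, self-contained sketch of the training idea summarized above, ABC can optimize MLP weights directly: each "food source" is a candidate weight vector and the fitness is the training error. Everything below (XOR as a stand-in for the SAR-derived features, the 2-4-1 network, and the colony parameters) is an illustrative assumption, not the paper's actual setup.

```python
import math
import random

random.seed(0)

# Toy training set (XOR) standing in for the SAR pixel features.
DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
N_IN, N_HID = 2, 4
DIM = N_HID * (N_IN + 1) + (N_HID + 1)  # weight count of a 2-4-1 MLP

def sig(s):
    if s > 60: return 1.0
    if s < -60: return 0.0
    return 1.0 / (1.0 + math.exp(-s))

def mlp_out(w, x):
    """Forward pass of a 2-4-1 MLP with sigmoid units, weights as a flat list."""
    p, h = 0, []
    for _ in range(N_HID):
        h.append(sig(sum(wi * xi for wi, xi in zip(w[p:p + N_IN], x)) + w[p + N_IN]))
        p += N_IN + 1
    return sig(sum(wi * hi for wi, hi in zip(w[p:p + N_HID], h)) + w[p + N_HID])

def mse(w):
    return sum((mlp_out(w, x) - t) ** 2 for x, t in DATA) / len(DATA)

def abc_train(n_src=20, limit=30, cycles=300):
    src = [[random.uniform(-2, 2) for _ in range(DIM)] for _ in range(n_src)]
    err = [mse(s) for s in src]
    trials = [0] * n_src
    for _ in range(cycles):
        for phase in ("employed", "onlooker"):
            fit = [1.0 / (1.0 + v) for v in err]
            total = sum(fit)
            for i in range(n_src):
                # Onlooker bees prefer richer food sources (better networks).
                if phase == "onlooker" and random.random() > n_src * fit[i] / total:
                    continue
                k = random.choice([j for j in range(n_src) if j != i])
                j = random.randrange(DIM)
                cand = src[i][:]
                cand[j] += random.uniform(-1, 1) * (src[i][j] - src[k][j])
                e_new = mse(cand)
                if e_new < err[i]:          # greedy selection
                    src[i], err[i], trials[i] = cand, e_new, 0
                else:
                    trials[i] += 1
        for i in range(n_src):              # scout phase: abandon stale sources
            if trials[i] > limit:
                src[i] = [random.uniform(-2, 2) for _ in range(DIM)]
                err[i], trials[i] = mse(src[i]), 0
    best = min(range(n_src), key=lambda i: err[i])
    return src[best], err[best]

w, e = abc_train()
print(f"best training MSE: {e:.4f}")
```

Scouts restart sources that stop improving, which is one way population-based methods like ABC avoid the local minima that can trap gradient-based training.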
-  Rastovic D. (Croatia):
Tokamak design as one sustainable system, 493-504.
Full text DOI: 10.14311/NNW.2011.21.029
We derive the phenomenon of Landau damping as a stationary point of entropy functions using Lagrangian methods. The steady states are described inside some interval of numbers with infinite fuzzy logic controllers. The results also hold for local equilibria, i.e. for some globally non-equilibrium functions.
-  Svítek M. (Czech Republic):
From quantum transfer functions to complex quantum circuits, 505-517.
Full text DOI: 10.14311/NNW.2011.21.030
The goal of the paper is to analyze the behavior of quantum systems that are connected into more complex circuits through serial, parallel or feedback ordering of various quantum subsystems. The Quantum State Transform (QST) is introduced to define a Quantum Transfer Function (QTF) that can better characterize the behavior of complex circuits, e.g. their stability. It is shown that ordering more general quantum systems into feedback can lead to the definition of hierarchical quantum systems that are very close to the well-known scale-free networks. Finally, all the identified mathematical instruments are used to define quantum information/knowledge circuits as orderings of 2-port quantum subsystems covering both input/output information flow and content.
-  Askarunisa A., Arockia Jackulin Punitha K., Ramaraj N. (India):
Test case generation and prioritization for composite web service based on OWL-S, 519-537.
Full text DOI: 10.14311/NNW.2011.21.031
Web services are the basic building blocks for businesses and differ from web applications. Testing of web services is difficult and increases cost due to the unavailability of source code. In previous work, atomic web services were tested based on their syntactic structure using the Web Service Description Language (WSDL). This paper proposes an automated testing framework for composite web services based on semantics, where the domain knowledge of the web services is described with the Protégé tool and the behavior of the entire business operation flow of the composite web service is provided by the Ontology Web Language for Services (OWL-S). Prioritization of test cases is performed based on various coverage criteria for composite web services. A series of experiments was conducted to assess the effects of prioritization on the coverage values, and the benefits of the prioritization techniques were demonstrated.
-  Dostálová S., Šonka K. (Czech Republic):
The influence of a short daytime nap and the influence of its timing on psychomotor efficiency, 539-550.
Full text DOI: 10.14311/NNW.2011.21.032
A decrease in the quality and quantity of sleep has a negative impact on efficiency during wakefulness, which shows particularly in people who interact with technological systems, for example system operators, vehicle drivers, etc. Daytime sleep can positively influence subsequent vigilance, but in the period immediately after sleep, psychomotor performance is affected by sleep inertia, whose intensity depends on the time and length of the sleep.
The aim of the study was to compare the daytime psychomotor performance of people suffering from sleep disorders with that of a control group of healthy people, and to test the hypothesis that a short, 15-minute sleep causes more pronounced sleep inertia at 3 p.m. than at 1 p.m.
Sleepiness was evaluated objectively in a group of 35 probands, consisting of 29 patients (13 women and 16 men) with excessive daytime sleepiness accompanying a sleep disorder and a control group of 6 healthy subjects, with the help of the Multiple Sleep Latency Test (MSLT), and subjectively with the help of an Alertness Visual Analogue Scale (VAS). Psychomotor performance was examined by the Psychomotor Vigilance Task (PVT).
We found no significant difference in the intensity of sleep inertia after a sleep at 1 p.m. and at 3 p.m. We proved a significant prolongation of reaction time and an increase in the number of lapses in the group with pathologically shortened sleep latency in the MSLT compared to the group with normal sleep latency. Our work also shows the difference between the subjective and objective evaluation of the subjects' sleepiness. Our results show that the prolonged reaction time and the increased number of lapses in the patient group are significant in all PVT examinations compared to the control group. Further, it is evident that the PVT is a more sensitive method for judging psychomotor performance, and indirectly sleepiness, than the MSLT.
These facts seem to be important especially for the two following reasons:
- They can help in recommending improved rest regimes for drivers.
- They can help in the search for a deeper understanding of the mechanisms of attention decline.
-  Ji Wu, Xiao-Lei Zhang (China):
Sparse kernel maximum margin clustering, 551-574.
Full text DOI: 10.14311/NNW.2011.21.033
Recently, a new clustering method called maximum margin clustering (MMC) was proposed. It extends support vector machine (SVM) ideas to unsupervised scenarios and has shown promising performance. Traditionally, it was formulated as a non-convex integer optimization problem which was difficult to solve. In order to alleviate the computational burden, the efficient cutting-plane MMC (CPMMC) [wang2010mmc] was proposed, which solves the MMC problem in its primal. However, CPMMC is restricted to the linear kernel. In this paper, we extend the CPMMC algorithm to nonlinear kernel scenarios, yielding the proposed sparse kernel MMC (SKMMC). Specifically, we propose to solve an adaptive threshold version of CPMMC in its dual and to alleviate its computational complexity by employing the cutting plane subspace pursuit (CPSP) algorithm [joachims2009sparse]. Eventually, the SKMMC algorithm works with nonlinear kernels at a linear computational complexity and a linear storage complexity. Our experimental results on several real-world data sets show that SKMMC achieves higher accuracies than existing MMC methods, and has lower time and storage demands than existing kernel MMC methods.
-  Contents volume 21 (2011), 575-577.
-  Author's index volume 21 (2011), 579-581.
-  Karhunen J. (Finland):
Robust PCA methods for complete and missing data, 357-392.
Full text DOI: 10.14311/NNW.2011.21.022
In this paper, we consider and introduce methods for robust principal component analysis (PCA), including cases where there are missing values in the data. PCA is a widely applied standard statistical method for data preprocessing, compression, and analysis. It is based on the second-order statistics of the data and is optimal for Gaussian data, but it is often applied to data sets having unknown or other types of probability distributions. PCA can be derived from minimization of the mean-square representation error or maximization of variances under orthonormality constraints. However, these quadratic criteria are sensitive to outliers in the data and to long-tailed distributions, which may considerably degrade the results given by PCA. We introduce robust methods for estimating either the PCA eigenvectors directly or the PCA subspace spanned by them. Experimental results show that our methods often provide better results than standard PCA when outliers are present in the data. Furthermore, we extend our methods to incomplete data with missing values. The problems arising in such cases have several features typical of nonlinear models.
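To illustrate why the quadratic criterion is outlier-sensitive and how downweighting helps, here is a toy sketch: a single gross outlier flips the leading principal component of near-collinear 2-D data, while a simple robust reweighting (distance from a median centre, chosen here for brevity and not one of the paper's specific estimators) recovers the true direction.

```python
import random
from statistics import median

random.seed(1)

def leading_eigvec(cov, iters=500):
    """Power iteration for the leading eigenvector of a 2x2 covariance matrix."""
    v = [1.0, 1.0]
    for _ in range(iters):
        w0 = cov[0][0] * v[0] + cov[0][1] * v[1]
        w1 = cov[1][0] * v[0] + cov[1][1] * v[1]
        n = (w0 * w0 + w1 * w1) ** 0.5
        v = [w0 / n, w1 / n]
    return v

def weighted_cov(pts, w):
    sw = sum(w)
    mx = sum(wi * x for wi, (x, y) in zip(w, pts)) / sw
    my = sum(wi * y for wi, (x, y) in zip(w, pts)) / sw
    cxx = sum(wi * (x - mx) ** 2 for wi, (x, y) in zip(w, pts)) / sw
    cyy = sum(wi * (y - my) ** 2 for wi, (x, y) in zip(w, pts)) / sw
    cxy = sum(wi * (x - mx) * (y - my) for wi, (x, y) in zip(w, pts)) / sw
    return [[cxx, cxy], [cxy, cyy]]

# Points along the x-axis plus one gross outlier.
pts = [(float(x), random.gauss(0, 0.1)) for x in range(-10, 11)]
pts.append((0.0, 100.0))

# Standard PCA: the single outlier drags the first PC toward the y-axis.
pc_plain = leading_eigvec(weighted_cov(pts, [1.0] * len(pts)))

# Robust variant: downweight points far from a robust (median) centre.
cx = median(x for x, _ in pts)
cy = median(y for _, y in pts)
d = [((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 for x, y in pts]
s = median(d) or 1.0
w = [1.0 / (1.0 + (di / s) ** 2) for di in d]
pc_robust = leading_eigvec(weighted_cov(pts, w))

print("plain PC:", pc_plain, " robust PC:", pc_robust)
```

The plain PC ends up nearly parallel to the y-axis (the outlier dominates the variance), whereas the reweighted PC stays aligned with the x-axis along which the bulk of the data lies.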
-  Li Leong-Kwan, Shao S. (Hong Kong, USA):
Discrete-time recurrent neural networks and its application to compression of infra-red spectrum, 393-406.
Full text DOI: 10.14311/NNW.2011.21.023
We study the discrete-time recurrent neural network derived from the leaky-integrator model and its application to compression of infra-red spectra. Our results show that the discrete-time leaky-integrator recurrent neural network (RNN) model can be used to approximate the continuous-time model and inherits its dynamical characteristics if a proper step size is chosen. Moreover, the discrete-time leaky-integrator RNN model is absolutely stable. By developing the double discrete integral method and employing the state space search algorithm for the discrete-time recurrent neural network model, we demonstrate, with quality spectra regenerated from the compressed data, how to compress infra-red spectra effectively. The only information stored is the parameters of the system and its initial states. The method offers an ideal setting for carrying out the recurrent neural network approach to chaotic cases of data compression.
-  Kramosil I. (Czech Republic):
On a particular class of lattice-valued possibilistic distributions, 407-427.
Full text DOI: 10.14311/NNW.2011.21.024
We investigate possibilistic distributions taking as their values sequences from the infinite Cartesian product of identical copies of a fixed finite subset of the unit interval of real numbers. Uniform and lexicographic partial orderings on the space of these sequences are defined and the related complete lattices are introduced. A lattice-valued entropy function is defined by a common pattern for both orderings, naturally leading to different entropy values depending on the particular ordering applied in the case under consideration. The mappings on possibilistic distributions with the uniform partial ordering under which the corresponding entropy values are conserved, as well as approximations of possibilistic distributions with respect to this entropy function, are also investigated.
-  Sarina Sulaiman, Siti Mariyam Shamsuddin, Ajith Abraham, Shahida Sulaiman (Malaysia, USA, Czech Republic):
Intelligent web caching using machine learning methods, 429-452.
Full text DOI: 10.14311/NNW.2011.21.025
Web caching is a technology for improving network traffic on the Internet: Web objects are stored temporarily for later retrieval. Three significant advantages of Web caching are reductions in bandwidth consumption, server load, and latency. These advantages make the Web less expensive while providing better performance. This research aims to introduce an advanced machine learning method for a classification problem in Web caching that requires a decision whether or not to cache Web objects in a proxy cache server. The challenges in this classification problem include identifying the ranking of attributes and improving the classification accuracy significantly. This research applies four methods, Classification and Regression Trees (CART), Multivariate Adaptive Regression Splines (MARS), Random Forest (RF) and TreeNet (TN), to classification in Web caching. The experimental results reveal that CART performs extremely well in classifying Web objects from the existing log data, with the size of a Web object as a significant attribute for Web cache performance enhancement.
-  Leso M., Musil T. (Czech Republic):
Safety core approach for the system with high demands for a safety and reliability design in a partially dynamically reconfigurable field-programmable gate array (FPGA), 453-460.
Full text DOI: 10.14311/NNW.2011.21.026
This paper deals with a new approach to designing microelectronic systems suitable for realizing massively parallel and neural structures under high demands on safety and reliability. The presented concept is based on the FPGA platform. The authors point out various kinds of faults that can occur during the system's operating cycle. Furthermore, they introduce the Safety Core principle and define the systems for which it is applicable. Possibilities of using partial dynamic reconfiguration are shown in the context of FPGA fabric testing and fault detection and correction.
-  Long Li, Jie Yang, Wei Wu (China):
Intuitionistic fuzzy Hopfield neural network and its stability, 461-472.
Full text DOI: 10.14311/NNW.2011.21.027
Intuitionistic fuzzy sets (IFSs) are a generalization of fuzzy sets obtained by adding an additional attribute parameter called the non-membership degree. In this paper, a max-min intuitionistic fuzzy Hopfield neural network (IFHNN) is proposed by combining IFSs with Hopfield neural networks, and its stability is investigated. It is shown that for any given weight matrix and any given initial intuitionistic fuzzy pattern, the iteration process of the IFHNN converges to a limit cycle. Furthermore, under suitable extra conditions, it converges to a stable point within finitely many iterations. Finally, a kind of Lyapunov stability of the stable points of the IFHNN is proved, which means that if the initial state of the network is close enough to a stable point, then the network states remain in a small neighborhood of that stable point. These stability results indicate the convergence of the memory process of the IFHNN. A numerical example is also provided to show the effectiveness of the Lyapunov stability of the IFHNN.
-  Darwish A., Abraham A. (Czech Republic, Egypt, USA):
The use of computational intelligence in digital watermarking: Review, challenges, and new trends, 277-297.
Full text DOI: 10.14311/NNW.2011.21.017
Digital Watermarking (DW) based on computational intelligence (CI) is currently attracting considerable interest from the research community. This article provides an overview of the research progress in applying CI methods to the problem of DW. The scope of this review will encompass core methods of CI, including rough sets (RS), fuzzy logic (FL), artificial neural networks (ANNs), genetic algorithms (GA), swarm intelligence (SI), and hybrid intelligent systems. The research contributions in each field are systematically summarized and compared to highlight promising new research directions. The findings of this review should provide useful insights into the current DW literature and be a good source for anyone who is interested in the application of CI approaches to DW systems or related fields. In addition, hybrid intelligent systems are a growing research area in CI.
-  Fábera V., Zelenka J., Jáneš V., Jánešová M. (Czech Republic):
Regular grammar transformation inspired by the graph distance using GA, 299-309.
Full text DOI: 10.14311/NNW.2011.21.018
This paper introduces a method for transforming one regular grammar into another. The transformation is based on the computation of a distance between regular grammars. Regular grammars are equivalent to finite state machines and are represented by oriented graphs or by transition matrices, respectively. Thus, the regular grammar distance is defined analogously to the distance between two graphs: it is measured as the minimal number of elementary operations on the grammar that transform the first grammar into the second one. The distance is computed by searching for an optimal mapping of the non-terminal symbols of both grammars. The computation itself is done by a genetic algorithm, because exhaustive evaluation of the mappings leads to combinatorial explosion. Transformation steps are derived from the differences in the matrices, which are identified during the computation of the distance.
-  el Hindi K., AL-Akhras M. (Jordan):
Smoothing decision boundaries to avoid overfitting in neural network training, 311-325.
Full text DOI: 10.14311/NNW.2011.21.019
This work addresses the problem of overfitting the training data. We suggest smoothing the decision boundaries by eliminating border instances from the training set before training Artificial Neural Networks (ANNs). This is achieved by using a variety of instance reduction techniques. A large number of experiments were performed using 21 benchmark data sets from the UCI machine learning repository; the experiments were performed with and without the introduction of noise into the data sets. Our empirical results show that using a noise-filtering algorithm to filter out border instances before training an ANN not only improves the classification accuracy but also speeds up the training process by reducing the number of training epochs. The effectiveness of the approach is more obvious when the training data contain noisy instances.
-  Kwak Y. T., Hwang J. W., Yoo C. J. (Korea):
A new damping strategy of Levenberg-Marquardt algorithm for Multilayer Perceptrons, 327-340.
Full text DOI: 10.14311/NNW.2011.21.020
In this paper, a new adjustment of the damping parameter of the Levenberg-Marquardt algorithm is proposed to save training time and to reduce error oscillations. The damping parameter of the Levenberg-Marquardt algorithm switches the method between a gradient descent method and the Gauss-Newton method; it affects the training speed and induces error oscillations when the decay rate is fixed. Therefore, our damping strategy decreases the damping parameter using the inner product between weight vectors, to make the Levenberg-Marquardt algorithm behave more like the Gauss-Newton method, and increases the damping parameter using a diagonally dominant matrix, to make it act more like a gradient descent method. We tested our method on two simple classification tasks and on handwritten digit recognition. Simulations showed that our method improved training speed and produced fewer error oscillations than other algorithms.
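For context, here is a minimal sketch of the conventional damping schedule the paper improves upon: multiply or divide the damping parameter by a fixed decay rate (10 here) after each rejected or accepted step. The small exponential curve-fitting problem stands in for MLP training; the model, data and constants are illustrative.

```python
import math

# Synthetic, noise-free data from y = 2 * exp(0.5 * x).
xs = [0.1 * i for i in range(20)]
ys = [2.0 * math.exp(0.5 * x) for x in xs]

def residuals(a, b):
    return [y - a * math.exp(b * x) for x, y in zip(xs, ys)]

def sse(a, b):
    return sum(r * r for r in residuals(a, b))

def lm_fit(a=1.0, b=0.1, lam=1e-2, steps=50):
    for _ in range(steps):
        r = residuals(a, b)
        ja = [math.exp(b * x) for x in xs]           # d f / d a
        jb = [a * x * math.exp(b * x) for x in xs]   # d f / d b
        # Damped normal equations: (J^T J + lam * diag(J^T J)) d = J^T r
        A11 = sum(v * v for v in ja)
        A22 = sum(v * v for v in jb)
        A12 = sum(u * v for u, v in zip(ja, jb))
        g1 = sum(u * v for u, v in zip(ja, r))
        g2 = sum(u * v for u, v in zip(jb, r))
        M11, M22 = A11 * (1 + lam), A22 * (1 + lam)
        det = M11 * M22 - A12 * A12
        d1 = (M22 * g1 - A12 * g2) / det
        d2 = (M11 * g2 - A12 * g1) / det
        # Guard against runaway exponents before evaluating the candidate.
        if abs(b + d2) < 50 and sse(a + d1, b + d2) < sse(a, b):
            a, b, lam = a + d1, b + d2, lam / 10   # accepted: toward Gauss-Newton
        else:
            lam *= 10                              # rejected: toward gradient descent
    return a, b

a, b = lm_fit()
print(f"fitted a = {a:.3f}, b = {b:.3f}")
```

The fixed multiply/divide-by-10 rule is exactly the kind of rigid decay rate whose error oscillations the paper's inner-product-based strategy is designed to avoid.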
-  Xiaofeng Ling, Xinbao Gong, Xiaogang Zang, Ronghong Jin (China):
A two-stage learning method to configure RBF centers and widths in dynamic environment employing immune operations, 341-355.
Full text DOI: 10.14311/NNW.2011.21.021
This paper proposes an immunity-based RBF training algorithm for nonlinear dynamic problems. Exploiting the locally-tuned structure of the RBF network through an immunological metaphor, a two-stage learning technique is proposed to configure the RBF centers and widths in the hidden layer. Inspired by the affinity maturation process of the immune response, an immune evolutionary mechanism (IEM) with memory operations is implemented in the learning stages to dynamically fine-tune the network performance. Experimental results demonstrate that the algorithm reaches good performance with relatively low computational effort in dynamic environments.
-  Kaan Yetilmezsoy, Bestamin Ozkaya, Mehmet Cakmakci (Turkey):
Artificial intelligence-based prediction models for environmental engineering, 193-218.
Full text DOI: 10.14311/NNW.2011.21.012
A literature survey was conducted to appraise the recent applications of artificial intelligence (AI)-based modeling studies in the environmental engineering field. A number of studies on artificial neural networks (ANN), fuzzy logic and adaptive neuro-fuzzy systems (ANFIS) were reviewed and important aspects of these models were highlighted. The results of the extensive literature survey showed that most AI-based prediction models were implemented for the solution of water/wastewater (55.7%) and air pollution (30.8%) related environmental problems, compared to solid waste (13.5%) management studies. The present literature review indicated that among the many types of ANNs, three-layer feed-forward back-propagation (FFBP) networks were considered one of the simplest and most widely used network types. In general, the Levenberg-Marquardt algorithm (LMA) was found to be the best-suited training algorithm for several complex and nonlinear real-life problems of environmental engineering. The literature survey showed that for water and wastewater treatment processes, most AI-based prediction models were introduced to estimate the performance of various biological and chemical treatment processes, and to control effluent pollutant loads and flow rates from a specific system. In air pollution related environmental problems, forecasting of ozone (O3) and nitrogen dioxide (NO2) levels, daily and/or hourly particulate matter (PM2.5 and PM10) emissions, and sulfur dioxide (SO2) and carbon monoxide (CO) concentrations was found to be widely modeled. For solid waste management applications, researchers conducted studies to model the weight of waste generation, solid waste composition, and the total rate of waste generation.
-  Kulkarni S. (Australia):
Fingerprint feature extraction and classification by learning the characteristics of fingerprint patterns, 219-226.
Full text DOI: 10.14311/NNW.2011.21.013
This paper presents a novel two-stage technique for fingerprint feature extraction and classification. Fingerprint images are considered as texture patterns and a Multi Layer Perceptron (MLP) is proposed as a feature extractor. The same fingerprint patterns are applied as input and output of the MLP, and the output of its single hidden layer is taken as the characteristic properties of the fingerprints. These features are applied as input to a classifier that classifies them into five broad classes. Preliminary experiments were conducted on a small benchmark database and the results were promising. The results were analyzed and compared with other similar existing techniques.
-  Plebe A., Mazzone M., De La Cruz Vivian M. (Italy):
A biologically inspired neural model of vision-language integration, 227-249.
Full text DOI: 10.14311/NNW.2011.21.014
One crucial step in the construction of the human representation of the world is found at the boundary between two basic stimuli: visual experience and the sounds of language. In the developmental stage when the ability of recognizing objects consolidates, and that of segmenting streams of sounds into familiar chunks emerges, the mind gradually grasps the idea that utterances are related to the visible entities of the world. The model presented here is an attempt to reproduce this process, in its basic form, simulating the visual and auditory pathways, and a portion of the prefrontal cortex putatively responsible for more abstract representations of object classes. Simulations have been performed with the model, using a set of images of 100 real world objects seen from many different viewpoints and waveforms of labels of various classes of objects. Subsequently, categorization processes with and without language are also compared.
-  Samsudin R., Saad P., Shabri A. (Malaysia):
A hybrid GMDH and least squares support vector machines in time series forecasting, 251-268.
Full text DOI: 10.14311/NNW.2011.21.015
Time series consist of complex nonlinear and chaotic patterns that are difficult to forecast. This paper proposes a novel hybrid forecasting model, known as GLSSVM, which combines the group method of data handling (GMDH) and the least squares support vector machine (LSSVM). The GMDH is used to determine the useful input variables for the LSSVM model, and the LSSVM model performs the time series forecasting. Three well-known time series data sets are used in this study to demonstrate the effectiveness of the forecasting model; they are utilized to forecast through an application aimed at handling real-life time series. The results obtained by the proposed model were compared with the results of the GMDH and LSSVM models. The experimental results indicate that the hybrid model is a powerful tool for modeling time series data and provides a promising technique for time series forecasting.
-  Svítek M. (Czech Republic):
Wave probabilistic information power, 269-276.
Full text DOI: 10.14311/NNW.2011.21.016
The paper summarizes results in the area of information physics, a new, progressively developing field of study that tries to introduce basic information variables into physics. New parameters, such as wave information flow, wave information/knowledge content and wave information impedance, are first defined and then represented by wave probabilistic functions. Next, the relations between the newly defined parameters are used to compute information power and to build wave information circuits covering feedbacks, etc.
-  Frolov A., Husek D., Bobrov P. (Russia, Czech Republic):
Comparison of four classification methods for brain-computer interface, 101-115.
Full text DOI: 10.14311/NNW.2011.21.007
This paper examines the performance of four classifiers for Brain-Computer Interface (BCI) systems based on multichannel EEG recordings. The classifiers are designed to distinguish EEG patterns corresponding to the performance of several mental tasks. The first one is the basic Bayesian classifier (BC), which exploits only the interchannel covariance matrices corresponding to the different mental tasks. The second classifier is also based on the Bayesian approach, but it takes the EEG frequency structure into account by exploiting interchannel covariance matrices estimated separately for several frequency bands (Multiband Bayesian Classifier, MBBC). The third one is based on the method of Multiclass Common Spatial Patterns (MCSP), exploiting only interchannel covariance matrices, as the BC does. The fourth one is based on Common Tensor Discriminant Analysis (CTDA), a generalization of MCSP which takes the EEG frequency structure into account. The MBBC and CTDA classifiers are shown to perform significantly better than the two other methods. The computational complexity of the four methods is estimated, and it is shown that for all classifiers an increase in classification quality is always accompanied by a significant increase in computational complexity.
-  Kadlecek B., Pejsa L., Kovanda J. (Czech Republic):
Vehicles' weight method under operation, 117-127.
Full text DOI: 10.14311/NNW.2011.21.008
The article contains an example detailing the application and an analysis of the accuracy of the method of continuous dynamic vehicle weighing during travel, using a telematic system. A model was built using input data obtained through laboratory experiments on a running vehicle. This model was then applied to study the effects of error probability in the measured input data on the resulting expected accuracy of vehicle weight measurement in a normal running regime. On this basis, a selection of the most important measured variables is proposed, together with the computational relationships among the used variables. By statistically processing a set of measured data taken during one vehicle journey, an extreme relative error of -10.7% can be attained with the given method for the dynamic determination of vehicle weight.
-  Mu-Yen Chen (Taiwan):
A hybrid model for business failure prediction -- Utilization of particle swarm optimization and support vector machines, 129-152.
Full text DOI: 10.14311/NNW.2011.21.009
Bankruptcy has long been an important topic in finance and accounting research. Recent headline bankruptcies have included Enron, Fannie Mae, Freddie Mac, Washington Mutual, Merrill Lynch, and Lehman Brothers. These bankruptcies and their financial fallout have become a serious public concern due to the huge influence these companies have on the real economy. Many researchers began investigating bankruptcy prediction back in the early 1970s. However, until recently, most research used prediction models based on traditional statistics. In recent years, newly-developed data mining techniques have been applied to various fields, including performance prediction systems. This research applies particle swarm optimization (PSO) to obtain suitable parameter settings for a support vector machine (SVM) model and to select a subset of beneficial features without reducing the classification accuracy rate. Experiments were conducted on an initial sample of 80 electronic companies listed on the Taiwan Stock Exchange Corporation (TSEC).
This paper makes four critical contributions: (1) The results indicate that the business cycle factor mainly affects financial prediction performance and has a greater influence than financial ratios. (2) The closer we get to the actual occurrence of financial distress, the higher the accuracy obtained, both with and without feature selection, under the business cycle approach. For example, PSO-SVM without feature selection provides 89.37% average correct cross-validation for two quarters prior to the occurrence of financial distress. (3) Our empirical results show that PSO integrated with SVM provides better classification accuracy than the grid search and genetic algorithm (GA) SVM approaches for classifying companies as normal or under threat. (4) The PSO-SVM model also provides better prediction accuracy than the Grid-SVM, GA-SVM, SVM, SOM, and SVR-SOM approaches on seven well-known UCI datasets. Therefore, this paper proposes that the PSO-SVM approach could be a more suitable method for predicting potential financial distress.
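A minimal sketch of the PSO ingredient of such a hybrid: the swarm searches a (log2 C, log2 gamma)-like plane for the parameter pair with the lowest "cross-validation error". Since reproducing actual SVM training is beyond a sketch, the error surface below is a smooth stand-in with a known optimum, and all swarm parameters are illustrative.

```python
import random

random.seed(42)

def cv_error(c, g):
    """Stand-in for an SVM k-fold cross-validation error surface over
    (log2 C, log2 gamma); optimum planted at (3, -2). In the real hybrid
    this would train and validate an SVM at each evaluation."""
    return (c - 3.0) ** 2 + (g + 2.0) ** 2 + 0.1

def pso(n=15, iters=100, w=0.7, c1=1.5, c2=1.5, lo=-10.0, hi=10.0):
    pos = [[random.uniform(lo, hi), random.uniform(lo, hi)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]                   # each particle's best position
    pcost = [cv_error(*p) for p in pos]
    gi = min(range(n), key=lambda i: pcost[i])
    gbest, gcost = pbest[gi][:], pcost[gi]        # swarm's best position
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                # Inertia + cognitive pull (own best) + social pull (swarm best).
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            cost = cv_error(*pos[i])
            if cost < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], cost
                if cost < gcost:
                    gbest, gcost = pos[i][:], cost
    return gbest, gcost

best, cost = pso()
print(f"best (log2 C, log2 gamma) = {best}, error = {cost:.3f}")
```

Swapping `cv_error` for a real k-fold SVM evaluation (and appending binary feature-mask dimensions to each particle) turns this sketch into the parameter-and-feature search the abstract describes.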
-  Andrejević Stošović M., Milovanović D., Litovski V.: (Serbia):
Hierarchical approach to diagnosis of mixed-mode circuits using artificial neural networks, 153-168.
Full text DOI: 10.14311/NNW.2011.21.010
Feed-forward artificial neural networks (ANNs) have been applied to the diagnosis of mixed-mode electronic circuits. In order to tackle the circuit complexity and to reduce the number of test points, a hierarchical approach to diagnosis generation was implemented with two levels of decision: the system level and the circuit level. For every level, using the simulation-before-test (SBT) approach, a fault dictionary was created first, containing data relating the fault code to the circuit response for a given input signal. ANNs were used to model the fault dictionaries. During the learning phase, the ANNs were considered as an approximation algorithm capturing the mapping enclosed within the fault dictionary. Later on, in the diagnostic phase, the ANNs were used as an algorithm for mapping the measured data into a fault code, which is equivalent to the fault dictionary search performed by some other diagnostic procedures. At the topmost level, the fault dictionary was split into parts, simplifying the implementation of the concept, and a voting system was created to decide which ANN's output is to be accepted as the final diagnostic statement. The approach was tested on the example of an analog-to-digital converter, and only one test point was used, i.e. the digital output. A full diversity of faults was considered in both the digital (stuck-at and delay faults) and analog (parametric and catastrophic faults) parts of the diagnosed system. Special attention was paid to faults related to the A/D and D/A interfaces within the circuit.
-  Faber J., Novák M. (Czech Republic):
Thalamo-cortical reverberation in the brain produces alpha and delta rhythms as iterative convergence of fuzzy cognition in an uncertain environment, 169-192.
Full text DOI: 10.14311/NNW.2011.21.011
In this paper, an extended analysis of human electroencephalographic (EEG) signals in the region of alpha rhythms is presented. The consequences of the existence of their spindle-like (fusiform) shape are discussed and verified on a set of experimental measurements. The hypothesis of a possible interrelation between the EEG alpha fuses and the tested person's psychical state and restrictions is presented.
-  Kadlecek B., Pejsa L., Kovanda J. (Czech Republic):
-  Editorial, 1-2.
-  AL-Akhras M., ALMomani I., Sleit A. (Jordan):
An improved E-model using artificial neural network VoIP quality predictor, 3-26.
Full text DOI: 10.14311/NNW.2011.21.001
Voice over Internet Protocol (VoIP) networks are an increasingly important field in the world of telecommunication due to the many advantages involved and the potential revenue. Measuring speech quality in VoIP networks is an important aspect of such networks for legal, commercial and technical reasons. The E-model is a widely used objective approach for measuring the quality, as it is applicable to monitoring live traffic automatically and non-intrusively. The E-model suffers from several drawbacks. Firstly, it considers the effect of packet loss on speech quality collectively, without examining the content of the speech signal to check whether the loss occurred in voiced or unvoiced parts of the signal. Secondly, it depends on subjective tests to calibrate its parameters, which makes it applicable only to the limited conditions corresponding to specific subjective experiments. In this paper, a solution is proposed to overcome these two problems. The proposed solution improves the accuracy of the E-model by differentiating between packet loss during speech and silence periods. It also avoids the need for subjective tests, which makes it extendable to new network conditions. The proposed solution is based on an Artificial Neural Network (ANN) approach and is compared with the accurate Perceptual Evaluation of Speech Quality (PESQ) model and the original E-model to confirm its accuracy. Several experiments are conducted to test the effectiveness of the proposed solution on two well-known ITU-T speech codecs, namely G.723.1 and G.729.
-  Askarunisa A., Ramaraj N. (India):
An algorithm for test data set reduction for web application testing, 27-43.
Full text DOI: 10.14311/NNW.2011.21.002
Web applications have become a critical component of the global information infrastructure, and it is important that they be validated to ensure their reliability. Exploiting user session data is a promising approach to testing Web applications. However, the effectiveness of the user session testing technique depends on the set of collected user session data: the wider this set, the greater the capability of the approach to detect failures, but also the greater the cost of collecting, analyzing and storing the data. In this paper, a technique for reducing a set of user sessions to an equivalent smaller one is implemented. This technique reduces a wide set of user sessions to an equivalent smaller set of sessions and pages sufficient to test a Web application effectively. User session reduction is carried out for several Web applications, such as the TCENet Web application, a portal application, social networking, online shopping and an online library, in order to validate the proposed technique; the technique is compared with the HGS, Random Reduction and Concept Lattice techniques to evaluate its efficiency.
-  Jie Yang, Wenyu Yang, Wei Wu (China):
A novel spiking perceptron that can solve XOR problem, 45-50.
Full text DOI: 10.14311/NNW.2011.21.003
In this short note, we introduce a new architecture for a spiking perceptron: the actual output is a linear combination of the firing time of the perceptron and the spiking intensity (the gradient of the state function) at the firing time. Numerical experiments show that this novel spiking perceptron can solve the XOR problem, whereas a classical spiking neuron usually needs a hidden layer to do so.
-  Jude Hemanth D., Kezi Selva Vijila C., Anitha J. (India):
A high speed back propagation neural network for multistage MR brain tumor image segmentation, 51-66.
Full text DOI: 10.14311/NNW.2011.21.004
Artificial neural networks (ANNs) are one of the most preferred artificial intelligence techniques for brain image segmentation. The commonly used ANN is the supervised one, namely the Back Propagation Neural Network (BPN). Even though BPNs guarantee high efficiency, they are computationally infeasible due to their long convergence time. In this work, the aspect of computational complexity is tackled using the proposed high-speed BPN algorithm (HSBPN). In this modified approach, the weight vectors are calculated without any training methodology. Magnetic resonance (MR) brain tumor images of three stages, namely severe, moderate and mild, are used in this work. An extensive feature set is extracted from these images and used as input for the neural network. A comparative analysis is performed between the conventional BPN and the HSBPN in terms of convergence time and segmentation efficiency. Experimental results show the superiority of the HSBPN in terms of these performance measures.
-  Svítek M. (Czech Republic):
Conditional combinations of quantum systems, 67-73.
Full text DOI: 10.14311/NNW.2011.21.005
The paper presents a new method of conditional combination of quantum systems that takes the external environmental conditions into account. As a practical example of the method presented here, the well-known Bell states are modeled as a conditional combination of two q-bits. An analogous approach can be applied in modeling conditional combinations of two or more quantum system sequences.
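The paper's conditional-combination formalism is its own contribution; as general background for the Bell-state example it mentions, a Bell state can be written in plain Python as the normalized sum of two tensor-product basis states (the helper `kron` and the name `phi_plus` are illustrative):

```python
import math

def kron(a, b):
    # tensor (Kronecker) product of two state vectors
    return [x * y for x in a for y in b]

# computational basis states of a single q-bit
ket0 = [1.0, 0.0]
ket1 = [0.0, 1.0]

# The Bell state |Phi+> = (|00> + |11>) / sqrt(2): an entangled
# superposition that cannot be factored into a tensor product
# of two one-q-bit states.
phi_plus = [(a + b) / math.sqrt(2.0)
            for a, b in zip(kron(ket0, ket0), kron(ket1, ket1))]

# normalization check: the squared amplitudes sum to 1
norm = sum(x * x for x in phi_plus)
```

The conditional combination of the paper would additionally weight such compositions by the external environmental conditions; the sketch above only shows the unconditional two-q-bit construction.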
-  Zhang H., Wu W. (China):
Convergence of split-complex backpropagation algorithm with momentum, 75-90.
Full text DOI: 10.14311/NNW.2011.21.006
This paper investigates a split-complex backpropagation algorithm with momentum (SCBPM) for complex-valued neural networks. Convergence results for SCBPM are proved under relaxed conditions and compared with the existing results. Monotonicity of the error function during the training iteration process is also guaranteed. Two numerical examples are given to support the theoretical findings.
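In split-complex backpropagation, the real and imaginary parts of each complex weight are updated as two independent real parameters, and momentum adds a fraction of the previous step to the current one. A minimal sketch for a single linear complex-valued neuron follows; the toy single-weight setting and the function name `train` are our own illustration, not the paper's SCBPM for full networks:

```python
# One complex weight w, linear neuron y = w * x, error E = 0.5 * |d - y|^2.
# Split-complex update: the real part wr and imaginary part wi of w are
# treated as separate real parameters; mu scales the momentum term.

def train(samples, eta=0.1, mu=0.5, epochs=50):
    wr, wi = 0.0, 0.0          # real and imaginary parts of the weight
    dwr, dwi = 0.0, 0.0        # previous updates (momentum memory)
    errors = []
    for _ in range(epochs):
        e = 0.0
        for x, d in samples:   # x (input) and d (target) are complex
            y = complex(wr, wi) * x
            r = d - y
            e += 0.5 * abs(r) ** 2
            # real-valued gradients of E with respect to wr and wi:
            # dE/dwr = -Re(r * conj(x)),  dE/dwi = -Im(r * conj(x))
            gr = -(r * x.conjugate()).real
            gi = -(r * x.conjugate()).imag
            dwr = -eta * gr + mu * dwr   # gradient step plus momentum
            dwi = -eta * gi + mu * dwi
            wr += dwr
            wi += dwi
        errors.append(e)
    return complex(wr, wi), errors
```

For a single sample (x, d) the weight converges toward d / x; the recorded per-epoch errors let one observe the decrease of the error function that the paper's convergence analysis establishes under its stated conditions.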
-  Contents volume 20 (2010), 91-94.
-  Author's index volume 20 (2010), 95-99.