Dipartimento di Scienze dell'Informazione



Neural Systems at DSI

Publications

M. A. Alberti, A. Bertoni, P. Campadelli, G. Grossi, and R. Posenato. A Neural Circuit for the Maximum 2-Satisfiability Problem. In Mateo Valero and Antonio Gonzalez, editors, Euromicro Workshop on Parallel and Distributed Processing, pages 319-323, Los Alamitos, CA, January 25-27, 1995. EUROMICRO, IEEE Computer Society Press.
Abstract. In this paper we discuss a uniform family of circuits, realizing neural networks, that approximately solve the maximum 2-satisfiability problem. An FPGA implementation for problem instances with 16 variables and 480 clauses is presented. The circuit performs well, solving problem instances in 20 microseconds with relative error less than 0.003.
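As a rough illustration of the idea behind such circuits — a Hopfield-style network descending an energy that counts unsatisfied clauses — here is a minimal Python sketch. It is a toy under stated assumptions (greedy asynchronous flips, random initial assignment), not the authors' FPGA circuit.

```python
import random

def unsatisfied(clauses, assignment):
    """Count 2-clauses with no true literal; +i / -i denote variable i
    asserted / negated."""
    def lit_true(lit):
        return assignment[abs(lit)] == (lit > 0)
    return sum(1 for a, b in clauses if not (lit_true(a) or lit_true(b)))

def max2sat_descent(clauses, n_vars, sweeps=100):
    """Greedy energy descent: flip a variable whenever the flip lowers
    the number of unsatisfied clauses, until a local minimum."""
    assignment = {i: random.random() < 0.5 for i in range(1, n_vars + 1)}
    for _ in range(sweeps):
        improved = False
        for i in random.sample(range(1, n_vars + 1), n_vars):
            before = unsatisfied(clauses, assignment)
            assignment[i] = not assignment[i]          # tentative flip
            if unsatisfied(clauses, assignment) < before:
                improved = True                        # keep the flip
            else:
                assignment[i] = not assignment[i]      # undo
        if not improved:
            break                                      # local minimum
    return assignment

# Example: (x1 v x2) & (~x1 v x2) & (x1 v ~x2)
clauses = [(1, 2), (-1, 2), (1, -2)]
a = max2sat_descent(clauses, n_vars=2)
print(a, "unsatisfied:", unsatisfied(clauses, a))
```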
B. Apolloni, G. Arduini, S. Braghieri, and A. Piccolboni. Neural cooperating procedures for fractal analysis of artificial drawings. Chaos, Solitons & Fractals, 5(3):747-760, 1995.
Abstract. We assess a procedure for discovering the mixing ratios of the transformations within a bank of affine transformations of a Probabilistic Iterated Fuzzy Function System that gives rise to artificial 256-color drawings. A first hypothesis on these ratios is obtained from a pseudo-ergodic estimate of the probabilities of matching special paths during image generation. The job of the neural network is to remove biases from these hypotheses after proper training. The reconstructed images are nearly identical to the originals, even on instances not used to train the network, and their representation through the above ratios achieves a very high compression rate.
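The image-generation step resembles the classical random-iteration ("chaos game") algorithm for iterated function systems, whose mixing ratios are exactly the probabilities being estimated. Below is a minimal sketch of that standard algorithm, not the paper's fuzzy variant; the Barnsley-fern maps are only a familiar example.

```python
import random

def chaos_game(maps, probs, n_points=10_000):
    """Random-iteration rendering of an IFS: repeatedly apply an affine
    map (a, b, c, d, e, f) chosen with the given mixing probabilities.
    Returns the visited points (the attractor, after a burn-in)."""
    x, y = 0.0, 0.0
    pts = []
    for t in range(n_points):
        a, b, c, d, e, f = random.choices(maps, weights=probs)[0]
        x, y = a * x + b * y + e, c * x + d * y + f
        if t > 20:                     # discard the transient
            pts.append((x, y))
    return pts

# Barnsley-fern-like example with its usual mixing ratios.
fern = [(0.0, 0.0, 0.0, 0.16, 0.0, 0.0),
        (0.85, 0.04, -0.04, 0.85, 0.0, 1.6),
        (0.2, -0.26, 0.23, 0.22, 0.0, 1.6),
        (-0.15, 0.28, 0.26, 0.24, 0.0, 0.44)]
print(len(chaos_game(fern, [0.01, 0.85, 0.07, 0.07])))
```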

B. Apolloni, A. Piccolboni, and E. Sozio. A hybrid symbolic-subsymbolic controller for complex dynamical systems. In Proceedings of EANN, Glasgow, 1995.

F. Archetti, S. Silani, and F. Stella. Neural networks for traffic forecasting. In Proceedings of AIRO95, Ancona, September 20-22, 1995.

Abstract. In this paper we outline the general problem of short-term forecasting of traffic and road conditions, a most demanding application, mainly because of the inherent structural complexity of the road network and the rapidly varying traffic conditions. Reliable, fast, and possibly simple forecasting methods are a critical ingredient of adaptive traffic control and route guidance systems. We describe a system that characterizes urban traffic conditions using the concept of a time-space window. This approach allows the design of specialized neural networks for each given time-space window.
P. Auer and N. Cesa-Bianchi. On-line Learning with Malicious Noise and the Closure Algorithm. Annals of Mathematics and Artificial Intelligence. To appear, 1995.

S. Ben-David, N. Cesa-Bianchi, D. Haussler, and P.M. Long. Characterizations of learnability for classes of {0,...,n}-valued functions. Journal of Computer and System Sciences, 50:74-86, 1995.

A. Bertoni, P. Campadelli, and R. Posenato. Polynomial Time Approximation of Min-Energy in Hopfield Networks. In Seventh Italian Workshop on Neural Nets "WIRN VIETRI-95". IIASS, World Scientific, May 18-20, 1995.

Abstract. We consider polynomial-time algorithms for finding approximate solutions of the ground state problem for Hopfield networks whose neurons are arranged on a two-level grid (the standard energy function is recalled below). We prove:
  1. there is a polynomial-time approximation algorithm with absolute error O(n/log n);
  2. there exist constants k and c such that every polynomial-time approximation algorithm has absolute error greater than k n^c infinitely often, unless P = NP.
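For reference, the ground state problem asks for a spin configuration minimizing the standard Hopfield energy; in common notation (not reproducing the paper's grid-specific weights),

\[
E(s) \;=\; -\tfrac{1}{2}\sum_{i \neq j} w_{ij}\, s_i s_j \;-\; \sum_i \theta_i s_i,
\qquad s_i \in \{-1,+1\},
\]

and the ground state is the configuration s attaining the minimum of E.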
A. Bertoni, P. Campadelli, R. Posenato, and M. Santini. A relation between approximability of the GROUND STATE problem on three-dimensional Ising spin glasses. Accepted at the Fifth Italian Conference on Theoretical Computer Science, May 1995.

G. Braga, G. Cattaneo, P. Flocchini, and C. Quaranta Vogliotti. Pattern Growth in Elementary Cellular Automata. Theoretical Computer Science, 145:1-26, 1995.

Abstract. A classification of elementary cellular automata (CA) based on their pattern growth is introduced. The classification is shown to be effective: there exists an algorithm to determine the class to which a given CA belongs. This algorithm is based on properties of the CA's local rule and does not require observing its evolution. Furthermore, necessary and sufficient conditions are given to detect all elementary CAs exhibiting shift-like behavior; these CAs have interesting dynamical properties and chaotic characteristics.
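An elementary CA is fully specified by an 8-entry local rule table — the object such a classification algorithm inspects. A minimal simulation sketch, assuming Wolfram-style rule numbering and cyclic boundary conditions (both illustrative choices, not taken from the paper):

```python
def eca_step(cells, rule):
    """One synchronous step of an elementary CA with Wolfram rule number
    `rule` (0-255) on a cyclic array of 0/1 cells."""
    n = len(cells)
    table = [(rule >> i) & 1 for i in range(8)]  # neighborhood -> new state
    return [table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]

# Rule 2 moves an isolated 1 one cell to the left each step -- a
# "shift-like" rule in the sense of the abstract.
row = [0] * 8 + [1] + [0] * 8
for _ in range(5):
    print("".join(".#"[c] for c in row))
    row = eca_step(row, 2)
```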
P. Campadelli, P. Mora, and R. Schettini. Color set selection for nominal color coding by Hopfield networks. The Visual Computer, 11:150-155, 1995.
Abstract. The specification of high-contrast color sets is a fundamental step toward the optimal use of color to represent qualitative data. The problem is formulated here as a combinatorial optimization problem on graphs, and a Hopfield neural network of analog neurons is designed to find approximate solutions. The network's performance, heuristically evaluated, appears satisfactory. The simplicity, versatility, and robustness of the algorithm make it a valid tool in nominal color coding.
P. Campadelli, D. Medici, and R. Schettini. Using Hopfield networks to segment color images. In 8th Int. Conf. on Image Analysis and Processing, San Remo, September 1995. To appear.
Abstract. In this paper we present two algorithms for color image segmentation based on Huang's idea of casting segmentation as the minimization of a suitable energy function for a Hopfield network. The first algorithm builds three different networks (one for each color feature) and then combines the results. The second builds a single network according to the number of clusters obtained by histogram analysis. Experimental results, heuristically and quantitatively evaluated, are encouraging.
G. Cattaneo, E. Formenti, and G. Mauri. Rule space transformations and one-dimensional cellular automata. In press in Proceedings of the Second International Conference on Developments in Language Theory, J. Dassow (ed.), World Scientific, Singapore, 1995.
Abstract. In this paper we study some classes of transformations of the one-dimensional CA rule space which preserve relevant properties of the CA behaviour, such as language complexity and dynamical evolution. Finally, we investigate a class of transformations that we call minimal, since the cardinality of its quotient set with respect to the induced equivalence relation is minimal.
N. Cesa-Bianchi, Y. Freund, D.P. Helmbold, and M.K. Warmuth. On-line Prediction and Conversion Strategies. Machine Learning. To appear, 1995.
Abstract. We study the problem of deterministically predicting boolean values by combining the boolean predictions of several experts. Previous on-line algorithms for this problem predict with the weighted majority of the experts' predictions. These algorithms give each expert an exponential weight. We show that it is better to use sums of binomials as weights. In particular, we present a deterministic algorithm using binomial weights that has a better worst case mistake bound than the best deterministic algorithm using exponential weights. The binomial weights naturally arise from a version space argument. We also show how both exponential and binomial weighting schemes can be used to make prediction algorithms robust against noise.
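The binomial-weight scheme is the paper's contribution and is not reproduced here; for orientation, below is a minimal Python sketch of the classical deterministic weighted-majority predictor with exponential weights that the paper improves on. The update factor beta is a generic placeholder, not a value tuned in the paper.

```python
def weighted_majority(expert_preds, outcomes, beta=0.5):
    """Deterministic weighted majority with exponential weights: predict
    the weighted-majority bit, then multiply the weight of each mistaken
    expert by beta. Returns the algorithm's mistake count."""
    n = len(expert_preds[0])              # number of experts
    w = [1.0] * n
    mistakes = 0
    for preds, y in zip(expert_preds, outcomes):
        vote1 = sum(wi for wi, p in zip(w, preds) if p == 1)
        guess = 1 if vote1 >= sum(w) / 2 else 0
        mistakes += (guess != y)
        w = [wi * (beta if p != y else 1.0) for wi, p in zip(w, preds)]
    return mistakes

# Three experts; the first is always right on this toy sequence.
preds = [(1, 0, 0), (0, 0, 1), (1, 1, 1)]
outcomes = [1, 0, 1]
print(weighted_majority(preds, outcomes))
```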
N. Cesa-Bianchi, P. Long, and M. Warmuth. Worst-Case Quadratic Loss Bounds for Prediction using Linear Functions and Gradient Descent. IEEE Trans. on Neural Networks. To appear.
Abstract. In this paper we study the performance of gradient descent when applied to the problem of on-line linear prediction in arbitrary inner product spaces. We show worst-case bounds on the sum of the squared prediction errors under various assumptions concerning the amount of a priori information about the sequence to predict. The algorithms we use are variants and extensions of on-line gradient descent. Whereas our algorithms always predict using linear functions as hypotheses, none of our results requires the data to be linearly related. In fact, the bounds proved on the total prediction loss are typically expressed as a function of the total loss of the best fixed linear predictor with bounded norm. All the upper bounds are tight to within constants. Matching lower bounds are provided in some cases. Finally, we apply our results to the problem of on-line prediction for classes of smooth functions.
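A minimal sketch of the basic on-line gradient descent rule for linear prediction under squared loss (often called the Widrow-Hoff rule), which is the starting point of the variants analyzed here; the constant learning rate eta is an illustrative placeholder, not a tuning from the paper.

```python
def widrow_hoff(data, eta=0.1):
    """Online gradient descent with linear hypotheses on the squared
    loss: predict w.x, suffer (w.x - y)^2, then update
    w <- w - eta * (w.x - y) * x. Returns the total squared loss."""
    total = 0.0
    w = None
    for x, y in data:
        if w is None:
            w = [0.0] * len(x)            # start from the zero vector
        yhat = sum(wi * xi for wi, xi in zip(w, x))
        total += (yhat - y) ** 2
        w = [wi - eta * (yhat - y) * xi for wi, xi in zip(w, x)]
    return total

# Noiseless target y = 2*x1 - x2: the per-trial loss shrinks over time.
seq = [((1.0, 0.0), 2.0), ((0.0, 1.0), -1.0), ((1.0, 1.0), 1.0)] * 20
print(widrow_hoff(seq))
```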
G. Grossi, R. Posenato, M. Costa, D. Palmisano, and E. Pasero. Fast Prototyping for Hardware Neural Networks. In Proceedings of the International Conference on Artificial Neural Networks. European Neural Network Society, October 9-13, 1995. To be published.
Abstract. Neural algorithms and architectures are rarely tested on actual hardware testbeds, owing to the high cost and long time required to develop neural chips. A Fast Prototyping Neural System (FPNS), in which neural architectures can be easily programmed and configured on programmable chips, is presented here. Two different case studies were developed and used as benchmarks for our system, showing good performance.
J.J. McKeown, F. Stella, and G. Hall. Some numerical aspects of the training problem for feed-forward neural nets. Submitted to Neural Networks, 1995.
Abstract. This paper presents the view that, in the literature on the neural network training problem, insufficient attention is given to establishing the numerical properties of the error function and its minima. The numerical conditioning of such solutions, as well as their optimality, is particularly important in the training context, as are uniqueness of the solution and the search for global optima. A brief outline of the main optimization algorithms is given to provide background for the discussion.
B. Apolloni, E. Battistini, D. De Falco. Higher order Boltzmann machines and entropy bounds. Submitted to Annals of Statistics, 1994.

B. Apolloni and A. Piccolboni. Processing of instrumental tracks via recurrent neural networks. In Informatica e Neuroscienze, 1994.

Abstract. We outline the capabilities of the recurrent neural network paradigm in processing dynamical signals, coming for instance from continuous measurement instrumentation. The key idea is to compare these tracks with the time evolution of the state vector of a neural network trained to simulate the process underlying the signal. The training procedure is described and some examples are supplied.
B. Apolloni and G. Ronchini. Dynamic sizing of multilayer perceptrons. Biological Cybernetics, 71:49-63, 1994.

A. Bertoni and P. Campadelli. On the approximability of the Energy Function. In International Conference on Artificial Neural Networks. Springer, 1994.

A. Bertoni, P. Campadelli, and G. Molteni. On the approximability of the Energy Function of Ising spin glasses. J. Phys. A: Math. Gen., 27:6719-6729, 1994.

P. Campadelli, P. Mora, and R. Schettini. Using Hopfield networks in the nominal color coding. In 12th IAPR International Conference on Pattern Recognition, volume 2, pages 112-116. IEEE Press, 1994.

Abstract. Nominal color coding is widely used by the image processing community to represent the output of a classification-segmentation process. In order to automate the color-class association we propose a suitable description scheme for both the color set to be used and the image to be coded. Using this description we then define a suitable energy function for Hopfield neural networks. The objective is to associate more "conspicuous" colors with less "visible" classes, assigning highly contrasting colors to classes with high "adjacency".
M. A. Alberti, P. Marelli, and R. Posenato. A Neural Algorithm for the Maximum Satisfiability Problem. In E.R. Caianiello, editor, Parallel Architectures and Neural Networks, pages 173-179, Singapore, May 1993. World Scientific Publishing Co.
Abstract. In this paper we design a modified version of a Hopfield network to provide an approximate solution of the maximum 2-satisfiability problem. We prove that the algorithm's relative error is 0.25, and we provide some statistics on the empirical results, showing the algorithm's satisfactory behaviour on average compared with an exact solution of the problem involving only sixteen variables. The interest of this new algorithm, called NAMS in the paper, lies in the ease of its implementation on neural hardware, thus offering an efficient and fast neural solution of the problem for a variety of applications.
N. Alon, S. Ben-David, N. Cesa-Bianchi, and D. Haussler. Scale-sensitive dimensions, uniform convergence, and learnability. In 34th Annual Symposium on Foundations of Computer Science, pages 292-301. IEEE Computer Society Press, 1993.
Abstract. Learnability in Valiant's PAC learning model has been shown to be strongly related to the existence of uniform laws of large numbers. These laws define a distribution-free convergence property of means to expectations uniformly over classes of random variables. Classes of real-valued functions enjoying such a property are also known as uniform Glivenko-Cantelli classes. In this paper we prove, through a generalization of Sauer's lemma that may be interesting in its own right, a new characterization of uniform Glivenko-Cantelli classes. Our characterization yields Dudley, Giné, and Zinn's previous characterization as a corollary. Furthermore, it is the first based on a simple combinatorial quantity generalizing the Vapnik-Chervonenkis dimension. We apply this result to obtain the weakest combinatorial condition known to imply PAC learnability in the statistical regression (or "agnostic") framework. Furthermore, we show a characterization of learnability in the probabilistic concept model, solving an open problem posed by Kearns and Schapire. These results show that the accuracy parameter plays a crucial role in determining the effective complexity of the learner's hypothesis class.
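The "simple combinatorial quantity" is a scale-sensitive generalization of the VC dimension, commonly known today as the fat-shattering dimension; a standard formulation (stated in its usual textbook form, not copied from the paper) is the following. A set \(\{x_1,\dots,x_d\}\) is \(\gamma\)-shattered by a class \(F\) if there exist witnesses \(r_1,\dots,r_d \in \mathbb{R}\) such that for every \(b \in \{0,1\}^d\) some \(f \in F\) satisfies

\[
f(x_i) \ge r_i + \gamma \ \text{ when } b_i = 1,
\qquad
f(x_i) \le r_i - \gamma \ \text{ when } b_i = 0,
\]

and \(\mathrm{fat}_F(\gamma)\) is the largest such \(d\).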
B. Apolloni, A. Bertoni, P. Campadelli, and G. Mauri. Formal model of learning by examples. In Parallel Architectures and Neural Networks - 5th Italian Workshop. World Scientific, 1993.

B. Apolloni, D. Crivelli, M. Amato. Neural time warping. In Proc. of Eurospeech 93, Berlin, 1993.

B. Apolloni, A. Fanelli, and D. Marini. Fractal analysis of artificial drawings through neural networks. Neural Network World, 1:73-86, 1993.

A. Bertoni, P. Campadelli, and A. Manfredi. Robust learning by means of single neurons. In Parallel Architectures and Neural Networks - 5th Italian Workshop. World Scientific, 1993.

A. Bertoni, P. Campadelli, and G. Mauri. PAC learning and neural networks. In Parallel Architectures and Neural Networks - 5th Italian Workshop. World Scientific, 1993.

G. Braga, G. Cattaneo, P. Flocchini, and G. Mauri. Complex Chaotic behavior of a class of Subshift Cellular Automata. Complex Systems, 7:269-296, 1993.

Abstract. A class of parameterized boolean, one-dimensional, bi-infinite cellular automata has been studied by observing their behavior as some parameters of the local function are changed. These automata are equivalent to a particular class of boolean neural networks, and the change in the parameters corresponds to a change in the symmetry of the connection matrix. The purpose is to analyze the different dynamics, beginning with a symmetric connection matrix and moving towards an antisymmetric one. We observed simple dynamics in the symmetric situation, whereas increasingly complex behavior appears as the antisymmetric case is approached. On the basis of these observations, we have identified a new class of cellular automata characterized by shift-like dynamics (Simple and Complex Subshift rules); these CAs correspond to the asymmetric situations and are chaotic dynamical systems.
N. Cesa-Bianchi, Y. Freund, D.P. Helmbold, D. Haussler, R. Schapire, and M.K. Warmuth. How to use expert advice. In 25th Annual ACM Symposium on the Theory of Computing, pages 382-391. ACM Press, 1993.
Abstract. We analyze algorithms that predict a binary value by combining the predictions of several prediction strategies, called "experts". Our analysis is for worst-case situations, i.e., we make no assumptions about the way the sequence of bits to be predicted is generated. We measure the performance of the algorithm by the difference between the expected number of mistakes it makes on the bit sequence and the expected number of mistakes made by the best expert on this sequence, where the expectation is taken with respect to the randomization in the predictions. We show that the minimum achievable difference is on the order of the square root of the number of mistakes of the best expert, and we give efficient algorithms that achieve this. Our upper and lower bounds have matching leading constants in most cases. We then show how this leads to certain kinds of pattern recognition/learning algorithms with performance bounds that improve on the best results currently known in this context. We also extend our analysis to the case in which log loss is used instead of the expected number of mistakes.
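A commonly cited form of this square-root bound (constants omitted; stated from the general expert-advice literature rather than copied from the paper), where \(L\) is the number of mistakes of the best of \(N\) experts:

\[
\mathbb{E}[\text{mistakes of the algorithm}] \;\le\; L + O\!\left(\sqrt{L \ln N} + \ln N\right).
\]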
P. Flocchini, F. Gardin, G. Mauri, M.P. Pensini, and P. Stofella. Combining image processing operators and neural networks in a face recognition system. International Journal of Pattern Recognition and Artificial Intelligence, 1993. To appear.

B. Apolloni. Design of Algorithms for Neural Networks: Supervised Learning. Computers and Artificial Intelligence, n. 5/92, pages 457-480, 1992.

B. Apolloni, A. Armelloni, G. Bollani, and D. De Falco. Some experimental results on asymmetric Boltzmann machines. In Proceedings of the Workshop on Complexity in Physics and Technology, M. Garrido and R. Vilela Mendes, eds., pages 151-166, World Scientific, 1992.

B. Apolloni, D. De Falco, and R. Pisano. The Boltzmann machine: experimental results and theoretical perspectives. In Proceedings of the International Workshop on Probabilistic Methods in Mathematical Physics, F. Guerra, M.I. Loffredo, and C. Marchioro, eds., pages 58-69, World Scientific, 1992.

B. Apolloni, G. Mauri, C. Trevisson, P. Valota, and A.M. Zanaboni. Learning to solve PP-attachment ambiguities in natural language processing through neural networks. In COMPEURO 92, pages 199-205, 1992.

B. Apolloni, G. Mauri, C. Trevisson, P. Valota, and A.M. Zanaboni. PP-attachment disambiguation in natural language processing through neural networks. In Int. Conf. Artificial Intelligence, Expert Systems, Natural Language, pages 93-108, 1992.

A. Bertoni, P. Campadelli, G. Mauri. PAC Learning and neural networks. In Structure: from Physics to General Systems, pages 48-63. World Scientific, 1992.

A. Bertoni, P. Campadelli, A. Morpurgo, and S. Panizza. Polynomial uniform convergence of relative frequencies to probabilities. In 1991 Conference on Advances in Neural Information Processing Systems, pages 904-911, 1992.

A. Bertoni, P. Campadelli, A. Morpurgo, and S. Panizza. Polynomial uniform convergence and polynomial-sample learnability. In 5th Annual ACM Workshop on Computational Learning Theory, pages 265-271. ACM Press, 1992.
A. Bertoni, P. Campadelli, and S. Panizza. Learnability by fixed distributions: an information theoretic approach. In 4th Italian Conference on Theoretical Computer Science, pages 68-79. World Scientific, 1992.

P. Campadelli and A. Morpurgo. Learning classes of linearly separable boolean functions from positive examples. International Journal of Foundations of Computer Science, 3(1):41-54, 1992.

Abstract. This paper deals with learnability from positive examples of subclasses of linearly separable boolean functions in the framework of the probably approximately correct learning model. We prove that classes of functions defined by binary threshold neurons with n inputs and g(n) unknown weights are learnable in polynomial time iff g(n) = O(log n). Moreover, we give upper and lower bounds on the size of the sample needed to learn, and we estimate the time complexity of the learning algorithm.
Keywords: learning from examples, pac learning, linearly separable functions, binary threshold neurons.
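For reference, a binary threshold neuron with weights \(w\) and threshold \(\theta\) computes (standard definition, not specific to this paper):

\[
f_{w,\theta}(x_1,\dots,x_n) \;=\;
\begin{cases}
1 & \text{if } \sum_{i=1}^{n} w_i x_i \ge \theta,\\
0 & \text{otherwise.}
\end{cases}
\]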
M.A. Alberti, P. Marelli, and M. Scagliotti. Graphical simulations for neural nets. In Parallel Architectures and Neural Networks - 3rd Italian Workshop, pages 265-269, 1991.
Abstract. The goal of this work is to provide a teaching tool for experimenting with and modelling neural nets. An environment that allows the user to design and simulate neural nets and to display the learning process has been implemented, with a highly interactive graphical user interface. A graphical editor allows the design of the net diagram, which can later be associated with a neural model after an automatic consistency check. The environment runs on an HP 9000/300 and is written in Objective-C. The work was developed as part of the COLOS project (COnceptual Learning Of Science), supported by EC-COMETT I; it has been partially financed by CNR "Sistemi informatici e calcolo parallelo" and by MURST.
B. Apolloni, A. Bertoni, P. Campadelli, and D. De Falco. Asymmetric Boltzmann machines. Biological Cybernetics, 66:61-70, 1991.

B. Apolloni and D. De Falco. Learning by Parallel Boltzmann Machines. IEEE Transactions on Information Theory, 37:1162-1164, 1991.

B. Apolloni and D. De Falco. Learning by asymmetric parallel Boltzmann machines. Neural Computation, 3:378-384, 1992.

A. Bertoni, R. Brivio, and P. Campadelli. Stabilization and size of attraction basins in symmetric networks. In Parallel Architectures and Neural Networks - 3rd Italian Workshop, pages 281-284, 1991.

A. Bertoni, P. Campadelli, and G. Mauri. Some notes on computational learning theory. EATCS Bulletin, (43):140-158, 1991.

A. Bertoni, P. Campadelli, A. Morpurgo, and R. Posenato. An algorithm for learning from positive examples classes of linearly separable boolean functions. In Parallel Architectures and Neural Networks - 3rd Italian Workshop, pages 265-269, 1991.

Abstract. This paper deals with learnability from positive examples of subclasses of linearly separable boolean functions in the framework of the probably approximately correct learning model. We present a program that learns the classes of functions defined by binary threshold neurons with n inputs and g(n) unknown weights, where g(n) = O(log n), together with the results of the simulation of the learning process.
G. Cattaneo and R. Nani. Lyapunov functions for attractive discrete time boolean dynamical systems. In Ohio State University - Università di Genova Joint Conference on New Trends in System Theory, page 202. Birkhäuser Verlag, 1991.

M.P. Pensini, G. Mauri, and F. Gardin. Flowshop and TSP. In Lecture Notes in Artificial Intelligence - WOPPLOT 89, pages 157-182. Springer, 1991.

B. Apolloni, G. Avanzini, N. Cesa-Bianchi, and G. Ronchini. Diagnosis of epilepsy via backpropagation. In International Joint Conference on Neural Networks, volume 2, pages 571-574. Lawrence Erlbaum Associates, 1990.

B. Apolloni, A. Bertoni, P. Campadelli, and D. De Falco. Binary networks with parallel updating. In Parallel Architectures and Neural Networks - 2nd Italian Workshop, pages 47-56. World Scientific, 1990.

B. Apolloni, A. Bertoni, P. Campadelli, and D. De Falco. Lyapunov functions and neural networks. In Parallel Architectures and Neural Networks - 2nd Italian Workshop, pages 53-68. World Scientific, 1990.

B. Apolloni, N. Cesa-Bianchi, and G. Ronchini. Training neural networks to break the knapsack cryptosystem. In Parallel Architectures and Neural Networks - 2nd Italian Workshop, pages 377-382. World Scientific, 1990.

B. Apolloni and D. De Falco. Learning by feedforward Boltzmann machines. In Theoretical Aspects of Neural Computing, M. Novak and E. Pelikan, eds., pages 94-102, World Scientific, 1990.

B. Apolloni, P. Donzelli, G. Gianelli, and A. Peverelli. Neural networks for satellites orbit control systems. In Parallel Architectures and Neural Networks - 2nd Italian Workshop, pages 281-293. World Scientific, 1990.

B. Apolloni and F. Perego. High precision control of a flexible arm through a neural network. In Proc. Int. Phoenix Conf. on Computers and Communications, 1990.

A. Bertoni and P. Campadelli. Analysis of parallel and sequential Boltzmann Machines. In International Neural Network Conference, page 1015. Kluwer, 1990.

P. Campadelli and A. Morpurgo. PAC and mistake bounded learning: a comparison and applications. In Algorithms and Complexity - First Italian Conference, pages 188-198. World Scientific, 1990.

Abstract. This paper deals with learnability of boolean functions from positive examples. A comparison is made between the efficiency measures defined in the "probably approximately correct learning" model and those defined in the "mistake bounded learning" model. A notion of strong learnability is introduced, and the classes of learnable and strongly learnable boolean functions are characterized. As an application of the results, the learnability from positive examples of some parameters of a formal neuron is investigated.
G. Cattaneo and R. Nani. A method for the analysis of parallel boolean networks with low connectivity. In Parallel Architectures and Neural Networks - 2nd Italian Workshop, pages 397-402. World Scientific, 1990.

N. Cesa-Bianchi. Learning the distribution in the extended PAC model. In Proceedings of the First International Workshop on Algorithmic Learning Theory, pages 236-246. Japanese Society for Artificial Intelligence, 1990.

C. Ferretti and G. Mauri. NNET: some tools for Neural NETworks simulation. In 9th IEEE Computer Society Press, pages 236-246, 1990.

B. Apolloni, A. Bertoni, P. Campadelli, and D. De Falco. Neural networks: Deterministic and stochastic dynamics. Lecture Notes in Physics, 355:27-41, 1989.

B. Apolloni and D. De Falco. Probabilistic aspects of machine learning. In Int. Conf. on White Noise Analysis, Mathematics and Applications, pages 22-30. Springer-Verlag, 1989.

A. Bertoni, P. Campadelli, and F. Grassani. Full parallelism in Boltzmann Machines. In Neuro-Nimes, pages 361-370, 1989.

A. Bertoni, P. Campadelli, and A. Morpurgo. Total stabilization in symmetric networks. In Proc. NEURO-NIMES 88, pages 183-194, Nîmes, France, Nov. 1988.

Abstract. Symmetric neural networks evolving according to arbitrary iteration modes are considered. For this class the STABILIZATION problem is proved to be PSPACE-complete; the same problem for the class of symmetric networks with parallel mode is proved to be coNP-complete.
