Stochastic GPU-based Multithread Implementation of Multiple Back-Propagation
Authors
Abstract
Graphics Processing Units (GPUs) have evolved into highly parallel, multi-threaded, many-core processors with enormous computational power. The GPU is especially well suited to pattern recognition problems that can be expressed as data-parallel computations, and thus provides a viable alternative to dedicated hardware in the neural network (NN) field, where long training times have always been a major drawback. In this paper, we propose a GPU implementation of the online (stochastic) training mode of the Multiple Back-Propagation (MBP) algorithm and compare it with the corresponding standalone CPU version and with the batch training mode GPU implementation. For a fair and unbiased comparison, we run the experiments on benchmarks from the machine learning and pattern recognition fields, and we show that the GPU outperforms the CPU, in particular for highly complex problems.
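The abstract contrasts the online (stochastic) training mode, which updates the weights after every pattern, with the batch mode, which accumulates gradients over the whole epoch before updating. The sketch below illustrates that distinction in plain NumPy for a single linear unit standing in for the back-propagated gradient of a full network; all function names, the learning rate, and the toy data are illustrative assumptions, not taken from the paper or the MBP implementation.

```python
import numpy as np

def grad(w, x, y):
    # Gradient of the squared error 0.5 * (w.x - y)^2 for one linear unit;
    # a stand-in for the gradient back-propagation computes in a full NN.
    return (w @ x - y) * x

def train_online(w, X, Y, lr):
    # Online (stochastic) mode: apply an update after every pattern.
    w = w.copy()
    for x, y in zip(X, Y):
        w -= lr * grad(w, x, y)
    return w

def train_batch(w, X, Y, lr):
    # Batch mode: accumulate gradients over the epoch, update once.
    g = sum(grad(w, x, y) for x, y in zip(X, Y))
    return w - lr * g

# Toy data: learn w so that w.x approximates y (one epoch of each mode).
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
Y = np.array([1.0, 2.0, 3.0])
w0 = np.zeros(2)
w_online = train_online(w0, X, Y, lr=0.1)
w_batch = train_batch(w0, X, Y, lr=0.1)
```

After a single epoch the two modes already yield different weights, which is why the batch-mode GPU kernels (one large gradient reduction per epoch) and the online-mode kernels (many small sequential updates) parallelize so differently.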
Keywords
GPU Computing, Parallel Programming, Neural Networks
Subject
GPU Computing, Neural networks
Conference
Second International Conference on Agents and Artificial Intelligence (ICAART 2010), pp. 271-276, January 2010