Deep feedforward neural networks in artificial intelligence
Deep learning in artificial neural networks with many layers has transformed many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing, and others.
According to one survey, the expression "deep learning" was introduced to the machine learning community by Rina Dechter in 1986 and gained traction after Igor Aizenberg and colleagues introduced it to artificial neural networks in 2000. The first functional deep learning networks were published by Alexey Grigorevich Ivakhnenko and V. G. Lapa in 1965; these networks were trained one layer at a time. Ivakhnenko's 1971 paper describes the learning of a deep feedforward multilayer perceptron with eight layers, deeper than many later networks. In 2006, a publication by Geoffrey Hinton and Ruslan Salakhutdinov introduced a way of pre-training many-layered feedforward neural networks (FNNs) one layer at a time, treating each layer in turn as an unsupervised restricted Boltzmann machine, and then using supervised backpropagation for fine-tuning. Like shallow artificial neural networks, deep neural networks can model complex non-linear relationships. Over the last few years, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a large output layer.
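To make the pre-training idea above concrete, here is a minimal NumPy sketch in the spirit of that approach (not the authors' original code): each hidden layer is first trained as a restricted Boltzmann machine with one step of contrastive divergence, and the stacked weights then initialize a feedforward network that is fine-tuned with supervised backpropagation. The layer sizes, learning rates, and toy data are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=10, lr=0.1):
    """Train one RBM with one-step contrastive divergence (CD-1)."""
    n_visible = data.shape[1]
    W = rng.normal(0, 0.01, (n_visible, n_hidden))
    b_h = np.zeros(n_hidden)
    b_v = np.zeros(n_visible)
    for _ in range(epochs):
        # Positive phase: hidden probabilities and a sample given the data.
        p_h = sigmoid(data @ W + b_h)
        h = (rng.random(p_h.shape) < p_h).astype(float)
        # Negative phase: one reconstruction step.
        p_v = sigmoid(h @ W.T + b_v)
        p_h_recon = sigmoid(p_v @ W + b_h)
        # Contrastive-divergence updates.
        W += lr * (data.T @ p_h - p_v.T @ p_h_recon) / len(data)
        b_h += lr * (p_h - p_h_recon).mean(axis=0)
        b_v += lr * (data - p_v).mean(axis=0)
    return W, b_h

# Toy binary data, toy labels, and an illustrative layer architecture.
X = (rng.random((200, 20)) < 0.3).astype(float)
y = (X.sum(axis=1) > 6).astype(float).reshape(-1, 1)
layer_sizes = [20, 16, 8, 4]

# Greedy layer-wise pre-training: each RBM is trained on the
# activations of the layer below it.
weights, biases, inp = [], [], X
for n_hidden in layer_sizes[1:]:
    W, b = train_rbm(inp, n_hidden)
    weights.append(W)
    biases.append(b)
    inp = sigmoid(inp @ W + b)

# Add a randomly initialized output layer, then fine-tune the whole
# stack with supervised backpropagation.
W_out = rng.normal(0, 0.01, (layer_sizes[-1], 1))
b_out = np.zeros(1)
lr = 0.5
for _ in range(200):
    # Forward pass through the pre-trained layers plus the output layer.
    acts = [X]
    for W, b in zip(weights, biases):
        acts.append(sigmoid(acts[-1] @ W + b))
    out = sigmoid(acts[-1] @ W_out + b_out)

    # Backward pass: cross-entropy gradient, then the chain rule per layer.
    delta = (out - y) / len(X)
    grads_W = [acts[-1].T @ delta]
    grads_b = [delta.sum(axis=0)]
    delta = (delta @ W_out.T) * acts[-1] * (1 - acts[-1])
    for i in range(len(weights) - 1, -1, -1):
        grads_W.insert(0, acts[i].T @ delta)
        grads_b.insert(0, delta.sum(axis=0))
        if i > 0:
            delta = (delta @ weights[i].T) * acts[i] * (1 - acts[i])

    # Gradient descent on every layer, including the pre-trained ones.
    W_out -= lr * grads_W[-1]
    b_out -= lr * grads_b[-1]
    for i in range(len(weights)):
        weights[i] -= lr * grads_W[i]
        biases[i] -= lr * grads_b[i]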
Deep learning also uses convolutional neural networks (CNNs), whose origins can be traced to the neocognitron introduced by Kunihiko Fukushima in 1980. In 1989, Yann LeCun and colleagues applied backpropagation to such an architecture. In the 2000s, in an industrial application, CNNs processed an estimated 10% to 20% of the checks written in the US. Since 2011, fast implementations of CNNs on GPUs have won many visual pattern recognition competitions.
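As a rough illustration of what a convolutional layer computes, the short sketch below slides a small filter over an image (convolution), applies a non-linearity, and downsamples the result with max pooling. The image, filter values, and sizes are illustrative assumptions, not any particular published network.

import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation of a single-channel image with one filter."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    """Non-overlapping max pooling; any ragged border is discarded."""
    h = (feature_map.shape[0] // size) * size
    w = (feature_map.shape[1] // size) * size
    trimmed = feature_map[:h, :w]
    return trimmed.reshape(h // size, size, w // size, size).max(axis=(1, 3))

image = np.random.default_rng(1).random((8, 8))      # toy 8x8 "image"
edge_filter = np.array([[1.0, -1.0], [1.0, -1.0]])   # simple vertical-edge detector
feature_map = np.maximum(conv2d(image, edge_filter), 0)  # convolution + ReLU
pooled = max_pool(feature_map)                           # spatial downsampling
print(pooled.shape)  # (3, 3)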
Deep feedforward neural networks were used in conjunction with reinforcement learning by AlphaGo, Google DeepMind's program that was the first to beat a professional human Go player.