Tools of Artificial Intelligence




1 Tools

1.1 Search and optimization
1.2 Logic
1.3 Probabilistic methods for uncertain reasoning
1.4 Classifiers and statistical learning methods
1.5 Neural networks
1.6 Deep feedforward neural networks
1.7 Deep recurrent neural networks
1.8 Control theory
1.9 Languages
1.10 Evaluating progress





Tools

In the course of 60+ years of research, AI has developed a large number of tools to solve the most difficult problems in computer science. A few of the most general of these methods are discussed below.


Search and optimization

Many problems in AI can in theory be solved by intelligently searching through many possible solutions: reasoning can be reduced to performing a search. For example, logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule. Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis. Robotics algorithms for moving limbs and grasping objects use local searches in configuration space. Many learning algorithms use search algorithms based on optimization.
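As a minimal illustration of this idea (not from the original article), the sketch below treats a toy "proof" as a path search through a graph of statements, where each edge stands for one inference step; the graph and node names are invented for the example.

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Exhaustively search a state space, breadth-first, for a path
    from start to goal. Returns the list of states on the path,
    or None if the goal is unreachable."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for nxt in graph.get(state, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# A toy "proof" graph: each edge is one application of an inference rule.
graph = {
    "premises": ["lemma1", "lemma2"],
    "lemma1": ["lemma3"],
    "lemma2": ["conclusion"],
    "lemma3": ["conclusion"],
}
print(bfs_path(graph, "premises", "conclusion"))
```

Breadth-first search finds the shortest such chain, here the two-step route through lemma2.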


Simple exhaustive searches are rarely sufficient for most real-world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or that never completes. The solution, for many problems, is to use heuristics or "rules of thumb" that eliminate choices unlikely to lead to the goal (called "pruning the search tree"). Heuristics supply the program with a best guess for the path on which the solution lies, limiting the search to a much smaller set of candidates.
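The contrast with exhaustive search can be sketched as greedy best-first search: always expand the state the heuristic ranks closest to the goal and never revisit pruned states. The toy state space and heuristic below are invented for the example.

```python
import heapq

def best_first(graph, start, goal, h):
    """Greedy best-first search: repeatedly expand the frontier state
    that the heuristic h considers closest to the goal."""
    frontier = [(h(start), start, [start])]
    visited = {start}
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt in graph.get(state, []):
            if nxt not in visited:
                visited.add(nxt)
                heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None

# Toy state space: from n you may step to n+1 or jump to n+2.
# The heuristic is the remaining distance to the goal state 10.
graph = {n: [m for m in (n + 1, n + 2) if m <= 10] for n in range(10)}
h = lambda n: 10 - n
print(best_first(graph, 0, 10, h))
```

Guided by the heuristic, the search takes the larger jump at every step and reaches the goal without exploring most of the space.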


A very different kind of search came to prominence in the 1990s, based on the mathematical theory of optimization. For many problems, it is possible to begin the search with some form of guess and then refine that guess incrementally until no more refinements can be made. These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape and then, by jumps or steps, keep moving our guess uphill until we reach the top. Other optimization algorithms are simulated annealing, beam search and random optimization.
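A minimal sketch of blind hill climbing, assuming a one-dimensional objective invented for the example: take a small random step, keep it only if it improves the objective, and repeat.

```python
import random

def hill_climb(f, x, step=0.1, iters=1000, seed=0):
    """Blind hill climbing: from a starting guess, repeatedly take a
    small random step and keep it only if it improves f."""
    rng = random.Random(seed)
    best = f(x)
    for _ in range(iters):
        candidate = x + rng.uniform(-step, step)
        value = f(candidate)
        if value > best:
            x, best = candidate, value
    return x, best

# A one-dimensional "landscape" whose single peak is at x = 3.
f = lambda x: -(x - 3.0) ** 2
x, top = hill_climb(f, x=0.0)
print(round(x, 2))
```

On this smooth landscape the climber ends up near the peak; on landscapes with many local maxima it can get stuck, which is what simulated annealing and random restarts try to avoid.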


Evolutionary computation uses a form of optimization search. For example, it may begin with a population of organisms (the guesses) and allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses). Forms of evolutionary computation include swarm intelligence algorithms (such as ant colony or particle swarm optimization) and evolutionary algorithms (such as genetic algorithms, gene expression programming, and genetic programming).
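The mutate/recombine/select loop can be sketched as a minimal genetic algorithm on the classic "one-max" toy problem (maximize the number of 1s in a bit string); all parameters here are illustrative choices, not from the article.

```python
import random

def one_max_ga(length=20, pop_size=40, mut_rate=0.02, generations=120, seed=3):
    """A minimal genetic algorithm: the fittest half survive, recombine
    by one-point crossover, and mutate; fitness = number of 1 bits."""
    rng = random.Random(seed)
    fitness = lambda bits: sum(bits)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # selection: fittest half
        children = [pop[0]]                     # elitism: keep the best
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)      # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < mut_rate) for bit in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = one_max_ga()
print(sum(best), "ones out of 20")
```

Selection pressure plus crossover drives the population toward the all-ones string within a few dozen generations.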


Logic

Logic is used for knowledge representation and problem solving, but it can be applied to other problems as well. For example, the satplan algorithm uses logic for planning, and inductive logic programming is a method for learning.


Several different forms of logic are used in AI research. Propositional or sentential logic is the logic of statements which can be true or false. First-order logic also allows the use of quantifiers and predicates, and can express facts about objects, their properties, and their relations to each other. Fuzzy logic is a version of first-order logic which allows the truth of a statement to be represented as a value between 0 and 1, rather than simply true (1) or false (0). Fuzzy systems can be used for uncertain reasoning and have been used in modern industrial and consumer product control systems. Subjective logic models uncertainty in a different and more explicit manner than fuzzy logic: a given binomial opinion satisfies belief + disbelief + uncertainty = 1 within a beta distribution. With this method, ignorance can be distinguished from probabilistic statements that an agent makes with high confidence.
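A small sketch of these ideas, using one standard choice of fuzzy operators (min/max/complement) and made-up truth values; the opinion at the end only illustrates the belief + disbelief + uncertainty = 1 constraint mentioned above.

```python
# Fuzzy truth values lie in [0, 1]; one standard choice of operators
# is min for AND, max for OR, and 1 - x for NOT.
def f_and(a, b): return min(a, b)
def f_or(a, b): return max(a, b)
def f_not(a): return 1.0 - a

tall = 0.8   # "this person is tall" is mostly true
fast = 0.3   # "this person is fast" is mostly false
print(f_and(tall, fast))        # truth of "tall and fast"
print(f_or(tall, f_not(fast)))  # truth of "tall or not fast"

# A subjective-logic binomial opinion separates belief, disbelief and
# uncertainty, with belief + disbelief + uncertainty = 1, so ignorance
# (high uncertainty) is distinct from a confident 50/50 probability.
opinion = {"belief": 0.2, "disbelief": 0.1, "uncertainty": 0.7}
print(sum(opinion.values()))
```

Note how the fuzzy AND of 0.8 and 0.3 is 0.3: the conjunction is only as true as its weakest conjunct.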


Default logics, non-monotonic logics and circumscription are forms of logic designed to help with default reasoning and the qualification problem. Several extensions of logic have been designed to handle specific domains of knowledge, such as: description logics; situation calculus, event calculus and fluent calculus (for representing events and time); causal calculus; belief calculus; and modal logics.


Probabilistic methods for uncertain reasoning

Many problems in AI (in reasoning, planning, learning, perception and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of powerful tools to solve these problems using methods from probability theory and economics.


Bayesian networks are a very general tool that can be used for a large number of problems: reasoning (using the Bayesian inference algorithm), learning (using the expectation-maximization algorithm), planning (using decision networks) and perception (using dynamic Bayesian networks). Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).
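At the heart of Bayesian inference is Bayes' rule; a minimal sketch, with hypothetical numbers (a 1% base rate, 90% sensitivity, 5% false-positive rate) chosen only for illustration:

```python
def bayes(prior, likelihood, likelihood_given_not):
    """Bayes' rule: posterior P(H|E) from the prior P(H),
    the likelihood P(E|H) and P(E|not H)."""
    evidence = likelihood * prior + likelihood_given_not * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical numbers: 1% base rate, a test with 90% sensitivity
# and a 5% false-positive rate.
posterior = bayes(prior=0.01, likelihood=0.9, likelihood_given_not=0.05)
print(round(posterior, 3))
```

Even with a fairly accurate test, the low prior keeps the posterior near 15%, the kind of counterintuitive update that makes explicit probabilistic reasoning valuable.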


A key concept from the science of economics is "utility": a measure of how valuable something is to an intelligent agent. Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis, and information value theory. These tools include models such as Markov decision processes, dynamic decision networks, game theory and mechanism design.
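As a sketch of how utility drives planning, the value-iteration snippet below solves a tiny hypothetical Markov decision process (a three-state chain invented for the example), computing the expected discounted utility of each state.

```python
def value_iteration(states, actions, transition, reward, gamma=0.9, eps=1e-6):
    """Value iteration for a small Markov decision process:
    V(s) = max_a sum_s' P(s'|s,a) * (R(s,a,s') + gamma * V(s'))."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                sum(p * (reward(s, a, s2) + gamma * V[s2])
                    for s2, p in transition(s, a).items())
                for a in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

# A three-state chain: "go" moves one step toward the absorbing goal,
# "stay" does nothing; reward 1 is earned on entering the goal.
states = ["s0", "s1", "goal"]
actions = ["stay", "go"]
nxt = {"s0": "s1", "s1": "goal", "goal": "goal"}
transition = lambda s, a: {nxt[s] if a == "go" else s: 1.0}
reward = lambda s, a, s2: 1.0 if s2 == "goal" and s != "goal" else 0.0
V = value_iteration(states, actions, transition, reward)
print({s: round(v, 3) for s, v in V.items()})
```

The state one step from the goal is worth 1, and the state two steps away is worth 0.9: the discount factor makes earlier rewards more valuable, which is exactly what the agent's plan optimizes.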


Classifiers and statistical learning methods

The simplest AI applications can be divided into two types: classifiers ("if shiny then diamond") and controllers ("if shiny then pick up"). Controllers do, however, also classify conditions before inferring actions, and therefore classification forms a central part of many AI systems. Classifiers are functions that use pattern matching to determine a closest match. They can be tuned according to examples, making them very attractive for use in AI. These examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience.


A classifier can be trained in various ways; there are many statistical and machine learning approaches. The most widely used classifiers are the neural network, kernel methods such as the support vector machine, the k-nearest neighbor algorithm, the Gaussian mixture model, the naive Bayes classifier, and the decision tree. The performance of these classifiers has been compared over a wide range of tasks. Classifier performance depends greatly on the characteristics of the data to be classified. There is no single classifier that works best on all given problems; this is also referred to as the "no free lunch" theorem. Determining a suitable classifier for a given problem is still more an art than a science.
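Of the classifiers listed, k-nearest neighbor is the easiest to sketch: a new observation gets the majority label among its k closest labeled examples. The tiny data set below is invented for the illustration.

```python
from collections import Counter

def knn_classify(train, point, k=3):
    """k-nearest-neighbor: label a new observation by majority vote
    among the k closest labeled examples (squared Euclidean distance)."""
    dist = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
    nearest = sorted(train, key=lambda ex: dist(ex[0], point))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# A tiny hypothetical data set of (features, class) observations.
train = [((1.0, 1.0), "a"), ((1.2, 0.8), "a"), ((0.9, 1.1), "a"),
         ((5.0, 5.0), "b"), ((5.2, 4.9), "b"), ((4.8, 5.1), "b")]
print(knn_classify(train, (1.1, 1.0)))
print(knn_classify(train, (5.1, 5.0)))
```

There is no training step at all: the "model" is the data set itself, which is why k-NN is often the first baseline tried on a new classification problem.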


Neural networks


A neural network is an interconnected group of nodes, akin to the vast network of neurons in the human brain.


Neural networks are modeled after the neurons in the human brain, where a trained algorithm determines an output response for input signals. The study of non-learning artificial neural networks began in the decade before the field of AI research was founded, in the work of Walter Pitts and Warren McCulloch. Frank Rosenblatt invented the perceptron, a learning network with a single layer, similar to the old concept of linear regression. Other early pioneers include Alexey Grigorevich Ivakhnenko, Teuvo Kohonen, Stephen Grossberg, Kunihiko Fukushima, Christoph von der Malsburg, David Willshaw, Shun-ichi Amari, Bernard Widrow, John Hopfield, Eduardo R. Caianiello, and others.
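Rosenblatt's perceptron rule can be sketched in a few lines: nudge the weights whenever the prediction is wrong. The example below learns logical AND, a linearly separable function; learning rate and epoch count are illustrative choices.

```python
def train_perceptron(data, epochs=20, lr=0.1):
    """The perceptron rule for a single-layer linear classifier:
    adjust the weights whenever the predicted class is wrong."""
    w = [0.0, 0.0]
    b = 0.0
    predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
    for _ in range(epochs):
        for x, target in data:
            error = target - predict(x)
            w[0] += lr * error * x[0]
            w[1] += lr * error * x[1]
            b += lr * error
    return predict

# Learn logical AND, which is linearly separable.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
and_gate = train_perceptron(data)
print([and_gate(x) for x, _ in data])
```

A single-layer perceptron cannot learn functions that are not linearly separable (the classic example is XOR), which is one reason multi-layer networks were developed.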


The main categories of networks are acyclic or feedforward neural networks (where the signal passes in only one direction) and recurrent neural networks (which allow feedback and short-term memories of previous input events). Among the most popular feedforward networks are perceptrons, multi-layer perceptrons and radial basis networks. Neural networks can be applied to the problem of intelligent control (for robotics) or learning, using such techniques as Hebbian learning, GMDH or competitive learning.


Today, neural networks are often trained by the backpropagation algorithm, which had been around since 1970 as the reverse mode of automatic differentiation published by Seppo Linnainmaa, and which was introduced to neural networks by Paul Werbos.


Hierarchical temporal memory is an approach that models some of the structural and algorithmic properties of the neocortex.


Deep feedforward neural networks

Deep learning in artificial neural networks with many layers has transformed many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing and others.


According to one survey, the expression "deep learning" was introduced to the machine learning community by Rina Dechter in 1986 and gained traction after Igor Aizenberg and colleagues introduced it to artificial neural networks in 2000. The first functional deep learning networks were published by Alexey Grigorevich Ivakhnenko and V. G. Lapa in 1965. These networks are trained one layer at a time. Ivakhnenko's 1971 paper describes the learning of a deep feedforward multilayer perceptron with eight layers, already much deeper than many later networks. In 2006, a publication by Geoffrey Hinton and Ruslan Salakhutdinov introduced another way of pre-training many-layered feedforward neural networks (FNNs) one layer at a time, treating each layer in turn as an unsupervised restricted Boltzmann machine, then using supervised backpropagation for fine-tuning. Like shallow artificial neural networks, deep neural networks can model complex non-linear relationships. Over the last few years, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer.


Deep learning often uses convolutional neural networks (CNNs), whose origins can be traced back to the neocognitron introduced by Kunihiko Fukushima in 1980. In 1989, Yann LeCun and colleagues applied backpropagation to such an architecture. In the 2000s, in an industrial application, CNNs already processed an estimated 10% to 20% of all the checks written in the US. Since 2011, fast implementations of CNNs on GPUs have won many visual pattern recognition competitions.


Deep feedforward neural networks were used in conjunction with reinforcement learning by AlphaGo, Google DeepMind's program that was the first to beat a professional human Go player.


Deep recurrent neural networks

Early on, deep learning was also applied to sequence learning with recurrent neural networks (RNNs), which are in theory general computers and can run arbitrary programs to process arbitrary sequences of inputs. The depth of an RNN is unlimited and depends on the length of its input sequence. RNNs can be trained by gradient descent but suffer from the vanishing gradient problem. In 1992, it was shown that unsupervised pre-training of a stack of recurrent neural networks can speed up subsequent supervised learning of deep sequential problems.
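The vanishing gradient problem can be sketched numerically: when an RNN is unrolled over T time steps, the gradient contains a product of T per-step factors, and if those factors are below 1 in magnitude the product shrinks exponentially with sequence length. The scalar factor 0.9 below stands in for a full Jacobian, purely for illustration.

```python
def backprop_factor(weight, steps):
    """Product of per-time-step gradient factors in an unrolled
    recurrent network: one multiplication per time step."""
    grad = 1.0
    for _ in range(steps):
        grad *= weight
    return grad

print(backprop_factor(0.9, 10))    # still a usable gradient
print(backprop_factor(0.9, 100))   # effectively vanished
```

After 100 steps the gradient signal is on the order of 1e-5, too small to drive learning about early inputs; LSTM's gating mechanism was designed precisely to keep this signal alive.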


Numerous researchers now use variants of a deep learning recurrent NN called the long short-term memory (LSTM) network, published by Hochreiter & Schmidhuber in 1997. LSTM is often trained by connectionist temporal classification (CTC). At Google, Microsoft and Baidu this approach has revolutionised speech recognition. For example, in 2015, Google's speech recognition experienced a dramatic performance jump of 49% through CTC-trained LSTM, which became available through Google Voice to billions of smartphone users. Google also used LSTM to improve machine translation, language modeling and multilingual language processing. LSTM combined with CNNs also improved automatic image captioning and a plethora of other applications.


Control theory

Control theory, the grandchild of cybernetics, has many important applications, especially in robotics.


Languages

AI researchers have developed several specialized languages for AI research, including Lisp and Prolog, and also make heavy use of general-purpose languages such as Python and C++.


Evaluating progress

In 1950, Alan Turing proposed a general procedure to test the intelligence of an agent, now known as the Turing test. This procedure allows almost all the major problems of artificial intelligence to be tested. However, it is a very difficult challenge and at present all agents fail.


Artificial intelligence can also be evaluated on specific problems such as small problems in chemistry, handwriting recognition and game-playing. Such tests have been termed subject-matter expert Turing tests. Smaller problems provide more achievable goals, and there are an ever-increasing number of positive results.


For example, performance at draughts (i.e. checkers) is optimal, performance at chess is high-human and nearing super-human (see computer chess: computers versus humans), and performance at many everyday tasks (such as recognizing a face or crossing a room without bumping into something) is sub-human.


A quite different approach measures machine intelligence through tests developed from mathematical definitions of intelligence. Examples of these kinds of tests began in the late nineties with intelligence tests using notions from Kolmogorov complexity and data compression. Two major advantages of mathematical definitions are their applicability to nonhuman intelligences and their absence of a requirement for human testers.
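Kolmogorov complexity itself is uncomputable, but real compressors give a practical stand-in. As an illustrative sketch (not a test from the literature cited here), the normalized compression distance below uses zlib to measure how much structure two strings share; the sample strings are invented.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance, a computable stand-in for
    Kolmogorov-complexity-based similarity: near 0 = very similar,
    near 1 = little shared structure."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog " * 20
b = b"the quick brown fox jumps over the lazy dog " * 20
c = bytes(range(256)) * 4
print(ncd(a, b))   # near 0: the strings share all their structure
print(ncd(a, c))   # much larger: little shared structure
```

The intuition is that if knowing x makes y cheap to describe, the two share regularity; compression-based intelligence tests build on the same link between prediction, compression and understanding.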


A derivative of the Turing test is the Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA). As the name implies, this helps to determine that a user is an actual person and not a computer posing as a human. In contrast to the standard Turing test, a CAPTCHA is administered by a machine and targeted at a human, as opposed to being administered by a human and targeted at a machine. A computer asks a user to complete a simple test and then generates a grade for that test. Computers are unable to solve the problem, so correct solutions are deemed to be the result of a person taking the test. A common type of CAPTCHA is the test that requires the typing of distorted letters, numbers or symbols that appear in an image undecipherable by a computer.







