Philosophy and ethics of artificial intelligence




1 Philosophy and ethics

1.1 The limits of artificial general intelligence
1.2 Potential risks and moral reasoning

1.2.1 Existential risk
1.2.2 Devaluation of humanity
1.2.3 Decrease in demand for human labor
1.2.4 Artificial moral agents
1.2.5 Machine ethics
1.2.6 Malevolent and friendly AI

1.3 Machine consciousness, sentience and mind

1.3.1 Consciousness
1.3.2 Computationalism and functionalism
1.3.3 Strong AI hypothesis
1.3.4 Robot rights

1.4 Superintelligence

1.4.1 Technological singularity
1.4.2 Transhumanism






Philosophy and ethics

There are three philosophical questions related to AI:



The limits of artificial general intelligence

Can a machine be intelligent? Can it "think"?



Alan Turing's "polite convention"
We need not decide if a machine can "think"; we need only decide if a machine can act as intelligently as a human being. This approach to the philosophical problems associated with artificial intelligence forms the basis of the Turing test.


The Dartmouth proposal
"Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it." This conjecture was printed in the proposal for the Dartmouth conference of 1956, and represents the position of most working AI researchers.


Newell and Simon's physical symbol system hypothesis
"A physical symbol system has the necessary and sufficient means of general intelligent action." Newell and Simon argue that intelligence consists of formal operations on symbols. Hubert Dreyfus argued that, on the contrary, human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a "feel" for the situation rather than explicit symbolic knowledge. (See Dreyfus' critique of AI.)


Gödelian arguments
Gödel himself, John Lucas (in 1961) and Roger Penrose (in a more detailed argument from 1989 onwards) made highly technical arguments that human mathematicians can consistently see the truth of their own "Gödel statements" and therefore have computational abilities beyond those of mechanical Turing machines. However, the modern consensus in the scientific and mathematical community is that these Gödelian arguments fail.


The artificial brain argument
The brain can be simulated by machines and, because brains are intelligent, simulated brains must also be intelligent; thus machines can be intelligent. Hans Moravec, Ray Kurzweil and others have argued that it is technologically feasible to copy the brain directly into hardware and software, and that such a simulation will be essentially identical to the original.


The AI effect
Machines are already intelligent, but observers have failed to recognize it. When Deep Blue beat Garry Kasparov in chess, the machine was acting intelligently. However, onlookers commonly discount the behavior of an artificial intelligence program by arguing that it is not "real" intelligence after all; thus "real" intelligence becomes whatever intelligent behavior people can do that machines still cannot. This is known as the AI effect: "AI is whatever hasn't been done yet."

Potential risks and moral reasoning

Widespread use of artificial intelligence could have unintended consequences that are dangerous or undesirable. Scientists from the Future of Life Institute, among others, have described some short-term research goals: to see how AI influences the economy, the laws and ethics involved with AI, and how to minimize AI security risks. For the long term, the scientists have proposed continuing to optimize function while minimizing the possible security risks that come along with new technologies.


Machines with intelligence have the potential to use that intelligence to make ethical decisions. Research in this area includes machine ethics, artificial moral agents, and the study of malevolent vs. friendly AI.


Existential risk

"The development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence, it will take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete and would be superseded." (Stephen Hawking)




A common concern about the development of artificial intelligence is the potential threat it could pose to humanity. This concern has gained attention after mentions by celebrities including Stephen Hawking, Bill Gates, and Elon Musk. A group of prominent tech titans, including Peter Thiel, Amazon Web Services, and Musk, have committed $1 billion to OpenAI, a nonprofit company aimed at championing responsible AI development. The opinion of experts within the field of artificial intelligence is mixed, with sizable fractions both concerned and unconcerned by the risk from an eventual superhumanly capable AI.


In his book Superintelligence, Nick Bostrom provides an argument that artificial intelligence will pose a threat to mankind. He argues that a sufficiently intelligent AI, if it chooses actions based on achieving some goal, will exhibit convergent behavior such as acquiring resources or protecting itself from being shut down. If this AI's goals do not fully reflect humanity's (one example is an AI told to compute as many digits of pi as possible), it might harm humanity in order to acquire more resources or prevent itself from being shut down, ultimately to better achieve its goal.


For this danger to be realized, the hypothetical AI would have to overpower or out-think all of humanity, which a minority of experts argue is a possibility far enough in the future to not be worth researching. Other counterarguments revolve around humans being either intrinsically or convergently valuable from the perspective of an artificial intelligence.


Concern over risk from artificial intelligence has led to some high-profile donations and investments. In January 2015, Elon Musk donated ten million dollars to the Future of Life Institute to fund research on understanding AI decision making. The goal of the institute is to "grow wisdom with which we manage" the growing power of technology. Musk also funds companies developing artificial intelligence, such as Google DeepMind and Vicarious, to "just keep an eye on what's going on with artificial intelligence. I think there is potentially a dangerous outcome there."


The development of militarized artificial intelligence is a related concern. Currently, 50+ countries are researching battlefield robots, including the United States, China, Russia, and the United Kingdom. Many people concerned about risk from superintelligent AI also want to limit the use of artificial soldiers.


Devaluation of humanity

Joseph Weizenbaum wrote that AI applications cannot, by definition, successfully simulate genuine human empathy, and that the use of AI technology in fields such as customer service or psychotherapy is misguided. Weizenbaum was also bothered that AI researchers (and some philosophers) were willing to view the human mind as nothing more than a computer program (a position now known as computationalism). To Weizenbaum these points suggest that AI research devalues human life.


Decrease in demand for human labor

Martin Ford, author of The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future, and others argue that specialized artificial intelligence applications, robotics, and other forms of automation will ultimately result in significant unemployment as machines begin to match and exceed the capability of workers to perform most routine and repetitive jobs. Ford predicts that many knowledge-based occupations, and in particular entry-level jobs, will be increasingly susceptible to automation via expert systems, machine learning, and other AI-enhanced applications. AI-based applications may also be used to amplify the capabilities of low-wage offshore workers, making it more feasible to outsource knowledge work.


Artificial moral agents

This raises the issue of how ethically a machine should behave towards both humans and other AI agents. The issue was addressed by Wendell Wallach in his book Moral Machines, in which he introduced the concept of artificial moral agents (AMAs). For Wallach, AMAs have become a part of the research landscape of artificial intelligence, guided by two central questions, which he identifies as "Does humanity want computers making moral decisions?" and "Can (ro)bots really be moral?". For Wallach, the question is not centered on whether machines can demonstrate the equivalent of moral behavior, but rather on the constraints that society may place on the development of AMAs.


Machine ethics

The field of machine ethics is concerned with giving machines ethical principles, or a procedure for discovering a way to resolve the ethical dilemmas they might encounter, enabling them to function in an ethically responsible manner through their own ethical decision making. The field was delineated in the AAAI Fall 2005 Symposium on Machine Ethics: "Past research concerning the relationship between technology and ethics has largely focused on responsible and irresponsible use of technology by human beings, with a few people being interested in how human beings ought to treat machines. In all cases, only human beings have engaged in ethical reasoning. The time has come for adding an ethical dimension to at least some machines. Recognition of the ethical ramifications of behavior involving machines, as well as recent and potential developments in machine autonomy, necessitate this. In contrast to computer hacking, software property issues, privacy issues and other topics normally ascribed to computer ethics, machine ethics is concerned with the behavior of machines towards human users and other machines. Research in machine ethics is key to alleviating concerns with autonomous systems; it could be argued that the notion of autonomous machines without such a dimension is at the root of all fear concerning machine intelligence. Further, investigation of machine ethics could enable the discovery of problems with current ethical theories, advancing our thinking about ethics." Machine ethics is sometimes referred to as machine morality, computational ethics, or computational morality. A variety of perspectives on this nascent field can be found in the collected edition Machine Ethics, which stems from the AAAI Fall 2005 Symposium on Machine Ethics.


Malevolent and friendly AI

Political scientist Charles T. Rubin believes that AI can be neither designed nor guaranteed to be benevolent. He argues that "any sufficiently advanced benevolence may be indistinguishable from malevolence." Humans should not assume that machines or robots would treat us favorably, because there is no a priori reason to believe that they would be sympathetic to our system of morality, which has evolved along with our particular biology (which AIs would not share). Hyper-intelligent software may not necessarily decide to support the continued existence of humanity, and would be extremely difficult to stop. This topic has also recently begun to be discussed in academic publications as a real source of risks to civilization, to humans, and to planet Earth.


Physicist Stephen Hawking, Microsoft founder Bill Gates, and SpaceX founder Elon Musk have expressed concerns about the possibility that AI could evolve to the point that humans could not control it, with Hawking theorizing that this could "spell the end of the human race".


One proposal to deal with this is to ensure that the first generally intelligent AI is "friendly AI", which would then be able to control subsequently developed AIs. Some question whether this kind of check could really remain in place.


Leading AI researcher Rodney Brooks writes, "I think it is a mistake to be worrying about us developing malevolent AI anytime in the next few hundred years. I think the worry stems from a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI, and the enormity and complexity of building sentient volitional intelligence."


Machine consciousness, sentience and mind

If an AI system replicates all key aspects of human intelligence, will that system also be sentient? Will it have a mind that has conscious experiences? This question is closely related to the philosophical problem of the nature of human consciousness, generally referred to as the hard problem of consciousness.


Consciousness


Computationalism and functionalism

Computationalism is the position in the philosophy of mind that the human mind or the human brain (or both) is an information processing system and that thinking is a form of computing. Computationalism argues that the relationship between mind and body is similar or identical to the relationship between software and hardware, and thus may be a solution to the mind-body problem. This philosophical position was inspired by the work of AI researchers and cognitive scientists in the 1960s and was originally proposed by philosophers Jerry Fodor and Hilary Putnam.


Strong AI hypothesis

The philosophical position that John Searle has named "strong AI" states: "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds." Searle counters this assertion with his Chinese room argument, which asks us to look inside the computer and try to find where the "mind" might be.


Robot rights

Mary Shelley's Frankenstein considers a key issue in the ethics of artificial intelligence: if a machine can be created that has intelligence, could it also feel? If it can feel, does it have the same rights as a human? The idea also appears in modern science fiction, such as the film A.I.: Artificial Intelligence, in which humanoid machines have the ability to feel emotions. This issue, now known as "robot rights", is currently being considered by, for example, California's Institute for the Future, although many critics believe that the discussion is premature. Some critics of transhumanism argue that any hypothetical robot rights would lie on a spectrum with animal rights and human rights. The subject is profoundly discussed in the 2010 documentary film Plug & Pray.


Superintelligence

Are there limits to how intelligent machines, or human-machine hybrids, can be? A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind. "Superintelligence" may also refer to the form or degree of intelligence possessed by such an agent.


Technological singularity

If research into strong AI produced sufficiently intelligent software, it might be able to reprogram and improve itself. The improved software would be even better at improving itself, leading to recursive self-improvement. The new intelligence could thus increase exponentially and dramatically surpass humans. Science fiction writer Vernor Vinge named this scenario the "singularity". The technological singularity is when accelerating progress in technologies causes a runaway effect wherein artificial intelligence exceeds human intellectual capacity and control, radically changing or even ending civilization. Because the capabilities of such an intelligence may be impossible to comprehend, the technological singularity is an occurrence beyond which events are unpredictable or even unfathomable.


Ray Kurzweil has used Moore's law (which describes the relentless exponential improvement in digital technology) to calculate that desktop computers will have the same processing power as human brains by the year 2029, and predicts the singularity will occur in 2045.
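Projections of this kind are simple doubling arithmetic, which can be sketched in a few lines. The figures below (a desktop delivering about 1e12 operations per second in 2009, a brain requiring about 1e16, and an 18-month doubling period) are illustrative assumptions chosen for the sketch, not numbers taken from Kurzweil's own calculation.

```python
import math

def years_until_parity(current_ops, target_ops, doubling_years):
    """Years of steady exponential doubling until current_ops reaches target_ops."""
    doublings = math.log2(target_ops / current_ops)  # how many doublings are needed
    return doublings * doubling_years

# Illustrative assumptions (not the article's figures): a ~1e12 ops/s desktop
# in 2009, a ~1e16 ops/s brain, and a doubling period of 1.5 years.
years = years_until_parity(1e12, 1e16, doubling_years=1.5)
print(round(2009 + years))  # -> 2029 under these assumed inputs
```

The point of the sketch is only that the predicted date is extremely sensitive to the assumed brain-compute estimate and doubling period; shifting either by a modest factor moves the parity year by a decade or more.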


Transhumanism

"You awake one morning to find your brain has another lobe functioning. Invisible, this auxiliary lobe answers your questions with information beyond the realm of your own memory, suggests plausible courses of action, and asks questions that help bring out relevant facts. You quickly come to rely on the new lobe so much that you stop wondering how it works. You just use it. This is the dream of artificial intelligence."




Robot designer Hans Moravec, cyberneticist Kevin Warwick and inventor Ray Kurzweil have predicted that humans and machines will merge in the future into cyborgs that are more capable and powerful than either. This idea, called transhumanism, has roots in Aldous Huxley and Robert Ettinger, and has been illustrated in fiction as well, for example in the manga Ghost in the Shell and the science-fiction series Dune.


In the 1980s, artist Hajime Sorayama's Sexy Robots series was painted and published in Japan, depicting the actual organic human form with lifelike muscular metallic skins; the later Gynoids book followed, and was used by or influenced movie makers including George Lucas and other creatives. Sorayama never considered these organic robots to be a real part of nature but always an unnatural product of the human mind, a fantasy existing in the mind even when realized in actual form.


Edward Fredkin argues that "artificial intelligence is the next stage in evolution", an idea first proposed by Samuel Butler's "Darwin among the Machines" (1863), and expanded upon by George Dyson in his book of the same name in 1998.







