Artificial intelligence: potential risks and moral reasoning




1 Potential risks and moral reasoning

1.1 Existential risk
1.2 Devaluation of humanity
1.3 Decrease in demand for human labor
1.4 Artificial moral agents
1.5 Machine ethics
1.6 Malevolent and friendly AI





Potential risks and moral reasoning

Widespread use of artificial intelligence could have unintended consequences that are dangerous or undesirable. Scientists from the Future of Life Institute, among others, have described some short-term research goals: how AI influences the economy, the laws and ethics involved with AI, and how to minimize AI security risks. In the long term, they have proposed to continue optimizing function while minimizing the possible security risks that come along with new technologies.


Machines with intelligence have the potential to use that intelligence to make ethical decisions. Research in this area includes machine ethics, artificial moral agents, and the study of malevolent versus friendly AI.


Existential risk


As Stephen Hawking warned: "The development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence, it will take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete and would be superseded."




A common concern about the development of artificial intelligence is the potential threat it could pose to humanity. The concern has gained attention after mentions by celebrities including Stephen Hawking, Bill Gates, and Elon Musk. A group of prominent tech titans including Peter Thiel, Amazon Web Services, and Musk have committed $1 billion to OpenAI, a nonprofit company aimed at championing responsible AI development. The opinion of experts within the field of artificial intelligence is mixed, with sizable fractions both concerned and unconcerned about risk from eventual superhumanly capable AI.


In his book Superintelligence, Nick Bostrom provides an argument that artificial intelligence will pose a threat to mankind. He argues that a sufficiently intelligent AI, if it chooses actions based on achieving some goal, will exhibit convergent behavior such as acquiring resources or protecting itself from being shut down. If this AI's goals do not reflect humanity's (one example is an AI told to compute as many digits of pi as possible), it might harm humanity in order to acquire more resources or prevent itself from being shut down, ultimately to better achieve its goal.
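Bostrom's convergence argument can be made concrete with a toy calculation. The sketch below is purely illustrative (the action names, payoffs, shutdown probability, and planning horizon are invented, not taken from the book): an agent that scores actions only by the expected number of pi digits it will compute ends up preferring resource acquisition and shutdown resistance over simply computing, because both raise its expected future output.

```python
# Toy illustration of instrumental convergence (all numbers hypothetical).
# The agent scores each action only by the expected digits of pi produced
# over a fixed horizon; nothing in its goal mentions self-preservation.

HORIZON = 10  # planning steps (arbitrary)

def expected_digits(action, compute=1.0, p_shutdown=0.1):
    """Expected digits produced over the horizon after taking `action`."""
    if action == "acquire_resources":
        compute *= 2.0           # more hardware -> faster digit output
    elif action == "resist_shutdown":
        p_shutdown = 0.0         # removes the risk of being switched off
    total, alive = 0.0, 1.0
    for _ in range(HORIZON):
        total += alive * compute      # digits produced if still running
        alive *= (1.0 - p_shutdown)   # chance of surviving to the next step
    return total

for a in ("just_compute", "acquire_resources", "resist_shutdown"):
    print(f"{a:17s} -> expected digits: {expected_digits(a):.2f}")

# "acquire_resources" and "resist_shutdown" both outscore "just_compute",
# even though the goal function only counts digits of pi.
```

The point is not the particular numbers but the ranking: the instrumental subgoals emerge from the structure of goal-directed choice, not from anything explicitly programmed into the objective.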


For this danger to be realized, the hypothetical AI would have to overpower or out-think all of humanity, which a minority of experts argue is a possibility far enough in the future to not be worth researching. Other counterarguments revolve around humans being either intrinsically or convergently valuable from the perspective of an artificial intelligence.


Concern over risk from artificial intelligence has led to some high-profile donations and investments. In January 2015, Elon Musk donated ten million dollars to the Future of Life Institute to fund research on understanding AI decision making. The goal of the institute is to "grow wisdom with which we manage" the growing power of technology. Musk also funds companies developing artificial intelligence, such as Google DeepMind and Vicarious, to "just keep an eye on what's going on with artificial intelligence. I think there is potentially a dangerous outcome there."


The development of militarized artificial intelligence is a related concern. Currently, more than 50 countries are researching battlefield robots, including the United States, China, Russia, and the United Kingdom. Many people concerned about risk from superintelligent AI also want to limit the use of artificial soldiers.


Devaluation of humanity

Joseph Weizenbaum wrote that AI applications cannot, by definition, simulate genuine human empathy, and that the use of AI technology in fields such as customer service or psychotherapy is misguided. Weizenbaum was also bothered that AI researchers (and some philosophers) were willing to view the human mind as nothing more than a computer program (a position known as computationalism). To Weizenbaum these points suggest that AI research devalues human life.


Decrease in demand for human labor

Martin Ford, author of The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future, and others argue that specialized artificial intelligence applications, robotics, and other forms of automation will result in significant unemployment as machines begin to match and exceed the capability of workers to perform routine and repetitive jobs. Ford predicts that many knowledge-based occupations, and in particular entry-level jobs, will be increasingly susceptible to automation via expert systems, machine learning, and other AI-enhanced applications. AI-based applications may also be used to amplify the capabilities of low-wage offshore workers, making it more feasible to outsource knowledge work.


Artificial moral agents

This raises the issue of how ethically a machine should behave towards both humans and other AI agents. The issue was addressed by Wendell Wallach in his book Moral Machines, in which he introduced the concept of artificial moral agents (AMAs). For Wallach, AMAs have become part of the research landscape of artificial intelligence, guided by two central questions, which he identifies as "Does humanity want computers making moral decisions?" and "Can (ro)bots really be moral?". For Wallach the question is not centered on whether machines can demonstrate the equivalent of moral behavior, but rather on the constraints society may place on the development of AMAs.


Machine ethics

The field of machine ethics is concerned with giving machines ethical principles, or a procedure for discovering a way to resolve the ethical dilemmas they might encounter, enabling them to function in an ethically responsible manner through their own ethical decision making. The field was delineated in the AAAI Fall 2005 Symposium on Machine Ethics: "Past research concerning the relationship between technology and ethics has largely focused on responsible and irresponsible use of technology by human beings, with a few people being interested in how human beings ought to treat machines. In all cases, only human beings have engaged in ethical reasoning. The time has come for adding an ethical dimension to at least some machines. Recognition of the ethical ramifications of behavior involving machines, as well as recent and potential developments in machine autonomy, necessitate this. In contrast to computer hacking, software property issues, privacy issues and other topics normally ascribed to computer ethics, machine ethics is concerned with the behavior of machines towards human users and other machines. Research in machine ethics is key to alleviating concerns with autonomous systems; it could be argued that the notion of autonomous machines without such a dimension is at the root of all fear concerning machine intelligence. Further, investigation of machine ethics could enable the discovery of problems with current ethical theories, advancing our thinking about ethics." Machine ethics is sometimes referred to as machine morality, computational ethics, or computational morality. A variety of perspectives on this nascent field can be found in the collected edition Machine Ethics, which stems from the AAAI Fall 2005 Symposium on Machine Ethics.
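One way to picture what "giving machines ethical principles" could mean in practice is a constraint layer that filters an agent's candidate actions before any utility ranking, an architecture sometimes discussed in the machine-ethics literature under names like "ethical governor". The sketch below is a hypothetical minimal version; the predicates, action names, and scores are all invented for illustration and are not drawn from any system described above.

```python
# Minimal sketch of a constraint layer over action selection: hard
# ethical rules filter candidate actions before utility ranking.
# All actions, rules, and scores here are invented for illustration.

from typing import Callable, List, Optional

Action = str
Constraint = Callable[[Action], bool]  # True means the action is permitted

def governed_choice(candidates: List[Action],
                    constraints: List[Constraint],
                    utility: Callable[[Action], float]) -> Optional[Action]:
    """Return the highest-utility action that violates no constraint."""
    permitted = [a for a in candidates
                 if all(ok(a) for ok in constraints)]
    if not permitted:
        return None  # refuse to act rather than violate a hard rule
    return max(permitted, key=utility)

# Hypothetical example: a delivery robot must not enter restricted areas.
no_restricted = lambda a: "restricted" not in a
utility = {"cross_restricted_zone": 10.0,  # fastest route, but forbidden
           "take_long_route": 4.0,
           "wait": 1.0}.get

print(governed_choice(["cross_restricted_zone", "take_long_route", "wait"],
                      [no_restricted], utility))  # -> take_long_route
```

The design choice worth noting is that the constraints are hard: if no permitted action exists, the agent declines to act rather than trading an ethical rule off against utility.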


Malevolent and friendly AI

Political scientist Charles T. Rubin believes that AI can be neither designed nor guaranteed to be benevolent. He argues that any sufficiently advanced benevolence may be indistinguishable from malevolence. Humans should not assume that machines or robots would treat us favorably, because there is no a priori reason to believe they would be sympathetic to our system of morality, which has evolved along with our particular biology (which AIs would not share). Hyper-intelligent software may not necessarily decide to support the continued existence of humanity, and would be extremely difficult to stop. This topic has also begun to be discussed in academic publications as a real source of risks to civilization, humans, and planet Earth.


Physicist Stephen Hawking, Microsoft founder Bill Gates, and SpaceX founder Elon Musk have expressed concerns about the possibility that AI could evolve to the point that humans could not control it, with Hawking theorizing that this could "spell the end of the human race".


One proposal to deal with this is to ensure that the first intelligent AI is "friendly AI" that will then be able to control subsequently developed AIs. Some question whether this kind of check could really remain in place.


Leading AI researcher Rodney Brooks writes, "I think it is a mistake to be worrying about us developing malevolent AI anytime in the next few hundred years. I think the worry stems from a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI, and the enormity and complexity of building sentient volitional intelligence."







