Friday, October 25, 2013

Possible Causes of the Singularity here on Earth and beyond

Artificial intelligence, human biological enhancement, or brain-computer interfaces could be possible causes of the singularity.[3] Futurist Ray Kurzweil cited von Neumann's use of the term in a foreword to von Neumann's classic The Computer and the Brain.

Second quote, same source:
Hawkins (1983) writes that "mindsteps", dramatic and irreversible changes to paradigms or world views, are accelerating in frequency as quantified in his mindstep equation. He cites the inventions of writing, mathematics, and the computer as examples of such changes.
Kurzweil's analysis of history concludes that technological progress follows a pattern of exponential growth, following what he calls the "Law of Accelerating Returns". Whenever technology approaches a barrier, Kurzweil writes, new technologies will surmount it. He predicts paradigm shifts will become increasingly common, leading to "technological change so rapid and profound it represents a rupture in the fabric of human history".[82] Kurzweil believes that the singularity will occur before the end of the 21st century, setting the date at 2045.[83] His predictions differ from Vinge’s in that he predicts a gradual ascent to the singularity, rather than Vinge’s rapidly self-improving superhuman intelligence.
Presumably, a technological singularity would lead to rapid development of a Kardashev Type I civilization, one that has achieved mastery of the resources of its home planet.[84]

end quote from:

Technological singularity - Wikipedia, the free encyclopedia

The three basic potential causes of the Singularity we are presently in or approaching (depending upon your point of view) are quoted as being: artificial intelligence, human biological enhancement, or brain-computer interfaces.

I thought it might be useful to explore these three potential causes:

First, artificial intelligence:

Artificial intelligence

From Wikipedia, the free encyclopedia
Artificial intelligence (AI) is technology and a branch of computer science that studies and develops intelligent machines and software. Major AI researchers and textbooks define the field as "the study and design of intelligent agents",[1] where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success.[2] John McCarthy, who coined the term in 1955,[3] defines it as "the science and engineering of making intelligent machines".[4]
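To make the "intelligent agent" definition above a little more concrete, here is a tiny illustrative Python sketch of my own (not from the article): an agent that perceives its environment through a temperature reading and picks whichever action its simple rule says is most likely to achieve its goal. Real agents are of course far more elaborate, but the perceive-and-act loop is the same idea.

# Sketch of the perceive-decide-act loop behind the intelligent agent idea.

class ThermostatAgent:
    """A trivially simple agent: keep the room near a target temperature."""

    def __init__(self, target: float):
        self.target = target

    def act(self, percept: float) -> str:
        # Choose the action expected to move the environment toward the goal.
        if percept < self.target - 0.5:
            return "heat"
        if percept > self.target + 0.5:
            return "cool"
        return "idle"

agent = ThermostatAgent(target=21.0)
for temperature in (18.0, 21.2, 23.5):
    print(temperature, "->", agent.act(temperature))
# 18.0 -> heat, 21.2 -> idle, 23.5 -> cool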
AI research is highly technical and specialised, deeply divided into subfields that often fail to communicate with each other.[5] Some of the division is due to social and cultural factors: subfields have grown up around particular institutions and the work of individual researchers. AI research is also divided by several technical issues. There are subfields which are focused on the solution of specific problems, on one of several possible approaches, on the use of widely differing tools and towards the accomplishment of particular applications.
The central problems (or goals) of AI research include reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects.[6] General intelligence (or "strong AI") is still among the field's long term goals.[7] Currently popular approaches include statistical methods, computational intelligence and traditional symbolic AI. There are an enormous number of tools used in AI, including versions of search and mathematical optimization, logic, methods based on probability and economics, and many others.
The field was founded on the claim that a central ability of humans, intelligence—the sapience of Homo sapiens—can be so precisely described that it can be simulated by a machine.[8] This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings, issues which have been addressed by myth, fiction and philosophy since antiquity.[9] Artificial intelligence has been the subject of tremendous optimism[10] but has also suffered stunning setbacks.[11] Today it has become an essential part of the technology industry and many of the most difficult problems in computer science.[12]

History

Thinking machines and artificial beings appear in Greek myths, such as Talos of Crete, the bronze robot of Hephaestus, and Pygmalion's Galatea.[13] Human likenesses believed to have intelligence were built in every major civilization: animated cult images were worshiped in Egypt and Greece[14] and humanoid automatons were built by Yan Shi, Hero of Alexandria and Al-Jazari.[15] It was also widely believed that artificial beings had been created by Jābir ibn Hayyān, Judah Loew and Paracelsus.[16] By the 19th and 20th centuries, artificial beings had become a common feature in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots).[17] Pamela McCorduck argues that all of these are examples of an ancient urge, as she describes it, "to forge the gods".[9] Stories of these creatures and their fates discuss many of the same hopes, fears and ethical concerns that are presented by artificial intelligence.
Mechanical or "formal" reasoning has been developed by philosophers and mathematicians since antiquity. The study of logic led directly to the invention of the programmable digital electronic computer, based on the work of mathematician Alan Turing and others. Turing's theory of computation suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical deduction.[18][19] This, along with concurrent discoveries in neurology, information theory and cybernetics, inspired a small group of researchers to begin to seriously consider the possibility of building an electronic brain.[20]
The field of AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956.[21] The attendees, including John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon, became the leaders of AI research for many decades.[22] They and their students wrote programs that were, to most people, simply astonishing:[23] Computers were solving word problems in algebra, proving logical theorems and speaking English.[24] By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense[25] and laboratories had been established around the world.[26] AI's founders were profoundly optimistic about the future of the new field: Herbert Simon predicted that "machines will be capable, within twenty years, of doing any work a man can do" and Marvin Minsky agreed, writing that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved".[27]
They had failed to recognize the difficulty of some of the problems they faced.[28] In 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off all undirected exploratory research in AI. The next few years would later be called an "AI winter",[29] a period when funding for AI projects was hard to find.
In the early 1980s, AI research was revived by the commercial success of expert systems,[30] a form of AI program that simulated the knowledge and analytical skills of one or more human experts. By 1985 the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S and British governments to restore funding for academic research in the field.[31] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer lasting AI winter began.[32]
In the 1990s and early 21st century, AI achieved its greatest successes, albeit somewhat behind the scenes. Artificial intelligence is used for logistics, data mining, medical diagnosis and many other areas throughout the technology industry.[12] The success was due to several factors: the increasing computational power of computers (see Moore's law), a greater emphasis on solving specific subproblems, the creation of new ties between AI and other fields working on similar problems, and a new commitment by researchers to solid mathematical methods and rigorous scientific standards.[33]
On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov.[34] In 2005, a Stanford robot won the DARPA Grand Challenge by driving autonomously for 131 miles along an unrehearsed desert trail.[35] Two years later, a team from CMU won the DARPA Urban Challenge when their vehicle autonomously navigated 55 miles in an urban environment while responding to traffic hazards and adhering to all traffic laws.[36] In February 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.[37] The Kinect, which provides a 3D body–motion interface for the Xbox 360, uses algorithms that emerged from lengthy AI research,[38] as does the iPhone's Siri.

Goals

The general problem of simulating (or creating) intelligence has been broken down into a number of specific sub-problems. These consist of particular traits or capabilities that researchers would like an intelligent system to display. The traits described below have received the most attention.[6]

Deduction, reasoning, problem solving

Early AI researchers developed algorithms that imitated the step-by-step reasoning that humans use when they solve puzzles or make logical deductions.[39] By the late 1980s and 1990s, AI research had also developed highly successful methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[40]
For difficult problems, most of these algorithms can require enormous computational resources – most experience a "combinatorial explosion": the amount of memory or computer time required becomes astronomical when the problem goes beyond a certain size. The search for more efficient problem-solving algorithms is a high priority for AI research.[41]
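To get a feel for the "combinatorial explosion" mentioned above, here is a small Python sketch of my own (purely illustrative): it simply counts how many nodes a naive exhaustive search would visit in a tree with branching factor 10, and the totals quickly become astronomical.

# Illustrative sketch: how fast an exhaustive search tree grows.
# With branching factor b and depth d, a naive search visits
# roughly 1 + b + b^2 + ... + b^d nodes.

def nodes_visited(branching_factor: int, depth: int) -> int:
    """Total nodes in a complete search tree of the given depth."""
    return sum(branching_factor ** level for level in range(depth + 1))

if __name__ == "__main__":
    for depth in (5, 10, 20):
        print(depth, nodes_visited(10, depth))
    # depth 5  ->            111,111 nodes
    # depth 10 ->     11,111,111,111 nodes
    # depth 20 -> about 1.1e20 nodes: far beyond any practical time or memory budget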
Human beings solve most of their problems using fast, intuitive judgements rather than the conscious, step-by-step deduction that early AI research was able to model.[42] AI has made some progress at imitating this kind of "sub-symbolic" problem solving: embodied agent approaches emphasize the importance of sensorimotor skills to higher reasoning; neural net research attempts to simulate the structures inside the brain that give rise to this skill; statistical approaches to AI mimic the probabilistic nature of the human ability to guess.

Knowledge representation

An ontology represents knowledge as a set of concepts within a domain and the relationships between those concepts.
Knowledge representation[43] and knowledge engineering[44] are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things that AI needs to represent are: objects, properties, categories and relations between objects;[45] situations, events, states and time;[46] causes and effects;[47] knowledge about knowledge (what we know about what other people know);[48] and many other, less well researched domains. A representation of "what exists" is an ontology: the set of objects, relations, concepts and so on that the machine knows about. The most general are called upper ontologies, which attempt to provide a foundation for all other knowledge.[49]
Among the most difficult problems in knowledge representation are:
Default reasoning and the qualification problem
Many of the things people know take the form of "working assumptions." For example, if a bird comes up in conversation, people typically picture an animal that is fist-sized, sings, and flies. None of these things are true of all birds. John McCarthy identified this problem in 1969[50] as the qualification problem: for any commonsense rule that AI researchers care to represent, there tend to be a huge number of exceptions. Almost nothing is simply true or false in the way that abstract logic requires. AI research has explored a number of solutions to this problem (a toy sketch of this default-and-exception pattern appears after this list).[51]
The breadth of commonsense knowledge
The number of atomic facts that the average person knows is astronomical. Research projects that attempt to build a complete knowledge base of commonsense knowledge (e.g., Cyc) require enormous amounts of laborious ontological engineering — they must be built, by hand, one complicated concept at a time.[52] A major goal is to have the computer understand enough concepts to be able to learn by reading from sources like the internet, and thus be able to add to its own ontology.[citation needed]
The subsymbolic form of some commonsense knowledge
Much of what people know is not represented as "facts" or "statements" that they could express verbally. For example, a chess master will avoid a particular chess position because it "feels too exposed"[53] or an art critic can take one look at a statue and instantly realize that it is a fake.[54] These are intuitions or tendencies that are represented in the brain non-consciously and sub-symbolically.[55] Knowledge like this informs, supports and provides a context for symbolic, conscious knowledge. As with the related problem of sub-symbolic reasoning, it is hoped that situated AI, computational intelligence, or statistical AI will provide ways to represent this kind of knowledge.[55]
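Here is a toy Python sketch of my own showing the default-and-exception pattern behind the qualification problem described in the list above; the categories and properties are made up, and real systems such as Cyc represent knowledge in far richer ways.

# Toy default reasoning: properties hold "by default" unless an
# exception is recorded for a more specific category or individual.

DEFAULTS = {"bird": {"can_fly": True, "size": "fist-sized"}}
EXCEPTIONS = {"penguin": {"can_fly": False}, "ostrich": {"can_fly": False}}

def infer(kind: str, prop: str, is_a: str = "bird"):
    """Look for a specific exception first, then fall back to the default."""
    if kind in EXCEPTIONS and prop in EXCEPTIONS[kind]:
        return EXCEPTIONS[kind][prop]
    return DEFAULTS.get(is_a, {}).get(prop)

print(infer("sparrow", "can_fly"))   # True  (the default applies)
print(infer("penguin", "can_fly"))   # False (the exception overrides the default)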

Planning

A hierarchical control system is a form of control system in which a set of devices and governing software is arranged in a hierarchy.
Intelligent agents must be able to set goals and achieve them.[56] They need a way to visualize the future (they must have a representation of the state of the world and be able to make predictions about how their actions will change it) and be able to make choices that maximize the utility (or "value") of the available choices.[57]
In classical planning problems, the agent can assume that it is the only thing acting on the world and it can be certain what the consequences of its actions may be.[58] However, if the agent is not the only actor, it must periodically ascertain whether the world matches its predictions and it must change its plan as this becomes necessary, requiring the agent to reason under uncertainty.[59]
Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.[60]
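A rough Python sketch of my own of classical planning under the assumptions stated above (a single actor and certain action effects): a breadth-first search over world states using two hand-written, hypothetical actions. Real planners use richer action languages such as STRIPS or PDDL, but the idea of searching for a sequence of actions that reaches the goal is the same.

# Minimal forward state-space planner (illustrative sketch only).
# States are frozensets of facts; actions have preconditions, additions, deletions.
from collections import deque

ACTIONS = {
    "boil_water": ({"have_water"}, {"hot_water"}, set()),
    "add_tea":    ({"hot_water", "have_tea"}, {"tea_made"}, {"hot_water"}),
}

def plan(start, goal):
    """Breadth-first search from the start state to any state containing the goal."""
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for name, (pre, add, delete) in ACTIONS.items():
            if pre <= state:
                new_state = frozenset((state - delete) | add)
                if new_state not in seen:
                    seen.add(new_state)
                    frontier.append((new_state, steps + [name]))
    return None

print(plan({"have_water", "have_tea"}, {"tea_made"}))
# ['boil_water', 'add_tea']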

Learning

Machine learning is the study of computer algorithms that improve automatically through experience[61][62] and has been central to AI research since the field's inception.[63]
Unsupervised learning is the ability to find patterns in a stream of input. Supervised learning includes both classification and numerical regression. Classification is used to determine what category something belongs in, after seeing a number of examples of things from several categories. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change. In reinforcement learning[64] the agent is rewarded for good responses and punished for bad ones. These can be analyzed in terms of decision theory, using concepts like utility. The mathematical analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory.[65]
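To make the classification/regression distinction concrete, here is a small Python sketch of my own for the regression case: fitting a straight line to a handful of made-up input/output pairs by ordinary least squares and then predicting a new output.

# Simple supervised regression: fit y ~ a*x + b by least squares,
# then predict how the output changes as the input changes.

def fit_line(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    numerator = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    denominator = sum((x - mean_x) ** 2 for x in xs)
    slope = numerator / denominator
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Training examples (inputs and observed outputs), invented for illustration.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]

a, b = fit_line(xs, ys)
print(f"prediction for x=5: {a * 5 + b:.2f}")  # roughly 9.85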
Within developmental robotics, developmental learning approaches were elaborated for lifelong cumulative acquisition of repertoires of novel skills by a robot, through autonomous self-exploration and social interaction with human teachers, and using guidance mechanisms such as active learning, maturation, motor synergies, and imitation.[66][67][68][69]

Natural language processing

A parse tree represents the syntactic structure of a sentence according to some formal grammar.
Natural language processing[70] gives machines the ability to read and understand the languages that humans speak. A sufficiently powerful natural language processing system would enable natural language user interfaces and the acquisition of knowledge directly from human-written sources, such as Internet texts. Some straightforward applications of natural language processing include information retrieval (or text mining) and machine translation.[71]
A common method of processing and extracting meaning from natural language is through semantic indexing. Increases in processing speed and the falling cost of data storage make indexing large volumes of abstractions of the user's input much more efficient.
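As a very rough illustration of indexing text for retrieval (a plain keyword inverted index, not true semantic indexing), here is a short Python sketch of my own with made-up documents:

# Sketch of an inverted index for simple keyword retrieval.
from collections import defaultdict

documents = {
    1: "the robot perceives its environment",
    2: "the agent takes actions in its environment",
    3: "machine translation of natural language",
}

# Build the index: word -> set of document ids containing that word.
index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.lower().split():
        index[word].add(doc_id)

def search(query):
    """Return ids of documents containing every query word."""
    sets = [index.get(word, set()) for word in query.lower().split()]
    return set.intersection(*sets) if sets else set()

print(search("environment agent"))  # {2}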

Motion and manipulation

The field of robotics[72] is closely related to AI. Intelligence is required for robots to be able to handle such tasks as object manipulation[73] and navigation, with sub-problems of localization (knowing where you are, or finding out where other things are), mapping (learning what is around you, building a map of the environment), and motion planning (figuring out how to get there) or path planning (going from one point in space to another point, which may involve compliant motion - where the robot moves while maintaining physical contact with an object).[74][75]
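Here is an illustrative Python sketch of my own of the path-planning problem described above, simplified to a small grid world rather than a real robot's configuration space: A* search with a Manhattan-distance heuristic finds a route around the obstacles.

# A* path planning on a small grid (0 = free cell, 1 = obstacle).
import heapq

GRID = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]

def astar(start, goal):
    """Return a list of grid cells from start to goal, or None if unreachable."""
    def h(cell):  # Manhattan-distance heuristic
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]
    best_cost = {start: 0}
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] == 0:
                new_cost = cost + 1
                if new_cost < best_cost.get((nr, nc), float("inf")):
                    best_cost[(nr, nc)] = new_cost
                    priority = new_cost + h((nr, nc))
                    heapq.heappush(frontier, (priority, new_cost, (nr, nc), path + [(nr, nc)]))
    return None

print(astar((0, 0), (2, 0)))
# [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]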

Perception

Machine perception[76] is the ability to use input from sensors (such as cameras, microphones, sonar and others more exotic) to deduce aspects of the world. Computer vision[77] is the ability to analyze visual input. A few selected subproblems are speech recognition,[78] facial recognition and object recognition.[79]

Social intelligence

Kismet, a robot with rudimentary social skills[80]
Affective computing is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects.[81][82] It is an interdisciplinary field spanning computer sciences, psychology, and cognitive science.[83] While the origins of the field may be traced as far back as to early philosophical inquiries into emotion,[84] the more modern branch of computer science originated with Rosalind Picard's 1995 paper[85] on affective computing.[86][87] A motivation for the research is the ability to simulate empathy. The machine should interpret the emotional state of humans and adapt its behaviour to them, giving an appropriate response for those emotions.
Emotion and social skills[88] play two roles for an intelligent agent. First, the agent must be able to predict the actions of others, by understanding their motives and emotional states. (This involves elements of game theory and decision theory, as well as the ability to model human emotions and the perceptual skills to detect emotions.) Also, in an effort to facilitate human-computer interaction, an intelligent machine might want to be able to display emotions—even if it does not actually experience them itself—in order to appear sensitive to the emotional dynamics of human interaction.

Creativity

A sub-field of AI addresses creativity both theoretically (from a philosophical and psychological perspective) and practically (via specific implementations of systems that generate outputs that can be considered creative, or systems that identify and assess creativity). Related areas of computational research are Artificial intuition and Artificial imagination.

General intelligence

Most researchers think that their work will eventually be incorporated into a machine with general intelligence (known as strong AI), combining all the skills above and exceeding human abilities at most or all of them.[7] A few believe that anthropomorphic features like artificial consciousness or an artificial brain may be required for such a project.[89][90]
Many of the problems above may require general intelligence to be considered solved. For example, even a straightforward, specific task like machine translation requires that the machine read and write in both languages (NLP), follow the author's argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author's intention (social intelligence). A problem like machine translation is therefore considered "AI-complete": to solve this one problem, a machine would have to solve all of the others.[91]

Approaches

There is no established unifying theory or paradigm that guides AI research. Researchers disagree about many issues.[92] A few of the most long standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence by studying psychology or neurology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering?[93] Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)? Or does it necessarily require solving a large number of completely unrelated problems?[94] Can intelligence be reproduced using high-level symbols, similar to words and ideas? Or does it require "sub-symbolic" processing?[95] John Haugeland, who coined the term GOFAI (Good Old-Fashioned Artificial Intelligence), also proposed that AI should more properly be referred to as synthetic intelligence,[96] a term which has since been adopted by some non-GOFAI researchers.[97][98]

Cybernetics and brain simulation

In the 1940s and 1950s, a number of researchers explored the connection between neurology, information theory, and cybernetics. Some of them built machines that used electronic networks to exhibit rudimentary intelligence, such as W. Grey Walter's turtles and the Johns Hopkins Beast. Many of these researchers gathered for meetings of the Teleological Society at Princeton University and the Ratio Club in England.[20] By 1960, this approach was largely abandoned, although elements of it would be revived in the 1980s.

Symbolic

When access to digital computers became possible in the middle 1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation. The research was centered in three institutions: Carnegie Mellon University, Stanford and MIT, and each one developed its own style of research. John Haugeland named these approaches to AI "good old-fashioned AI" or "GOFAI".[99] During the 1960s, symbolic approaches achieved great success at simulating high-level thinking in small demonstration programs. Approaches based on cybernetics or neural networks were abandoned or pushed into the background.[100] Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the goal of their field.
Cognitive simulation
Economist Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science. Their research team used the results of psychological experiments to develop programs that simulated the techniques that people used to solve problems. This tradition, centered at Carnegie Mellon University, would eventually culminate in the development of the Soar architecture in the middle 1980s.[101][102]
Logic-based
Unlike Newell and Simon, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem solving, regardless of whether people used the same algorithms.[93] His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning.[103] Logic was also the focus of the work at the University of Edinburgh and elsewhere in Europe which led to the development of the programming language Prolog and the science of logic programming.[104]
"Anti-logic" or "scruffy"
Researchers at MIT (such as Marvin Minsky and Seymour Papert)[105] found that solving difficult problems in vision and natural language processing required ad-hoc solutions – they argued that there was no simple and general principle (like logic) that would capture all the aspects of intelligent behavior. Roger Schank described their "anti-logic" approaches as "scruffy" (as opposed to the "neat" paradigms at CMU and Stanford).[94] Commonsense knowledge bases (such as Doug Lenat's Cyc) are an example of "scruffy" AI, since they must be built by hand, one complicated concept at a time.[106]
Knowledge-based
When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications.[107] This "knowledge revolution" led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first truly successful form of AI software.[30] The knowledge revolution was also driven by the realization that enormous amounts of knowledge would be required by many simple AI applications.

Sub-symbolic

By the 1980s progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into "sub-symbolic" approaches to specific AI problems.[95]
Bottom-up, embodied, situated, behavior-based or nouvelle AI
Researchers from the related field of robotics, such as Rodney Brooks, rejected symbolic AI and focused on the basic engineering problems that would allow robots to move and survive.[108] Their work revived the non-symbolic viewpoint of the early cybernetics researchers of the 1950s and reintroduced the use of control theory in AI. This coincided with the development of the embodied mind thesis in the related field of cognitive science: the idea that aspects of the body (such as movement, perception and visualization) are required for higher intelligence.
Computational intelligence
Interest in neural networks and "connectionism" was revived by David Rumelhart and others in the middle 1980s.[109] These and other sub-symbolic approaches, such as fuzzy systems and evolutionary computation, are now studied collectively by the emerging discipline of computational intelligence.[110]

Statistical

In the 1990s, AI researchers developed sophisticated mathematical tools to solve specific subproblems. These tools are truly scientific, in the sense that their results are both measurable and verifiable, and they have been responsible for many of AI's recent successes. The shared mathematical language has also permitted a high level of collaboration with more established fields (like mathematics, economics or operations research). Stuart Russell and Peter Norvig describe this movement as nothing less than a "revolution" and "the victory of the neats."[33] Critics argue that these techniques are too focused on particular problems and have failed to address the long term goal of general intelligence.[111] There is an ongoing debate about the relevance and validity of statistical approaches in AI, exemplified in part by exchanges between Peter Norvig and Noam Chomsky.[112][113]

Integrating the approaches

Intelligent agent paradigm
An intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. The simplest intelligent agents are programs that solve specific problems. More complicated agents include human beings and organizations of human beings (such as firms). The paradigm gives researchers license to study isolated problems and find solutions that are both verifiable and useful, without agreeing on one single approach. An agent that solves a specific problem can use any approach that works – some agents are symbolic and logical, some are sub-symbolic neural networks and others may use new approaches. The paradigm also gives researchers a common language to communicate with other fields—such as decision theory and economics—that also use concepts of abstract agents. The intelligent agent paradigm became widely accepted during the 1990s.[2]
Agent architectures and cognitive architectures
Researchers have designed systems to build intelligent systems out of interacting intelligent agents in a multi-agent system.[114] A system with both symbolic and sub-symbolic components is a hybrid intelligent system, and the study of such systems is artificial intelligence systems integration. A hierarchical control system provides a bridge between sub-symbolic AI at its lowest, reactive levels and traditional symbolic AI at its highest levels, where relaxed time constraints permit planning and world modelling.[115] Rodney Brooks' subsumption architecture was an early proposal for such a hierarchical system.[116]

Tools

In the course of 50 years of research, AI has developed a large number of tools to solve the most difficult problems in computer science. A few of the most general of these methods are discussed below.

Search and optimization

Many problems in AI can be solved in theory by intelligently searching through many possible solutions:[117] Reasoning can be reduced to performing a search. For example, logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule.[118] Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.[119] Robotics algorithms for moving limbs and grasping objects use local searches in configuration space.[73] Many learning algorithms use search algorithms based on optimization.
Simple exhaustive searches[120] are rarely sufficient for most real-world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes. The solution, for many problems, is to use "heuristics" or "rules of thumb" that eliminate choices that are unlikely to lead to the goal (called "pruning the search tree"). Heuristics supply the program with a "best guess" for the path on which the solution lies.[121] Heuristics narrow the search for solutions to a smaller sample of the possibilities.[74]
A very different kind of search came to prominence in the 1990s, based on the mathematical theory of optimization. For many problems, it is possible to begin the search with some form of a guess and then refine the guess incrementally until no more refinements can be made. These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill, until we reach the top. Other optimization algorithms are simulated annealing, beam search and random optimization.[122]
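A minimal Python sketch of my own of the hill-climbing idea described above, using a made-up one-dimensional objective with its peak at x = 3: start from a random guess and keep any small step that improves the objective.

# Sketch of blind hill climbing on a one-dimensional objective.
import random

def objective(x: float) -> float:
    # A toy landscape whose single peak is at x = 3.
    return -(x - 3.0) ** 2

def hill_climb(steps: int = 1000, step_size: float = 0.1) -> float:
    x = random.uniform(-10.0, 10.0)            # start at a random point
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if objective(candidate) > objective(x):
            x = candidate                      # keep the uphill move
    return x

print(round(hill_climb(), 2))  # usually close to 3.0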
Evolutionary computation uses a form of optimization search. For example, an evolutionary algorithm may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses). Forms of evolutionary computation include swarm intelligence algorithms (such as ant colony or particle swarm optimization)[123] and evolutionary algorithms (such as genetic algorithms, gene expression programming, and genetic programming).[124]

Logic

Logic[125] is used for knowledge representation and problem solving, but it can be applied to other problems as well. For example, the satplan algorithm uses logic for planning[126] and inductive logic programming is a method for learning.[127]
Several different forms of logic are used in AI research. Propositional or sentential logic[128] is the logic of statements which can be true or false. First-order logic[129] also allows the use of quantifiers and predicates, and can express facts about objects, their properties, and their relations with each other. Fuzzy logic[130] is a version of first-order logic which allows the truth of a statement to be represented as a value between 0 and 1, rather than simply True (1) or False (0). Fuzzy systems can be used for uncertain reasoning and have been widely used in modern industrial and consumer product control systems. Subjective logic[131] models uncertainty in a different and more explicit manner than fuzzy logic: a given binomial opinion satisfies belief + disbelief + uncertainty = 1 within a Beta distribution. By this method, ignorance can be distinguished from probabilistic statements that an agent makes with high confidence.
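Here is a small Python sketch of my own of two of the ideas above: fuzzy truth values combined with the common min/max convention, and a subjective-logic "binomial opinion" whose belief, disbelief and uncertainty must sum to 1. All the numbers are invented for illustration.

# Fuzzy logic: truth values in [0, 1], combined here with the min/max convention.
from dataclasses import dataclass

def fuzzy_and(a: float, b: float) -> float:
    return min(a, b)

def fuzzy_or(a: float, b: float) -> float:
    return max(a, b)

tall, fast = 0.8, 0.4
print(fuzzy_and(tall, fast), fuzzy_or(tall, fast))  # 0.4 0.8

# Subjective logic: a binomial opinion with belief + disbelief + uncertainty = 1.
@dataclass
class Opinion:
    belief: float
    disbelief: float
    uncertainty: float

    def __post_init__(self):
        total = self.belief + self.disbelief + self.uncertainty
        assert abs(total - 1.0) < 1e-9, "components must sum to 1"

confident = Opinion(belief=0.7, disbelief=0.2, uncertainty=0.1)
ignorant = Opinion(belief=0.0, disbelief=0.0, uncertainty=1.0)  # ignorance, not confident denial
print(confident, ignorant)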
Default logics, non-monotonic logics and circumscription[51] are forms of logic designed to help with default reasoning and the qualification problem. Several extensions of logic have been designed to handle specific domains of knowledge, such as: description logics;[45] situation calculus, event calculus and fluent calculus (for representing events and time);[46] causal calculus;[47] belief calculus; and modal logics.[48]

Probabilistic methods for uncertain reasoning

Many problems in AI (in reasoning, planning, learning, perception and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of powerful tools to solve these problems using methods from probability theory and economics.[132]
Bayesian networks[133] are a very general tool that can be used for a large number of problems: reasoning (using the Bayesian inference algorithm),[134] learning (using the expectation-maximization algorithm),[135] planning (using decision networks)[136] and perception (using dynamic Bayesian networks).[137] Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).[137]
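To illustrate Bayesian inference on the smallest possible network, here is a Python sketch of my own with two variables (rain and wet grass) and invented probabilities; it computes the posterior probability of rain given that the grass is observed to be wet.

# Tiny Bayesian network: Rain -> WetGrass, with assumed illustrative probabilities.
P_RAIN = 0.2
P_WET_GIVEN_RAIN = 0.9
P_WET_GIVEN_NO_RAIN = 0.1

def posterior_rain_given_wet() -> float:
    """Bayes' rule: P(rain | wet) = P(wet | rain) * P(rain) / P(wet)."""
    p_wet = P_WET_GIVEN_RAIN * P_RAIN + P_WET_GIVEN_NO_RAIN * (1 - P_RAIN)
    return P_WET_GIVEN_RAIN * P_RAIN / p_wet

print(round(posterior_rain_given_wet(), 3))  # 0.692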
A key concept from the science of economics is "utility": a measure of how valuable something is to an intelligent agent. Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis,[138] information value theory.[57] These tools include models such as Markov decision processes,[139] dynamic decision networks,[137] game theory and mechanism design.[140]

Classifiers and statistical learning methods

The simplest AI applications can be divided into two types: classifiers ("if shiny then diamond") and controllers ("if shiny then pick up"). Controllers do, however, also classify conditions before inferring actions, and therefore classification forms a central part of many AI systems. Classifiers are functions that use pattern matching to determine a closest match. They can be tuned according to examples, making them very attractive for use in AI. These examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience.[141]
A classifier can be trained in various ways; there are many statistical and machine learning approaches. The most widely used classifiers are the neural network,[142] kernel methods such as the support vector machine,[143] k-nearest neighbor algorithm,[144] Gaussian mixture model,[145] naive Bayes classifier,[146] and decision tree.[147] The performance of these classifiers has been compared over a wide range of tasks. Classifier performance depends greatly on the characteristics of the data to be classified. There is no single classifier that works best on all given problems; this is also referred to as the "no free lunch" theorem. Determining a suitable classifier for a given problem is still more an art than a science.[148]
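Here is a short Python sketch of my own of one of the classifiers listed above, the k-nearest-neighbor algorithm, run on a tiny made-up data set: a new observation is simply given the majority label of its k closest training examples.

# k-nearest-neighbor classification on a tiny, made-up data set.
from collections import Counter
import math

# (feature_1, feature_2) -> class label
DATASET = [
    ((1.0, 1.0), "shiny"), ((1.2, 0.8), "shiny"), ((0.9, 1.1), "shiny"),
    ((5.0, 5.2), "dull"),  ((5.5, 4.8), "dull"),  ((4.9, 5.1), "dull"),
]

def classify(point, k=3):
    """Return the majority label among the k nearest training examples."""
    by_distance = sorted(DATASET, key=lambda item: math.dist(point, item[0]))
    labels = [label for _, label in by_distance[:k]]
    return Counter(labels).most_common(1)[0][0]

print(classify((1.1, 0.9)))  # 'shiny'
print(classify((5.2, 5.0)))  # 'dull'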

Neural networks

A neural network is an interconnected group of nodes, akin to the vast network of neurons in the human brain.
The study of artificial neural networks[142] began in the decade before the field of AI research was founded, in the work of Walter Pitts and Warren McCulloch. Other important early researchers were Frank Rosenblatt, who invented the perceptron, and Paul Werbos, who developed the backpropagation algorithm.[149]
The main categories of networks are acyclic or feedforward neural networks (where the signal passes in only one direction) and recurrent neural networks (which allow feedback). Among the most popular feedforward networks are perceptrons, multi-layer perceptrons and radial basis networks.[150] Among recurrent networks, the most famous is the Hopfield net, a form of attractor network, which was first described by John Hopfield in 1982.[151] Neural networks can be applied to the problem of intelligent control (for robotics) or learning, using such techniques as Hebbian learning and competitive learning.[152]
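As a sketch of the earliest neural-network idea mentioned above, here is my own small Python example of Rosenblatt's perceptron learning the logical AND function with an arbitrarily chosen learning rate; multi-layer networks and backpropagation generalize this basic weight-update rule.

# A single perceptron learning the AND function.
def train_perceptron(samples, epochs=20, lr=0.1):
    w1, w2, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            output = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
            error = target - output
            # Perceptron learning rule: nudge the weights toward the correct answer.
            w1 += lr * error * x1
            w2 += lr * error * x2
            bias += lr * error
    return w1, w2, bias

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(AND)
for (x1, x2), _ in AND:
    print(x1, x2, "->", 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0)
# 0 0 -> 0, 0 1 -> 0, 1 0 -> 0, 1 1 -> 1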
Hierarchical temporal memory is an approach that models some of the structural and algorithmic properties of the neocortex.[153]

Control theory

Control theory, the grandchild of cybernetics, has many important applications, especially in robotics.[154]

Languages

AI researchers have developed several specialized languages for AI research, including Lisp[155] and Prolog.[156]

Evaluating progress

In 1950, Alan Turing proposed a general procedure to test the intelligence of an agent now known as the Turing test. This procedure allows almost all the major problems of artificial intelligence to be tested. However, it is a very difficult challenge and at present all agents fail.[157]
Artificial intelligence can also be evaluated on specific problems such as small problems in chemistry, hand-writing recognition and game-playing. Such tests have been termed subject matter expert Turing tests. Smaller problems provide more achievable goals and there are an ever-increasing number of positive results.[158]
One classification for outcomes of an AI test is:[159]
  1. Optimal: it is not possible to perform better.
  2. Strong super-human: performs better than all humans.
  3. Super-human: performs better than most humans.
  4. Sub-human: performs worse than most humans.
For example, performance at draughts is optimal,[160] performance at chess is super-human and nearing strong super-human (see computer chess: computers versus human) and performance at many everyday tasks (such as recognizing a face or crossing a room without bumping into something) is sub-human.
A quite different approach measures machine intelligence through tests developed from mathematical definitions of intelligence. Examples of such tests began in the late 1990s, devising intelligence tests using notions from Kolmogorov complexity and data compression.[161] Two major advantages of mathematical definitions are their applicability to nonhuman intelligences and their absence of a requirement for human testers.
An area to which artificial intelligence has contributed greatly is intrusion detection.[162]

Applications

An automated online assistant providing customer service on a web page – one of many very primitive applications of artificial intelligence.
Artificial intelligence techniques are pervasive and are too numerous to list. Frequently, when a technique reaches mainstream use, it is no longer considered artificial intelligence; this phenomenon is described as the AI effect.[163]

Competitions and prizes

There are a number of competitions and prizes to promote research in artificial intelligence. The main areas promoted are: general machine intelligence, conversational behavior, data-mining, robotic cars, robot soccer and games.

Platforms

A platform (or "computing platform") is defined as "some sort of hardware architecture or software framework (including application frameworks), that allows software to run." As Rodney Brooks[164] pointed out many years ago, it is not just the artificial intelligence software that defines the AI features of the platform, but rather the actual platform itself that affects the AI that results, i.e., there needs to be work in AI problems on real-world platforms rather than in isolation.
A wide variety of platforms has allowed different aspects of AI to develop, ranging from expert systems, albeit PC-based but still an entire real-world system, to various robot platforms such as the widely available Roomba with open interface.[165]

Philosophy

Artificial intelligence, by claiming to be able to recreate the capabilities of the human mind, is both a challenge and an inspiration for philosophy. Are there limits to how intelligent machines can be? Is there an essential difference between human intelligence and artificial intelligence? Can a machine have a mind and consciousness? A few of the most influential answers to these questions are given below.[166]
Turing's "polite convention"
We need not decide if a machine can "think"; we need only decide if a machine can act as intelligently as a human being. This approach to the philosophical problems associated with artificial intelligence forms the basis of the Turing test.[157]
The Dartmouth proposal
"Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it." This conjecture was printed in the proposal for the Dartmouth Conference of 1956, and represents the position of most working AI researchers.[167]
Newell and Simon's physical symbol system hypothesis
"A physical symbol system has the necessary and sufficient means of general intelligent action." Newell and Simon argue that intelligences consist of formal operations on symbols.[168] Hubert Dreyfus argued that, on the contrary, human expertise depends on unconscious instinct rather than conscious symbol manipulation and on having a "feel" for the situation rather than explicit symbolic knowledge. (See Dreyfus' critique of AI.)[169][170]
Gödel's incompleteness theorem
A formal system (such as a computer program) cannot prove all true statements.[171] Roger Penrose is among those who claim that Gödel's theorem limits what machines can do. (See The Emperor's New Mind.)[172]
Searle's strong AI hypothesis
"The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."[173] John Searle counters this assertion with his Chinese room argument, which asks us to look inside the computer and try to find where the "mind" might be.[174]
The artificial brain argument
The brain can be simulated. Hans Moravec, Ray Kurzweil and others have argued that it is technologically feasible to copy the brain directly into hardware and software, and that such a simulation will be essentially identical to the original.[90]

Predictions and ethics

Artificial intelligence is a common topic in both science fiction and projections about the future of technology and society. The existence of an artificial intelligence that rivals human intelligence raises difficult ethical issues, and the potential power of the technology inspires both hopes and fears.
In fiction, artificial intelligence has appeared fulfilling many roles.
These include:
Mary Shelley's Frankenstein considers a key issue in the ethics of artificial intelligence: if a machine can be created that has intelligence, could it also feel? If it can feel, does it have the same rights as a human? The idea also appears in modern science fiction, including the films I, Robot, Blade Runner and A.I. Artificial Intelligence, in which humanoid machines have the ability to feel human emotions. This issue, now known as "robot rights", is currently being considered by, for example, California's Institute for the Future, although many critics believe that the discussion is premature.[175] The subject is profoundly discussed in the 2010 documentary film Plug & Pray.[176]
Martin Ford, author of The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future,[177] and others argue that specialized artificial intelligence applications, robotics and other forms of automation will ultimately result in significant unemployment as machines begin to match and exceed the capability of workers to perform most routine and repetitive jobs. Ford predicts that many knowledge-based occupations—and in particular entry level jobs—will be increasingly susceptible to automation via expert systems, machine learning[178] and other AI-enhanced applications. AI-based applications may also be used to amplify the capabilities of low-wage offshore workers, making it more feasible to outsource knowledge work.[179]
Joseph Weizenbaum wrote that AI applications can not, by definition, successfully simulate genuine human empathy and that the use of AI technology in fields such as customer service or psychotherapy[180] was deeply misguided. Weizenbaum was also bothered that AI researchers (and some philosophers) were willing to view the human mind as nothing more than a computer program (a position now known as computationalism). To Weizenbaum these points suggest that AI research devalues human life.[181]
Many futurists believe that artificial intelligence will ultimately transcend the limits of progress. Ray Kurzweil has used Moore's law (which describes the relentless exponential improvement in digital technology) to calculate that desktop computers will have the same processing power as human brains by the year 2029. He also predicts that by 2045 artificial intelligence will reach a point where it is able to improve itself at a rate that far exceeds anything conceivable in the past, a scenario that science fiction writer Vernor Vinge named the "singularity".[182]
Robot designer Hans Moravec, cyberneticist Kevin Warwick and inventor Ray Kurzweil have predicted that humans and machines will merge in the future into cyborgs that are more capable and powerful than either.[183] This idea, called transhumanism, which has roots in Aldous Huxley and Robert Ettinger, has been illustrated in fiction as well, for example in the manga Ghost in the Shell and the science-fiction series Dune. In the 1980s, artist Hajime Sorayama's Sexy Robots series was painted and published in Japan, depicting the actual organic human form with lifelike muscular metallic skins; his later book The Gynoids was used by or influenced movie makers, including George Lucas, and other creatives. Sorayama never considered these organic robots to be a real part of nature but always an unnatural product of the human mind, a fantasy existing in the mind even when realized in actual form. Almost 20 years later, the first AI robotic pet, AIBO, became available as a companion to people. AIBO grew out of Sony's Computer Science Laboratory (CSL). The famed engineer Toshitada Doi is credited as AIBO's original progenitor: in 1994 he began work on robots with the artificial intelligence expert Masahiro Fujita within CSL. Doi's friend, the artist Hajime Sorayama, was enlisted to create the initial designs for AIBO's body. Those designs are now part of the permanent collections of the Museum of Modern Art and the Smithsonian Institution, and later versions of AIBO have been used in studies at Carnegie Mellon University. In 2006, AIBO was inducted into Carnegie Mellon University's "Robot Hall of Fame".
Political scientist Charles T. Rubin believes that AI can be neither designed nor guaranteed to be friendly.[184] He argues that "any sufficiently advanced benevolence may be indistinguishable from malevolence." Humans should not assume machines or robots would treat us favorably, because there is no a priori reason to believe that they would be sympathetic to our system of morality, which has evolved along with our particular biology (which AIs would not share).
Edward Fredkin argues that "artificial intelligence is the next stage in evolution", an idea first proposed by Samuel Butler's "Darwin among the Machines" (1863), and expanded upon by George Dyson in his book of the same name in 1998.[185]

See also

References

Notes

  1. Definition of AI as the study of intelligent agents:
  2. The intelligent agent paradigm: The definition used in this article, in terms of goals, actions, perception and environment, is due to Russell & Norvig (2003). Other definitions also include knowledge and learning as additional criteria.
  3. Although there is some controversy on this point (see Crevier (1993, p. 50)), McCarthy states unequivocally "I came up with the term" in a c|net interview. (Skillings 2006) McCarthy first used the term in the proposal for the Dartmouth conference, which appeared in 1955. (McCarthy et al. 1955)
  4. McCarthy's definition of AI:
  5. Pamela McCorduck (2004, pp. 424) writes of "the rough shattering of AI in subfields—vision, natural language, decision theory, genetic algorithms, robotics ... and these with their own sub-subfield—that would hardly have anything to say to each other."
  6. This list of intelligent traits is based on the topics covered by the major AI textbooks, including:
  7. General intelligence (strong AI) is discussed in popular introductions to AI:
  8. See the Dartmouth proposal, under Philosophy, below.
  9. This is a central idea of Pamela McCorduck's Machines Who Think. She writes: "I like to think of artificial intelligence as the scientific apotheosis of a venerable cultural tradition." (McCorduck 2004, p. 34) "Artificial intelligence in one form or another is an idea that has pervaded Western intellectual history, a dream in urgent need of being realized." (McCorduck 2004, p. xviii) "Our history is full of attempts—nutty, eerie, comical, earnest, legendary and real—to make artificial intelligences, to reproduce what is the essential us—bypassing the ordinary means. Back and forth between myth and reality, our imaginations supplying what our workshops couldn't, we have engaged for a long time in this odd form of self-reproduction." (McCorduck 2004, p. 3) She traces the desire back to its Hellenistic roots and calls it the urge to "forge the Gods." (McCorduck 2004, pp. 340–400)
  10. The optimism referred to includes the predictions of early AI researchers (see optimism in the history of AI) as well as the ideas of modern transhumanists such as Ray Kurzweil.
  11. The "setbacks" referred to include the ALPAC report of 1966, the abandonment of perceptrons in 1970, the Lighthill Report of 1973 and the collapse of the Lisp machine market in 1987.
  12. AI applications widely used behind the scenes:
  13. AI in myth:
  14. Cult images as artificial intelligence: These were the first machines to be believed to have true intelligence and consciousness. Hermes Trismegistus expressed the common belief that with these statues, craftsmen had reproduced "the true nature of the gods", their sensus and spiritus. McCorduck makes the connection between sacred automatons and Mosaic law (developed around the same time), which expressly forbids the worship of robots (McCorduck 2004, pp. 6–9)
  15. Humanoid automata:
    Yan Shi:
    Hero of Alexandria: Al-Jazari: Wolfgang von Kempelen:
  16. Artificial beings:
    Jābir ibn Hayyān's Takwin:
    Judah Loew's Golem: Paracelsus' Homunculus:
  17. AI in early science fiction.
  18. This insight, that digital computers can simulate any process of formal reasoning, is known as the Church–Turing thesis.
  19. Formal reasoning:
  20. AI's immediate precursors: See also Cybernetics and early neural networks (in History of artificial intelligence). Among the researchers who laid the foundations of AI were Alan Turing, John von Neumann, Norbert Wiener, Claude Shannon, Warren McCulloch, Walter Pitts and Donald Hebb.
  21. Dartmouth conference:
    • McCorduck 2004, pp. 111–136
    • Crevier 1993, pp. 47–49, who writes "the conference is generally recognized as the official birthdate of the new science."
    • Russell & Norvig 2003, p. 17, who call the conference "the birth of artificial intelligence."
    • NRC 1999, pp. 200–201
  22. Hegemony of the Dartmouth conference attendees:
  23. Russell and Norvig write "it was astonishing whenever a computer did anything kind of smartish." Russell & Norvig 2003, p. 18
  24. "Golden years" of AI (successful symbolic reasoning programs 1956–1973): The programs described are Daniel Bobrow's STUDENT, Newell and Simon's Logic Theorist and Terry Winograd's SHRDLU.
  25. DARPA pours money into undirected pure research into AI during the 1960s:
  26. AI in England:
  27. Optimism of early AI:
  28. See The problems (in History of artificial intelligence)
  29. First AI Winter, Mansfield Amendment, Lighthill report
  30. Expert systems:
  31. Boom of the 1980s: rise of expert systems, Fifth Generation Project, Alvey, MCC, SCI:
  32. Second AI winter:
  33. Formal methods are now preferred ("Victory of the neats"):
  34. McCorduck 2004, pp. 480–483
  35. DARPA Grand Challenge – home page
  36. "Welcome". Archive.darpa.mil. Retrieved 31 October 2011.
  37. Markoff, John (16 February 2011). "On 'Jeopardy!' Watson Win Is All but Trivial". The New York Times.
  38. Kinect's AI breakthrough explained
  39. Problem solving, puzzle solving, game playing and deduction:
  40. Uncertain reasoning:
  41. Intractability and efficiency and the combinatorial explosion:
  42. Psychological evidence of sub-symbolic reasoning:
  43. Knowledge representation:
  44. Knowledge engineering:
  45. Representing categories and relations: Semantic networks, description logics, inheritance (including frames and scripts):
  46. Representing events and time: Situation calculus, event calculus, fluent calculus (including solving the frame problem):
  47. Causal calculus:
  48. Representing knowledge about knowledge: Belief calculus, modal logics:
  49. Ontology:
  50. Qualification problem: While McCarthy was primarily concerned with issues in the logical representation of actions, Russell & Norvig 2003 apply the term to the more general issue of default reasoning in the vast network of assumptions underlying all our commonsense knowledge.
  51. Default reasoning and default logic, non-monotonic logics, circumscription, closed world assumption, abduction (Poole et al. place abduction under "default reasoning"; Luger et al. place this under "uncertain reasoning"):
  52. Breadth of commonsense knowledge:
  53. Dreyfus & Dreyfus 1986
  54. Gladwell 2005
  55. Expert knowledge as embodied intuition: Note, however, that recent work in cognitive science challenges the view that there is anything like sub-symbolic human information processing, i.e., human cognition is essentially symbolic regardless of the level and of the consciousness status of the processing:
    • Augusto, Luis M. (2013). "Unconscious representations 1: Belying the traditional model of human cognition". Axiomathes. doi:10.1007/s10516-012-9206-z.
    • Augusto, Luis M. (2013). "Unconscious representations 2: Towards an integrated cognitive architecture". Axiomathes. doi:10.1007/s10516-012-9207-y.
  56. Planning:
  57. Information value theory:
  58. Classical planning:
  59. Planning and acting in non-deterministic domains: conditional planning, execution monitoring, replanning and continuous planning:
  60. Multi-agent planning and emergent behavior:
  61. This is a form of Tom Mitchell's widely quoted definition of machine learning: "A computer program is said to learn from experience E with respect to some task T and some performance measure P if its performance on T, as measured by P, improves with experience E."
  62. Learning:
  63. Alan Turing discussed the centrality of learning as early as 1950, in his classic paper "Computing Machinery and Intelligence". (Turing 1950) In 1956, at the original Dartmouth AI summer conference, Ray Solomonoff wrote a report on unsupervised probabilistic machine learning: "An Inductive Inference Machine". (PDF scanned copy of the original) (Version published in 1957 as "An Inductive Inference Machine", IRE Convention Record, Section on Information Theory, Part 2, pp. 56–62.)
  64. Reinforcement learning:
  65. Computational learning theory:
  66. Weng, J., McClelland, Pentland, A., Sporns, O., Stockman, I., Sur, M., and E. Thelen (2001). "Autonomous mental development by robots and animals". Science, vol. 291, pp. 599–600.
  67. Lungarella, M., Metta, G., Pfeifer, R. and G. Sandini (2003). "Developmental robotics: a survey". Connection Science, 15:151–190.
  68. Asada, M., Hosoda, K., Kuniyoshi, Y., Ishiguro, H., Inui, T., Yoshikawa, Y., Ogino, M. and C. Yoshida (2009). "Cognitive developmental robotics: a survey". IEEE Transactions on Autonomous Mental Development, Vol. 1, No. 1, pp. 12–34.
  69. Oudeyer, P-Y. (2010). "On the impact of robotics in behavioral and cognitive sciences: from insect navigation to human cognitive development". IEEE Transactions on Autonomous Mental Development, 2(1), pp. 2–16.
  70. Jump up ^ Natural language processing:
  71. Jump up ^ Applications of natural language processing, including information retrieval (i.e. text mining) and machine translation:
  72. Jump up ^ Robotics:
  73. ^ Jump up to: a b Moving and configuration space:
  74. ^ Jump up to: a b Tecuci, G. (2012), Artificial intelligence. WIREs Comp Stat, 4: 168–180. doi: 10.1002/wics.200
  75. Jump up ^ Robotic mapping (localization, etc):
  76. Jump up ^ Machine perception:
  77. Jump up ^ Computer vision:
  78. Jump up ^ Speech recognition:
  79. Jump up ^ Object recognition:
  80. Jump up ^ "Kismet". MIT Artificial Intelligence Laboratory, Humanoid Robotics Group.
  81. Jump up ^ Thro, Ellen (1993). Robotics. New York.
  82. Jump up ^ Edelson, Edward (1991). The Nervous System. New York: Remmel Nunn.
  83. Jump up ^ Tao, Jianhua; Tieniu Tan (2005). "Affective Computing: A Review". Affective Computing and Intelligent Interaction. LNCS 3784. Springer. pp. 981–995. doi:10.1007/11573548.
  84. Jump up ^ James, William (1884). "What is Emotion". Mind 9: 188–205. doi:10.1093/mind/os-IX.34.188. Cited by Tao and Tan.
  85. Jump up ^ "Affective Computing" MIT Technical Report #321 (Abstract), 1995
  86. Jump up ^ Kleine-Cosack, Christian (October 2006). "Recognition and Simulation of Emotions" (PDF). Archived from the original on 28 May 2008. Retrieved 13 May 2008. "The introduction of emotion to computer science was done by Pickard (sic) who created the field of affective computing."
  87. Jump up ^ Diamond, David (December 2003). "The Love Machine; Building computers that care". Wired. Archived from the original on 18 May 2008. Retrieved 13 May 2008. "Rosalind Picard, a genial MIT professor, is the field's godmother; her 1997 book, Affective Computing, triggered an explosion of interest in the emotional side of computers and their users."
  88. Jump up ^ Emotion and affective computing:
  89. Jump up ^ Gerald Edelman, Igor Aleksander and others have both argued that artificial consciousness is required for strong AI. (Aleksander 1995; Edelman 2007)
  90. ^ Jump up to: a b Artificial brain arguments: AI requires a simulation of the operation of the human brain A few of the people who make some form of the argument: The most extreme form of this argument (the brain replacement scenario) was put forward by Clark Glymour in the mid-1970s and was touched on by Zenon Pylyshyn and John Searle in 1980.
  91. Jump up ^ AI complete: Shapiro 1992, p. 9
  92. Jump up ^ Nils Nilsson writes: "Simply put, there is wide disagreement in the field about what AI is all about" (Nilsson 1983, p. 10).
  93. ^ Jump up to: a b Biological intelligence vs. intelligence in general:
    • Russell & Norvig 2003, pp. 2–3, who make the analogy with aeronautical engineering.
    • McCorduck 2004, pp. 100–101, who writes that there are "two major branches of artificial intelligence: one aimed at producing intelligent behavior regardless of how it was accomplioshed, and the other aimed at modeling intelligent processes found in nature, particularly human ones."
    • Kolata 1982, a paper in Science, which describes McCathy's indifference to biological models. Kolata quotes McCarthy as writing: "This is AI, so we don't care if it's psychologically real"[1]. McCarthy recently reiterated his position at the AI@50 conference where he said "Artificial intelligence is not, by definition, simulation of human intelligence" (Maker 2006).
  94. ^ Jump up to: a b Neats vs. scruffies:
  95. ^ Jump up to: a b Symbolic vs. sub-symbolic AI:
  96. Jump up ^ Haugeland 1985, p. 255.
  97. Jump up ^ http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.38.8384&rep=rep1&type=pdf
  98. Jump up ^ Pei Wang (2008). Artificial general intelligence, 2008: proceedings of the First AGI Conference. IOS Press. p. 63. ISBN 978-1-58603-833-5. Retrieved 31 October 2011.
  99. Jump up ^ Haugeland 1985, pp. 112–117
  100. Jump up ^ The most dramatic case of sub-symbolic AI being pushed into the background was the devastating critique of perceptrons by Marvin Minsky and Seymour Papert in 1969. See History of AI, AI winter, or Frank Rosenblatt.
  101. Jump up ^ Cognitive simulation, Newell and Simon, AI at CMU (then called Carnegie Tech):
  102. Jump up ^ Soar (history):
  103. Jump up ^ McCarthy and AI research at SAIL and SRI International:
  104. Jump up ^ AI research at Edinburgh and in France, birth of Prolog:
  105. Jump up ^ AI at MIT under Marvin Minsky in the 1960s :
  106. Jump up ^ Cyc:
  107. Jump up ^ Knowledge revolution:
  108. Jump up ^ Embodied approaches to AI:
  109. Jump up ^ Revival of connectionism:
  110. Jump up ^ Computational intelligence
  111. Jump up ^ Pat Langley, "The changing science of machine learning", Machine Learning, Volume 82, Number 3, 275–279, doi:10.1007/s10994-011-5242-y
  112. Jump up ^ Yarden Katz, "Noam Chomsky on Where Artificial Intelligence Went Wrong", The Atlantic, November 1, 2012
  113. Jump up ^ Peter Norvig, "On Chomsky and the Two Cultures of Statistical Learning"
  114. Jump up ^ Agent architectures, hybrid intelligent systems:
  115. Jump up ^ Hierarchical control system:
  116. Jump up ^ Subsumption architecture:
  117. Jump up ^ Search algorithms:
  118. Jump up ^ Forward chaining, backward chaining, Horn clauses, and logical deduction as search:
  119. Jump up ^ State space search and planning:
  120. Jump up ^ Uninformed searches (breadth first search, depth first search and general state space search):
  121. Jump up ^ Heuristic or informed searches (e.g., greedy best first and A*):
  122. Jump up ^ Optimization searches:
  123. Jump up ^ Artificial life and society based learning:
  124. Jump up ^ Genetic programming and genetic algorithms:
  125. Jump up ^ Logic:
  126. Jump up ^ Satplan:
  127. Jump up ^ Explanation based learning, relevance based learning, inductive logic programming, case based reasoning:
  128. Jump up ^ Propositional logic:
  129. Jump up ^ First-order logic and features such as equality:
  130. Jump up ^ Fuzzy logic:
  131. Jump up ^ Subjective logic:
  132. Jump up ^ Stochastic methods for uncertain reasoning:
  133. Jump up ^ Bayesian networks:
  134. Jump up ^ Bayesian inference algorithm:
  135. Jump up ^ Bayesian learning and the expectation-maximization algorithm:
  136. Jump up ^ Bayesian decision theory and Bayesian decision networks:
  137. ^ Jump up to: a b c Stochastic temporal models: Dynamic Bayesian networks: Hidden Markov model: Kalman filters:
  138. Jump up ^ decision theory and decision analysis:
  139. Jump up ^ Markov decision processes and dynamic decision networks:
  140. Jump up ^ Game theory and mechanism design:
  141. Jump up ^ Statistical learning methods and classifiers:
  142. ^ Jump up to: a b Neural networks and connectionism:
  143. Jump up ^ kernel methods such as the support vector machine, Kernel methods:
  144. Jump up ^ K-nearest neighbor algorithm:
  145. Jump up ^ Gaussian mixture model:
  146. Jump up ^ Naive Bayes classifier:
  147. Jump up ^ Decision tree:
  148. Jump up ^ Classifier performance:
  149. Jump up ^ Backpropagation:
  150. Jump up ^ Feedforward neural networks, perceptrons and radial basis networks:
  151. Jump up ^ Recurrent neural networks, Hopfield nets:
  152. Jump up ^ Competitive learning, Hebbian coincidence learning, Hopfield networks and attractor networks:
  153. Jump up ^ Hierarchical temporal memory:
  154. Jump up ^ Control theory:
  155. Jump up ^ Lisp:
  156. Jump up ^ Prolog:
  157. ^ Jump up to: a b The Turing test:
    Turing's original publication:
    Historical influence and philosophical implications:
  158. Jump up ^ Subject matter expert Turing test:
  159. Jump up ^ Rajani, Sandeep (2011). "Artificial Intelligence - Man or Machine". International Journal of Information Technology and Knowlede Management 4 (1): 173–176. Retrieved 24 September 2012.
  160. Jump up ^ Game AI:
  161. Jump up ^ Mathematical definitions of intelligence:
  162. Jump up ^
  163. Jump up ^ "AI set to exceed human brain power" (web article). CNN. 26 July 2006. Archived from the original on 19 February 2008. Retrieved 26 February 2008.
  164. Jump up ^ Brooks, R.A., "How to build complete creatures rather than isolated cognitive simulators," in K. VanLehn (ed.), Architectures for Intelligence, pp. 225–239, Lawrence Erlbaum Associates, Hillsdale, NJ, 1991.
  165. Jump up ^ Hacking Roomba » Search Results » atmel
  166. Jump up ^ Philosophy of AI. All of these positions in this section are mentioned in standard discussions of the subject, such as:
  167. Jump up ^ Dartmouth proposal:
  168. Jump up ^ The physical symbol systems hypothesis:
  169. Jump up ^ Dreyfus criticized the necessary condition of the physical symbol system hypothesis, which he called the "psychological assumption": "The mind can be viewed as a device operating on bits of information according to formal rules". (Dreyfus 1992, p. 156)
  170. Jump up ^ Dreyfus' critique of artificial intelligence:
  171. Jump up ^ This is a paraphrase of the relevant implication of Gödel's theorems.
  172. Jump up ^ The Mathematical Objection: Making the Mathematical Objection: Refuting Mathematical Objection: Background:
    • Gödel 1931, Church 1936, Kleene 1935, Turing 1937
  173. Jump up ^ This version is from Searle (1999), and is also quoted in Dennett 1991, p. 435. Searle's original formulation was "The appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states." (Searle 1980, p. 1). Strong AI is defined similarly by Russell & Norvig (2003, p. 947): "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis."
  174. Jump up ^ Searle's Chinese room argument: Discussion:
  175. Jump up ^ Robot rights: Prematurity of: In fiction:
  176. Jump up ^ Independent documentary Plug & Pray, featuring Joseph Weizenbaum and Raymond Kurzweil
  177. Jump up ^ Ford, Martin R. (2009), The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future, Acculant Publishing, ISBN 978-1448659814. (e-book available free online.)
  178. Jump up ^ "Machine Learning: A Job Killer?"
  179. Jump up ^ AI could decrease the demand for human labor:
  180. Jump up ^ In the early 1970s, Kenneth Colby presented a version of Weizenbaum's ELIZA known as DOCTOR which he promoted as a serious therapeutic tool. (Crevier 1993, pp. 132–144)
  181. Jump up ^ Joseph Weizenbaum's critique of AI: Weizenbaum (the AI researcher who developed the first chatterbot program, ELIZA) argued in 1976 that the misuse of artificial intelligence has the potential to devalue human life.
  182. Jump up ^ Technological singularity:
  183. Jump up ^ Transhumanism:
  184. Jump up ^ Rubin, Charles (Spring 2003). "Artificial Intelligence and Human Nature". The New Atlantis 1: 88–100.
  185. Jump up ^ AI as evolution:



 end quote from:

artificial intelligence

 

Next, let us explore:

Human enhancement

From Wikipedia, the free encyclopedia
An electrically powered exoskeleton suit currently in development by Tsukuba University of Japan.
Human enhancement refers to any attempt to temporarily or permanently overcome the current limitations of the human body through natural or artificial means. The term is sometimes applied to the use of technological means to select or alter human characteristics and capacities, whether or not the alteration results in characteristics and capacities that lie beyond the existing human range. Here, the test is whether the technology is used for non-therapeutic purposes. Some bioethicists restrict the term to the non-therapeutic application of specific technologies (neuro-, cyber-, gene-, and nano-technologies) to human biology.[1][2]

Technologies

Human enhancement technologies (HET) are techniques that can be used not simply for treating illness and disability, but also for enhancing human characteristics and capacities.[3] In some circles, the expression "human enhancement technologies" is synonymous with emerging technologies or converging technologies.[4] In other circles, the expression "human enhancement" is roughly synonymous with human genetic engineering,[5][6] although it is used most often to refer to the general application of the convergence of nanotechnology, biotechnology, information technology and cognitive science (NBIC) to improve human performance.[4]

Speculative technologies

  • Mind uploading, the hypothetical process of "transferring"/"uploading" or copying a conscious mind from a brain to a non-biological substrate by scanning and mapping a biological brain in detail and copying its state into a computer system or another computational device.
  • Exocortex, a theoretical artificial external information processing system that would augment a brain's biological high-level cognitive processes.

Ethics

Since the 1990s, several academics (such as some of the fellows of the Institute for Ethics and Emerging Technologies[10]) have risen to become cogent advocates of the case for human enhancement while other academics (such as the members of President Bush's Council on Bioethics[11]) have become its most outspoken critics.[12]
Advocacy of the case for human enhancement is increasingly becoming synonymous with "transhumanism", a controversial ideology and movement which has emerged to support the recognition and protection of the right of citizens to either maintain or modify their own minds and bodies, so as to guarantee them freedom of choice and informed consent in using human enhancement technologies on themselves and their children.[13]
Neuromarketing consultant Zack Lynch argues that neurotechnologies will have a more immediate effect on society than gene therapy and will face less resistance as a pathway of radical human enhancement. He also argues that the concept of "enablement" needs to be added to the debate over "therapy" versus "enhancement".[14]
Although many proposals of human enhancement rely on fringe science, the very notion and prospect of human enhancement has sparked public controversy.[15][16][17]
Many critics argue that "human enhancement" is a loaded term which has eugenic overtones because it may imply the improvement of human hereditary traits to attain a universally accepted norm of biological fitness (at the possible expense of human biodiversity and neurodiversity), and therefore can evoke negative reactions far beyond the specific meaning of the term. Furthermore, they conclude that enhancements which are self-evidently good, like "fewer diseases", are more the exception than the norm and even these may involve ethical tradeoffs, as the controversy about ADHD arguably demonstrates.[18]
However, the most common criticism of human enhancement is that it is, or will often be, practiced with a reckless and selfish short-term perspective that is ignorant of the long-term consequences for individuals and the rest of society; for example, there is the fear that some enhancements will create unfair physical or mental advantages for those who can and will use them, or that unequal access to such enhancements can and will widen the gulf between the "haves" and "have-nots".[19][20][21]
Accordingly, some advocates who want to use more neutral language and advance the public interest in so-called "human enhancement technologies" prefer the term "enablement" over "enhancement";[22] they defend and promote rigorous, independent safety testing of enabling technologies, as well as affordable, universal access to these technologies.[12]

Effects on identity

Human enhancement technologies can impact human identity by affecting one's self-conception.[23] This can be problematic because enhancement technologies threaten to alter the self fundamentally, to the point where the result is a different and arguably inauthentic person. For example, extreme changes in personality may affect the individual's relationships because others can no longer relate to the new person.

References

  1. Hughes, James (2004). Human Enhancement on the Agenda. Retrieved 2007-02-02.
  2. Moore, P., "Enhancing Me: The Hope and the Hype of Human Enhancement", John Wiley, 2008.
  3. Enhancement Technologies Group (1998). Writings by group participants. Retrieved 2007-02-02.
  4. Roco, Mihail C. and Bainbridge, William Sims, eds. (2004). Converging Technologies for Improving Human Performance. Springer. ISBN 1-4020-1254-3.
  5. Agar, Nicholas (2004). Liberal Eugenics: In Defence of Human Enhancement. ISBN 1-4051-2390-7.
  6. Parens, Erik (2000). Enhancing Human Traits: Ethical and Social Implications. Georgetown University Press. ISBN 0-87840-780-4.
  7. "Dorlands Medical Dictionary". Archived from the original on 2008-01-30.
  8. Lanni C, Lenzken SC, Pascale A, et al. (March 2008). "Cognition enhancers between treating and doping the mind". Pharmacol. Res. 57 (3): 196–213. doi:10.1016/j.phrs.2008.02.004. PMID 18353672.
  9. "So you're a cyborg -- now what?". Retrieved 2013-03-22.
  10. Bailey, Ronald (2006). The Right to Human Enhancement: And also uplifting animals and the rapture of the nerds. Retrieved 2007-03-03.
  11. Members of the President's Council on Bioethics (2003). Beyond Therapy: Biotechnology and the Pursuit of Happiness. President's Council on Bioethics.
  12. Hughes, James (2004). Citizen Cyborg: Why Democratic Societies Must Respond to the Redesigned Human of the Future. Westview Press. ISBN 0-8133-4198-1.
  13. Ford, Alyssa (May/June 2005). "Humanity: The Remix". Utne Magazine. Retrieved 2007-03-03.
  14. R. U. Sirius (2005). "The NeuroAge: Zack Lynch In Conversation With R.U. Sirius". Life Enhancement Products.
  15. The Royal Society & The Royal Academy of Engineering (2004). Nanoscience and nanotechnologies (Ch. 6). Retrieved 2006-12-05.
  16. European Parliament (2006). Technology Assessment on Converging Technologies. Retrieved 2006-12-06.
  17. European Parliament (2009). Human Enhancement. Retrieved 2010-01-10.
  18. Carrico, Dale (2007). Modification, Consent, and Prosthetic Self-Determination. Retrieved 2007-04-03.
  19. Mooney, Pat Roy (2002). Beyond Cloning: Making Well People "Better". Retrieved 2007-02-02.
  20. Fukuyama, Francis (2002). Our Posthuman Future: Consequences of the Biotechnology Revolution. Farrar Straus & Giroux. ISBN 0-374-23643-7.
  21. Institute on Biotechnology and the Human Future. Human "Enhancement". Retrieved 2007-02-02.
  22. Good, Better, Best: The Human Quest for Enhancement. Summary report of an invitational workshop convened by the Scientific Freedom, Responsibility and Law Program, American Association for the Advancement of Science, June 1–2, 2006. Author: Enita A. Williams. Edited by: Mark S. Frankel.
  23. DeGrazia, David (2005). "Enhancement Technologies and Human Identity". Journal of Medicine and Philosophy 30: 261–283. Retrieved 12 May 2013.

end quote from:
 human biological enhancement

and finally let us explore:
brain-computer interfaces 

Brain–computer interface

From Wikipedia, the free encyclopedia
A brain–computer interface (BCI), often called a mind-machine interface (MMI), or sometimes called a direct neural interface or a brain–machine interface (BMI), is a direct communication pathway between the brain and an external device. BCIs are often directed at assisting, augmenting, or repairing human cognitive or sensory-motor functions.
Research on BCIs began in the 1970s at the University of California Los Angeles (UCLA) under a grant from the National Science Foundation, followed by a contract from DARPA.[1][2] The papers published after this research also mark the first appearance of the expression brain–computer interface in scientific literature.
The field of BCI research and development has since focused primarily on neuroprosthetics applications that aim at restoring damaged hearing, sight and movement. Thanks to the remarkable cortical plasticity of the brain, signals from implanted prostheses can, after adaptation, be handled by the brain like natural sensor or effector channels.[3] Following years of animal experimentation, the first neuroprosthetic devices implanted in humans appeared in the mid-1990s.

History

The history of brain–computer interfaces (BCIs) starts with Hans Berger's discovery of the electrical activity of the human brain and the development of electroencephalography (EEG). In 1924 Berger was the first to record human brain activity by means of EEG. By analyzing EEG traces, Berger was able to identify oscillatory activity in the brain, such as the alpha wave (8–13 Hz), also known as Berger's wave.
Berger's first recording device was very rudimentary. He inserted silver wires under the scalps of his patients. These were later replaced by silver foils attached to the patients' head by rubber bandages. Berger connected these sensors to a Lippmann capillary electrometer, with disappointing results. More sophisticated measuring devices, such as the Siemens double-coil recording galvanometer, which displayed electric voltages as small as one ten thousandth of a volt, led to success.
Berger analyzed the interrelation of alternations in his EEG wave diagrams with brain diseases. EEGs permitted completely new possibilities for the research of human brain activities.

BCI versus neuroprosthetics

Neuroprosthetics is an area of neuroscience concerned with neural prostheses, that is, with using artificial devices to replace the function of impaired parts of the nervous system or of sensory organs. The most widely used neuroprosthetic device is the cochlear implant which, as of December 2010, had been implanted in approximately 220,000 people worldwide.[4] There are also several neuroprosthetic devices that aim to restore vision, including retinal implants.
The difference between BCIs and neuroprosthetics is mostly in how the terms are used: neuroprosthetics typically connect the nervous system to a device, whereas BCIs usually connect the brain (or nervous system) with a computer system. Practical neuroprosthetics can be linked to any part of the nervous system—for example, peripheral nerves—while the term "BCI" usually designates a narrower class of systems which interface with the central nervous system.
The terms are sometimes, however, used interchangeably. Neuroprosthetics and BCIs seek to achieve the same aims, such as restoring sight, hearing, movement, ability to communicate, and even cognitive function. Both use similar experimental methods and surgical techniques.

Animal BCI research

Several laboratories have managed to record signals from monkey and rat cerebral cortices to operate BCIs to produce movement. Monkeys have navigated computer cursors on screen and commanded robotic arms to perform simple tasks simply by thinking about the task and seeing the visual feedback, but without any motor output.[5] In May 2008 photographs that showed a monkey at the University of Pittsburgh Medical Center operating a robotic arm by thinking were published in a number of well known science journals and magazines.[6] Other research on cats has decoded their neural visual signals.

Early work

Monkey operating a robotic arm with brain–computer interfacing (Schwartz lab, University of Pittsburgh)
In 1969 the operant conditioning studies of Fetz and colleagues, at the Regional Primate Research Center and Department of Physiology and Biophysics, University of Washington School of Medicine in Seattle, showed for the first time that monkeys could learn to control the deflection of a biofeedback meter arm with neural activity.[7] Similar work in the 1970s established that monkeys could quickly learn to voluntarily control the firing rates of individual and multiple neurons in the primary motor cortex if they were rewarded for generating appropriate patterns of neural activity.[8]
Studies that developed algorithms to reconstruct movements from motor cortex neurons, which control movement, date back to the 1970s. In the 1980s, Apostolos Georgopoulos at Johns Hopkins University found a mathematical relationship between the electrical responses of single motor cortex neurons in rhesus macaque monkeys and the direction in which they moved their arms (based on a cosine function). He also found that dispersed groups of neurons, in different areas of the monkeys' brains, collectively controlled motor commands. But he was able to record the firings of neurons in only one area at a time, because of the technical limitations imposed by his equipment.[9]
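For readers who want the underlying math, the "cosine tuning" relationship and the population-vector decoding it inspired are usually summarized with two short formulas. The notation below is the standard textbook form, not something quoted from the article:

    f_i(\theta) \;\approx\; b_i + m_i \cos\!\left(\theta - \theta_i^{\mathrm{pref}}\right),
    \qquad
    \hat{\mathbf{P}} \;\propto\; \sum_i \left(f_i - \bar{f}_i\right)\mathbf{c}_i

Here f_i is the firing rate of neuron i, \theta the movement direction, \theta_i^{\mathrm{pref}} that neuron's preferred direction, \bar{f}_i its mean rate, and \mathbf{c}_i a unit vector along the preferred direction; the decoded "population vector" \hat{\mathbf{P}} points roughly in the direction of the intended movement.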
There has been rapid development in BCIs since the mid-1990s.[10] Several groups have been able to capture complex brain motor cortex signals by recording from neural ensembles (groups of neurons) and using these to control external devices. Notable research groups have been led by Richard Andersen, John Donoghue, Phillip Kennedy, Miguel Nicolelis and Andrew Schwartz.[citation needed]

Prominent research successes

Kennedy and Yang Dan

Phillip Kennedy (who later founded Neural Signals in 1987) and colleagues built the first intracortical brain–computer interface by implanting neurotrophic-cone electrodes into monkeys.[citation needed]
Yang Dan and colleagues' recordings of cat vision using a BCI implanted in the lateral geniculate nucleus (top row: original image; bottom row: recording)
In 1999, researchers led by Yang Dan at the University of California, Berkeley decoded neuronal firings to reproduce images seen by cats. The team used an array of electrodes embedded in the thalamus (which integrates all of the brain’s sensory input) of sharp-eyed cats. Researchers targeted 177 brain cells in the thalamus lateral geniculate nucleus area, which decodes signals from the retina. The cats were shown eight short movies, and their neuron firings were recorded. Using mathematical filters, the researchers decoded the signals to generate movies of what the cats saw and were able to reconstruct recognizable scenes and moving objects.[11] Similar results in humans have since been achieved by researchers in Japan (see below).
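As a rough illustration of what "mathematical filters" can mean in this setting, the sketch below fits a regularized linear decoder that maps simulated neural responses back to stimulus pixels. The array shapes, noise level and variable names are my own assumptions for a toy example, not details of the actual study (Python):

    import numpy as np

    # Toy illustration of linear stimulus reconstruction ("reverse filtering").
    # Shapes, the noise level and the simulated data are invented for this sketch;
    # the 1999 study fit optimal linear decoders to recordings from 177 LGN cells.
    rng = np.random.default_rng(0)
    T, n_cells, n_pixels = 2000, 177, 64                   # time bins, neurons, 8x8 toy frames
    stimulus = rng.standard_normal((T, n_pixels))          # training movie, flattened frames
    mixing = rng.standard_normal((n_pixels, n_cells))      # stand-in for the cells' receptive fields
    responses = stimulus @ mixing + 0.5 * rng.standard_normal((T, n_cells))

    # Fit decoding weights W so that responses @ W approximates the stimulus
    # (ridge-regularized least squares keeps the toy problem well conditioned).
    lam = 1.0
    W = np.linalg.solve(responses.T @ responses + lam * np.eye(n_cells),
                        responses.T @ stimulus)

    reconstruction = responses @ W                         # decoded frames
    print(np.corrcoef(reconstruction.ravel(), stimulus.ravel())[0, 1])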

Nicolelis

Miguel Nicolelis, a professor at Duke University, in Durham, North Carolina, has been a prominent proponent of using multiple electrodes spread over a greater area of the brain to obtain neuronal signals to drive a BCI. Such neural ensembles are said to reduce the variability in output produced by single electrodes, which could make it difficult to operate a BCI.
After conducting initial studies in rats during the 1990s, Nicolelis and his colleagues developed BCIs that decoded brain activity in owl monkeys and used the devices to reproduce monkey movements in robotic arms. Monkeys have advanced reaching and grasping abilities and good hand manipulation skills, making them ideal test subjects for this kind of work.
By 2000 the group succeeded in building a BCI that reproduced owl monkey movements while the monkey operated a joystick or reached for food.[12] The BCI operated in real time and could also control a separate robot remotely over Internet protocol. But the monkeys could not see the arm moving and did not receive any feedback, a so-called open-loop BCI.
Diagram of the BCI developed by Miguel Nicolelis and colleagues for use on Rhesus monkeys
Later experiments by Nicolelis using rhesus monkeys succeeded in closing the feedback loop and reproduced monkey reaching and grasping movements in a robot arm. With their deeply cleft and furrowed brains, rhesus monkeys are considered to be better models for human neurophysiology than owl monkeys. The monkeys were trained to reach and grasp objects on a computer screen by manipulating a joystick while corresponding movements by a robot arm were hidden.[13][14] The monkeys were later shown the robot directly and learned to control it by viewing its movements. The BCI used velocity predictions to control reaching movements and simultaneously predicted handgripping force.

Donoghue, Schwartz and Andersen

Other laboratories which have developed BCIs and algorithms that decode neuron signals include those run by John Donoghue at Brown University, Andrew Schwartz at the University of Pittsburgh and Richard Andersen at Caltech. These researchers have been able to produce working BCIs, even using recorded signals from far fewer neurons than did Nicolelis (15–30 neurons versus 50–200 neurons).
Donoghue's group reported training rhesus monkeys to use a BCI to track visual targets on a computer screen (closed-loop BCI) with or without assistance of a joystick.[15] Schwartz's group created a BCI for three-dimensional tracking in virtual reality and also reproduced BCI control in a robotic arm.[16] The same group also created headlines when they demonstrated that a monkey could feed itself pieces of fruit and marshmallows using a robotic arm controlled by the animal's own brain signals.[17][18][19]
Andersen's group used recordings of premovement activity from the posterior parietal cortex in their BCI, including signals created when experimental animals anticipated receiving a reward.[20]

Other research

In addition to predicting kinematic and kinetic parameters of limb movements, BCIs that predict electromyographic or electrical activity of the muscles of primates are being developed.[21] Such BCIs could be used to restore mobility in paralyzed limbs by electrically stimulating muscles.
Miguel Nicolelis and colleagues demonstrated that the activity of large neural ensembles can predict arm position. This work made possible creation of BCIs that read arm movement intentions and translate them into movements of artificial actuators. Carmena and colleagues[13] programmed the neural coding in a BCI that allowed a monkey to control reaching and grasping movements by a robotic arm. Lebedev and colleagues[14] argued that brain networks reorganize to create a new representation of the robotic appendage in addition to the representation of the animal's own limbs.
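To make the "read arm movement intentions and translate them" step concrete, here is a minimal sketch of a linear decoder trained on binned firing rates. The simulated data, the bin size and the plain least-squares fit are illustrative assumptions, not the actual Nicolelis/Carmena pipeline (Python):

    import numpy as np

    # Minimal sketch of ensemble decoding: predict hand position from binned firing
    # rates with a linear decoder. The simulated data, 100 ms bins and plain
    # least-squares fit are assumptions, not the published decoding algorithms.
    rng = np.random.default_rng(1)
    n_bins, n_neurons = 5000, 100
    rates = rng.poisson(5.0, size=(n_bins, n_neurons)).astype(float)   # spike counts per bin
    hidden_map = rng.standard_normal((n_neurons, 2))                   # unknown rate -> (x, y) mapping
    hand_xy = rates @ hidden_map + rng.standard_normal((n_bins, 2))    # simulated hand trajectory

    # Fit on the first 4000 bins, test on the rest.
    train, test = slice(0, 4000), slice(4000, None)
    W, *_ = np.linalg.lstsq(rates[train], hand_xy[train], rcond=None)
    predicted = rates[test] @ W
    rmse = np.sqrt(np.mean((predicted - hand_xy[test]) ** 2))
    print(f"test RMSE: {rmse:.3f}")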
The biggest impediment to BCI technology at present is the lack of a sensor modality that provides safe, accurate and robust access to brain signals. It is conceivable or even likely, however, that such a sensor will be developed within the next twenty years. The use of such a sensor should greatly expand the range of communication functions that can be provided using a BCI.
Development and implementation of a BCI system is complex and time consuming. In response to this problem, Dr. Gerwin Schalk has been developing a general-purpose system for BCI research, called BCI2000. BCI2000 has been in development since 2000 in a project led by the Brain–Computer Interface R&D Program at the Wadsworth Center of the New York State Department of Health in Albany, New York, USA.
A new 'wireless' approach uses light-gated ion channels such as Channelrhodopsin to control the activity of genetically defined subsets of neurons in vivo. In the context of a simple learning task, illumination of transfected cells in the somatosensory cortex influenced the decision making process of freely moving mice.[22]

The BCI Award

The Annual BCI Award, endowed with 3,000 USD, is awarded in recognition of outstanding and innovative research in the field of Brain-Computer Interfaces. Each year, a renowned research laboratory is asked to judge the submitted projects and to award the prize. The jury consists of world-leading BCI experts recruited by the awarding laboratory. The following list gives the winners of the BCI Award:
  • 2010: Cuntai Guan, Kai Keng Ang, Karen Sui Geok Chua and Beng Ti Ang, (A*STAR, Singapore)
Motor imagery-based Brain-Computer Interface robotic rehabilitation for stroke.
What are the neuro-physiological causes of performance variations in brain-computer interfacing?
  • 2012: Surjo R. Soekadar and Niels Birbaumer, (Applied Neurotechnology Lab, University Hospital Tübingen and Institute of Medical Psychology and Behavioral Neurobiology, Eberhard Karls University, Tübingen, Germany)
Improving Efficacy of Ipsilesional Brain-Computer Interface Training in Neurorehabilitation of Chronic Stroke.
  • 2013: M. C. Dadarlata,b, J. E. O’Dohertya, P. N. Sabesa,b (aDepartment of Physiology, Center for Integrative Neuroscience, San Francisco, CA, US, bUC Berkeley-UCSF Bioengineering Graduate Program, University of California, San Francisco, CA, US),
A learning-based approach to artificial sensory feedback: intracortical microstimulation replaces and augments vision.

Human BCI research

Invasive BCIs

Vision

Jens Naumann, a man with acquired blindness, being interviewed about his vision BCI on CBS's The Early Show
Invasive BCI research has targeted repairing damaged sight and providing new functionality for people with paralysis. Invasive BCIs are implanted directly into the grey matter of the brain during neurosurgery. Because they lie in the grey matter, invasive devices produce the highest quality signals of BCI devices but are prone to scar-tissue build-up, causing the signal to become weaker, or even non-existent, as the body reacts to a foreign object in the brain.
In vision science, direct brain implants have been used to treat non-congenital (acquired) blindness. One of the first scientists to produce a working brain interface to restore sight was private researcher William Dobelle.
Dobelle's first prototype was implanted into "Jerry", a man blinded in adulthood, in 1978. A single-array BCI containing 68 electrodes was implanted onto Jerry’s visual cortex and succeeded in producing phosphenes, the sensation of seeing light. The system included cameras mounted on glasses to send signals to the implant. Initially, the implant allowed Jerry to see shades of grey in a limited field of vision at a low frame-rate. This also required him to be hooked up to a mainframe computer, but shrinking electronics and faster computers made his artificial eye more portable and now enable him to perform simple tasks unassisted.[23]
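A toy sketch of the kind of camera-to-electrode mapping such a visual prosthesis implies is shown below: downsample one greyscale camera frame to a coarse grid and decide which electrodes to drive. The grid size, brightness threshold and frame dimensions are invented for illustration and are not Dobelle's actual design (Python):

    import numpy as np

    # Toy sketch of a camera-to-electrode mapping for a phosphene-based visual
    # prosthesis. The grid size, threshold and frame dimensions are assumptions.
    rng = np.random.default_rng(2)
    frame = rng.integers(0, 256, size=(120, 160))          # one simulated greyscale camera frame

    grid_rows, grid_cols = 8, 8                            # coarse phosphene grid (toy value)
    block_r = frame.shape[0] // grid_rows
    block_c = frame.shape[1] // grid_cols

    # Average brightness per block, then decide which electrodes to drive.
    blocks = frame[:grid_rows * block_r, :grid_cols * block_c].reshape(
        grid_rows, block_r, grid_cols, block_c).mean(axis=(1, 3))
    stimulate = blocks > 128                               # crude brightness threshold
    print(stimulate.astype(int))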
Dummy unit illustrating the design of a BrainGate interface
In 2002, Jens Naumann, also blinded in adulthood, became the first in a series of 16 paying patients to receive Dobelle’s second generation implant, marking one of the earliest commercial uses of BCIs. The second generation device used a more sophisticated implant enabling better mapping of phosphenes into coherent vision. Phosphenes are spread out across the visual field in what researchers call "the starry-night effect". Immediately after his implant, Jens was able to use his imperfectly restored vision to drive an automobile slowly around the parking area of the research institute.[24]

Movement

BCIs focusing on motor neuroprosthetics aim to either restore movement in individuals with paralysis or provide devices to assist them, such as interfaces with computers or robot arms.
Researchers at Emory University in Atlanta, led by Philip Kennedy and Roy Bakay, were first to install a brain implant in a human that produced signals of high enough quality to simulate movement. Their patient, Johnny Ray (1944–2002), developed "locked-in syndrome" after suffering a brain-stem stroke in 1997. Ray's implant was installed in 1998 and he lived long enough to start working with it, eventually learning to control a computer cursor; he died in 2002 of a brain aneurysm.[25]
Tetraplegic Matt Nagle became the first person to control an artificial hand using a BCI in 2005 as part of the first nine-month human trial of Cyberkinetics's BrainGate chip-implant. Implanted in Nagle's right precentral gyrus (the area of the motor cortex for arm movement), the 96-electrode BrainGate implant allowed Nagle to control a robotic arm by thinking about moving his hand, as well as a computer cursor, lights and a TV.[26] One year later, Professor Jonathan Wolpaw received the prize of the Altran Foundation for Innovation to develop a brain-computer interface with electrodes located on the surface of the skull, instead of directly in the brain.
More recently, research teams led by the Braingate group at Brown University[27] and a group led by University of Pittsburgh Medical Center,[28] both in collaborations with the United States Department of Veterans Affairs, have demonstrated further success in direct control of robotic prosthetic limbs with many degrees of freedom using direct connections to arrays of neurons in the motor cortex of patients with tetraplegia.

Partially invasive BCIs

Partially invasive BCI devices are implanted inside the skull but rest outside the brain rather than within the grey matter. They produce better resolution signals than non-invasive BCIs where the bone tissue of the cranium deflects and deforms signals and have a lower risk of forming scar-tissue in the brain than fully invasive BCIs.
Electrocorticography (ECoG) measures the electrical activity of the brain taken from beneath the skull in a similar way to non-invasive electroencephalography (see below), but the electrodes are embedded in a thin plastic pad that is placed above the cortex, beneath the dura mater.[29] ECoG technologies were first trialed in humans in 2004 by Eric Leuthardt and Daniel Moran from Washington University in St Louis. In a later trial, the researchers enabled a teenage boy to play Space Invaders using his ECoG implant.[30] This research indicates that control is rapid, requires minimal training, and may be an ideal tradeoff with regards to signal fidelity and level of invasiveness.
(Note: these electrodes had not been implanted in the patient with the intention of developing a BCI. The patient had been suffering from severe epilepsy and the electrodes were temporarily implanted to help his physicians localize seizure foci; the BCI researchers simply took advantage of this.)[citation needed]
Signals can be either subdural or epidural, but are not taken from within the brain parenchyma itself. Until recently, ECoG had not been studied extensively because of limited access to subjects. Currently, the only way to acquire the signal for study is through patients who require invasive monitoring for localization and resection of an epileptogenic focus.
ECoG is a very promising intermediate BCI modality because it has higher spatial resolution, a better signal-to-noise ratio, a wider frequency range, and lower training requirements than scalp-recorded EEG, and at the same time has lower technical difficulty, lower clinical risk, and probably greater long-term stability than intracortical single-neuron recording. This feature profile, together with recent evidence of high levels of control with minimal training, shows potential for real-world application for people with motor disabilities.[31][32]
Light Reactive Imaging BCI devices are still in the realm of theory. These would involve implanting a laser inside the skull. The laser would be trained on a single neuron and the neuron's reflectance measured by a separate sensor. When the neuron fires, the laser light pattern and wavelengths it reflects would change slightly. This would allow researchers to monitor single neurons but require less contact with tissue and reduce the risk of scar-tissue build-up.[citation needed]

Non-invasive BCIs

In addition to invasive experiments, there have been experiments in humans using non-invasive neuroimaging technologies as interfaces. Signals recorded in this way have been used to power muscle implants and restore partial movement in an experimental volunteer. Although they are easy to wear, non-invasive devices produce poor signal resolution because the skull dampens signals, dispersing and blurring the electromagnetic waves created by the neurons. Although the waves can still be detected, it is more difficult to determine the area of the brain that created them or the actions of individual neurons.

EEG

Overview
Recordings of brainwaves produced by an electroencephalogram
Electroencephalography (EEG) is the most studied potential non-invasive interface, mainly due to its fine temporal resolution, ease of use, portability and low set-up cost. But as well as the technology's susceptibility to noise, another substantial barrier to using EEG as a brain–computer interface is the extensive training required before users can work the technology. For example, in experiments beginning in the mid-1990s, Niels Birbaumer at the University of Tübingen in Germany trained severely paralysed people to self-regulate the slow cortical potentials in their EEG to such an extent that these signals could be used as a binary signal to control a computer cursor.[33] (Birbaumer had earlier trained epileptics to prevent impending fits by controlling this low voltage wave.) The experiment saw ten patients trained to move a computer cursor by controlling their brainwaves. The process was slow, requiring more than an hour for patients to write 100 characters with the cursor, while training often took many months.
Another research parameter is the type of oscillatory activity that is measured. Birbaumer's later research with Jonathan Wolpaw at New York State University has focused on developing technology that would allow users to choose the brain signals they find easiest to use to operate a BCI, including mu and beta rhythms.
A further parameter is the method of feedback used and this is shown in studies of P300 signals. Patterns of P300 waves are generated involuntarily (stimulus-feedback) when people see something they recognize and may allow BCIs to decode categories of thoughts without training patients first. By contrast, the biofeedback methods described above require learning to control brainwaves so the resulting brain activity can be detected.
Lawrence Farwell and Emanuel Donchin developed an EEG-based brain–computer interface in the 1980s.[34] Their "mental prosthesis" used the P300 brainwave response to allow subjects, including one paralyzed Locked-In syndrome patient, to communicate words, letters and simple commands to a computer and thereby to speak through a speech synthesizer driven by the computer. A number of similar devices have been developed since then. In 2000, for example, research by Jessica Bayliss at the University of Rochester showed that volunteers wearing virtual reality helmets could control elements in a virtual world using their P300 EEG readings, including turning lights on and off and bringing a mock-up car to a stop.[35]
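The row/column P300 speller idea can be sketched in a few lines: average the post-flash epochs for each row and column of the letter matrix and pick the pair with the largest response in the P300 window. The synthetic data, window indices and matrix size below are assumptions for illustration, not the original Farwell-Donchin implementation (Python):

    import numpy as np

    # Toy sketch of row/column P300 detection on simulated epochs.
    rng = np.random.default_rng(3)
    n_rows = n_cols = 6                     # 6x6 letter matrix
    n_repeats, n_samples = 15, 200          # flashes per row/column, samples per epoch
    target_row, target_col = 2, 4           # the letter the simulated user attends to

    def epochs(is_target):
        """Simulated post-flash EEG epochs; target flashes get a P300-like bump."""
        e = rng.standard_normal((n_repeats, n_samples))
        if is_target:
            e[:, 60:90] += 1.0              # bump roughly 300 ms after the flash
        return e

    # Score each row and column by the mean amplitude in the P300 window.
    row_scores = [epochs(r == target_row)[:, 60:90].mean() for r in range(n_rows)]
    col_scores = [epochs(c == target_col)[:, 60:90].mean() for c in range(n_cols)]
    print("decoded cell:", int(np.argmax(row_scores)), int(np.argmax(col_scores)))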
While EEG-based brain-computer interfaces have been pursued extensively by a number of research labs, recent advances made by Bin He and his team at the University of Minnesota suggest that an EEG-based brain-computer interface can accomplish tasks close to those of an invasive brain-computer interface. Using advanced functional neuroimaging, including BOLD functional MRI and EEG source imaging, Bin He and co-workers identified the co-variation and co-localization of electrophysiological and hemodynamic signals induced by motor imagination.[36] Refined by a neuroimaging approach and a training protocol, Bin He and co-workers demonstrated the ability of a non-invasive EEG-based brain-computer interface to control the flight of a virtual helicopter in three-dimensional space, based upon motor imagination.[37] In June 2013 it was announced that Bin He had developed the technique to enable a remote-control helicopter to be guided through an obstacle course.[38]
In addition to brain-computer interfaces based on brain waves recorded from scalp EEG electrodes, Bin He and co-workers explored a virtual EEG signal-based brain-computer interface by first solving the EEG inverse problem and then using the resulting virtual EEG for brain-computer interface tasks. Well-controlled studies suggested the merits of such a source-analysis-based brain-computer interface.[39]
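For reference, a generic way to write the linear EEG inverse problem and a regularized minimum-norm estimate of the sources is shown below; this is a textbook formulation, not necessarily the specific source-imaging method used by Bin He's group:

    \mathbf{y}(t) = \mathbf{L}\,\mathbf{s}(t) + \mathbf{n}(t),
    \qquad
    \hat{\mathbf{s}}(t) = \mathbf{L}^{\top}\!\left(\mathbf{L}\mathbf{L}^{\top} + \lambda\mathbf{I}\right)^{-1}\mathbf{y}(t)

Here \mathbf{y}(t) are the scalp potentials, \mathbf{L} is the lead-field matrix from the head model, \mathbf{s}(t) are the estimated source amplitudes (the "virtual EEG"), \mathbf{n}(t) is measurement noise, and \lambda is a regularization parameter.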
Dry active electrode arrays
In the early 1990s Babak Taheri, at the University of California, Davis, demonstrated the first single-channel and multichannel dry active electrode arrays using micro-machining. The single-channel dry EEG electrode construction and results were published in 1994.[40] The arrayed electrode was also demonstrated to perform well compared to silver/silver chloride electrodes. The device consisted of four sites of sensors with integrated electronics to reduce noise by impedance matching. The advantages of such electrodes are: (1) no electrolyte used, (2) no skin preparation, (3) significantly reduced sensor size, and (4) compatibility with EEG monitoring systems. The active electrode array is an integrated system made of an array of capacitive sensors with local integrated circuitry housed in a package with batteries to power the circuitry. This level of integration was required to achieve the functional performance obtained by the electrode.
The electrode was tested on an electrical test bench and on human subjects in four modalities of EEG activity, namely: (1) spontaneous EEG, (2) sensory event-related potentials, (3) brain stem potentials, and (4) cognitive event-related potentials. The performance of the dry electrode compared favorably with that of the standard wet electrodes in terms of skin preparation, no gel requirements (dry), and higher signal-to-noise ratio.[41]
In 1999 researchers at Case Western Reserve University, in Cleveland, Ohio, led by Hunter Peckham, used a 64-electrode EEG skullcap to return limited hand movements to quadriplegic Jim Jatich. As Jatich concentrated on simple but opposite concepts like up and down, his beta-rhythm EEG output was analysed using software to identify patterns in the noise. A basic pattern was identified and used to control a switch: above-average activity was set to on, below-average to off. As well as enabling Jatich to control a computer cursor, the signals were also used to drive the nerve controllers embedded in his hands, restoring some movement.[42]
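The on/off rule described above (above-average band activity switches the output on, below-average switches it off) can be sketched as a simple band-power threshold. The sampling rate, window length and beta-band limits below are assumptions, and the data are simulated noise rather than real EEG (Python):

    import numpy as np
    from scipy.signal import welch

    # Sketch of a band-power threshold switch; all parameters are assumed values.
    fs = 256                                              # assumed sampling rate in Hz
    rng = np.random.default_rng(4)

    def beta_power(window):
        """Mean power spectral density in the beta band (13-30 Hz) for one window."""
        freqs, psd = welch(window, fs=fs, nperseg=fs)
        band = (freqs >= 13) & (freqs <= 30)
        return psd[band].mean()

    baseline_windows = [rng.standard_normal(fs) for _ in range(20)]    # "rest" data
    threshold = np.mean([beta_power(w) for w in baseline_windows])

    new_window = rng.standard_normal(fs)                  # incoming second of EEG
    switch_on = beta_power(new_window) > threshold        # above average -> "on"
    print("switch:", "on" if switch_on else "off")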
Other research
Electronic neural networks have been deployed which shift the learning phase from the user to the computer. Experiments by scientists at the Fraunhofer Society in 2004 using neural networks led to noticeable improvements within 30 minutes of training.[43]
Experiments by Eduardo Miranda, at the University of Plymouth in the UK, have aimed to use EEG recordings of mental activity associated with music to allow disabled people to express themselves musically through an encephalophone.[44] Ramaswamy Palaniappan has pioneered the development of BCI for use in biometrics to identify or authenticate a person.[45] The method has also been suggested for use as a PIN-generation device (for example in ATM and internet banking transactions).[46] The group, which is now at the University of Wolverhampton, has previously developed analogue cursor control using thoughts.[47]
Researchers at the University of Twente in the Netherlands have been conducting research on using BCIs for non-disabled individuals, proposing that BCIs could improve error handling, task performance, and user experience and that they could broaden the user spectrum.[48] They particularly focused on BCI games,[49] suggesting that BCI games could provide challenge, fantasy and sociality to game players and could, thus, improve player experience.[50]
The Emotiv company has been selling a commercial video game controller, known as The Epoc, since December 2009. The Epoc uses electromagnetic sensors.[51][52]
The first BCI session with 100% accuracy (based on 80 right hand and 80 left hand movement imaginations) was recorded in 1998 by Christoph Guger. The BCI system used 27 electrodes overlaying the sensorimotor cortex, weighted the electrodes with Common Spatial Patterns, calculated the running variance and used a linear discriminant analysis.[53]
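The pipeline named here (Common Spatial Patterns, variance of the filtered signals, linear discriminant analysis) can be sketched generically as follows. This is a textbook-style implementation on simulated trials, with per-trial rather than running variance, and is not Guger's original code (Python):

    import numpy as np
    from scipy.linalg import eigh

    # CSP -> log-variance features -> LDA, on simulated two-class motor-imagery trials.
    rng = np.random.default_rng(5)
    n_trials, n_channels, n_samples = 80, 27, 500

    def simulate(gain):
        """Fake motor-imagery trials: channel 0 is stronger for one class."""
        X = rng.standard_normal((n_trials, n_channels, n_samples))
        X[:, 0, :] *= gain
        return X

    X_left, X_right = simulate(2.0), simulate(0.5)

    def mean_cov(X):
        return np.mean([np.cov(trial) for trial in X], axis=0)

    # CSP filters from the generalized eigenvalue problem C_left w = lambda (C_left + C_right) w.
    C_l, C_r = mean_cov(X_left), mean_cov(X_right)
    _, W = eigh(C_l, C_l + C_r)
    csp = np.concatenate([W[:, :2], W[:, -2:]], axis=1).T   # two filters from each end

    def features(X):
        filtered = np.einsum('fc,tcs->tfs', csp, X)
        var = filtered.var(axis=2)
        return np.log(var / var.sum(axis=1, keepdims=True))  # normalized log-variance

    F = np.vstack([features(X_left), features(X_right)])
    y = np.array([0] * n_trials + [1] * n_trials)

    # Plain LDA: project onto the direction separating the two class means.
    m0, m1 = F[y == 0].mean(axis=0), F[y == 1].mean(axis=0)
    Sw = np.cov(F[y == 0].T) + np.cov(F[y == 1].T)
    w = np.linalg.solve(Sw, m1 - m0)
    pred = (F @ w > w @ (m0 + m1) / 2).astype(int)
    print("training accuracy:", (pred == y).mean())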
Research is ongoing into military use of BCIs and since the 1970s DARPA has been funding research on this topic.[1][2] The current focus of research is user-to-user communication through analysis of neural signals.[54] The project "Silent Talk" aims to detect and analyze the word-specific neural signals, using EEG, which occur before speech is vocalized, and to see if the patterns are generalizable.[55]

MEG and MRI

ATR Labs' reconstruction of human vision using fMRI (top row: original image; bottom row: reconstruction from mean of combined readings)
Magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) have both been used successfully as non-invasive BCIs.[56] In a widely reported experiment, fMRI allowed two users being scanned to play Pong in real-time by altering their haemodynamic response or brain blood flow through biofeedback techniques.[57]
fMRI measurements of haemodynamic responses in real time have also been used to control robot arms with a seven second delay between thought and movement.[58]
In 2008 research developed in the Advanced Telecommunications Research (ATR) Computational Neuroscience Laboratories in Kyoto, Japan, allowed the scientists to reconstruct images directly from the brain and display them on a computer in black and white at a resolution of 10x10 pixels. The article announcing these achievements was the cover story of the journal Neuron of 10 December 2008.[59]
In 2011 researchers from UC Berkeley published[60] a study reporting second-by-second reconstruction of videos watched by the study's subjects, from fMRI data. This was achieved by creating a statistical model relating visual patterns in videos shown to the subjects, to the brain activity caused by watching the videos. This model was then used to look up the 100 one-second video segments, in a database of 18 million seconds of random YouTube videos, whose visual patterns most closely matched the brain activity recorded when subjects watched a new video. These 100 one-second video extracts were then combined into a mashed-up image that resembled the video being watched.[61][62][63]
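The lookup-and-average step of that reconstruction can be sketched as follows: rank candidate clips by how well their model-predicted brain activity matches the measured activity, then average the best 100. The linear "encoding model", array sizes and random data below are stand-ins for the fitted model and the 18-million-second clip database (Python):

    import numpy as np

    # Toy sketch of matching measured activity against predicted activity per clip.
    rng = np.random.default_rng(6)
    n_voxels, n_features, n_clips = 200, 50, 10_000

    encoding = rng.standard_normal((n_features, n_voxels))          # stand-in feature -> voxel model
    database_features = rng.standard_normal((n_clips, n_features))  # features of candidate clips
    predicted_activity = database_features @ encoding               # predicted fMRI per clip

    # "Measured" activity while the subject watches an unseen clip.
    true_features = rng.standard_normal(n_features)
    measured = true_features @ encoding + 0.1 * rng.standard_normal(n_voxels)

    def zscore(a):
        return (a - a.mean(axis=-1, keepdims=True)) / a.std(axis=-1, keepdims=True)

    scores = zscore(predicted_activity) @ zscore(measured) / n_voxels   # correlation per clip
    top100 = np.argsort(scores)[-100:]
    reconstruction = database_features[top100].mean(axis=0)             # averaged clip features
    print(np.corrcoef(reconstruction, true_features)[0, 1])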

Neurogaming

Currently, there is a new field of gaming called Neurogaming, which uses non-invasive BCI in order to improve gameplay so that users can interact with a console without the use of a traditional joystick.[64] Some Neurogaming software uses a player's brain waves, heart rate, expressions, pupil dilation, and even emotions to complete tasks or affect the mood of the game.[65] For example, game developers at Emotiv have created non-invasive BCI that will determine the mood of a player and adjust music or scenery accordingly. This new form of interaction between player and software will enable a player to have a more realistic gaming experience.[66] Because there will be less disconnect between a player and console, Neurogaming will allow individuals to utilize their "psychological state"[67] and have their reactions transfer to games in real-time.[66]
However, since Neurogaming is still in its first stages, not much is written about the new industry. Due to this, the first NeuroGaming Conference will be held in San Francisco on May 1–2, 2013.[68]

Synthetic telepathy/silent communication

In a $6.3 million Army initiative to invent devices for telepathic communication, Gerwin Schalk, underwritten in a $2.2 million grant, found that it is possible to use ECoG signals to discriminate the vowels and consonants embedded in spoken and in imagined words. The results shed light on the distinct mechanisms associated with production of vowels and consonants, and could provide the basis for brain-based communication using imagined speech.[32][69]
Research into synthetic telepathy using subvocalization is taking place at the University of California, Irvine under lead scientist Mike D'Zmura. The first such communication took place in the 1960s using EEG to create Morse code using brain alpha waves. Using EEG to communicate imagined speech is less accurate than the invasive method of placing an electrode between the skull and the brain.[70]

Commercialization

John Donoghue and fellow researchers founded Cyberkinetics. The company markets its electrode arrays under the BrainGate product name and has set the development of practical BCIs for humans as its major goal. The BrainGate is based on the Utah Array developed by Dick Normann.
Philip Kennedy founded Neural Signals in 1987 to develop BCIs that would allow paralysed patients to communicate with the outside world and control external devices. As well as an invasive BCI, the company also sells an implant to restore speech. Neural Signals' "Brain Communicator" BCI device uses glass cones containing microelectrodes coated with proteins to encourage the electrodes to bind to neurons.
Although 16 paying patients were treated using William Dobelle's vision BCI, new implants ceased within a year of Dobelle's death in 2004. A company controlled by Dobelle, Avery Biomedical Devices, and Stony Brook University are continuing development of the implant, which has not yet received Food and Drug Administration approval for human implantation in the United States.[71]
Ambient, at a TI developers conference in early 2008, demonstrated a product it has in development called The Audeo. The Audeo aims to create a human–computer interface for communication without the need for physical motor control or speech production. Using signal processing, unpronounced speech can be translated from intercepted neurological signals.[72]
Mindball is a product, developed and commercialized by the Swedish company Interactive Productline, in which players compete to control a ball's movement across a table by becoming more relaxed and focused.[73] Interactive Productline's objective is to develop and sell easily understandable EEG products that train the ability to relax and focus.[74]
An Austrian company called Guger Technologies, or g.tec,[75] has been offering brain-computer interface systems since 1999. The company provides base BCI models as development platforms for the research community to build upon, including the P300 Speller, Motor Imagery, and Steady-State Visual Evoked Potential. g.tec recently developed the g.SAHARA dry electrode system, which can provide signals comparable to gel-based systems.[76]
The Spanish company Starlab entered this market in 2009 with a wireless 4-channel system called Enobio. In 2011 the 8- and 20-channel Enobio (CE Medical) was released and is now commercialised by the Starlab spin-off Neuroelectrics. Designed for medical and research purposes, the system provides an all-in-one solution and a platform for application development.[77]
Three main commercial competitors in this area (launch dates mentioned in brackets) have launched consumer devices, primarily for gaming and PC users:
In 2009, the world's first personal EEG-based spelling system came to the market: intendiX. The system can work with passive, active, or new dry EEG electrodes. The first version used P300 activity to type on a keyboard-like matrix. Besides writing text, the patient can also use the system to trigger an alarm, let the computer speak the written text, print out or copy the text into an e-mail or to send commands to external devices. In March 2012, g.tec debuted a new intendiX module called the Screen Overlay Control Interface (SOCI) that could allow users to play World of Warcraft or Angry Birds.

Cell-culture BCIs

Researchers have built devices to interface with neural cells and entire neural networks in cultures outside animals. As well as furthering research on animal implantable devices, experiments on cultured neural tissue have focused on building problem-solving networks, constructing basic computers and manipulating robotic devices. Research into techniques for stimulating and recording from individual neurons grown on semiconductor chips is sometimes referred to as neuroelectronics or neurochips.[78]
Development of the first working neurochip was claimed by a Caltech team led by Jerome Pine and Michael Maher in 1997.[79] The Caltech chip had room for 16 neurons.
In 2003 a team led by Theodore Berger, at the University of Southern California, started work on a neurochip designed to function as an artificial or prosthetic hippocampus. The neurochip was designed to function in rat brains and was intended as a prototype for the eventual development of higher-brain prostheses. The hippocampus was chosen because it is thought to be the most ordered and structured part of the brain and is the most studied area. Its function is to encode experiences for storage as long-term memories elsewhere in the brain.[80]
Thomas DeMarse at the University of Florida used a culture of 25,000 neurons taken from a rat's brain to fly an F-22 fighter jet simulator.[81] After collection, the cortical neurons were cultured in a petri dish and rapidly began to reconnect themselves to form a living neural network. The cells were arranged over a grid of 60 electrodes and used to control the pitch and yaw functions of the simulator. The study's focus was on understanding how the human brain performs and learns computational tasks at a cellular level.
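
The published closed-loop setup was considerably more involved, but the recording-side idea can be sketched as reducing the activity on the 60-electrode grid to two control values, one for pitch and one for yaw. The weights and names below are purely illustrative and not taken from the study.

import numpy as np

N_ELECTRODES = 60          # size of the multi-electrode array in the study

def firing_rates(spike_counts, window_s):
    # Convert per-electrode spike counts in one time window to rates in Hz.
    return np.asarray(spike_counts, dtype=float) / window_s

def decode_pitch_yaw(rates, w_pitch, w_yaw):
    # Two linear readouts of the array activity: one drives pitch, one yaw.
    return float(rates @ w_pitch), float(rates @ w_yaw)

# Usage with random stand-ins: in the experiment the effective mapping was
# shaped by how the living network rewired itself, not by fixed weights.
rng = np.random.default_rng(0)
w_pitch = rng.normal(scale=0.01, size=N_ELECTRODES)
w_yaw = rng.normal(scale=0.01, size=N_ELECTRODES)
counts = rng.poisson(5, size=N_ELECTRODES)   # spikes seen in a 100 ms window
print(decode_pitch_yaw(firing_rates(counts, 0.1), w_pitch, w_yaw))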

Ethical considerations

Important ethical, legal and societal issues related to brain-computer interfacing are:[82][83][84][85]
  • conceptual issues (researchers disagree over what is and what is not a brain-computer interface),[85]
  • obtaining informed consent from people who have difficulty communicating,
  • risk/benefit analysis,
  • shared responsibility of BCI teams (e.g. how to ensure that responsible group decisions can be made),
  • the consequences of BCI technology for the quality of life of patients and their families,
  • side-effects (e.g. sensorimotor rhythm neurofeedback training is reported to affect sleep quality),
  • personal responsibility and its possible constraints (e.g. who is responsible for erroneous actions with a neuroprosthesis),
  • issues concerning personality and personhood and its possible alteration,
  • therapeutic applications and the possibility of their being exceeded (use beyond therapy),
  • questions of research ethics that arise when progressing from animal experimentation to application in human subjects,
  • mind-reading and privacy,
  • mind-control,
  • use of the technology in advanced interrogation techniques by governmental authorities,
  • selective enhancement and social stratification, and
  • communication to the media.
Clausen stated in 2009 that “BCIs pose ethical challenges, but these are conceptually similar to those that bioethicists have addressed for other realms of therapy”.[82] Moreover, he suggests that bioethics is well-prepared to deal with the issues that arise with BCI technologies. Haselager and colleagues[83] pointed out that expectations of BCI efficacy and value play a great role in ethical analysis and the way BCI scientists should approach media. Furthermore, standard protocols can be implemented to ensure ethically sound informed-consent procedures with locked-in patients.
Researchers are well aware that sound ethical guidelines, appropriately moderated enthusiasm in media coverage and education about BCI systems will be of utmost importance for the societal acceptance of this technology. Thus, more effort has recently been made within the BCI community to build consensus on ethical guidelines for BCI research, development and dissemination.[85]

Low-cost BCI-based Interfaces

Recently a number of companies have scaled back medical-grade EEG technology (and in one case, NeuroSky, rebuilt the technology from the ground up) to create inexpensive BCIs. This technology has been built into toys and gaming devices; some of these toys, such as the NeuroSky-based Mattel MindFlex, have been extremely successful commercially.
  • In 2006 Sony patented a neural interface system allowing radio waves to affect signals in the neural cortex.[86]
  • In 2007 NeuroSky released the first affordable consumer-based EEG device, along with the game NeuroBoy. It was also the first large-scale EEG device to use dry sensor technology.[87]
  • In 2008 OCZ Technology developed a device for use in video games relying primarily on electromyography.[88]
  • In 2008 the Final Fantasy developer Square Enix announced that it was partnering with NeuroSky to create a game, Judecca.[89][90]
  • In 2009 Mattel partnered with NeuroSky to release the Mindflex, a game that used an EEG to steer a ball through an obstacle course. It is by far the best-selling consumer-based EEG device to date.[89][91]
  • In 2009 Uncle Milton Industries partnered with NeuroSky to release the Star Wars Force Trainer, a game designed to create the illusion of possessing the Force.[89][92]
  • In 2009 Emotiv Systems released the EPOC, a 14-channel EEG device that can read 4 mental states, 13 conscious states, facial expressions, and head movements. The EPOC is the first commercial BCI to use dry sensor technology; the sensors can be dampened with a saline solution for a better connection.[51]
  • In November 2011 Time Magazine selected "necomimi" produced by Neurowear as one of the best inventions of the year. The company announced that it expected to launch a consumer version of the garment, consisting of cat-like ears controlled by a brain-wave reader produced by NeuroSky, in spring 2012.[93]
  • In March 2012 g.tec introduced the intendiX-SPELLER, the first commercially available BCI system for home use, which can be used to control computer games and apps. It can detect different brain signals with an accuracy of 99%.[94] g.tec has hosted several workshop tours to demonstrate the intendiX system and other hardware and software to the public, such as a g.tec workshop tour of the US West Coast during September 2012.
  • In January 2013 Hasaca National University (HNU) announced the first Master's program in virtual reality brain–computer interface application design.

Fiction or speculation

BCIs and brain implants of all kinds have been important themes in science fiction. See brain implants in fiction and philosophy for a review of this literature.

References

  1. Vidal, JJ (1973). "Toward direct brain-computer communication". Annual review of biophysics and bioengineering 2: 157–80. doi:10.1146/annurev.bb.02.060173.001105. PMID 4583653.
  2. J. Vidal (1977). "Real-Time Detection of Brain Events in EEG". IEEE Proceedings 65 (5): 633–641. doi:10.1109/PROC.1977.10542.
  3. Levine, SP; Huggins, JE; Bement, SL; Kushwaha, RK; Schuh, LA; Rohde, MM; Passaro, EA; Ross, DA et al. (2000). "A direct brain interface based on event-related potentials". IEEE transactions on rehabilitation engineering: a publication of the IEEE Engineering in Medicine and Biology Society 8 (2): 180–5. doi:10.1109/86.847809. PMID 10896180.
  4. NIH Publication No. 11-4798 (1 March 2011). "Cochlear Implants". National Institute on Deafness and Other Communication Disorders.
  5. Miguel Nicolelis et al. (2001) Duke neurobiologist has developed system that allows monkeys to control robot arms via brain signals
  6. Baum, Michele (6 September 2008). "Monkey Uses Brain Power to Feed Itself With Robotic Arm". Pitt Chronicle. Retrieved 2009-07-06.
  7. Fetz, E. E. (1969). "Operant Conditioning of Cortical Unit Activity". Science 163 (3870): 955–8. Bibcode:1969Sci...163..955F. doi:10.1126/science.163.3870.955. PMID 4974291.
  8. Schmidt, EM; McIntosh, JS; Durelli, L; Bak, MJ (1978). "Fine control of operantly conditioned firing patterns of cortical neurons". Experimental neurology 61 (2): 349–69. doi:10.1016/0014-4886(78)90252-2. PMID 101388.
  9. Georgopoulos, A.; Lurito, J.; Petrides, M; Schwartz, A.; Massey, J. (1989). "Mental rotation of the neuronal population vector". Science 243 (4888): 234–6. Bibcode:1989Sci...243..234G. doi:10.1126/science.2911737. PMID 2911737.
  10. Lebedev, MA; Nicolelis, MA (2006). "Brain-machine interfaces: past, present and future". Trends in neurosciences 29 (9): 536–46. doi:10.1016/j.tins.2006.07.004. PMID 16859758.
  11. Stanley, GB; Li, FF; Dan, Y (1999). "Reconstruction of natural scenes from ensemble responses in the lateral geniculate nucleus". Journal of Neuroscience 19 (18): 8036–42. PMID 10479703.
  12. Nicolelis, Miguel A. L.; Wessberg, Johan; Stambaugh, Christopher R.; Kralik, Jerald D.; Beck, Pamela D.; Laubach, Mark; Chapin, John K.; Kim, Jung et al. (2000). "Real-time prediction of hand trajectory by ensembles of cortical neurons in primates". Nature 408 (6810): 361–5. doi:10.1038/35042582. PMID 11099043.
  13. Carmena, JM; Lebedev, MA; Crist, RE; O'Doherty, JE; Santucci, DM; Dimitrov, DF; Patil, PG; Henriquez, CS et al. (2003). "Learning to control a brain-machine interface for reaching and grasping by primates". PLoS Biology 1 (2): E42. doi:10.1371/journal.pbio.0000042. PMC 261882. PMID 14624244.
  14. Lebedev, M. A.; Carmena, JM; O'Doherty, JE; Zacksenhouse, M; Henriquez, CS; Principe, JC; Nicolelis, MA (2005). "Cortical Ensemble Adaptation to Represent Velocity of an Artificial Actuator Controlled by a Brain-Machine Interface". Journal of Neuroscience 25 (19): 4681–93. doi:10.1523/JNEUROSCI.4088-04.2005. PMID 15888644.
  15. Serruya, MD; Hatsopoulos, NG; Paninski, L; Fellows, MR; Donoghue, JP (2002). "Instant neural control of a movement signal". Nature 416 (6877): 141–2. Bibcode:2002Natur.416..141S. doi:10.1038/416141a. PMID 11894084.
  16. Taylor, D. M.; Tillery, SI; Schwartz, AB (2002). "Direct Cortical Control of 3D Neuroprosthetic Devices". Science 296 (5574): 1829–32. Bibcode:2002Sci...296.1829T. doi:10.1126/science.1070291. PMID 12052948.
  17. Pitt team to build on brain-controlled arm, Pittsburgh Tribune Review, 5 September 2006.
  18. YouTube – Monkey controls a robotic arm. Youtube.com. Retrieved on 2012-05-29.
  19. Velliste, M; Perel, S; Spalding, MC; Whitford, AS; Schwartz, AB (2008). "Cortical control of a prosthetic arm for self-feeding". Nature 453 (7198): 1098–101. Bibcode:2008Natur.453.1098V. doi:10.1038/nature06996. PMID 18509337.
  20. Musallam, S.; Corneil, BD; Greger, B; Scherberger, H; Andersen, RA (2004). "Cognitive Control Signals for Neural Prosthetics". Science 305 (5681): 258–62. Bibcode:2004Sci...305..258M. doi:10.1126/science.1097938. PMID 15247483.
  21. Santucci, David M.; Kralik, Jerald D.; Lebedev, Mikhail A.; Nicolelis, Miguel A. L. (2005). "Frontal and parietal cortical ensembles predict single-trial muscle activity during reaching movements in primates". European Journal of Neuroscience 22 (6): 1529–40. doi:10.1111/j.1460-9568.2005.04320.x. PMID 16190906.
  22. Huber, D; Petreanu, L; Ghitani, N; Ranade, S; Hromádka, T; Mainen, Z; Svoboda, K (2008). "Sparse optical microstimulation in barrel cortex drives learned behaviour in freely moving mice". Nature 451 (7174): 61–4. Bibcode:2008Natur.451...61H. doi:10.1038/nature06445. PMC 3425380. PMID 18094685.
  23. Vision quest, Wired Magazine, September 2002
  24. Naumann, J. Search for Paradise: A Patient's Account of the Artificial Vision Experiment (2012), Xlibris Corporation, ISBN 1-479-7092-04
  25. Kennedy, PR; Bakay, RA (1998). "Restoration of neural output from a paralyzed patient by a direct brain connection". NeuroReport 9 (8): 1707–11. doi:10.1097/00001756-199806010-00007. PMID 9665587.
  26. Leigh R. Hochberg; Mijail D. Serruya, Gerhard M. Friehs, Jon A. Mukand, Maryam Saleh, Abraham H. Caplan, Almut Branner, David Chen, Richard D. Penn and John P. Donoghue (13 July 2006). "Neuronal ensemble control of prosthetic devices by a human with tetraplegia". Nature 442 (7099): 164–171. Bibcode:2006Natur.442..164H. doi:10.1038/nature04970. PMID 16838014.
  27. Hochberg, Leigh R.; et al. (2012). "Reach and grasp by people with tetraplegia using a neurally controlled robotic arm". Nature 485: 372–375. Bibcode:2012Natur.485..372H. doi:10.1038/nature11076.
  28. Collinger, Jennifer L.; et al. (2013). "High-performance neuroprosthetic control by an individual with tetraplegia". The Lancet 381 (9866): 557–564. doi:10.1016/S0140-6736(12)61816-9.
  29. Serruya MD, Donoghue JP. (2003) Chapter III: Design Principles of a Neuromotor Prosthetic Device in Neuroprosthetics: Theory and Practice, ed. Kenneth W. Horch, Gurpreet S. Dhillon. Imperial College Press.
  30. Teenager moves video icons just by imagination, press release, Washington University in St Louis, 9 October 2006
  31. Yanagisawa, Takafumi (2011). "Electrocorticographic Control of Prosthetic Arm in Paralyzed Patients". American Neurological Association. Retrieved 19 January 2012. "ECoG-based BCI has advantage in signal and durability that are absolutely necessary for clinical application"
  32. Pei, X (2011). "Decoding Vowels and Consonants in Spoken and Imagined Words Using Electrocorticographic Signals in Humans". J Neural Eng 8 (4): 046028. Retrieved 12 February 2012. "Justin Williams, a biomedical engineer at the university, has already transformed the ECoG implant into a micro device that can be installed with a minimum of fuss. It has been tested in animals for a long period of time – the micro ECoG stays in place and doesn’t seem to negatively affect the immune system."
  33. Just short of telepathy: can you interact with the outside world if you can't even blink an eye?, Psychology Today, May–June 2003
  34. Farwell, LA; Donchin, E (1988). "Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials". Electroencephalography and clinical neurophysiology 70 (6): 510–23. doi:10.1016/0013-4694(88)90149-6. PMID 2461285.
  35. Press release, University of Rochester, 3 May 2000
  36. Yuan, H; Liu, Tao; Szarkowski, Rebecca; Rios, Cristina; Ashe, James; He, Bin (2010). "Negative covariation between task-related responses in alpha/beta-band activity and BOLD in human sensorimotor cortex: an EEG and fMRI study of motor imagery and movements". NeuroImage 49 (3): 2596–2606. doi:10.1016/j.neuroimage.2009.10.028. PMC 2818527. PMID 19850134.
  37. Doud, AJ; Lucas, John P.; Pisansky, Marc T.; He, Bin (2011). "Continuous Three-Dimensional Control of a Virtual Helicopter Using a Motor Imagery Based Brain-Computer Interface". In Gribble, Paul L. PLoS ONE 6 (10): e26322. Bibcode:2011PLoSO...626322D. doi:10.1371/journal.pone.0026322. PMC 3202533. PMID 22046274.
  38. "Thought-guided helicopter takes off". bbc.co.uk. 5 June 2013. Retrieved 5 June 2013.
  39. Qin, L; Ding, Lei; He, Bin (2004). "Motor imagery classification by means of source analysis for brain-computer interface applications". Journal of Neural Engineering 1 (3): 135–141. Bibcode:2004JNEng...1..135Q. doi:10.1088/1741-2560/1/3/002. PMID 15876632.
  40. Taheri, B; Knight, R; Smith, R (1994). "A dry electrode for EEG recording". Electroencephalography and Clinical Neurophysiology 90 (5): 376–83. doi:10.1016/0013-4694(94)90053-1. PMID 7514984.
  41. Alizadeh-Taheri, Babak (1994). "Active Micromachined Scalp Electrode Array for EEG Signal Recording". PhD thesis (University of California): 82. Bibcode:1994PhDT........82A.
  42. The Next Brainiacs, Wired Magazine, August 2001.
  43. Artificial Neural Net Based Signal Processing for Interaction with Peripheral Nervous System. In: Proceedings of the 1st International IEEE EMBS Conference on Neural Engineering. pp. 134–137. 20–22 March 2003.
  44. Mental ways to make music, Cane, Alan, Financial Times, London (UK), 22 April 2005, p. 12
  45. EEG biometric
  46. New research to find out if your thoughts can be used to verify passwords. Retrieved on 2012-09-20.
  47. When mind over matter has a whole new meaning (From Gazette). Gazette-news.co.uk (13 April 2011). Retrieved on 2012-05-29.
  48. Gürkök H., Nijholt A. (2012). "Brain-Computer Interfaces for Multimodal Interaction: A Survey and Principles". Int. J. Hum. Comput. Interaction 28 (5): 292–307. doi:10.1080/10447318.2011.582022.
  49. D. Plass-Oude Bos, B. Reuderink, B. van de Laar, H. Gürkök, C. Mühl, M. Poel, A. Nijholt, D. Heylen. "Brain-Computer Interfacing and Games". Brain-Computer Interfaces 2010: 149–178. doi:10.1007/978-1-84996-272-8_10.
  50. Gürkök H., Nijholt A., Poel M. (2012). "Brain-Computer Interface Games: Towards a Framework". ICEC. Lecture Notes in Computer Science 2012: 373–380. doi:10.1007/978-3-642-33542-6_33. ISBN 978-3-642-33541-9.
  51. "Emotiv Systems Homepage". Emotiv.com. Retrieved 2009-12-29.
  52. Emotiv Epoc "brain-wave" PC controller delayed until 2009. News.bigdownload.com (1 December 2008). Retrieved on 2012-05-29.
  53. Guger C., Ramoser H., Pfurtscheller G. (Dec 2000). "Real-time analysis with subject-specific spatial patterns". IEEE Trans Rehabil Eng. 8 (4): 447–56. doi:10.1109/86.895947. PMID 11204035.
  54. Drummond, Katie (14 May 2009). "Pentagon Preps Soldier Telepathy Push". Wired Magazine. Retrieved 2009-05-06.
  55. DARPA (May 2009). "Department of Defense Fiscal Year (FY) 2010 Budget Estimates May 2009". DARPA. Retrieved 2011-07-25.
  56. Ranganatha Sitaram, Andrea Caria, Ralf Veit, Tilman Gaber, Giuseppina Rota, Andrea Kuebler and Niels Birbaumer (2007). "FMRI Brain–Computer Interface: A Tool for Neuroscientific Research and Treatment".
  57. Mental ping-pong could aid paraplegics, Nature, 27 August 2004
  58. To operate robot only with brain, ATR and Honda develop BMI base technology, Tech-on, 26 May 2006
  59. Miyawaki, Y; Uchida, H; Yamashita, O; Sato, MA; Morito, Y; Tanabe, HC; Sadato, N; Kamitani, Y (2008). "Decoding the Mind's Eye – Visual Image Reconstruction from Human Brain Activity using a Combination of Multiscale Local Image Decoders". Neuron 60 (5): 915–929. doi:10.1016/j.neuron.2008.11.004. PMID 19081384.
  60. Nishimoto, Shinji; Vu, An T.; Naselaris, Thomas; Benjamini, Yuval; Yu, Bin; Gallant, Jack L. (22 September 2011). "Reconstructing Visual Experiences from Brain Activity Evoked by Natural Movies". Current Biology 21 (19): 1641. doi:10.1016/j.cub.2011.08.031.
  61. Yam, Philip (22 September 2011). "Breakthrough Could Enable Others to Watch Your Dreams and Memories". Scientific American. Retrieved 25 September 2011.
  62. "Reconstructing visual experiences from brain activity evoked by natural movies (Project page)". The Gallant Lab at UC Berkeley. Retrieved 25 September 2011.
  63. Yasmin Anwar (22 September 2011). "Scientists use brain imaging to reveal the movies in our mind". UC Berkeley News Center. Retrieved 25 September 2011.
  64. http://www.youtube.com/watch?feature=player_embedded&v=Qz2XR3xcx60
  65. "Neurogaming".
  66. http://www.youtube.com/watch?feature=player_embedded&v=T7CiiWBwMgw
  67. "Merging Cognitive Neuroscience & Virtual Simulation in an Interactive Training Platform".
  68. http://www.neurogamingconf.com/
  69. Kennedy, Pagan (18 September 2011). "The Cyborg in Us All". New York Times. Retrieved 28 January 2012.
  70. Eric Bland (13 October 2008). "Army Developing 'Synthetic Telepathy'". Discovery News. Retrieved 13 October 2008.
  71. Press release, Stony Brook University Center for Biotechnology, 1 May 2006
  72. Speak Your Mind. Theaudeo.com. Retrieved on 2012-05-29.
  73. Welcome to Mind Ball. Vivifeye.com (8 March 2012). Retrieved on 2012-05-29.
  74. Interactive Productline|About us. Mindball.se. Retrieved on 2012-05-29.
  75. "Guger Technologies".
  76. Guger et al., 2012, Frontiers in Neuroscience
  77. "ENOBIO".
  78. Mazzatenta, A.; Giugliano, M.; Campidelli, S.; Gambazzi, L.; Businaro, L.; Markram, H.; Prato, M.; Ballerini, L. (2007). "Interfacing Neurons with Carbon Nanotubes: Electrical Signal Transfer and Synaptic Stimulation in Cultured Brain Circuits". Journal of Neuroscience 27 (26): 6931–6. doi:10.1523/JNEUROSCI.1051-07.2007. PMID 17596441.
  79. Press release, Caltech, 27 October 1997
  80. Coming to a brain near you, Wired News, 22 October 2004
  81. 'Brain' in a dish flies flight simulator, CNN, 4 November 2004
  82. Clausen, Jens (2009). "Man, machine and in between". Nature 457 (7233): 1080. Bibcode:2009Natur.457.1080C. doi:10.1038/4571080a.
  83. Haselager, Pim; Vlek, Rutger; Hill, Jeremy; Nijboer, Femke (2009). "A note on ethical aspects of BCI". Neural Networks 22 (9): 1352. doi:10.1016/j.neunet.2009.06.046.
  84. Tamburrini, Guglielmo (2009). "Brain to Computer Communication: Ethical Perspectives on Interaction Models". Neuroethics 2 (3): 137. doi:10.1007/s12152-009-9040-1.
  85. Nijboer, Femke; Clausen, Jens; Allison, Brendan Z; Haselager, Pim (2011). "Stakeholders' opinions on ethical issues related to brain-computer interfacing". Neuroethics. doi:10.1007/s12152-011-9132-6.
  86. "Sony patent neural interface".
  87. "Mind Games". The Economist. 23 March 2007.
  88. "nia Game Controller Product Page". OCZ Technology Group. Retrieved 2013-01-30.
  89. Li, Shan (8 August 2010). "Mind reading is on the market". Los Angeles Times.
  90. Brains-on with NeuroSky and Square Enix's Judecca mind-control game. Engadget.com (9 October 2008). Retrieved on 2012-05-29.
  91. New games powered by brain waves. Physorg.com. Retrieved on 2010-09-12.
  92. Snider, Mike (7 January 2009). "Toy trains 'Star Wars' fans to use The Force". USA Today. Retrieved 2010-05-01.
  93. "necomimi" selected "TIME MAGAZINE / The 50 best invention of the year". Neurowear.com. Retrieved on 2012-05-29.
  94. ""intendiX-SOCI": g.tec Introduces Mind-controlled Computer Gaming at CeBIT2012". PR Newswire. 5 March 2012.


end quote from:
brain-computer interfaces 

As we move deeper into the Singularity with each passing day, it is important to try to create a world in which we, our children, and the generations after them would actually want to live. Otherwise, the Singularity might just become a hell that people would rather nuke themselves out of existence than endure. So, be careful what you create with your intelligence. We owe that carefulness and care to generations yet unborn.
