Library
- Akata, Z., et al.: A research agenda for hybrid intelligence: Augmenting human intellect with collaborative, adaptive, responsible, and explainable artificial intelligence. Computer 53(8), 18–28 (2020)
We define hybrid intelligence (HI) as the combination of human and machine intelligence, augmenting human intellect and capabilities instead of replacing them, and achieving goals that were unreachable by either humans or machines alone. HI is an important new research focus for artificial intelligence, and we set a research agenda for HI by formulating four challenges.
- Dodig Crnkovic, G.: Info-computational constructivism and cognition. Constructivist Foundations 9(2), 223–231 (2014)
At present, we lack a common understanding of both the process of cognition in living organisms and the construction of knowledge in embodied, embedded cognizing agents in general, including future artifactual cognitive agents under development, such as cognitive robots and softbots. Purpose: This paper aims to show how the info-computational approach (IC) can reinforce constructivist ideas about the nature of cognition and knowledge and, conversely, how constructivist insights (such as that the process of cognition is the process of life) can inspire new models of computing. Method: The info-computational constructive framework is presented for the modeling of cognitive processes in cognizing agents. Parallels are drawn with other constructivist approaches to cognition and knowledge generation. We describe how cognition as a process of life itself functions based on info-computation and how the process of knowledge generation proceeds through interactions with the environment and among agents. Results: Cognition and knowledge generation in a cognizing agent is understood as interaction with the world (potential information), which by processes of natural computation becomes actual information. That actual information after integration becomes knowledge for the agent. Heinz von Foerster is identified as a precursor of natural computing, in particular bio computing. Implications: IC provides a framework for unified study of cognition in living organisms (from the simplest ones, such as bacteria, to the most complex ones) as well as in artifactual cognitive systems. Constructivist content: It supports the constructivist view that knowledge is actively constructed by cognizing agents and shared in a process of social cognition. IC argues that this process can be modeled as info-computation.
- Engelbart, D.C.: Augmenting human intellect: A conceptual framework. Summary Report AFOSR-3223 under Contract AF 49(638)-1024, SRI Project 3578 for Air Force Office of Scientific Research, Stanford Research Institute, Menlo Park, CA. (1962)
This is an initial summary report of a project taking a new and systematic approach to improving the intellectual effectiveness of the individual human being. A detailed conceptual framework explores the nature of the system composed of the individual and the tools, concepts, and methods that match his basic capabilities to his problems. One of the tools that shows the greatest immediate promise is the computer, when it can be harnessed for direct online assistance, integrated with new concepts and methods.
- Friston, K.J., et al.: Designing ecosystems of intelligence from first principles. arXiv preprint arXiv:2212.01354 (2022)
This white paper lays out a vision of research and development in the field of artificial intelligence for the next decade (and beyond). Its denouement is a cyber-physical ecosystem of natural and synthetic sense-making, in which humans are integral participants—what we call "shared intelligence". This vision is premised on active inference, a formulation of adaptive behavior that can be read as a physics of intelligence, and which inherits from the physics of self-organization. In this context, we understand intelligence as the capacity to accumulate evidence for a generative model of one's sensed world—also known as self-evidencing. Formally, this corresponds to maximizing (Bayesian) model evidence, via belief updating over several scales: i.e., inference, learning, and model selection. Operationally, this self-evidencing can be realized via (variational) message passing or belief propagation on a factor graph. Crucially, active inference foregrounds an existential imperative of intelligent systems; namely, curiosity or the resolution of uncertainty. This same imperative underwrites belief sharing in ensembles of agents, in which certain aspects (i.e., factors) of each agent's generative world model provide a common ground or frame of reference. Active inference plays a foundational role in this ecology of belief sharing—leading to a formal account of collective intelligence that rests on shared narratives and goals. We also consider the kinds of communication protocols that must be developed to enable such an ecosystem of intelligences and motivate the development of a shared hyper-spatial modeling language and transaction protocol, as a first—and key—step towards such an ecology.
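For readers unfamiliar with the formalism, the following is a standard statement of the bound being invoked, not a formula quoted from the paper: self-evidencing corresponds to minimizing variational free energy F, an upper bound on surprise (negative log model evidence),

\[
F[q] \;=\; \mathbb{E}_{q(s)}\!\big[\ln q(s) - \ln p(o, s)\big] \;=\; D_{\mathrm{KL}}\!\big[q(s)\,\big\|\,p(s \mid o)\big] - \ln p(o) \;\ge\; -\ln p(o),
\]

where o denotes observations, s the latent states of the agent's generative model p(o, s), and q(s) an approximate posterior updated by (variational) message passing; minimizing F over q therefore maximizes a lower bound on the Bayesian model evidence p(o) that the abstract calls self-evidencing.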
- Grigsby, S.S.: Artificial intelligence for advanced human-machine symbiosis. In: Augmented Cognition: Intelligent Technologies. vol. 10915, pp. 1–15. Springer (2018)
Human capabilities such as memory, attention, sensory bandwidth, comprehension, and visualization are critically important but all have innate limitations. However, these human abilities can benefit from rapidly growing computational capabilities. We can apply computational power to support and augment cognitive skills that will bolster the limited human cognitive resource and provide new capabilities through this symbiosis. We now have the ability to design human-computer interaction capabilities where the computer anticipates, predicts, and augments the performance of the user and where the human supports, aids, and enhances the learning and performance of the computer. Augmented cognition seeks to advance this human-machine symbiosis through both machine understanding of the human (such as physical state sensing, cognitive state sensing, psychophysiology, emotion detection, and intent projection) and human understanding of the machine (such as explainable AI, shared situation awareness, trust enhancement, and advanced UX). The ultimate result is a truly interactive symbiosis where humans and computers are tightly coupled in productive partnerships that merge the best of the human with the best of the machine. As advances in artificial intelligence (AI) accelerate across a myriad of applications, we seek to understand the current state of the art of AI and how it may be best applied for advancing human-machine symbiosis.
- Landgrebe, J., Smith, B.: Why Machines Will Never Rule the World: Artificial Intelligence without Fear. Routledge (2022)
The book’s core argument is that an artificial intelligence that could equal or exceed human intelligence—sometimes called artificial general intelligence (AGI)—is for mathematical reasons impossible. It offers two specific reasons for this claim: 1. Human intelligence is a capability of a complex dynamic system—the human brain and central nervous system. 2. Systems of this sort cannot be modelled mathematically in a way that allows them to operate inside a computer. In supporting their claim, the authors, Jobst Landgrebe and Barry Smith, marshal evidence from mathematics, physics, computer science, philosophy, linguistics, and biology, setting up their book around three central questions: What are the essential marks of human intelligence? What is it that researchers try to do when they attempt to achieve "artificial intelligence" (AI)? And why, after more than 50 years, are our most common interactions with AI, for example with our bank’s computers, still so unsatisfactory? Landgrebe and Smith show how a widespread fear about AI’s potential to bring about radical changes in the nature of human beings and in the human social order is founded on an error. There is still, as they demonstrate in a final chapter, a great deal that AI can achieve which will benefit humanity. But these benefits will be achieved without the aid of systems that are more powerful than humans, which are as impossible as AI systems that are intrinsically "evil" or able to "will" a takeover of human society.
- Maturana, H.R., Varela, F.J.: Autopoiesis and Cognition: The Realization of the Living. D. Reidel Publishing Company, 2nd edn. (1980)
This is a bold, brilliant, provocative and puzzling work. It demands a radical shift in standpoint, an almost paradoxical posture in which living systems are described in terms of what lies outside the domain of descriptions. Professor Humberto Maturana, with his colleague Francisco Varela, has undertaken the construction of a systematic theoretical biology which attempts to define living systems not as they are objects of observation and description, nor even as interacting systems, but as self-contained unities whose only reference is to themselves. Thus, the standpoint of description of such unities from the 'outside', i.e., by an observer, already seems to violate the fundamental requirement which Maturana and Varela posit for the characterization of such systems: namely, that they are autonomous, self-referring and self-constructing closed systems; in short, autopoietic systems in their terms. Yet, on the basis of such a conceptual method, and such a theory of living systems, Maturana goes on to define cognition as a biological phenomenon; as, in effect, the very nature of all living systems. And on this basis, to generate the very domains of interaction among such systems which constitute language, description and thinking.
- Minsky, M.: Steps toward artificial intelligence. Proceedings of the IRE 49(1), 8–30 (1961).
The problems of heuristic programming, that is, of making computers solve really difficult problems, are divided into five main areas: Search, Pattern-Recognition, Learning, Planning, and Induction. A computer can do, in a sense, only what it is told to do. But even when we do not know how to solve a certain problem, we may program a machine (computer) to Search through some large space of solution attempts. Unfortunately, this usually leads to an enormously inefficient process. With Pattern-Recognition techniques, efficiency can often be improved by restricting the application of the machine's methods to appropriate problems. Pattern-Recognition, together with Learning, can be used to exploit generalizations based on accumulated experience, further reducing search. By analyzing the situation, using Planning methods, we may obtain a fundamental improvement by replacing the given search with a much smaller, more appropriate exploration. To manage broad classes of problems, machines will need to construct models of their environments, using some scheme for Induction. Wherever appropriate, the discussion is supported by extensive citation of the literature and by descriptions of a few of the most successful heuristic (problem-solving) programs constructed to date.
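To make the contrast between blind and heuristic search concrete, the sketch below is an illustration in the spirit of the paper rather than code from it; the best_first_search routine, the toy goal of reaching 37 by incrementing or doubling, and the distance heuristic are all invented for this example. With the default zero heuristic the routine degenerates into blind enumeration of solution attempts, while a problem-specific heuristic, a crude stand-in for Minsky's Pattern-Recognition and Planning ideas, confines exploration to a much smaller, more appropriate region of the space.

```python
import heapq
from itertools import count

def best_first_search(start, is_goal, successors, heuristic=lambda s: 0):
    """Generic best-first search over a discrete state space.

    With the default zero heuristic this reduces to blind search; a
    problem-specific heuristic steers exploration toward a far smaller,
    more promising region of the space.
    """
    tie = count()  # tie-breaker so the heap never has to compare states or paths
    frontier = [(heuristic(start), next(tie), start, [start])]
    visited = {start}
    while frontier:
        _, _, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                heapq.heappush(frontier, (heuristic(nxt), next(tie), nxt, path + [nxt]))
    return None  # search space exhausted without reaching a goal

# Toy problem (invented for illustration): reach 37 from 1 by incrementing or doubling.
print(best_first_search(
    start=1,
    is_goal=lambda n: n == 37,
    successors=lambda n: [n + 1, n * 2] if n < 37 else [],
    heuristic=lambda n: abs(37 - n),  # crude "pattern" that guides the search
))
```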
- Moradi, M., Moradi, M., Bayat, F., Toosi, A.N.: Collective hybrid intelligence: towards a conceptual framework. International Journal of Crowd Science 3(2), 198–220 (2019).
Human or machine: which is more intelligent and powerful for performing computing and processing tasks? Over the years, researchers and scientists have spent significant amounts of money and effort to answer this question. Nonetheless, despite some outstanding achievements, replacing humans in intellectual tasks is not yet a reality. Instead, to compensate for the weaknesses of machines in some (mostly cognitive) tasks, the idea of putting the human in the loop has been introduced and widely accepted. In this paper, the notion of collective hybrid intelligence as a new computing framework is introduced, and a conceptual framework for it is outlined.
- Parr, T., Pezzulo, G., Friston, K.J.: Active Inference: The Free Energy Principle in Mind, Brain, and Behavior. The MIT Press (2022)
The first comprehensive treatment of active inference, an integrative perspective on brain, cognition, and behavior used across multiple disciplines. Active inference is a way of understanding sentient behavior—a theory that characterizes perception, planning, and action in terms of probabilistic inference. Developed by theoretical neuroscientist Karl Friston over years of groundbreaking research, active inference provides an integrated perspective on brain, cognition, and behavior that is increasingly used across multiple disciplines including neuroscience, psychology, and philosophy. Active inference puts the action into perception. The book covers theory, applications, and cognitive domains. Active inference is a “first principles” approach to understanding behavior and the brain, framed in terms of a single imperative to minimize free energy. The book emphasizes the implications of the free energy principle for understanding how the brain works. It first introduces active inference both conceptually and formally, contextualizing it within current theories of cognition. It then provides specific examples of computational models that use active inference to explain such cognitive phenomena as perception, attention, memory, and planning.
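As a pointer to the formalism the book develops, and not an excerpt from it: planning is cast as inference over policies π, which are scored by an expected free energy that combines epistemic (information-seeking) and pragmatic (preference-satisfying) value. Notation varies across presentations, but one common form is

\[
G(\pi) \;=\; \sum_{\tau} \mathbb{E}_{q(o_\tau, s_\tau \mid \pi)}\!\big[\ln q(s_\tau \mid \pi) - \ln q(s_\tau \mid o_\tau, \pi) - \ln p(o_\tau)\big],
\]

where q(· | π) encodes beliefs about future states and outcomes under policy π and p(o_τ) encodes prior preferences over outcomes; policies are then selected approximately in proportion to exp(−G(π)), so that action both resolves uncertainty and realizes preferred outcomes.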
- Varela, F.G., Maturana, H.R., Uribe, R.: Autopoiesis: The organization of living systems, its characterization and a model. Biosystems 5(4), 187–196 (1974).
Notwithstanding their diversity, all living systems must share a common organization which we implicitly recognize by calling them “living.” At present there is no formulation of this organization, mainly because the great developments of molecular, genetic and evolutionary notions in contemporary biology have led to the overemphasis of isolated components, e.g., to considering reproduction as a necessary feature of the living organization and, hence, to not asking about the organization which makes a living system a whole, autonomous unity that is alive regardless of whether it reproduces or not. As a result, processes that are history-dependent (evolution, ontogenesis) and history-independent (individual organization) have been confused in the attempt to provide a single mechanistic explanation for phenomena which, although related, are fundamentally distinct. We formulate the organization of living organisms through the characterization of the class of autopoietic systems to which living things belong; this general characterization is seen at work in a computer-simulated model of a minimal case satisfying the conditions for autopoietic organization.