Abstract. Three fundamental questions concerning minds are
presented. These are about consciousness, intentionality and intelligence.
After we present the fundamental framework that has shaped both the
philosophy of mind and the Artificial Intelligence research in the last
forty years or so regarding the last two questions, we turn to
consciousness, whose study still seems evasive to both communities.
After briefly illustrating why and how phenomenal consciousness is
puzzling, a theoretical diagnosis of the problem is proposed and a
framework is presented, within which further research would yield a
solution. The diagnosis is that the puzzle stems from a peculiar dual
epistemic access to phenomenal aspects (qualia) of our conscious
experiences. An account of concept formation is presented such that both
the phenomenal concepts (like the concepts RED and SWEET) and the
introspective concepts (like the concepts EXPERIENCING RED and
TASTING SWEET) are acquired from a
first-person perspective as opposed to the third-person one (the standard
concept-formation strategy for objective features). We explain the
first-person perspective in information-theoretic and computational
terms.
Abstract. We present a simple model of consciousness as it may
exist in animals and can exist in man-made artifacts. The minimum unit
of consciousness is a brain/body in interaction with a world. No parts
of that system are themselves conscious. Emphasis is placed on structures
that could have evolved from earlier structures by small steps each of
which conferred advantage to its possessors. The model is functional, so
it becomes possible to build such conscious systems. Indeed, we show why
conscious systems should be built as well as how humans should interact
with them.
Abstract. In their joint paper entitled "The Replication of the
Hard Problem of Consciousness in AI and BIO-AI" (Boltuc et al.,
"Replication of the hard problem of consciousness in AI and Bio-AI: An early
conceptual framework," 2008), Nicholas and Piotr Boltuc suggest that
machines could be equipped with phenomenal consciousness, which is
subjective consciousness that satisfies Chalmers’s hard problem (we will
abbreviate the hard problem of consciousness as "H-consciousness").
The claim is that if we knew the inner workings of phenomenal
consciousness and could understand its precise operation, we could
instantiate such consciousness in a machine. This claim, called the
extra-strong AI thesis, is an important claim because if true it would
demystify the privileged access problem of first-person consciousness
and cast it as an empirical problem of science and not a fundamental
question of philosophy. A core assumption of the extra-strong AI thesis
is that there is no logical argument that precludes the implementation
of H-consciousness in an organic or inorganic machine provided we
understand its algorithm. Another way of framing this conclusion is that
there is nothing special about H-consciousness as compared to any other
process. That is, in the same way that we do not preclude a machine from
implementing photosynthesis, we also do not preclude a machine from
implementing H-consciousness. While the latter may be more difficult in practice,
it is a problem of science and engineering, and no longer a philosophical
question. I propose that Boltuc’s conclusion, while plausible and
convincing, comes at a very high price; the argument given for his
conclusion does not exclude any conceivable process from machine
implementation. In short, if we make some assumptions about the equivalence
of a rough notion of algorithm and then tie this to human understanding,
all logical preconditions vanish and the argument grants that any process
can be implemented in a machine. The purpose of this paper is to comment
on the argument for his conclusion and offer additional properties of
H-consciousness that can be used to make the conclusion falsifiable
through scientific investigation rather than relying on the limits of
human understanding.
Abstract. This paper has three goals. The first is to introduce
the ‘‘knowledge game’’, a new, simple and yet powerful tool for analysing
some intriguing philosophical questions. The second is to apply the
knowledge game as an informative test to discriminate between conscious
(human) and conscious-less agents (zombies and robots), depending on
which version of the game they can win. And the third is to use a version
of the knowledge game to provide an answer to Dretske’s question ‘‘how do
you know you are not a zombie?’’.
Abstract. Computers and Thought are the two categories that
together define Artificial Intelligence as a discipline. It is generally
accepted that work in Artificial Intelligence over the last thirty years
has had a strong influence on aspects of computer architectures. In this
paper we also make the converse claim: that the state of computer
architecture has been a strong influence on our models of thought. The
Von Neumann model of computation has led Artificial Intelligence in
particular directions. Intelligence in biological systems is completely
different. Recent work in behavior-based Artificial Intelligence has
produced new models of intelligence that are much closer in spirit to
biological systems. The non-Von Neumann computational models they use
share many characteristics with biological computation.
Abstract. Activities are located behaviors, taking time, conceived
as socially meaningful, and usually involving interaction with tools and
the environment. In modeling human cognition as a form of problem solving
(goal-directed search and operator sequencing), cognitive science
researchers have not adequately studied ‘off-task’ activities (e.g.
waiting), non-intellectual motives (e.g. hunger), sustaining a goal state
(e.g. playful interaction), and coupled perceptual–motor dynamics (e.g.
following someone). These aspects of human behavior have been considered
in bits and pieces in past research, identified as scripts, human factors,
behavior settings, ensemble, flow experience, and situated action. More
broadly, activity theory provides a comprehensive framework relating
motives, goals, and operations. This paper ties these ideas together,
using examples from work life in a Canadian High Arctic research station.
The emphasis is on simulating human behavior as it naturally occurs, such
that ‘working’ is understood as an aspect of living. The result is a
synthesis of previously unrelated analytic perspectives and a broader
appreciation of the nature of human cognition. Simulating activities in
this comprehensive way is useful for understanding work practice,
promoting learning, and designing better tools, including human–robot
systems.
Abstract. The concept of “social situatedness”, that is, the idea
that the development of individual intelligence requires a social (and
cultural) embedding, has recently received much attention in cognitive
science and artificial intelligence research, in particular work on
social or epigenetic robotics. The work of Lev Vygotsky, who put forward
this view as early as the 1920s, has influenced the discussion to some
degree but still remains far from well known. This article therefore is
aimed at giving an overview of his cognitive development theory and a
discussion of its relation to more recent work in primatology and socially
situated artificial intelligence, in particular humanoid robotics.
Abstract. Situated action is a new approach to artificial
intelligence that has thus far functioned without any explicit underlying
theoretical foundation. As a result, many researchers in artificial
intelligence have misunderstood the goals and claims of situated action.
In order to rectify this situation, we provide an explicit formulation of
the theoretical foundations of situated action.
Abstract. For cognitive systems, embodiment appears to be of
crucial importance. Unfortunately, nobody seems to be able to define
embodiment in a way that would prevent it from also covering its trivial
interpretations such as mere situatedness in complex environments. The
paper focuses on the definition of embodiment, especially whether physical
embodiment is necessary and/or sufficient for cognitive systems. Cognition
is characterized as a continuous complex process rather than an ahistorical
logical capability. Furthermore, the paper investigates the relationship
between cognitive embodiment and the issues of understanding,
representation and task specification.
Abstract. The embodiment hypothesis is the idea that intelligence
emerges in the interaction of an agent with an environment and as a
result of sensorimotor activity. We offer six lessons for developing
embodied intelligent agents suggested by research in developmental
psychology. We argue that starting as a baby grounded in a physical,
social, and linguistic world is crucial to the development of the
flexible and inventive intelligence that characterizes humankind.
Abstract. This paper discusses the identity of embodied AI, i.e.
it asks the question exactly what it is that makes AI research embodied.
From an engineering perspective, it is fairly clear that embodied AI is
about robotic, i.e. physically embodied systems. From the scientific
perspective of AI as building models of natural cognition or intelligence,
however, things are less clear. On the one hand embodied AI seems to be
about physically embodied, i.e. robotic models of cognition. On the other
hand the term ‘embodied’ seems to signify the type of intelligence
modeled and/or the conception of (embodied) cognition that is underlying
the modeling. In either case, it appears that embodied AI, as it currently
stands, might be too narrowly conceived since each of these perspectives
is addressed only partially.
Abstract. What do linguistic symbols do for minds like ours, and
how (if at all) can basic embodied, dynamical, and situated approaches
do justice to high-level human thought and reason? These two questions
are best addressed together, since our answers to the first may inform
the second. The key move in scaling up simple embodied cognitive science
is, I argue, to take very seriously the potent role of human-built
structures in transforming the spaces of human learning and reason. In
particular, in this article I look at a range of cases involving what I
dub surrogate situations. Here, we actively create restricted artificial
environments that allow us to deploy basic perception-action-reason
routines in the absence of their proper objects. Examples include the use
of real-world models, diagrams, and other concrete external symbols to
support dense looping interactions with a variety of stable external
structures that stand in for the absent states of affairs. Language itself,
I finally suggest, is the most potent and fundamental form of such
surrogacy. Words are both cheap stand-ins for gross behavioral outcomes,
and the concrete objects that structure new spaces for basic forms of
learning and reason. A good hard look at surrogate situatedness thus turns
the standard skeptical challenge on its head. But it raises important
questions concerning what really matters about these new approaches, and
it helps focus what I see as the major challenge for the future: how, in
detail, to conceptualize the role of symbols (both internal and external)
in dynamical cognitive processes.
Abstract. This review essay surveys recent literature in the history
of science, literary theory, anthropology, and art criticism dedicated
to exploring how the artificial life enterprise has been inflected
by—and might also reshape—existing social, historical, cognitive,
and cultural frames of thought and action. The piece works through
various possible interpretations of Kevin Kelly’s phrase "life is a
verb", in order to track recent shifts in cultural studies of artificial
life from an aesthetic of critique to an aesthetic of conversation,
discerning in the process different styles of translating between the
concerns of the humanities, social sciences, natural sciences, and
sciences of the artificial.
Abstract. We study the effects of an imitation mechanism on
a population of animats capable of individual ontogenetic learning.
An urge to imitate others augments a network-based reinforcement
learning strategy used in the control system of the animats. We test
populations of animats with imitation against populations without
for their ability to find, and maintain over generations, successful
foraging behavior in an environment containing three necessary
resources: food, water, and shelter. We conclude that even simple
imitation mechanisms are effective at increasing the frequency of
success when measured over time and over populations of animats.
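As a compact illustration of the mechanism described above (illustrative only: the resource dynamics, learning rule, and imitation probability below are assumptions for the sketch, not the authors' network-based controller), the following Python toy shows how an urge to imitate the currently most successful animat can be layered on top of individual reinforcement learning over the three resources.

```python
"""Toy sketch: individual value learning for three resources, optionally
augmented with an urge to imitate the most successful animat. All numbers
(learning rate, imitation probability, need regrowth) are illustrative."""
import random

RESOURCES = ["food", "water", "shelter"]

class Animat:
    def __init__(self, imitates):
        self.values = {r: 0.0 for r in RESOURCES}   # learned action values
        self.needs = {r: 1.0 for r in RESOURCES}    # 1.0 = fully deprived
        self.imitates = imitates
        self.last_action = random.choice(RESOURCES)
        self.success = 0.0

    def choose(self, demonstrator=None):
        # Urge to imitate: occasionally copy the demonstrator's last action.
        if self.imitates and demonstrator is not None and random.random() < 0.3:
            return demonstrator.last_action
        if random.random() < 0.1:                   # exploration
            return random.choice(RESOURCES)
        return max(RESOURCES, key=lambda r: self.values[r])

    def step(self, action):
        reward = self.needs[action]                 # visiting a needed resource pays off
        self.needs[action] = 0.0
        for r in RESOURCES:                         # needs grow back over time
            self.needs[r] = min(1.0, self.needs[r] + 0.2)
        self.values[action] += 0.1 * (reward - self.values[action])  # delta rule
        self.last_action = action
        self.success += reward

def run(imitates, steps=500, n=20):
    pop = [Animat(imitates) for _ in range(n)]
    for _ in range(steps):
        best = max(pop, key=lambda a: a.success)    # model to imitate
        for a in pop:
            a.step(a.choose(demonstrator=best if a is not best else None))
    return sum(a.success for a in pop) / n

random.seed(0)
print("mean success with imitation:   ", round(run(True), 1))
print("mean success without imitation:", round(run(False), 1))
```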
Abstract. The field of artificial life (Alife) is replete with
documented instances of emergence, though debate still persists as to
the meaning of this term. We contend that, in the absence of an
acceptable definition, researchers in the field would be well served
by adopting an emergence certification mark that would garner approval
from the Alife community. Toward this end, we propose an emergence test,
namely, criteria by which one can justify conferring the emergence label.
Abstract. The overlapping fields of adaptive behavior and
artificial life are often described as novel approaches to biology. They
focus attention on bottom-up explanations and how lifelike phenomena can
result from relatively simple systems interacting dynamically with their
environments. They are also characterized by the use of synthetic
methodologies, that is, building artificial systems as a means of
exploring these ideas. Two differing approaches can be distinguished:
building models of specific animal systems and assessing them within
complete behavior–environment loops; and exploring the behavior of
invented artificial animals, often called animats, under similar
conditions. An obvious question about the latter approach is, how can
we learn about real biology from simulation of non-existent animals?
In this article I will argue, first, that animat research, to the extent
that it is relevant to biology, should also be considered as model
building. Animat simulations do, implicitly, represent hypotheses about,
and should be evaluated by comparison to, animals. Casting this research
in terms of invented agents serves only to limit the ability to draw
useful conclusions from it by deflecting or deferring any serious
comparisons of the model mechanisms and results with real biological
systems. Claims that animat models are meant to be existence proofs,
idealizations, or represent general problems in biology do not make
these models qualitatively different from more conventional models
of specific animals, nor undermine the ultimate requirement to justify
this work by making concrete comparisons with empirical data. It is
thus suggested that we will learn more by choosing real, and not made-up,
targets for our models.
Abstract. Alan Turing devised his famous test (TT) through a
slight modification of the parlor game in which a judge tries to ascertain
the gender of two people who are only linguistically accessible. Stevan
Harnad has introduced the Total TT (TTT), in which the judge can look at the
contestants in an attempt to determine which is a robot and which a
person. But what if we confront the judge with an animal, and a robot
striving to pass for one, and then challenge him to peg which is which?
Now we can index TTT to a particular animal and its synthetic correlate.
We might therefore have TTT-rat, TTT-cat, TTT-dog, and so on. These tests,
as we explain herein, are a better barometer of artificial intelligence
(AI) than Turing’s original TT, because AI seems to have ammunition
sufficient only to reach the level of artificial animal, not artificial
person.
Abstract. We define the main factor of intelligence as the ability
to comprehend, formalising this ability with the help of new constructs
based on descriptional complexity. The result is a comprehension test,
or C-test, exclusively defined in terms of universal descriptional
machines (e.g. universal Turing machines). Despite the absolute and
non-anthropomorphic character of the test, it is equally applicable to
both humans and machines. Moreover, it correlates with classical
psychometric tests, thus establishing the first firm connection between
information theoretic notions and traditional IQ tests. The Turing test
is compared with the C-test and their joint combination is discussed.
As a result, the idea of a Turing Test as a practical test of
intelligence should be left behind, and substituted by computational
and factorial tests of different cognitive abilities, a much more useful
approach for artificial intelligence progress and for many other
intriguing questions that are presented beyond the Turing test.
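As a loose illustration of the idea behind the C-test (and only that: the actual test is defined over universal descriptional machines, not over a general-purpose compressor), the sketch below uses compressed length as a crude stand-in for descriptional complexity and shows how letter series can be ordered by such an information-theoretic measure rather than by human judgment. All series and parameters are invented for the example.

```python
"""Rough stand-in for descriptional complexity: zlib-compressed length.
The real C-test uses complexity over universal descriptional machines."""
import random
import zlib

def approx_complexity(s: str) -> int:
    # Compressed length as a rough upper bound on descriptional complexity.
    return len(zlib.compress(s.encode("ascii"), 9))

random.seed(0)
series = {
    "constant (aaaa...)":  "a" * 64,
    "periodic (abcabc...)": ("abc" * 22)[:64],
    "random letters":      "".join(random.choices("abcdefghijklmnopqrstuvwxyz", k=64)),
}
for name, s in series.items():
    # Simpler (more compressible) series correspond to easier test items.
    print(f"{name:22s} approx complexity = {approx_complexity(s)} bytes")
```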
Abstract. The paper begins by examining the original Turing Test
(2T) and Searle’s antithetical Chinese Room Argument, which is intended
to refute the 2T in particular, as well as any formal or abstract
procedural theory of the mind in general. In the ensuing dispute between
Searle and his own critics, I argue that Searle’s ‘internalist’ strategy
is unable to deflect Dennett’s combined robotic-systems reply and the
allied Total Turing Test (3T). Many would hold that the 3T marks the
culmination of the dialectic and, in principle, constitutes a fully
adequate empirical standard for judging that an artifact is intelligent
on a par with human beings. However, the paper carries the debate forward
by arguing that the sociolinguistic factors highlighted in externalist
views in the philosophy of language indicate the need for a fundamental
shift in perspective in a Truly Total Turing Test (4T). It’s not enough
to focus on Dennett’s individual robot viewed as a system; instead, we
need to focus on an ongoing system of such artifacts. Hence a 4T should
evaluate the general category of cognitive organization under
investigation, rather than the performance of single specimens. From
this comprehensive standpoint, the question is not whether an individual
instance could simulate intelligent behavior within the context of a
pre-existing sociolinguistic culture developed by the human cognitive
type. Instead the key issue is whether the artificial cognitive type
itself is capable of producing a comparable sociolinguistic medium.
Abstract. Based on insufficient evidence, and inadequate
research, Floridi and his students report inaccuracies and draw false
conclusions in their Minds and Machines evaluation, which this paper
aims to clarify. Acting as invited judges, Floridi et al. participated
in nine of the ninety-six Turing tests staged in the finals of the
18th Loebner Prize for Artificial Intelligence in October 2008. From
the transcripts it appears that they used power over solidarity as an
interrogation technique. As a result, they were fooled on several
occasions into believing that a machine was a human and that a human
was a machine. Worse still, they did not realise their mistake. This
resulted in a combined correct identification rate of less than 56%.
In their paper they assumed that they had made correct identifications
when they in fact had been incorrect.
Abstract. This paper argues that the Turing test is based on a
fixed and de-contextualized view of communicative competence. According
to this view, a machine that passes the test will be able to communicate
effectively in a variety of other situations. But the de-contextualized
view ignores the relationship between language and social context, or,
to put it another way, the extent to which speakers respond dynamically
to variations in discourse function, formality level, social
distance/solidarity among participants, and participants’ relative
degrees of power and status (Holmes, 1992). In the case of the Loebner
Contest, a present-day version of the Turing test, the social context
of interaction can be interpreted in conflicting ways. For example,
Loebner discourse is defined 1) as a friendly, casual conversation
between two strangers of equal power, and 2) as a one-way transaction
in which judges control the conversational floor in an attempt to expose
contestants that are not human. This conflict in discourse function is
irrelevant so long as the goal of the contest is to ensure that only
thinking, human entities pass the test. But if the function of Loebner
discourse is to encourage the production of software that can pass for
human on the level of conversational ability, then the contest designers
need to resolve this ambiguity in discourse function, and thus also come
to terms with the kind of competence they are trying to measure.
Abstract. William Rapaport, in "How Helen Keller used syntactic
semantics to escape from a Chinese Room," (Rapaport 2006), argues that
Helen Keller was in a sort of Chinese Room, and that her subsequent
development of natural language fluency illustrates the flaws in Searle’s
famous Chinese Room Argument and provides a method for developing
computers that have genuine semantics (and intentionality). I contend
that his argument fails. In setting the problem, Rapaport uses his own
preferred definitions of semantics and syntax, but he does not translate
Searle’s Chinese Room argument into that idiom before attacking it. Once
the Chinese Room is translated into Rapaport’s idiom (in a manner that
preserves the distinction between meaningful representations and
uninterpreted symbols), I demonstrate how Rapaport’s argument fails to
defeat the CRA. This failure brings a crucial element of the Chinese
Room Argument to the fore: the person in the Chinese Room is prevented
from connecting the Chinese symbols to his/her own meaningful experiences
and memories. This issue must be addressed before any victory over the
CRA is announced.
Abstract. John Searle’s Chinese room argument is perhaps the
most influential and widely cited argument against artificial intelligence
(AI). Understood as targeting AI proper – claims that computers can think or
do think – Searle’s argument, despite its rhetorical flash, is logically
and scientifically a dud. Advertised as effective against AI proper, the
argument, in its main outlines, is an ignoratio elenchi. It musters
persuasive force fallaciously by indirection fostered by equivocal
deployment of the phrase “strong AI” and reinforced by equivocation on
the phrase “causal powers (at least) equal to those of brains.” On a more
carefully crafted understanding – understood just to target metaphysical
identification of thought with computation (“Functionalism” or
“Computationalism”) and not AI proper – the argument is still unsound,
though more interestingly so. It’s unsound in ways difficult for high
church – “someday my prince of an AI program will come” – believers in AI
to acknowledge without undermining their high church beliefs. The ad
hominem bite of Searle’s argument against the high church persuasions of
so many cognitive scientists, I suggest, largely explains the undeserved
repute this really quite disreputable argument enjoys among them.
Abstract. The intelligent-seeming deeds of computers are what
occasion philosophical debate about artificial intelligence (AI) in the
first place. Since evidence of AI is not bad, arguments against seem
called for. John Searle's Chinese Room Argument (1980a, 1984, 1990, 1994)
is among the most famous and long-running would-be answers to the call.
Surprisingly, both the original thought experiment (1980a) and Searle's
later would-be formalizations of the embedding argument (1984, 1990) are
quite unavailing against AI proper (claims that computers do or someday
will think). Searle lately even styles it a "misunderstanding" (1994,
p. 547) to think the argument was ever so directed! The Chinese room is
now advertised to target Computationalism (claims that computation is
what thought essentially is) exclusively. Despite its renown, the
Chinese Room Argument is totally ineffective even against this target.
Abstract. A theory of “syntactic semantics” is advocated as a
way of understanding how computers can think (and how the Chinese-Room-Argument objection to the Turing Test can be overcome): (1) Semantics,
as the study of relations between symbols and meanings, can be turned
into syntax—a study of relations among symbols (including meanings)—and
hence syntax can suffice for the semantical enterprise. (2) Semantics, as
the process of understanding one domain modeled in terms of another, can
be viewed recursively: The base case of semantic
understanding—understanding a domain in terms of itself—is syntactic
understanding. An internal (or “narrow”), first-person point of view
makes an external (or “wide”), third-person point of view otiose for
purposes of understanding cognition. The paper also sketches the
ramifications of this view with respect to methodological solipsism,
conceptual-role semantics, holism, misunderstanding, and implementation,
and looks at Helen Keller as inhabitant of a Chinese Room.
Abstract. Searle’s celebrated Chinese room thought experiment
was devised as an attempted refutation of the view that appropriately
programmed digital computers literally are the possessors of genuine
mental states. A standard reply to Searle, known as the "robot reply"
(which, I argue, reflects the dominant approach to the problem of content
in contemporary philosophy of mind), consists of the claim that the
problem he raises can be solved by supplementing the computational
device with some "appropriate" environmental hookups. I argue that not
only does Searle himself cast doubt on the adequacy of this idea by
applying to it a slightly revised version of his original argument, but
that the weakness of this encoding-based approach to the problem of
intentionality can also be exposed from a somewhat different angle.
Capitalizing on the work of several authors and, in particular, on that
of psychologist Mark Bickhard, I argue that the existence of symbol-world
correspondence is not a property that the cognitive system itself can
appreciate, from its own perspective, by interacting with the symbol
and therefore, not a property that can constitute intrinsic content.
The foundational crisis to which Searle alluded is, I conclude, very
much alive.
Abstract. The notion of a ‘symbol’ plays an important role in the
disciplines of Philosophy, Psychology, Computer Science, and Cognitive
Science. However, there is comparatively little agreement on how this
notion is to be understood, either between disciplines, or even within
particular disciplines. This paper does not attempt to defend some
putatively ‘correct’ version of the concept of a ‘symbol.’ Rather, some
terminological conventions are suggested, some constraints are proposed
and a taxonomy of the kinds of issue that give rise to disagreement is
articulated. The goal here is to provide something like a ‘geography’ of
the various notions of ‘symbol’ that have appeared in the various
literatures, so as to highlight the key issues and to permit the focusing
of attention upon the important dimensions. In particular, the relationship
between ‘tokens’ and ‘symbols’ is addressed. The issue of designation is
discussed in some detail. The distinction between simple and complex
symbols is clarified and an apparently necessary condition for a system
to be potentially symbol- or token-bearing is introduced.
Abstract. What is the relation between the material, conventional
symbol structures that we encounter in the spoken and written word, and
human thought? A common assumption, that structures a wide variety of
otherwise competing views, is that the way in which these material,
conventional symbol-structures do their work is by being translated
into some kind of content-matching inner code. One alternative to this
view is the tempting but thoroughly elusive idea that we somehow think
in some natural language (such as English). In the present treatment I
explore a third option, which I shall call the "complementarity" view of
language. According to this third view the actual symbol structures of a
given language add cognitive value by complementing (without being
replicated by) the more basic modes of operation and representation
endemic to the biological brain. The "cognitive bonus" that language
brings is, on this model, not to be cashed out either via the ultimately
mysterious notion of "thinking in a given natural language" or via some
process of exhaustive translation into another inner code. Instead, we
should try to think in terms of a kind of coordination dynamics in which
the forms and structures of a language qua material symbol system play
a key and irreducible role. Understanding language as a complementary
cognitive resource is, I argue, an important part of the much larger
project (sometimes glossed in terms of the "extended mind") of
understanding human cognition as essentially and multiply hybrid: as
involving a complex interplay between internal biological resources and
external non-biological resources.
Abstract. This paper locates a conceptual difficulty in the view
that symbolic representations causally govern mental processing. The
problem is to craft a clear divide between systems that are governed by
and systems that are merely described by representations. It is argued
that neither computational functionalists nor identity theorists can
secure the distinction that is needed for defining a successful criterion
for the presence of representations with causal powers. The conclusion
is that until those who are committed to the symbolic processing model
of the mind can produce a better empirical account of what they mean by
the presence of causally active representations in the brain, their claims
will be irrelevant to the conduct of cognitive science.
Abstract. Symbol Grounding tries to answer the question
as to how it is possible for a computer program to use symbols which
are not arbitrarily interpretable. Whereas the signs in conventional
programs are just "parasitic on the meaning in our heads", grounded
symbols should possess at least some "intrinsic meaning". This paper
gives a brief overview of what Symbol Grounding is and summarizes some
of today's connectionist Symbol Grounding models. Instead of concentrating
on cognitive linguistics, we try to present an alternative view of Symbol
Grounding. Our analysis reveals that Symbol Grounding is in fact the
endeavour of automated model construction. Although it originated in a
somewhat anti-formal spirit, it is (necessarily) full of parallels to
classical symbolic logic. We present our view that Symbol Grounding is
in fact a connectionist version of transcendental logic, which is the
basis for generating formal models of non-formal domains. Such
formalizations are inherently logical, though not only based on formal
but also on material truth conditions.
Abstract. The Chinese room argument has presented a persistent
headache in the search for Artificial Intelligence. Since it first
appeared in the literature, various interpretations have been made,
attempting to understand the problems posed by this thought experiment.
Throughout all this time, some researchers in the Artificial
Intelligence community have seen Symbol Grounding as proposed by Harnad
as a solution to the Chinese room argument. The main thesis in this paper
is that although related, these two issues present different problems in
the framework presented by Harnad himself. The work presented here attempts
to shed some light on the relationship between John Searle’s
intentionality notion and Harnad’s Symbol Grounding Problem.
Abstract. Symbols should be grounded, as has been argued before.
But we insist that they should be grounded not only in subsymbolic
activities, but also in the interaction between the agent and the world.
The point is that concepts are not formed in isolation (from the world),
in abstraction, or “objectively.” They are formed in relation to the
experience of agents, through their perceptual/motor apparatuses, in
their world and linked to their goals and actions. This paper takes a
detailed look at this relatively old issue, with a new perspective, aided
by our work of computational cognitive model development. To further our
understanding, we also go back in time to link up with earlier
philosophical theories related to this issue. The result is an account
that extends from computational mechanisms to philosophical abstractions.
Abstract. We consider the symbol grounding problem, and apply to
it philosophical arguments against Cartesianism developed by Sellars and
McDowell: the problematic issue is the dichotomy between inside and
outside which the definition of a physical symbol system presupposes.
Surprisingly, one can question this dichotomy and still do symbolic
computation: a detailed examination of the hardware and software of
serial ports shows this.
Abstract. Hubert and Stuart Dreyfus have tried to place connectionism
and artificial intelligence in a broader historical and intellectual context.
This history associates connectionism with neuroscience, conceptual holism,
and nonrationalism, and artificial intelligence with conceptual atomism,
rationalism, and formal logic. The present paper argues that the Dreyfus
account of connectionism and artificial intelligence is both historically
and philosophically misleading.
Abstract. This paper questions approaches to computational modelling
of neural mechanisms underlying behaviour. It examines "simplifying"
(connectionist) models used in computational neuroscience and concludes
that, unless embedded within a sensorimotor system, they are meaningless.
The implication is that future models should be situated within
closed-environment simulation systems: output of the simulated nervous
system is then expressed as observable behaviour. This approach is referred
to as "computational neuroethology". Computational neuroethology offers a
firmer grounding for the semantics of the model, eliminating subjectivity
from the result-interpretation process. A number of more fundamental
implications of the approach are also discussed, chief of which is that
insect cognition should be studied in preference to mammalian cognition.
Abstract. It is not widely realised that Turing was probably the
first person to consider building computing machines out of simple,
neuron-like elements connected together into networks in a largely random
manner. Turing called his networks 'unorganised machines'. By the application
of what he described as 'appropriate interference, mimicking education' an
unorganised machine can be trained to perform any task that a Turing machine
can carry out, provided the number of 'neurons' is sufficient. Turing
proposed simulating both the behaviour of the network and the training
process by means of a computer program. We outline Turing's connectionist
project of 1948.
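For readers unfamiliar with the 1948 report, the sketch below gives one minimal rendering of an A-type unorganised machine, following the standard reconstruction in which each unit is a two-input NAND element wired at random and all units update synchronously. The network size, seed, and run length are arbitrary, and the training of B-type machines by "appropriate interference" is not modelled here.

```python
"""Minimal sketch of a Turing-style A-type unorganised machine: boolean
units, each reading two randomly chosen units and computing NAND, updated
synchronously. Parameters are illustrative."""
import random

def make_machine(n_units: int, seed: int = 0):
    rng = random.Random(seed)
    # Each unit's two input connections are chosen "in a largely random manner".
    wiring = [(rng.randrange(n_units), rng.randrange(n_units)) for _ in range(n_units)]
    state = [rng.randint(0, 1) for _ in range(n_units)]
    return wiring, state

def step(wiring, state):
    # Synchronous update: every unit outputs NAND of its two inputs' previous states.
    return [1 - (state[i] & state[j]) for i, j in wiring]

wiring, state = make_machine(8)
for t in range(6):
    print(t, state)
    state = step(wiring, state)
```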
Abstract. In recent years the development of connectionist theories
and of various subsymbolic approaches to the study of the mind, and the
renewed interest in the relations between the study of the mind and the
neuroscience have had significant repercussions on the philosophical
foundations of artificial intelligence and cognitive science, and on
important questions of the philosophy of mind. Various approaches to the
problem of mental representations have been formulated, in some sense
alternative to classic approaches of artificial intelligence and cognitive
science. We suggest that the problem of modelling the reference of mental
symbols from a cognitive point of view requires the abandonment of a purely
symbolic approach, and the adoption of a subsymbolic level of
representation. Some philosophical consequences of a subsymbolic level of
this kind are discussed. After distinguishing between the problem of
reference and that of intentionality (which cannot be solved by positing a
subsymbolic level of representation), we shall see how a subsymbolic
approach can be compatible with a functionalist view of the mind, in the
wider sense. Finally, some consequences of subsymbolic models of reference
regarding the problem of the inverted spectrum are described.
Abstract. This paper responds to criticisms levelled by Fodor,
Pylyshyn, and McLaughlin against connectionism. Specifically, I will rebut
the charge that connectionists cannot account for representational
systematicity without implementing a classical architecture. This will be
accomplished by drawing on Paul Smolensky's Tensor Product model of
representation and on his insights about split-level architectures.
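To make the tensor product idea concrete, here is a minimal worked example (the role and filler vectors and the subject–verb–object framing are illustrative choices, not Smolensky's or the author's specific construction): bindings are outer products of filler and role vectors, a structure is their superposition, and orthonormal role vectors allow individual fillers to be recovered again, which is the kind of machinery invoked in reply to the systematicity charge.

```python
"""Illustrative tensor product binding: bind fillers to roles via outer
products, superpose, and unbind with orthonormal role vectors."""
import numpy as np

# Orthonormal role vectors (subject, verb, object) and arbitrary filler vectors.
roles = {"subj": np.array([1., 0., 0.]),
         "verb": np.array([0., 1., 0.]),
         "obj":  np.array([0., 0., 1.])}
fillers = {"Mary":  np.array([1., 2., 0., 1.]),
           "loves": np.array([0., 1., 1., 0.]),
           "John":  np.array([2., 0., 1., 1.])}

# Bind each filler to its role (outer product) and superpose the bindings.
sentence = (np.outer(fillers["Mary"], roles["subj"])
            + np.outer(fillers["loves"], roles["verb"])
            + np.outer(fillers["John"], roles["obj"]))

# Unbinding: project the superposition onto a role vector to recover its filler.
print(sentence @ roles["subj"])    # [1. 2. 0. 1.] == filler vector for "Mary"

# Systematicity: the same roles and fillers also encode "John loves Mary".
swapped = (np.outer(fillers["John"], roles["subj"])
           + np.outer(fillers["loves"], roles["verb"])
           + np.outer(fillers["Mary"], roles["obj"]))
print(np.allclose(swapped @ roles["subj"], fillers["John"]))   # True
```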
Abstract. Artificial intelligence (AI) was born connectionist when
in 1943 Warren S. McCulloch and Walter Pitts introduced the first sequential
logic model of the neuron. The 1950s saw the passage from numerical to symbolic
computation with the christening of AI in 1956. In 1986 there was a rebirth
of connectionism, coinciding with a renewed emphasis on knowledge modeling
and inference, both symbolic and connectionist. We thus reach the present
state in which different paradigms coexist (symbolic, connectionist, situated
and hybrid). In this work, we will attempt (1) to approach the concept of
AI both as a science of the natural and as knowledge engineering (KE);
(2) to summarize some of the conceptual, formal and methodological approaches
to the development of AI during the last 50 years; (3) to mention some of the
constitutive differences between human knowing and machine knowing; and
(4) to propose some suggestions that we believe must be adopted to progress
in developing AI.
Abstract. The dominant assumptions throughout contemporary
philosophy, psychology, cognitive science, and artificial intelligence
about the ontology underlying intentionality, and its core of
representationality, are those of encodings – some sort of informational
or correspondence or covariation relationship between the represented and
its representation that constitutes that representational relationship.
There are many disagreements concerning details and implementations, and
even some suggestions about claimed alternative ontologies, such as
connectionism (though none that escape what is argued is the fundamental
flaw in these dominant approaches). One assumption that seems to be held
by all, however, usually without explication or defence, is that there is
one singular underlying ontology to representationality. In this paper, it
is argued that there are in fact quite a number of ontologies that manifest
representationality – levels of representationality – and that none of them
are the standard ‘manipulations of encoded symbols’ ontology, nor any
other variation on the informational approach to representation.
Collectively, these multiple representational ontologies constitute a
framework for cognition, whether natural or artificial.
Abstract. Arguments in favor of anti-representationalism in
cognitive science often suffer from a lack of attention to detail. The
purpose of this paper is to fill in the gaps in these arguments, and in
so doing show that at least one form of anti-representationalism is
potentially viable. After giving a teleological definition of
representation and applying it to a few models that have inspired
anti-representationalist claims, I argue that anti-representationalism
must be divided into two distinct theses, one ontological, one
epistemological. Given the assumptions that define the debate, I give
reason to think that the ontological thesis is false. I then argue that
the epistemological thesis might, in the end, turn out to be true, despite
a potentially serious difficulty. Along the way, there will be a brief
detour to discuss a controversy from early twentieth century physics.
Abstract. The move toward a dynamical and embodied understanding of
cognitive processes initiated a debate about the usefulness of the notion of
representation for cognitive science. The debate started when some proponents
of a dynamical and embodied approach argued that the use of representations
could be discarded in many circumstances. This remained a minority view,
however, and there is now a tendency to shove this critique of the usefulness
of representations aside as a non-issue for a dynamical and situated approach
to cognition. In opposition, I will argue that the representation issue is far
from settled, and instead forms the kernel of an important conceptual shift
between traditional cognitive science and a dynamical and embodied approach.
This will be done by making explicit the key features of representation in
traditional cognitive science and by arguing that the representation-like
entities that come to the fore in a dynamical and embodied approach are
significantly different from the traditional notion of representation. This
difference warrants a change of terminology to signal an important change in
meaning.
Abstract. This paper investigates the prospects of Rodney Brooks’
proposal for AI without representation. It turns out that the supposedly
characteristic features of "new AI" (embodiment, situatedness, absence of
reasoning, and absence of representation) are all present in conventional
systems: "New AI" is just like old AI. Brooks proposal boils down to the
architectural rejection of central control in intelligent agents—Which,
however, turns out to be crucial. Some of more recent cognitive science
suggests that we might do well to dispose of the image of intelligent agents
as central representation processors. If this paradigm shift is achieved,
Brooks’ proposal for cognition without representation appears promising for
full-blown intelligent agents—though not for conscious agents.
Abstract. The received view is that computational states are
individuated at least in part by their semantic properties. I offer an
alternative, according to which computational states are individuated by
their functional properties. Functional properties are specified by a
mechanistic explanation without appealing to any semantic properties. The
primary purpose of this paper is to formulate the alternative view of
computational individuation, point out that it supports a robust notion
of computational explanation, and defend it on the grounds of how
computational states are individuated within computability theory and
computer science. A secondary purpose is to show that existing arguments
for the semantic view are defective.
Abstract. Robotics as practiced within the artificial life community
is no longer the bitter enemy of representational explanation in the way
that it sometimes seemed to be in the heady, revolutionary days of the 1990s.
This rapprochement is, however, fragile, because the field of evolutionary
robotics continues to pose two important challenges to the idea that
real-time intelligent action must or should be explained by appeal to inner
representations. The first of these challenges, the threat from nontrivial
causal spread, occurs when extra-neural factors account for the kind of
adaptive richness and flexibility normally associated with
representation-based control. The second, the threat from continuous
reciprocal causation, occurs when the causal contributions made by the
systemic components collectively responsible for behavior generation are
massively context-sensitive and variable over time. I argue that while the
threat from nontrivial causal spread can be resisted, the threat from
continuous reciprocal causation provides a stern test for our
representational intuitions.
Abstract. This article revisits the concept of autopoiesis and
examines its relation to cognition and life. We present a mathematical
model of a 3D tessellation automaton, considered as a minimal example of
autopoiesis. This leads us to a thesis T1: "An autopoietic system can be
described as a random dynamical system, which is defined only within its
organized autopoietic domain." We propose a modified definition of
autopoiesis: "An autopoietic system is a network of processes that
produces the components that reproduce the network, and that also
regulates the boundary conditions necessary for its ongoing existence as
a network." We also propose a definition of cognition: "A system is
cognitive if and only if sensory inputs serve to trigger actions in a
specific way, so as to satisfy a viability constraint." It follows from
these definitions that the concepts of autopoiesis and cognition, although
deeply related in their connection with the regulation of the boundary
conditions of the system, are not immediately identical: a system can be
autopoietic without being cognitive, and cognitive without being
autopoietic. Finally, we propose a thesis T2: "A system that is both
autopoietic and cognitive is a living system."
Abstract. Autonomous systems are the result of self-sustaining
processes of constitution of an identity under precarious circumstances.
They may transit through different modes of dynamical engagement with their
environment, from committed ongoing coping to open susceptibility to
external demands. This paper discusses these two statements and presents
examples of models of autonomous behaviour using methods in evolutionary
robotics. A model of an agent capable of issuing self-instructions
demonstrates the fragility of modelling autonomy as a function rather than
as a property of a system’s organization. An alternative model of
behavioural preference based on homeostatic adaptation avoids this problem
by establishing a mutual constraining between lower-level processes (neural
dynamics and sensorimotor interaction) and higher-level metadynamics
(experience-dependent, homeostatic triggering of local plasticity and
re-organization). The results of these models are lessons about how strong
autonomy should be approached: neither as a function, nor as a matter
of external vs. internal determination.
Abstract. We analyze the conditions for agency in natural and
artificial systems. In the case of basic (natural) autonomous systems,
self-construction and activity in the environment are two aspects of the
same organization, the distinction between which is entirely conceptual:
their sensorimotor activities are metabolic, realized according to the
same principles and through the same material transformations as those
typical of internal processes (such as energy transduction). The two
aspects begin to be distinguishable in a particular evolutionary trend,
related to the size increase of some groups of organisms whose adaptive
abilities depend on motility. Here a specialized system develops, which,
in the sensorimotor aspect, is decoupled from the metabolic basis,
although it remains dependent on it in the self-constructive aspect. This
decoupling reveals a complexification of the organization. In the last
section of the article this approach to natural agency is used to analyze
artificial systems by posing two problems: whether it is possible to
artificially build an organization similar to the natural, and whether
this notion of agency can be grounded on different organizing principles.
Abstract. We examine Gärdenfors's theory of conceptual spaces, a
geometrical form of knowledge representation (Conceptual spaces: The geometry
of thought, MIT Press, Cambridge, 2000), in the context of the general
Creative Systems Framework introduced by Wiggins (J Knowl Based Syst
19(7):449-458, 2006a; New Generation Comput 24(3):209-222, 2006b). Gärdenfors's
theory offers a way of bridging the traditional divide between symbolic and
sub-symbolic representations, as well as the gap between representational
formalism and meaning as perceived by human minds. We discuss how both these
qualities may be advantageous from the point of view of artificial creative
systems. We take music as our example domain, and discuss how a range of
musical qualities may be instantiated as conceptual spaces, and present a
detailed conceptual space formalisation of musical metre.
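As a rough illustration of why a conceptual space can bridge sub-symbolic and symbolic description (the quality dimensions, weights, prototypes, and rhythm example below are invented for the sketch and are not the paper's formalisation of musical metre), concepts can be modelled as regions around prototypes in a space of quality dimensions, with categorisation determined by weighted distance.

```python
"""Toy Gärdenfors-style conceptual space: points in a space of quality
dimensions, prototype-based categorisation by weighted Euclidean distance.
All dimensions and numbers are illustrative."""
import math

def distance(x, y, weights):
    # Weighted Euclidean distance over the quality dimensions.
    return math.sqrt(sum(w * (a - b) ** 2 for a, b, w in zip(x, y, weights)))

# Hypothetical quality dimensions for a rhythm: (beats per bar, tempo in BPM / 100).
prototypes = {"waltz-like": (3.0, 1.0), "march-like": (4.0, 1.2)}
weights = (1.0, 0.5)

def categorise(point):
    # Nearest prototype wins, partitioning the space into convex regions.
    return min(prototypes, key=lambda c: distance(point, prototypes[c], weights))

print(categorise((3.0, 0.9)))   # waltz-like
print(categorise((4.0, 1.3)))   # march-like
```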
Abstract. The greatest rhetorical challenge to developers of creative
artificial intelligence systems is convincingly arguing that their software
is more than just an extension of their own creativity. This paper suggests
that "creative autonomy," which exists when a system not only evaluates
creations on its own, but also changes its standards without explicit direction,
is a necessary condition for making this argument. Rather than requiring that
the system be hermetically sealed to avoid perceptions of human influence,
developing creative autonomy is argued to be more plausible if the system is
intimately embedded in a broader society of other creators and critics. Ideas
are presented for constructing systems that might be able to achieve
creative autonomy.
Abstract. Can artificial systems be creative? Can they be designed
to be creative on their own? And what are the requirements of such creative artificial systems? To be able to support humans who are expected to
deliver creative solutions, or to automate part of their tasks, this paper
presents a proposal for creativity requirements that provide a basis for
designing creative artificial systems.
Abstract. We propose a model of expressive music performance (EMP),
focusing on the emergence of EMP under social pressure, including social
interaction and generational inheritance. Previously, we have reported a
system to evolve EMP using a genetic algorithm, exploring the effect of
generational inheritance. This paper presents a system that evolves
expressive performance profiles through social interaction, with a society
of artificial agent performers. Each performer owns a hierarchical pulse
set (i.e., hierarchical duration vs. amplitude matrices), representing a
performance profile for a given piece. An agent performer evaluates a
performance profile with a set of rules derived from the structure of the
piece in question, and imitates others’ performances if appropriate. Then
it modifies its pulse set accordingly. We demonstrate that suitable
performance profiles emerge from social interactions where the diversity
and the commonality of evolved performances are observed in the society
of agents.
Abstract. This paper argues that AI follows classical versions of
epistemology in assuming that the identity of the knowing subject is not
important. In other words this serves to ‘delete the subject’. This
disguises an implicit hierarchy of knowers involved in the representation
of knowledge in AI which privileges the perspective of those who design
and build the systems over alternative perspectives. The privileged
position reflects Western, professional masculinity. Alternative
perspectives, denied a voice, belong to less powerful groups including
women. Feminist epistemology can be used to approach this from new
directions, in particular, to show how women’s knowledge may be left out
of consideration by AI’s focus on masculine subjects. The paper uncovers
the tacitly assumed Western professional male subjects in two flagship AI
systems, Cyc and Soar.
Abstract. There is no strong reason to believe that human-level
intelligence represents an upper limit of the capacity of artificial
intelligence, should it be realized. This poses serious safety issues,
since a superintelligent system would have great power to direct the
future according to its possibly flawed motivation system. Solving this
issue in general has proven to be considerably harder than expected.
This paper looks at one particular approach, Oracle AI. An Oracle AI is
an AI that does not act in the world except by answering questions. Even
this narrow approach presents considerable challenges. In this paper, we
analyse and critique various methods of controlling the AI. In general
an Oracle AI might be safer than unrestricted AI, but still remains
potentially dangerous.
Abstract. Successful attempts to explain expertise in human beings,
or to capture its properties in expert systems, will have to contend with
issues of rationality and generalization. Rationality and generalization
pose enough difficulties on a purely synchronic basis. But an account of
expertise must be diachronic: it must account for the development of
rationality and generalization, even in those who are already experts.
We describe the obstacles in the path of standard approaches to rationality
and generalization, and present an alternative, interactivist treatment
of rationality and its development (space forbids us to do likewise for
generalization). In the interactivist account, rationality cannot be
defined in general as adherence to the rules of a system of formal logic;
we propose instead that rationality be understood in terms of the
development of negative knowledge: knowing what kinds of errors to avoid.
We examine the development of negative knowledge using examples from the
history of science, and consider the consequences of an orientation towards
negative knowledge for classroom instruction as well as the development of
expert systems.
Abstract. This paper discusses the relation between intelligence
and motivation in artificial agents, developing and briefly arguing for
two theses. The first, the orthogonality thesis, holds (with some caveats)
that intelligence and final goals (purposes) are orthogonal axes along
which possible artificial intellects can freely vary—more or less any level
of intelligence could be combined with more or less any final goal. The
second, the instrumental convergence thesis, holds that as long as they
possess a sufficient level of intelligence, agents having any of a wide
range of final goals will pursue similar intermediary goals because they
have instrumental reasons to do so. In combination, the two theses help
us understand the possible range of behavior of superintelligent agents,
and they point to some potential dangers in building such an agent.
Abstract. Having, as it is generally agreed, failed to destroy
the computational conception of mind with the Gödelian attack he
articulated in his The Emperor’s New Mind, Penrose has returned, armed
with a more elaborate and more fastidious Gödelian case, expressed in
Chapters 2 and 3 of his Shadows of the Mind. The core argument in these
chapters is enthymematic, and when formalized, a remarkable number of
technical glitches come to light. Over and above these defects, the
argument, at best, is an instance of either the fallacy of denying
the antecedent, the fallacy of petitio principii, or the fallacy of
equivocation. More recently, writing in response to his critics in the
electronic journal Psyche, Penrose has offered a Gödelian case designed
to improve on the version presented in Shadows of the Mind. But this version is yet
again another failure. In falling prey to the errors we uncover,
Penrose’s new Gödelian case is unmasked as the same confused refrain
J. R. Lucas initiated 35 years ago.
Abstract. Why would we want to endow artificial autonomous agents
with emotions? The main answer to this question seems to rely on what has
been called the functional view of emotions, arising from (analytic) studies
of natural systems. In this paper, I examine to what extent this hypothesis
can be applied to the (synthetic) investigation of arti¢cial emotions and
what are its implications for the design of emotional agents, the main
approaches that can be appropriately used to model emotions in autonomous
agents, and why situated autonomous agents provide a good framework to
study the relation between emotion and adaptation.
Abstract. In modern technical societies computers interact with human
beings in ways that can affect moral rights and obligations. This has given
rise to the question whether computers can act as autonomous moral agents.
The answer to this question depends on many explicit and implicit definitions
that touch on different philosophical areas such as anthropology and
metaphysics. The approach chosen in this paper centres on the concept of
information. Information is a multi-faceted notion which is hard to define
comprehensively. However, the frequently used definition of information
as data endowed with meaning can promote our understanding. It is argued
that information in this sense is a necessary condition of cognitivist
ethics. This is the basis for analysing computers and information processors
regarding their status as possible moral agents. Computers have several
characteristics that are desirable for moral agents. However, computers in
their current form are unable to capture the meaning of information and
therefore fail to reflect morality in anything but a most basic sense of
the term. This shortcoming is discussed using the example of the Moral
Turing Test. The paper ends with a consideration of which conditions
computers would have to fulfil in order to be able to use information in
such a way as to render them capable of acting morally and reflecting
ethically.
Abstract. Volition, although often poorly defined, is a concept
of interest and utility to both philosophers and researchers in artificial
intelligence. In this article, a definition of volition is proposed and a
functionally defined, physically grounded ordinal scale and a procedure
by which volition might be measured are put forward: a type of Turing test
for volition, but motivated by an explicit analysis of the concept being
tested and providing results that are graded, rather than Boolean, so that
candidate systems may be ranked according to their degree of volitional
endowment. It is proposed that volition is a functional, aggregate property
of certain physical systems and it is defined as the capacity for adaptive
decision-making. The scale, similar in scope to Daniel Dennett’s Kinds of
Minds scale, is then outlined, as well as a set of progressive "litmus
tests" for determining where a candidate system falls on the scale. Such
a formulation may be useful for understanding volition and assessing the
progress made in engineering intelligent, autonomous artificial organisms.
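The idea of a graded rather than Boolean test can be sketched in a few lines of code. The litmus tests below are invented placeholders, not the article's actual criteria; they only show how candidate systems could be ranked by the highest consecutive level they pass.

```python
# Hypothetical sketch of a graded ("litmus test") scale for volition.
# The predicates below are illustrative placeholders, not the article's tests.

LITMUS_TESTS = [
    ("responds to stimuli",          lambda s: s.get("reactive", False)),
    ("selects among alternatives",   lambda s: s.get("chooses", False)),
    ("adapts choices to outcomes",   lambda s: s.get("learns", False)),
    ("pursues self-generated goals", lambda s: s.get("self_goals", False)),
]

def volition_rank(system: dict) -> int:
    """Return the highest consecutive level of the scale the system passes."""
    rank = 0
    for _, test in LITMUS_TESTS:
        if not test(system):
            break
        rank += 1
    return rank

if __name__ == "__main__":
    thermostat = {"reactive": True}
    chess_ai   = {"reactive": True, "chooses": True, "learns": True}
    print(volition_rank(thermostat))  # 1
    print(volition_rank(chess_ai))    # 3
```

Ranking by the highest consecutive level passed is one design choice among several; a weighted sum over tests would give a finer-grained ordering.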
Abstract. Research into cognitive architectures is described within
a framework spanning major issues in artificial intelligence and cognitive
science. Earlier work on motivation is extended with a cognitive model of
reasoning which, together with an affective mechanism, enables
consistent decision-making across a variety of cognitive and reactive
processes. Cognition involves the control of behaviour within both
external and internal environments. The control of behaviour is vital to
an autonomous system as it acts to further its goals. Except in the most
spartan of environments, the potential available information and
associated combinatorics in a perception, cognition, and action sequence
can tax even the most powerful agents. The affect magnitude concept
solves some problems with BDI models, and allows for adaptive decision-making
over a number of tasks in different domains. The cognitive and
affective components are brought together using motivational constructs.
The generic cognitive model can adapt to different environments and
tasks as it makes use of motivational models to direct reactive and
situated processes.
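As a rough sketch of the kind of mechanism described, the toy code below blends a deliberative utility estimate with an affective signal when choosing among competing desires. The numbers and the weighting rule are invented for illustration and are not the paper's actual affect-magnitude model.

```python
# Minimal, hypothetical sketch: an affective signal used as a tie-breaking
# weight when a BDI-style agent chooses which desire to commit to.
# The figures and the blending rule are illustrative, not the paper's model.

desires = [
    {"name": "recharge_battery", "expected_utility": 0.6, "affect": 0.9},  # urgent
    {"name": "tidy_workspace",   "expected_utility": 0.6, "affect": 0.2},
    {"name": "explore_room",     "expected_utility": 0.5, "affect": 0.4},
]

def priority(d: dict, affect_weight: float = 0.5) -> float:
    # Blend deliberative utility with the affective signal.
    return (1 - affect_weight) * d["expected_utility"] + affect_weight * d["affect"]

chosen = max(desires, key=priority)
print(chosen["name"])  # recharge_battery: affect breaks the utility tie
```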
Abstract. In the first section of his celebrated 1936 paper A.
Turing says of the machines he defines that at each stage of their
operation they can ‘effectively remember’ some of the symbols they have
scanned before. In this paper I explicate the motivation and content of
this remark of Turing’s, and argue that it reveals what could be labeled
as a connectionist conception of the human mind.
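Turing's remark can be made concrete with a toy machine. In the standard textbook-style construction sketched below (not Turing's own 1936 example), the finite control 'effectively remembers' the first symbol it scanned by carrying it in its state, so later behaviour depends on a symbol no longer under the head.

```python
# Toy Turing-style machine illustrating "effective remembering":
# after reading the first symbol, the machine carries it in its control state,
# so later behaviour depends on a symbol no longer under the head.
# (Illustrative construction; not Turing's own 1936 example.)

def run(tape):
    state, pos = "start", 0
    while state != "halt":
        symbol = tape[pos] if pos < len(tape) else "_"
        if state == "start":                 # remember the first symbol in the state
            state = "saw_" + symbol
            pos += 1
        elif state.startswith("saw_"):
            if symbol == "_":                # at the end, write back the remembered symbol
                tape.append(state[len("saw_"):])
                state = "halt"
            else:
                pos += 1
    return tape

print(run(list("abb")))  # ['a', 'b', 'b', 'a'] -- the machine "remembered" 'a'
```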
Abstract. The computational conception of the mind that dominates
cognitive science assumes that thought processes involve the computation
of algorithms or the execution of functions. Human minds turn out to be
automatic formal systems or physical syntax-processing systems. The
objection has often been posed that systems of this kind do not possess
sufficient conditions for mentality, because the syntax they process may
be meaningless for those systems. That problem concerns their semantic
content. Here an additional objection is posed that systems of this kind,
as normatively-directed, problem-solving causal systems, impose conditions
that are not necessary for mentality, because many if not most human
thought processes violate them. This problem concerns their causal
character. The computational conception reflects an overgeneralization
about human thought processes based on special kinds of thinking and thus
seems to be trivial or false.
Abstract. Self-improvement was one of the aspects of AI proposed
for study in the 1956 Dartmouth conference. Turing proposed a “child
machine” which could be taught in the human manner to attain adult
human-level intelligence. In latter days, the contention that an AI system
could be built to learn and improve itself indefinitely has acquired the
label of the bootstrap fallacy. Attempts in AI to implement such a system
have met with consistent failure for half a century. Technological
optimists, however, have maintained that such a system is possible,
producing, if implemented, a feedback loop that would lead to a rapid
exponential increase in intelligence. We examine the arguments for both
positions and draw some conclusions.
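The disputed claim can be made precise with a pair of toy recurrences, invented here for illustration rather than taken from the paper: if each gain in capability also raises the system's ability to improve itself, the trajectory accelerates; if returns diminish, it grows only gently.

```python
# Toy recurrences contrasting the "bootstrap" optimists' picture with
# diminishing returns. Purely illustrative; the constants are arbitrary.

def self_amplifying(i, k=0.2):   # improvement ability scales with current level
    return i + k * i * i

def diminishing(i, k=0.2):       # improvement ability grows slower than the level
    return i + k * (i ** 0.5)

level_a = level_b = 1.0
for step in range(10):
    level_a = self_amplifying(level_a)
    level_b = diminishing(level_b)

print(round(level_a, 2), round(level_b, 2))
# The first trajectory explodes within ten steps; the second creeps upward
# almost linearly.
```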
Abstract. Recently, several authors (Searle, Penrose, Rychlak)
have suggested that AI is a doomed undertaking. In his recent book,
Artificial Intelligence and Human Reasoning, Joseph Rychlak repeats many
of the arguments of the other critics, as well as offering several of his
own. In this paper, taking Rychlak as symptomatic of this new
anti-computational intellectual movement, we respond to these arguments
and defend AI and personal construct theory against some of the
misunderstandings and confusions which we find there.
Abstract. We confront the following popular views: that mind or
life are algorithms; that thinking, or more generally any process other
than computation, is computation; that anything other than a working brain
can have thoughts; that anything other than a biological organism can be
alive; that form and function are independent of matter; that sufficiently
accurate simulations are just as genuine as the real things they imitate;
and that the Turing test is either a necessary or sufficient or scientific
procedure for evaluating whether or not an entity is intelligent. Drawing
on the distinction between activities and tasks, and the fundamental
scientific principles of ontological lawfulness, epistemological realism,
and methodological skepticism, we argue for traditional scientific
materialism of the emergentist kind in opposition to the functionalism,
behaviourism, tacit idealism, and merely decorative materialism of the
artificial intelligence and artificial life communities.
Abstract. This paper deals with the rationalist assumptions behind
researches of artificial intelligence (AI) on the basis of Hubert
Dreyfus’s critique. Dreyfus is a leading American philosopher known for
his rigorous critique of the underlying assumptions of the field of
artificial intelligence. Artificial intelligence specialists, especially
those whose view is commonly dubbed "classical AI", assume that creating
a thinking machine like the human brain is not a distant prospect
because they believe that human intelligence works on the basis of
formalized rules of logic. In contradistinction to classical AI
specialists, Dreyfus contends that it is impossible to create intelligent
computer programs analogous to the human brain because the workings of
human intelligence are entirely different from those of computing machines.
For Dreyfus, the human mind functions intuitively and not formally.
Following Dreyfus, this paper aims to pinpoint the major flaws
classical AI suffers from. The author of this paper believes that
pinpointing these flaws would inform inquiries on and about artificial
intelligence. Over and beyond this, this paper contributes something
indisputably original. It strongly argues that classical AI research
programs have, though inadvertently, falsified an entire epistemological
enterprise of the rationalists not in theory as philosophers do but in
practice. In trying hard to produce a machine that can think like a human
mind, AI workers have in a way been testing, and testing to the very limit,
the rationalist assumption that the
workings of the human mind depend on logical rules. Result: No computers
actually function like the human mind. Reason: the human mind does not
depend on the formal or logical rules ascribed to computers. Thus,
symbolic AI research has falsified the rationalist assumption that
‘the human mind reaches certainty by functioning formally’ by virtue
of its failure to create a thinking machine.
Abstract. John Searle has used his Chinese room example to attack
the idea of computationally reproducing intelligence. His arguments have
variously assumed or (more recently) asserted that consciousness and
intelligence are necessarily interdependent. This stance has allowed him
to apply intuitive arguments about what could or could not be conscious
to the issue of what could or could not be intelligent. I present a
variety of arguments, theoretical and intuitive, to show that Searle is
conflating mentality and semantics. By maintaining that distinction we
can then address how to generate the semantics that intelligence requires.
In Stevan Harnad's approach to symbol grounding we have a plausible
candidate for finding referential semantics without taking detours
through an unanalysable consciousness. Artificial intelligence as normally
construed does not require that philosophical problems about consciousness
be resolved, let alone that consciousness should be computationally
definable: Searle's arguments against strong AI are irrelevant to
real-world AI.
Abstract. According to the conventional wisdom, Turing (1950)
said that computing machines can be intelligent. I don’t believe it.
I think that what Turing really said was that computing machines
— computers limited to computing — can only fake intelligence. If we
want computers to become genuinely intelligent, we will have to give
them enough “initiative” (Turing, 1948, p. 21) to do more than compute.
In this paper, I want to try to develop this idea. I want to explain
how giving computers more “initiative” can allow them to do more than
compute. And I want to say why I believe (and believe that Turing
believed) that they will have to go beyond computation before they
can become genuinely intelligent.
Abstract. The proper treatment of computationalism, as the thesis
that cognition is computable, is presented and defended. Some arguments
of James H. Fetzer against computationalism are examined and found wanting,
and his positive theory of minds as semiotic systems is shown to be
consistent with computationalism. An objection is raised to an argument
of Selmer Bringsjord against one strand of computationalism, namely,
that Turing-Test-passing artifacts are persons; it is argued that, whether
or not this objection holds, such artifacts will inevitably be persons.
Abstract. Over recent decades there has been a growing interest in
the question of whether computer programs are capable of genuinely creative
activity. Although this notion can be explored as a purely philosophical
debate, an alternative perspective is to consider what aspects of the
behaviour of a program might be noted or measured in order to arrive at
an empirically supported judgement that creativity has occurred. We sketch
out, in general abstract terms, what goes on when a potentially creative
program is constructed and run, and list some of the relationships (for
example, between input and output) which might contribute to a decision
about creativity. Specifically, we list a number of criteria which might
indicate interesting properties of a program’s behaviour, from the
perspective of possible creativity. We go on to review some ways in which
these criteria have been applied to actual implementations, and some
possible improvements to this way of assessing creativity.
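One input/output relationship of the kind listed can be operationalised crudely, as in the hypothetical sketch below: an output scores on novelty if it does not appear in the program's input material, and on value under a stand-in domain heuristic. Both the criteria and the scoring rules are invented for illustration.

```python
# Crude, hypothetical operationalisation of two creativity criteria:
# novelty relative to the program's input, and value under a domain heuristic.
# Both the criteria and the scoring are illustrative inventions.

def novelty(output: str, inputs: set) -> float:
    return 0.0 if output in inputs else 1.0

def value(output: str) -> float:
    # Stand-in heuristic: longer outputs with no immediate repetition score higher.
    no_repeats = all(a != b for a, b in zip(output, output[1:]))
    return min(1.0, len(output) / 10) * (1.0 if no_repeats else 0.5)

def creativity_score(output: str, inputs: set) -> float:
    return novelty(output, inputs) * value(output)

corpus = {"abab", "baba"}
print(creativity_score("abab", corpus))    # 0.0 -- merely reproduced its input
print(creativity_score("abcabc", corpus))  # 0.6 -- novel and passably "valuable"
```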
Abstract. At the 1997 Annual Meeting of the American Society for
Cybernetics there was a panel session on the subject of "Cyberethics",
a term suggested by Heinz von Foerster. The speakers were Heinz von
Foerster, Philip Lewin, Robert Martin, Herbert Brun, Doreen Steg, and
several people in the audience.
Abstract. In discussions on the limitations of Artificial
Intelligence (AI), there are three major misconceptions: identifying an
AI system with an axiomatic system, a Turing machine, or a system with
a model-theoretic semantics. Though these three notions can be used to
describe a computer system for certain purposes, they are not always
the proper theoretical notions when an AI system is under consideration.
These misconceptions are not only the basis of many criticisms of AI from
the outside, but also responsible for many problems within AI research.
This paper analyses these misconceptions and points out their common
root: treating empirical reasoning as mathematical reasoning. Finally,
an example intelligent system called NARS is introduced, which is neither
an axiomatic system nor a Turing machine in its problem-solving process,
and does not use model-theoretic semantics, but is still implementable in
an ordinary computer.
Abstract. This article examines argument structures and strategies
in pro and con argumentation about the possibility of human-level
artificial intelligence (AI) in the near term future. It examines renewed
controversy about strong AI that originated in a prominent 1999 book and
continued at major conferences and in periodicals, media commentary, and
Web-based discussions through 2002. It will be argued that the book made
use of implicit, anticipatory refutation to reverse prevailing value
hierarchies related to AI. Drawing on Perelman and Olbrechts-Tyteca’s
(1969) study of refutational argument, this study considers points of
contact between opposing arguments that emerged in opposing loci,
dissociations, and casuistic reasoning. In particular, it shows how
perceptions of AI were reframed and rehabilitated through metaphorical
language, reversal of the philosophical pair ‘artificial/natural’,
appeals to the paradigm case, and use of the loci of quantity and essence.
Furthermore, examining responses to the book in subsequent arguments
indicates the topoi characteristic of the rhetoric of technology advocacy.
Abstract. A commonality between research in artificial intelligence
and synthetic emotion is that it seems in both cases to be rather difficult
to give an acceptable definition of the naturally occurring counterpart.
One could speculate whether this is due to the multiplicity of the nature
of both phenomena or due to a categorical misconception. In this article,
I try to briefly outline a number of different motivations for modeling
emotions, and to relate those motivations to two different principal design
approaches for computational models of emotion. From these two aspects,
together with our current assumptions about mechanisms underlying human
emotions, I conclude with some speculations about adaptation in affective
systems, and some implications of the notion of grounding emotions in
adaptive systems.
Book chapters
Any chapter from the following books:
Α. Artificial Consciousness
Readings
PDF, 14 pages, 39 K
PDF, 44 pages, 342 K
Topics
PDF, 15 pages, 55 K
PDF, 15 pages, 206 K
Nature (the Art whereby God hath made and governes the World) is by
the Art of man, as in many other things, so in this also imitated, that
it can make an Artificial Animal. For seeing life is but a motion of Limbs,
the beginning whereof is in some principall part within; why may we
not say, that all Automata (Engines that move themselves by springs and
wheels as doth a watch) have an artificiall life ? For what is the Heart,
but a Spring ; and the Nerves but so many Strings ; and the Joynts, but
so many Wheeles, giving motion to the whole Body, such as was intended
by the Artificer ? Art goes yet further, imitating that Rationall and
most excellent worke of Nature, Man.
(Hobbes 1651, p. 81)
So declared Thomas Hobbes in 1651 in the Introduction to his well-known
work, Leviathan, published one year after René Descartes’ death.
Descartes was also interested in mechanical explanations of bodily
processes and organic life. In fact, on the basis of his neuroanatomical
and physiological studies, as well as philosophical arguments, Descartes
had already argued that human and animal bodies could be mechanically
understood as complicated and intricately designed machines (Descartes
1664). What differentiated Descartes from Hobbes lay in his belief that
human beings, unlike non-human animals, were not merely bodies; they
were unions of material bodies and immaterial souls. The immaterial soul
was necessary for Descartes to explain the peculiar capacities and
activities of the human mind. As such, materialist mechanical explanations
could never be sufficient to account for the whole human being.
The fundamental assumption of Artificial Intelligence (AI) as a research
program is that human minds operate on computational principles, and its
grand goal is to build material artifacts that genuinely possess the very
same mental capacities that human beings have. As Haugeland puts it: ‘we
are really interested in AI as part of the theory that people are computers’
(Haugeland 1985: 5–6). If so, in order for the project of AI to have any
hopes of accomplishing its grand goal, it has to rely on an entirely
materialist framework. The important and relevant theoretical question,
which connects foundational considerations of Philosophy with the empirical
considerations of AI research, is, then, whether and how a materialist account of the mind can be given. This is the question that is explored
in this essay, in light of the most recent developments in contemporary
philosophy of mind.
PDF, 10 pages, 244 K
PDF, 7 pages, 117 K
Related article: N. Boltuc, P. Boltuc (2008). Replication of
the Hard Problem of Consciousness in AI and Bio-AI: An Early Conceptual
Framework, Proceedings 2007 AAAI Fall Symposium on "AI and
Consciousness: Theoretical Foundations and Current Approaches".
(PDF, 6 pages, 156 K)
Abstract. AI could in principle replicate consciousness
(H-consciousness) in its first-person form (as described by Chalmers in
the hard problem of consciousness). If we can understand first-person
consciousness in clear terms, we can provide an algorithm for it; if we
have such an algorithm, in principle we can build it. There are two questions
that this argument opens. First, whether we ever will understand
H-consciousness in clear terms. Second, whether we can build
H-consciousness in inorganic substance. If organic substance is required,
we would need to clearly grasp the difference between building a machine
out of organic substance (Bio-AI) and just modifying a biological
organism.
PDF, 30 pages, 263 K
Β. Behavior-Based AI, Embodiment
Readings
PDF, 12 pages, 712 K
PDF, 37 pages, 251 K
Topics
PDF, 35 pages, 359 K
PDF, 28 pages, 342 K
PDF, 20 pages, 164 K
PDF, 29 pages, 474 K
PDF, 18 pages, 246 K
PDF, 27 pages, 271 K
PDF, 10 pages, 90 K
PDF, 45 pages, 125 K
PDF, 17 pages, 1.17 M
PDF, 34 pages, 612 K
PDF, 10 pages, 200 K
Γ. Artificial Life
Readings
PDF, 87 pages, 400 K
PDF, 24 pages, 98 K
PDF, 4 pages, 97 K
Topics
PDF, 12 pages, 122 K
PDF, 14 pages, 146 K
PDF, 17 pages, 239 K
PDF, 15 pages, 139 K
PDF, 18 pages, 192 K
Δ. Turing Test
Readings
PDF, 14 pages, 1.18 M
PDF, 10 pages, 140 K
PDF, 42 pages, 141 K
Topics
PDF, 15 pages, 2.99 M
PDF, 22 pages, 119 K
PDF, 28 pages, 184 K
PDF, 22 pages, 356 K
PDF, 22 pages, 212 K
PDF, 14 pages, 164 K
PDF, 24 pages, 97 K
Ε. Chinese Room
Readings
PDF, 13 pages, 109 K
PDF, 21 pages, 890 K
Topics
PDF, 17 pages, 266 K
PDF, 16 pages, 171 K
Related article: W.J. Rapaport (2006). How Helen Keller used
syntactic semantics to escape from a Chinese Room, Minds and
Machines, 16:381–436.
(PDF, 56 pages, 577 K)
Abstract. A computer can come to understand natural language the
same way Helen Keller did: by using "syntactic semantics"—a theory of
how syntax can suffice for semantics, i.e., how semantics for natural
language can be provided by means of computational symbol manipulation.
This essay considers real-life approximations of Chinese Rooms, focusing
on Helen Keller’s experiences growing up deaf and blind, locked in a sort
of Chinese Room yet learning how to communicate with the outside
world. Using the SNePS computational knowledge-representation system,
the essay analyzes Keller’s belief that learning that "everything has a
name" was the key to her success, enabling her to "partition" her mental
concepts into mental representations of: words, objects, and the naming
relations between them. It next looks at Herbert Terrace’s theory of
naming, which is akin to Keller’s, and which only humans are supposed
to be capable of. The essay suggests that computers at least, and perhaps
non-human primates, are also capable of this kind of naming.
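The "partition" described above can be pictured with a toy knowledge base. The sketch below is a stand-in, not SNePS itself: it keeps representations of words, representations of objects, and explicit naming relations between them as three distinct kinds of entries.

```python
# Toy stand-in (not SNePS) for the partition into mental representations of
# words, objects, and the naming relations linking them.

words   = {"w1": "water", "w2": "doll"}          # representations of words
objects = {"o1": "cool liquid felt on the hand", # representations of objects
           "o2": "small human-shaped toy"}
names   = [("w1", "o1"), ("w2", "o2")]           # naming relations: word -> object

def what_does(word_id: str) -> str:
    """Follow the naming relation from a word node to the object it names."""
    for w, o in names:
        if w == word_id:
            return objects[o]
    return "no object named by this word yet"

print(words["w1"], "names:", what_does("w1"))
```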
PDF, 17 pages, 57 K
PDF, 28 pages, 135 K
PDF, 19 pages, 68 K
PDF, 19 pages, 1901 K
PDF, 26 pages, 413 K
PDF, 22 pages, 227 K
Ζ. Symbols
Readings
PDF, 12 pages, 1.03 M
PDF, 27 pages, 413 K
Topics
PDF, 13 pages, 198 K
PDF, 17 pages, 262 K
PDF, 15 pages, 231 K
PDF, 10 pages, 193 K
PDF, 10 pages, 170 K
PDF, 24 pages, 228 K
PDF, 24 pages, 222 K
Η. Connectionism
Readings
PDF, 15 pages, 874 K
PDF, 36 pages, 2.31 M
Topics
PDF, 19 pages, 790 K
PDF, 26 pages, 266 K
PDF, 17 pages, 1.02 M
PDF, 15 pages, 596 K
PDF, 9 pages, 228 K
PDF, 10 pages, 301 K
PDF, 18 pages, 60 K
Θ. Representations
Readings
PDF, 49 pages, 2.04 M
PDF, 12 pages, 1.42 M
Topics
PDF, 37 pages, 548 K
PDF, 23 pages, 447 K
PDF, 14 pages, 100 K
PDF, 15 pages, 188 K
PDF, 37 pages, 236 K
PDF, 17 pages, 155 K
Ι. Autonomy
Readings
PDF, 14 pages, 63 K
PDF, 10 pages, 243 K
Topics
PDF, 19 pages, 1137 K
PDF, 15 pages, 818 K
PDF, 15 pages, 146 K
Κ. Artificial Creativity
Reading
PDF, 10 pages, 807 K
Topics
PDF, 28 pages, 122 K
PDF, 30 pages, 488 K
PDF, 13 pages, 261 K
PDF, 14 pages, 153 K
PDF, 12 pages, 844 K
Λ. Miscellaneous
Topics
PDF, 23 pages, 109 K
PDF, 26 pages, 269 K
PDF, 19 pages, 284 K
PDF, 15 pages, 198 K
PDF, 23 pages, 686 K
PDF, 23 pages, 207 K
PDF, 17 pages, 99 K
PDF, 18 pages, 419 K
PDF, 21 pages, 61 K
PDF, 28 pages, 586 K
PDF, 24 pages, 251 K
PDF, 11 pages, 465 K
PDF, 11 pages, 120 K
PDF, 41 pages, 228 K
PDF, 21 pages, 431 K
PDF, 16 pages, 72 K
PDF, 11 pages, 140 K
PDF, 16 pages, 770 K
PDF, 17 pages, 695 K
PDF, 26 pages, 146 K
PDF, 12 pages, 148 K
PDF, 14 pages, 575 K
PDF, 17 pages, 106 K
PDF, 17 pages, 292 K
PDF, 33 pages, 314 K
PDF, 16 pages, 146 K
PDF, 20 pages, 146 K
PDF, 22 pages, 84 K
PDF, 20 pages, 374 K
Last updated 8 February 2015.
Send me an email (etzafestas@phs.uoa.gr)