Suggested literature-survey assignments (2014)


Book chapters

Any chapter from the following books:


Α. Artificial Consciousness

Readings

Topics

  1. W.A. Adams (2004). Machine consciousness: Plausible idea or semantic distortion?, Journal of Consciousness Studies, 11(9):46-56.
    PDF, 15 pages, 55 K

  2. M. Aydede, G. Güzeldere (2000). Consciousness, intentionality and intelligence: Some foundational issues for artificial intelligence, Journal of Experimental and Theoretical Artificial Intelligence, 12:263-277.
    PDF, 15 pages, 206 K

    Abstract. Three fundamental questions concerning minds are presented. These are about consciousness, intentionality and intelligence. After we present the fundamental framework that has shaped both the philosophy of mind and the Artificial Intelligence research in the last forty years or so regarding the last two questions, we turn to consciousness, whose study still seems evasive to both communities. After briefly illustrating why and how phenomenal consciousness is puzzling, a theoretical diagnosis of the problem is proposed and a framework is presented, within which further research would yield a solution. The diagnosis is that the puzzle stems from a peculiar dual epistemic access to phenomenal aspects (qualia) of our conscious experiences. An account of concept formation is presented such that both the phenomenal concepts (like the concepts RED and SWEET) and the introspective concepts (like the concepts EXPERIENCING RED and TASTING SWEET) are acquired from a first-person perspective as opposed to the third-person one (the standard concept formation strategy about objective features). We explain the first-person perspective in information-theoretic and computational terms.

  3. H.J. Caulfield, J.L. Johnson, M.P. Schamschula, R. Inguva (2001). A general model of primitive consciousness, Cognitive Systems Research, 2:263-272.
    PDF, 10 pages, 244 K

    Abstract. We present a simple model of consciousness as it may exist in animals and can exist in man-made artifacts. The minimum unit of consciousness is a brain/body in interaction with a world. No parts of that system are themselves conscious. Emphasis is placed on structures that could have evolved from earlier structures by small steps, each of which conferred advantage to its possessors. The model is functional, so it becomes possible to build such conscious systems. Indeed, we show why conscious systems should be built as well as how humans should interact with them.

  4. B.H. Dournaee (2010). Comments on "The Replication of the Hard Problem of Consciousness in AI and Bio-AI", Minds and Machines, 20:303–309.
    PDF, 7 pages, 117 K

    Abstract. In their joint paper entitled "The Replication of the Hard Problem of Consciousness in AI and BIO-AI" (Boltuc and Boltuc, Replication of the hard problem of consciousness in AI and Bio-AI: An early conceptual framework, 2008), Nicholas and Piotr Boltuc suggest that machines could be equipped with phenomenal consciousness, which is subjective consciousness that satisfies Chalmers’ hard problem (we will abbreviate the hard problem of consciousness as "H-consciousness"). The claim is that if we knew the inner workings of phenomenal consciousness and could understand its precise operation, we could instantiate such consciousness in a machine. This claim, called the extra-strong AI thesis, is an important claim because if true it would demystify the privileged access problem of first-person consciousness and cast it as an empirical problem of science and not a fundamental question of philosophy. A core assumption of the extra-strong AI thesis is that there is no logical argument that precludes the implementation of H-consciousness in an organic or inorganic machine provided we understand its algorithm. Another way of framing this conclusion is that there is nothing special about H-consciousness as compared to any other process. That is, in the same way that we do not preclude a machine from implementing photosynthesis, we also do not preclude a machine from implementing H-consciousness. While one may be more difficult in practice, it is a problem of science and engineering, and no longer a philosophical question. I propose that Boltuc’s conclusion, while plausible and convincing, comes at a very high price; the argument given for his conclusion does not exclude any conceivable process from machine implementation. In short, if we make some assumptions about the equivalence of a rough notion of algorithm and then tie this to human understanding, all logical preconditions vanish and the argument grants that any process can be implemented in a machine. The purpose of this paper is to comment on the argument for his conclusion and offer additional properties of H-consciousness that can be used to make the conclusion falsifiable through scientific investigation rather than relying on the limits of human understanding.

      Related article: N. Boltuc, P. Boltuc (2008). Replication of the Hard Problem of Consciousness in AI and Bio-AI: An Early Conceptual Framework, Proceedings 2007 AAAI Fall Symposium on "AI and Consciousness: Theoretical Foundations and Current Approaches". (PDF, 6 pages, 156 K)
      Abstract. AI could in principle replicate consciousness (H-consciousness) in its first-person form (as described by Chalmers in the hard problem of consciousness). If we can understand first-person consciousness in clear terms, we can provide an algorithm for it; if we have such an algorithm, in principle we can build it. There are two questions that this argument opens. First, whether we will ever understand H-consciousness in clear terms. Second, whether we can build H-consciousness in inorganic substance. If organic substance is required, we would need to clearly grasp the difference between building a machine out of organic substance (Bio-AI) and just modifying a biological organism.

  5. L. Floridi (2005). Consciousness, agents and the knowledge game, Minds and Machines, 15:415-444.
    PDF, 30 pages, 263 K (see the illustrative sketch after this list)

    Abstract. This paper has three goals. The first is to introduce the "knowledge game", a new, simple and yet powerful tool for analysing some intriguing philosophical questions. The second is to apply the knowledge game as an informative test to discriminate between conscious (human) and conscious-less agents (zombies and robots), depending on which version of the game they can win. And the third is to use a version of the knowledge game to provide an answer to Dretske’s question "how do you know you are not a zombie?".
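
Floridi's knowledge game (item 5 above) descends from the classic three-wise-men riddle, in which each agent must work out its own, unobservable state from what the other agents can and cannot deduce. As a purely illustrative sketch of that classical core (an assumption here; the paper's own versions of the game differ in detail), the following Python fragment solves the riddle by elimination over possible worlds: three hats are drawn from a stock of three white and two red, each sage sees the other two hats, and each in turn announces whether it knows its own colour.

```python
from itertools import product

# A minimal possible-worlds sketch of the three-wise-men riddle that
# Floridi's "knowledge game" generalises (an illustrative assumption,
# not the paper's own formalisation). Three hats are drawn from a
# stock of 3 white (W) and 2 red (R); each sage sees the other two.

STOCK = {"W": 3, "R": 2}

def legal(world):
    # A world is any hat assignment the stock of hats permits.
    return all(world.count(c) <= STOCK[c] for c in STOCK)

WORLDS = [w for w in product("WR", repeat=3) if legal(w)]

def knows_own_hat(agent, true_world, live_worlds):
    # The agent knows its hat if every still-possible world that
    # matches what it sees (the other two hats) agrees on its own hat.
    seen = [h for i, h in enumerate(true_world) if i != agent]
    candidates = [w for w in live_worlds
                  if [h for i, h in enumerate(w) if i != agent] == seen]
    return len({w[agent] for w in candidates}) == 1

def play(true_world):
    live = WORLDS[:]        # worlds compatible with all announcements
    for agent in range(3):
        if knows_own_hat(agent, true_world, live):
            return agent, true_world[agent]
        # A public "I don't know" eliminates every world in which
        # this agent would have known its own colour.
        live = [w for w in live if not knows_own_hat(agent, w, live)]
    return None

# With hats (W, W, W) the third sage deduces its colour purely from
# the others' announced ignorance.
print(play(("W", "W", "W")))   # -> (2, 'W')
```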


Β. Behavior-Based AI and Embodiment

Readings

Topics

  1. R.D. Beer (2010). Dynamical systems and embedded cognition, in K. Frankish and W. Ramsey (Eds.) "The Cambridge Handbook of Artificial Intelligence".
    PDF, 35 pages, 359 K

  2. R.A. Brooks (1991). Intelligence without reason, Artificial Intelligence Memo 1293, MIT AI Lab, April 1991.
    PDF, 28 pages, 342 K (see the illustrative sketch after this list)

    Abstract. Computers and Thought are the two categories that together define Artificial Intelligence as a discipline. It is generally accepted that work in Artificial Intelligence over the last thirty years has had a strong influence on aspects of computer architectures. In this paper we also make the converse claim: that the state of computer architecture has been a strong influence on our models of thought. The Von Neumann model of computation has led Artificial Intelligence in particular directions. Intelligence in biological systems is completely different. Recent work in behavior-based Artificial Intelligence has produced new models of intelligence that are much closer in spirit to biological systems. The non-Von Neumann computational models they use share many characteristics with biological computation.

  3. R. Chrisley (2003). Embodied Artificial Intelligence, Artificial Intelligence, 149:131-150.
    PDF, 20 pages, 164 K

  4. W.J. Clancey (2002). Simulating activities: Relating motives, deliberation and attentive coordination, Cognitive Systems Research, 3:471-499.
    PDF, 29 pages, 474 K

    Abstract. Activities are located behaviors, taking time, conceived as socially meaningful, and usually involving interaction with tools and the environment. In modeling human cognition as a form of problem solving (goal-directed search and operator sequencing), cognitive science researchers have not adequately studied ‘off-task’ activities (e.g. waiting), non-intellectual motives (e.g. hunger), sustaining a goal state (e.g. playful interaction), and coupled perceptual–motor dynamics (e.g. following someone). These aspects of human behavior have been considered in bits and pieces in past research, identified as scripts, human factors, behavior settings, ensemble, flow experience, and situated action. More broadly, activity theory provides a comprehensive framework relating motives, goals, and operations. This paper ties these ideas together, using examples from work life in a Canadian High Arctic research station. The emphasis is on simulating human behavior as it naturally occurs, such that ‘working’ is understood as an aspect of living. The result is a synthesis of previously unrelated analytic perspectives and a broader appreciation of the nature of human cognition. Simulating activities in this comprehensive way is useful for understanding work practice, promoting learning, and designing better tools, including human–robot systems.

  5. J. Lindblom, T. Ziemke (2003). Social situatedness of natural and artificial intelligence: Vygotsky and beyond, Adaptive Behavior, 11(2):79-96.
    PDF, 18 pages, 246 K

    Abstract. The concept of "social situatedness", that is, the idea that the development of individual intelligence requires a social (and cultural) embedding, has recently received much attention in cognitive science and artificial intelligence research, in particular work on social or epigenetic robotics. The work of Lev Vygotsky, who put forward this view as early as the 1920s, has influenced the discussion to some degree but still remains far from well known. This article therefore is aimed at giving an overview of his cognitive development theory and a discussion of its relation to more recent work in primatology and socially situated artificial intelligence, in particular humanoid robotics.

  6. L.A. Loren, E. Dietrich, C. Morrison, J. Beskin (1998). What it means to be situated, Cybernetics and Systems, 29:751-777.
    PDF, 27 pages, 271 K

    Abstract. Situated action is a new approach to artificial intelligence that has thus far functioned without any explicit underlying theoretical foundation. As a result, many researchers in artificial intelligence have misunderstood the goals and claims of situated action. In order to rectify this situation, we provide an explicit formulation of the theoretical foundations of situated action.

  7. A. Riegler (2002). When is a cognitive system embodied?, Cognitive Systems Research, 3:339-348.
    PDF, 10 pages, 90 K

    Abstract. For cognitive systems, embodiment appears to be of crucial importance. Unfortunately, nobody seems to be able to define embodiment in a way that would prevent it from also covering its trivial interpretations such as mere situatedness in complex environments. The paper focuses on the definition of embodiment, especially whether physical embodiment is necessary and/or sufficient for cognitive systems. Cognition is characterized as a continuous complex process rather than an ahistorical logical capability. Furthermore, the paper investigates the relationship between cognitive embodiment and the issues of understanding, representation and task specification.

  8. R. Rupert (2007). Innateness and the situated mind, Chapter 8 in P. Robbins and M. Aydede (Eds.), "The Cambridge Handbook of Situated Cognition".
    PDF, 45 pages, 125 K

  9. L. Smith, M. Gasser (2005). The development of embodied cognition: Six lessons from babies, Artificial Life, 11:13-29.
    PDF, 17 pages, 1.17 M

    Abstract. The embodiment hypothesis is the idea that intelligence emerges in the interaction of an agent with an environment and as a result of sensorimotor activity. We offer six lessons for developing embodied intelligent agents suggested by research in developmental psychology. We argue that starting as a baby grounded in a physical, social, and linguistic world is crucial to the development of the flexible and inventive intelligence that characterizes humankind.

  10. R.A. Wilson, A. Clark (2007). How to situate cognition: Letting nature take its course, Chapter 8 in P. Robbins and M. Aydede (Eds.), "The Cambridge Handbook of Situated Cognition".
    PDF, 34 pages, 612 K

  11. T. Ziemke (2004). Embodied AI as science: Models of embodied cognition, embodied models of cognition, or both?, in F. Iida et al. (Eds.), "Embodied Artificial Intelligence", LNAI 3139, pp. 27–36.
    PDF, 10 pages, 200 K

    Abstract. This paper discusses the identity of embodied AI, i.e. it asks the question exactly what it is that makes AI research embodied. From an engineering perspective, it is fairly clear that embodied AI is about robotic, i.e. physically embodied systems. From the scientific perspective of AI as building models of natural cognition or intelligence, however, things are less clear. On the one hand embodied AI seems to be about physically embodied, i.e. robotic models of cognition. On the other hand the term ‘embodied’ seems to signify the type of intelligence modeled and/or the conception of (embodied) cognition that is underlying the modeling. In either case, it appears that embodied AI, as it currently stands, might be too narrowly conceived since each of these perspectives is addressed only partially.
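
Brooks' memo (item 2 above) argues against central, reason-based control and for layered, behavior-based controllers in the style of his subsumption architecture, in which independent behavior layers each react to the world and higher-priority layers suppress the ones below. The sketch below is a minimal caricature of that control style, with an invented sensor model and invented behaviors; it is not Brooks' implementation.

```python
import random

# Minimal caricature of a subsumption-style controller: independent
# behavior layers each propose an action from the current percept, and
# the highest-priority layer that fires suppresses the ones below it.
# The sensor model and the behaviors are invented for illustration.

def avoid(percept):
    # Highest priority: steer away from nearby obstacles.
    if percept["obstacle_distance"] < 1.0:
        return "turn_left"
    return None                      # layer stays silent

def seek_light(percept):
    # Middle priority: head toward a light gradient, if any.
    if percept["light_gradient"] > 0.1:
        return "forward_toward_light"
    return None

def wander(percept):
    # Lowest priority: default exploratory behavior, always fires.
    return random.choice(["forward", "turn_left", "turn_right"])

LAYERS = [avoid, seek_light, wander]   # ordered high to low priority

def act(percept):
    for layer in LAYERS:
        action = layer(percept)
        if action is not None:       # a higher layer subsumes the rest
            return action

print(act({"obstacle_distance": 0.4, "light_gradient": 0.5}))  # turn_left
print(act({"obstacle_distance": 2.0, "light_gradient": 0.5}))  # forward_toward_light
```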


Γ. Artificial Life

Readings

Topics

  1. A. Clark (2005). Beyond the flesh: Some lessons from a mole cricket, Artificial Life, 11:233-244.
    PDF, 12 pages, 122 K

    Abstract. What do linguistic symbols do for minds like ours, and how (if at all) can basic embodied, dynamical, and situated approaches do justice to high-level human thought and reason? These two questions are best addressed together, since our answers to the first may inform the second. The key move in scaling up simple embodied cognitive science is, I argue, to take very seriously the potent role of human-built structures in transforming the spaces of human learning and reason. In particular, in this article I look at a range of cases involving what I dub surrogate situations. Here, we actively create restricted artificial environments that allow us to deploy basic perception-action-reason routines in the absence of their proper objects. Examples include the use of real-world models, diagrams, and other concrete external symbols to support dense looping interactions with a variety of stable external structures that stand in for the absent states of affairs. Language itself, I finally suggest, is the most potent and fundamental form of such surrogacy. Words are both cheap stand-ins for gross behavioral outcomes, and the concrete objects that structure new spaces for basic forms of learning and reason. A good hard look at surrogate situatedness thus turns the standard skeptical challenge on its head. But it raises important questions concerning what really matters about these new approaches, and it helps focus what I see as the major challenge for the future: how, in detail, to conceptualize the role of symbols (both internal and external) in dynamical cognitive processes.

  2. S. Helmreich (2007). "Life is a verb": Inflections of artificial life in cultural context, Artificial Life, 13:189-201.
    PDF, 14 pages, 146 K

    Abstract. This review essay surveys recent literature in the history of science, literary theory, anthropology, and art criticism dedicated to exploring how the artificial life enterprise has been inflected by—and might also reshape—existing social, historical, cognitive, and cultural frames of thought and action. The piece works through various possible interpretations of Kevin Kelly’s phrase "life is a verb", in order to track recent shifts in cultural studies of artificial life from an aesthetic of critique to an aesthetic of conversation, discerning in the process different styles of translating between the concerns of the humanities, social sciences, natural sciences, and sciences of the artificial.

  3. C. Marriott, J. Parker, J. Denzinger (2010). Imitation as a Mechanism of Cultural Transmission, Artificial Life, 16:21-37.
    PDF, 17 pages, 239 K (see the illustrative sketch after this list)

    Abstract. We study the effects of an imitation mechanism on a population of animats capable of individual ontogenetic learning. An urge to imitate others augments a network-based reinforcement learning strategy used in the control system of the animats. We test populations of animats with imitation against populations without for their ability to find, and maintain over generations, successful foraging behavior in an environment containing three necessary resources: food, water, and shelter. We conclude that even simple imitation mechanisms are effective at increasing the frequency of success when measured over time and over populations of animats.

  4. E.M.A. Ronald, M. Sipper, M.S. Capcarrère (1999). Design, Observation, Surprise! A Test of Emergence, Artificial Life, 5:225-239.
    PDF, 15 pages, 139 K

    Abstract. The field of artificial life (Alife) is replete with documented instances of emergence, though debate still persists as to the meaning of this term. We contend that, in the absence of an acceptable definition, researchers in the field would be well served by adopting an emergence certification mark that would garner approval from the Alife community. Toward this end, we propose an emergence test, namely, criteria by which one can justify conferring the emergence label.

  5. B. Webb (2009). Animals versus animats: Or why not model the real iguana?, Adaptive Behavior, 17(4):269-286.
    PDF, 18 pages, 192 K

    Abstract. The overlapping fields of adaptive behavior and artificial life are often described as novel approaches to biology. They focus attention on bottom-up explanations and how lifelike phenomena can result from relatively simple systems interacting dynamically with their environments. They are also characterized by the use of synthetic methodologies, that is, building artificial systems as a means of exploring these ideas. Two differing approaches can be distinguished: building models of specific animal systems and assessing them within complete behavior–environment loops; and exploring the behavior of invented artificial animals, often called animats, under similar conditions. An obvious question about the latter approach is, how can we learn about real biology from simulation of non-existent animals? In this article I will argue, first, that animat research, to the extent that it is relevant to biology, should also be considered as model building. Animat simulations do, implicitly, represent hypotheses about, and should be evaluated by comparison to, animals. Casting this research in terms of invented agents serves only to limit the ability to draw useful conclusions from it by deflecting or deferring any serious comparisons of the model mechanisms and results with real biological systems. Claims that animat models are meant to be existence proofs, idealizations, or represent general problems in biology do not make these models qualitatively different from more conventional models of specific animals, nor undermine the ultimate requirement to justify this work by making concrete comparisons with empirical data. It is thus suggested that we will learn more by choosing real, and not made-up, targets for our models.
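
The imitation mechanism studied by Marriott et al. (item 3 above) can be illustrated with a much smaller toy than their animat world: a population of agents individually learning a two-armed bandit, where each agent occasionally copies the action values of a more successful peer. The environment and all parameters below are invented for illustration and make no claim to reproduce the paper's model.

```python
import random

# Toy illustration of imitation layered on individual reinforcement
# learning (cf. item 3, Marriott et al.). Agents repeatedly choose one
# of two actions; action 1 pays off more often. Each agent learns
# action values individually, and with probability IMITATE_P copies
# the value estimates of a randomly observed, more successful peer.

PAYOFF = {0: 0.3, 1: 0.7}       # success probability of each action
EPSILON, ALPHA, IMITATE_P = 0.1, 0.2, 0.05

class Agent:
    def __init__(self):
        self.q = [0.0, 0.0]     # learned action values
        self.score = 0

    def step(self):
        a = random.randrange(2) if random.random() < EPSILON \
            else max((0, 1), key=lambda i: self.q[i])
        reward = 1 if random.random() < PAYOFF[a] else 0
        self.q[a] += ALPHA * (reward - self.q[a])   # individual learning
        self.score += reward

def run(imitation, n_agents=30, steps=500):
    agents = [Agent() for _ in range(n_agents)]
    for _ in range(steps):
        for agent in agents:
            agent.step()
            if imitation and random.random() < IMITATE_P:
                other = random.choice(agents)
                if other.score > agent.score:       # imitate the successful
                    agent.q = other.q[:]
    return sum(a.score for a in agents) / n_agents

print("mean score without imitation:", run(False))
print("mean score with imitation:   ", run(True))
```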


Δ. Turing Test

Readings

Topics

  1. S. Bringsjord (1994). Could, how could we tell if, and why should -- androids have inner lives?, in "Android Epistemology", K. Ford, C. Glymour and P. Hayes (Eds.), pp. 93-122.
    PDF, 15 pages, 2.99 M

  2. S. Bringsjord, C. Caporale, R. Noel (2000). Animals, Zombanimals, and the Total Turing Test (The Essence of Artificial Intelligence), Journal of Logic, Language and Information, 9: 397–418.
    PDF, 22 pages, 119 K

    Abstract. Alan Turing devised his famous test (TT) through a slight modification of the parlor game in which a judge tries to ascertain the gender of two people who are only linguistically accessible. Stevan Harnad has introduced the Total TT, in which the judge can look at the contestants in an attempt to determine which is a robot and which a person. But what if we confront the judge with an animal, and a robot striving to pass for one, and then challenge him to peg which is which? Now we can index TTT to a particular animal and its synthetic correlate. We might therefore have TTT-rat, TTT-cat, TTT-dog, and so on. These tests, as we explain herein, are a better barometer of artificial intelligence (AI) than Turing’s original TT, because AI seems to have ammunition sufficient only to reach the level of artificial animal, not artificial person.

  3. S. Harnad (2006). The Annotation Game: On Turing (1950) on Computing, Machinery, and Intelligence, in R. Epstein and G. Peters (Eds.), "The Turing Test Sourcebook: Philosophical and Methodological Issues in the Quest for the Thinking Computer", Kluwer.
    PDF, 28 pages, 184 K

  4. J. Hernández-Orallo (2000). Beyond the Turing test, Journal of Logic, Language and Information, 9(4):447-466.
    PDF, 22 pages, 356 K (see the illustrative sketch after this list)

    Abstract. We define the main factor of intelligence as the ability to comprehend, formalising this ability with the help of new constructs based on descriptional complexity. The result is a comprehension test, or C-test, exclusively defined in terms of universal descriptional machines (e.g. universal Turing machines). Despite the absolute and non-anthropomorphic character of the test it is equally applicable to both humans and machines. Moreover, it correlates with classical psychometric tests, thus establishing the first firm connection between information theoretic notions and traditional IQ tests. The Turing test is compared with the C-test and their joint combination is discussed. As a result, the idea of a Turing Test as a practical test of intelligence should be left behind, and substituted by computational and factorial tests of different cognitive abilities, a much more useful approach for artificial intelligence progress and for many other intriguing questions that are presented beyond the Turing test.

  5. P. Schweizer (2012). The Externalist Foundations of a Truly Total Turing Test, Minds and Machines, 22:191–212.
    PDF, 22 pages, 212 K

    Abstract. The paper begins by examining the original Turing Test (2T) and Searle’s antithetical Chinese Room Argument, which is intended to refute the 2T in particular, as well as any formal or abstract procedural theory of the mind in general. In the ensuing dispute between Searle and his own critics, I argue that Searle’s ‘internalist’ strategy is unable to deflect Dennett’s combined robotic-systems reply and the allied Total Turing Test (3T). Many would hold that the 3T marks the culmination of the dialectic and, in principle, constitutes a fully adequate empirical standard for judging that an artifact is intelligent on a par with human beings. However, the paper carries the debate forward by arguing that the sociolinguistic factors highlighted in externalist views in the philosophy of language indicate the need for a fundamental shift in perspective in a Truly Total Turing Test (4T). It’s not enough to focus on Dennett’s individual robot viewed as a system; instead, we need to focus on an ongoing system of such artifacts. Hence a 4T should evaluate the general category of cognitive organization under investigation, rather than the performance of single specimens. From this comprehensive standpoint, the question is not whether an individual instance could simulate intelligent behavior within the context of a pre-existing sociolinguistic culture developed by the human cognitive type. Instead the key issue is whether the artificial cognitive type itself is capable of producing a comparable sociolinguistic medium.

  6. H. Shah, K. Warwick (2010). Hidden Interlocutor Misidentification in Practical Turing Tests, Minds and Machines, 20:441–454.
    PDF, 14 pages, 164 K

    Abstract. Based on insufficient evidence, and inadequate research, Floridi and his students report inaccuracies and draw false conclusions in their Minds and Machines evaluation, which this paper aims to clarify. Acting as invited judges, Floridi et al. participated in nine, of the ninety-six, Turing tests staged in the finals of the 18th Loebner Prize for Artificial Intelligence in October 2008. From the transcripts it appears that they used power over solidarity as an interrogation technique. As a result, they were fooled on several occasions into believing that a machine was a human and that a human was a machine. Worse still, they did not realise their mistake. This resulted in a combined correct identification rate of less than 56%. In their paper they assumed that they had made correct identifications when they in fact had been incorrect.

  7. S. Zdenek (2001). Passing Loebner’s Turing Test: A Case of Conflicting Discourse Functions, Minds and Machines, 11:53–76.
    PDF, 24 pages, 97 K

    Abstract. This paper argues that the Turing test is based on a fixed and de-contextualized view of communicative competence. According to this view, a machine that passes the test will be able to communicate effectively in a variety of other situations. But the de-contextualized view ignores the relationship between language and social context, or, to put it another way, the extent to which speakers respond dynamically to variations in discourse function, formality level, social distance/solidarity among participants, and participants’ relative degrees of power and status (Holmes, 1992). In the case of the Loebner Contest, a present day version of the Turing test, the social context of interaction can be interpreted in conflicting ways. For example, Loebner discourse is defined 1) as a friendly, casual conversation between two strangers of equal power, and 2) as a one-way transaction in which judges control the conversational floor in an attempt to expose contestants that are not human. This conflict in discourse function is irrelevant so long as the goal of the contest is to ensure that only thinking, human entities pass the test. But if the function of Loebner discourse is to encourage the production of software that can pass for human on the level of conversational ability, then the contest designers need to resolve this ambiguity in discourse function, and thus also come to terms with the kind of competence they are trying to measure.
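
Hernández-Orallo's C-test (item 4 above) grades test items by descriptional complexity defined over universal machines, which is uncomputable in general. The sketch below substitutes compressed length as a rough, standard proxy for descriptional complexity; this substitution is our illustrative assumption, not the paper's construction.

```python
import zlib

# Crude illustration of the idea behind the C-test (item 4 above):
# grade test items by the descriptional complexity of the pattern to
# be comprehended. True Kolmogorov-style complexity is uncomputable;
# here zlib-compressed length stands in as a rough proxy (our
# assumption, not the paper's construction, which uses universal
# descriptional machines).

def complexity(seq):
    data = ",".join(map(str, seq)).encode()
    return len(zlib.compress(data, 9))

items = {
    "constant":   [5] * 16,
    "arithmetic": list(range(0, 32, 2)),
    "squares":    [n * n for n in range(16)],
    "random-ish": [7, 2, 9, 1, 8, 3, 14, 0, 11, 6, 13, 4, 10, 5, 12, 15],
}

# Items with shorter descriptions should be easier to "comprehend";
# the loop orders them from easiest to hardest under the proxy.
for name, seq in sorted(items.items(), key=lambda kv: complexity(kv[1])):
    print(f"{name:11s} proxy complexity = {complexity(seq)}")
```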


Ε. Chinese Room

Readings

Topics

  1. S. Bringsjord, R. Noel (2000). Real robots and the missing thought experiment in the Chinese Room dialectic, manuscript.
    PDF, 17 pages, 266 K

  2. J. Ford (2011). Helen Keller Was Never in a Chinese Room, Minds and Machines, 21:57–72.
    PDF, 16 pages, 171 K

    Abstract. William Rapaport, in "How Helen Keller used syntactic semantics to escape from a Chinese Room," (Rapaport 2006), argues that Helen Keller was in a sort of Chinese Room, and that her subsequent development of natural language fluency illustrates the flaws in Searle’s famous Chinese Room Argument and provides a method for developing computers that have genuine semantics (and intentionality). I contend that his argument fails. In setting the problem, Rapaport uses his own preferred definitions of semantics and syntax, but he does not translate Searle’s Chinese Room argument into that idiom before attacking it. Once the Chinese Room is translated into Rapaport’s idiom (in a manner that preserves the distinction between meaningful representations and uninterpreted symbols), I demonstrate how Rapaport’s argument fails to defeat the CRA. This failure brings a crucial element of the Chinese Room Argument to the fore: the person in the Chinese Room is prevented from connecting the Chinese symbols to his/her own meaningful experiences and memories. This issue must be addressed before any victory over the CRA is announced.

      Related article: W.J. Rapaport (2006). How Helen Keller used syntactic semantics to escape from a Chinese Room, Minds and Machines, 16:381–436. (PDF, 56 pages, 577 K)
      Abstract. A computer can come to understand natural language the same way Helen Keller did: by using "syntactic semantics"—a theory of how syntax can suffice for semantics, i.e., how semantics for natural language can be provided by means of computational symbol manipulation. This essay considers real-life approximations of Chinese Rooms, focusing on Helen Keller’s experiences growing up deaf and blind, locked in a sort of Chinese Room yet learning how to communicate with the outside world. Using the SNePS computational knowledge-representation system, the essay analyzes Keller’s belief that learning that "everything has a name" was the key to her success, enabling her to "partition" her mental concepts into mental representations of: words, objects, and the naming relations between them. It next looks at Herbert Terrace’s theory of naming, which is akin to Keller’s, and which only humans are supposed to be capable of. The essay suggests that computers at least, and perhaps non-human primates, are also capable of this kind of naming.

  3. S. Harnad (2001). Minds, machines and Searle 2: What's right and wrong about the Chinese Room argument, in M. Bishop and J. Preston (Eds.), "Essays on Searle's Chinese Room Argument", Oxford University Press.
    PDF, 17 pages, 57 K

  4. L. Hauser (1997). Searle’s Chinese Box: Debunking the Chinese Room Argument, Minds and Machines, 7: 199–226.
    PDF, 28 pages, 135 K

    Abstract. John Searle’s Chinese room argument is perhaps the most influential and widely cited argument against artificial intelligence (AI). Understood as targeting AI proper – claims that computers can think or do think – Searle’s argument, despite its rhetorical flash, is logically and scientifically a dud. Advertised as effective against AI proper, the argument, in its main outlines, is an ignoratio elenchi. It musters persuasive force fallaciously by indirection fostered by equivocal deployment of the phrase "strong AI" and reinforced by equivocation on the phrase "causal powers (at least) equal to those of brains". On a more carefully crafted understanding – understood just to target metaphysical identification of thought with computation ("Functionalism" or "Computationalism") and not AI proper – the argument is still unsound, though more interestingly so. It’s unsound in ways difficult for high church – "someday my prince of an AI program will come" – believers in AI to acknowledge without undermining their high church beliefs. The ad hominem bite of Searle’s argument against the high church persuasions of so many cognitive scientists, I suggest, largely explains the undeserved repute this really quite disreputable argument enjoys among them.

  5. L. Hauser (2002). Nixin' goes to China, in J. Preston and M. Bishop (Eds.), "Views into the Chinese Room", Oxford University Press.
    PDF, 19 pages, 68 K

    Abstract. The intelligent-seeming deeds of computers are what occasion philosophical debate about artificial intelligence (AI) in the first place. Since evidence of AI is not bad, arguments against seem called for. John Searle's Chinese Room Argument (1980a, 1984, 1990, 1994) is among the most famous and long-running would-be answers to the call. Surprisingly, both the original thought experiment (1980a) and Searle's later would-be formalizations of the embedding argument (1984, 1990) are quite unavailing against AI proper (claims that computers do or someday will think). Searle lately even styles it a "misunderstanding" (1994, p. 547) to think the argument was ever so directed! The Chinese room is now advertised to target Computationalism (claims that computation is what thought essentially is) exclusively. Despite its renown, the Chinese Room Argument is totally ineffective even against this target.

  6. D. Jacquette (1989). Adventures in the Chinese Room, Philosophy and Phenomenological Research, 49(4):605-623.
    PDF, 19 pages, 1901 K

  7. W.J. Rapaport (2000). How to pass a Turing test: Syntax suffices for understanding natural language, SUNY at Buffalo Computer Science and Engineering Technical Report 99-06.
    PDF, 26 pages, 413 K (see the illustrative sketch after this list)

    Abstract. A theory of “syntactic semantics” is advocated as a way of understanding how computers can think (and how the Chinese-Room-Argument objection to the Turing Test can be overcome): (1) Semantics, as the study of relations between symbols and meanings, can be turned into syntax—a study of relations among symbols (including meanings)—and hence syntax can suffice for the semantical enterprise. (2) Semantics, as the process of understanding one domain modeled in terms of another, can be viewed recursively: The base case of semantic understanding—understanding a domain in terms of itself— is syntactic understanding. An internal (or “narrow”), first-person point of view makes an external (or “wide”), third-person point of view otiose for purposes of understanding cognition. The paper also sketches the ramifications of this view with respect to methodological solipsism, conceptual-role semantics, holism, misunderstanding, and implementation, and looks at Helen Keller as inhabitant of a Chinese Room.

  8. I. Shani (2005). Computation and intentionality: A recipe for epistemic impasse, Minds and Machines, 15:207-228.
    PDF, 22 pages, 227 K

    Abstract. Searle’s celebrated Chinese room thought experiment was devised as an attempted refutation of the view that appropriately programmed digital computers literally are the possessors of genuine mental states. A standard reply to Searle, known as the "robot reply" (which, I argue, reflects the dominant approach to the problem of content in contemporary philosophy of mind), consists of the claim that the problem he raises can be solved by supplementing the computational device with some "appropriate" environmental hookups. I argue that not only does Searle himself cast doubt on the adequacy of this idea by applying to it a slightly revised version of his original argument, but that the weakness of this encoding-based approach to the problem of intentionality can also be exposed from a somewhat different angle. Capitalizing on the work of several authors and, in particular, on that of psychologist Mark Bickhard, I argue that the existence of symbol-world correspondence is not a property that the cognitive system itself can appreciate, from its own perspective, by interacting with the symbol and therefore, not a property that can constitute intrinsic content. The foundational crisis to which Searle alluded is, I conclude, very much alive.
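
Rapaport's "syntactic semantics" (item 7 above, and the related 2006 article under item 2) treats understanding as relations among internal symbols, with Helen Keller's discovery that "everything has a name" modeled as a partition of mental nodes into words, objects, and the naming relations between them. The toy network below illustrates just that partition; it is a stand-in, not SNePS, and all node names are invented.

```python
# Toy stand-in for the "everything has a name" partition that Rapaport
# models in SNePS (item 7 and the related article under item 2): one
# internal network whose nodes are split into word-nodes and
# object-nodes, linked by naming relations. This is not SNePS; the
# node names are invented. "Semantics" here is syntactic: every query
# is answered by following relations among internal symbols only.

class Network:
    def __init__(self):
        self.words, self.objects = set(), set()
        self.names = {}                      # word-node -> object-node

    def add_naming(self, word, obj):
        self.words.add(word)
        self.objects.add(obj)
        self.names[word] = obj               # the naming relation itself

    def denotation(self, word):
        # Internal lookup: from a word symbol to an object symbol.
        return self.names.get(word)

    def name_of(self, obj):
        # The inverse relation: which word-node names this object-node?
        return next((w for w, o in self.names.items() if o == obj), None)

net = Network()
net.add_naming("water", "obj:cool-flowing-stuff")
net.add_naming("doll", "obj:toy-figure")

print(net.denotation("water"))        # obj:cool-flowing-stuff
print(net.name_of("obj:toy-figure"))  # doll
```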


Ζ. Symbols

Readings

Topics

  1. I.S. Berkeley (2008). What the $$$ is a symbol?, Minds and Machines, 18:93-105.
    PDF, 13 pages, 198 K

    Abstract. The notion of a ‘symbol’ plays an important role in the disciplines of Philosophy, Psychology, Computer Science, and Cognitive Science. However, there is comparatively little agreement on how this notion is to be understood, either between disciplines, or even within particular disciplines. This paper does not attempt to defend some putatively ‘correct’ version of the concept of a ‘symbol.’ Rather, some terminological conventions are suggested, some constraints are proposed and a taxonomy of the kinds of issue that give rise to disagreement is articulated. The goal here is to provide something like a ‘geography’ of the various notions of ‘symbol’ that have appeared in the various literatures, so as to highlight the key issues and to permit the focusing of attention upon the important dimensions. In particular, the relationship between ‘tokens’ and ‘symbols’ is addressed. The issue of designation is discussed in some detail. The distinction between simple and complex symbols is clarified and an apparently necessary condition for a system to be potentially symbol- or token-bearing is introduced.

  2. A. Clark (2006). Material symbols, Philosophical Psychology, 19(3):1-17.
    PDF, 17 pages, 262 K

    Abstract. What is the relation between the material, conventional symbol structures that we encounter in the spoken and written word, and human thought? A common assumption, that structures a wide variety of otherwise competing views, is that the way in which these material, conventional symbol-structures do their work is by being translated into some kind of content-matching inner code. One alternative to this view is the tempting but thoroughly elusive idea that we somehow think in some natural language (such as English). In the present treatment I explore a third option, which I shall call the "complementarity" view of language. According to this third view the actual symbol structures of a given language add cognitive value by complementing (without being replicated by) the more basic modes of operation and representation endemic to the biological brain. The "cognitive bonus" that language brings is, on this model, not to be cashed out either via the ultimately mysterious notion of "thinking in a given natural language" or via some process of exhaustive translation into another inner code. Instead, we should try to think in terms of a kind of coordination dynamics in which the forms and structures of a language qua material symbol system play a key and irreducible role. Understanding language as a complementary cognitive resource is, I argue, an important part of the much larger project (sometimes glossed in terms of the "extended mind") of understanding human cognition as essentially and multiply hybrid: as involving a complex interplay between internal biological resources and external non-biological resources.

  3. J.W. Garson (2002). Making symbols matter: A new challenge to their causal efficacy, Journal of Experimental and Theoretical Artificial Intelligence, 14:13-27.
    PDF, 15 pages, 231 K

    Abstract. This paper locates a conceptual difficulty in the view that symbolic representations causally govern mental processing. The problem is to craft a clear divide between systems that are governed by and systems that are merely described by representations. It is argued that neither computational functionalists nor identity theorists can secure the distinction that is needed for defining a successful criterion for the presence of representations with causal powers. The conclusion is that until those who are committed to the symbolic processing model of the mind can produce a better empirical account of what they mean by the presence of causally active representations in the brain, their claims will be irrelevant to the conduct of cognitive science.

  4. E. Prem (1994). Symbol Grounding and Transcendental Logic, Austrian Research Institute for Artificial Intelligence.
    PDF, 10 pages, 193 K

    Abstract. Symbol Grounding tries to answer the question as to how it is possible for a computer program to use symbols which are not arbitrarily interpretable. Whereas the signs in conventional programs are just "parasitic on the meaning in our heads", grounded symbols should possess at least some "intrinsic meaning". This paper gives a brief overview of what Symbol Grounding is and summarizes some of today's connectionist Symbol Grounding models. Instead of concentrating on cognitive linguistics, we try to present an alternative view of Symbol Grounding. Our analysis reveals that Symbol Grounding is in fact the endeavour of automated model construction. Although it originated in a somewhat anti-formal spirit it is (necessarily) full of parallels to classical symbolic logic. We present our view that Symbol Grounding is in fact a connectionist version of transcendental logic, which is the basis for generating formal models of non-formal domains. Such formalizations are inherently logical, though not only based on formal but also on material truth conditions.

  5. D. Rodríguez, J. Hermosillo, B. Lara (2012). Meaning in Artificial Agents: The Symbol Grounding Problem Revisited, Minds and Machines, 22:25-34.
    PDF, 10 pages, 170 K

    Abstract. The Chinese room argument has presented a persistent headache in the search for Artificial Intelligence. Since it first appeared in the literature, various interpretations have been made, attempting to understand the problems posed by this thought experiment. Throughout all this time, some researchers in the Artificial Intelligence community have seen Symbol Grounding as proposed by Harnad as a solution to the Chinese room argument. The main thesis in this paper is that although related, these two issues present different problems in the framework presented by Harnad himself. The work presented here attempts to shed some light on the relationship between John Searle’s intentionality notion and Harnad’s Symbol Grounding Problem.

  6. R. Sun (2000). Symbol grounding: A new look at an old idea, Philosophical Psychology, 13(2):149-172.
    PDF, 24 pages, 228 K (see the illustrative sketch after this list)

    Abstract. Symbols should be grounded, as has been argued before. But we insist that they should be grounded not only in subsymbolic activities, but also in the interaction between the agent and the world. The point is that concepts are not formed in isolation (from the world), in abstraction, or “objectively.” They are formed in relation to the experience of agents, through their perceptual/motor apparatuses, in their world and linked to their goals and actions. This paper takes a detailed look at this relatively old issue, with a new perspective, aided by our work of computational cognitive model development. To further our understanding, we also go back in time to link up with earlier philosophical theories related to this issue. The result is an account that extends from computational mechanisms to philosophical abstractions.

  7. G. White (2011). Descartes Among the Robots: Computer Science and the Inner/Outer Distinction, Minds and Machines, 21:179-202.
    PDF, 24 pages, 222 K

    Abstract. We consider the symbol grounding problem, and apply to it philosophical arguments against Cartesianism developed by Sellars and McDowell: the problematic issue is the dichotomy between inside and outside which the definition of a physical symbol system presupposes. Surprisingly, one can question this dichotomy and still do symbolic computation: a detailed examination of the hardware and software of serial ports shows this.
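
Sun's point (item 6 above) that symbols must be grounded not only in subsymbolic activity but in agent-world interaction can be given a minimal computational reading: a symbol's content is a prototype distilled from the sensor readings of goal-relevant interaction episodes, rather than a definition in terms of other symbols. The sketch below is such a toy reading; the feature layout and data are invented for illustration.

```python
# Toy illustration of grounding a symbol in sensorimotor interaction
# (cf. item 6, Sun): instead of defining "food" via other symbols, the
# agent forms a subsymbolic prototype from the sensor readings of
# episodes in which acting on the thing satisfied a goal. The feature
# layout and the data are invented for illustration.

FEATURES = ("smell", "hardness", "size")

class GroundedSymbol:
    def __init__(self, label):
        self.label, self.prototype, self.n = label, None, 0

    def experience(self, percept):
        # Update the running-mean prototype from one goal-relevant
        # interaction episode.
        if self.prototype is None:
            self.prototype, self.n = list(percept), 1
        else:
            self.n += 1
            for i, x in enumerate(percept):
                self.prototype[i] += (x - self.prototype[i]) / self.n

    def matches(self, percept, threshold=0.5):
        # A new percept falls under the symbol if it is close enough
        # to the prototype in sensor space.
        dist = sum((a - b) ** 2
                   for a, b in zip(percept, self.prototype)) ** 0.5
        return dist < threshold

food = GroundedSymbol("food")
for percept in [(0.9, 0.2, 0.3), (0.8, 0.3, 0.2), (0.95, 0.25, 0.35)]:
    food.experience(percept)          # episodes where eating succeeded

print(food.matches((0.85, 0.25, 0.3)))   # True: falls under "food"
print(food.matches((0.1, 0.9, 0.8)))     # False: a rock, presumably
```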


Η. Connectionism

Readings

Topics

  1. K. Aizawa (1992). Connectionism and artificial intelligence: History and philosophical interpretation, Journal of Experimental and Theoretical Artificial Intelligence, 4:295-313.
    PDF, 19 pages, 790 K

    Abstract. Hubert and Stuart Dreyfus have tried to place connectionism and artificial intelligence in a broader historical and intellectual context. This history associates connectionism with neuroscience, conceptual holism, and nonrationalism, and artificial intelligence with conceptual atomism, rationalism, and formal logic. The present paper argues that the Dreyfus account of connectionism and artificial intelligence is both historically and philosophically misleading.

  2. D.T. Cliff (1990). Computational neuroethology: A provisional manifesto, Univ. of Sussex CSRP 162 (Cognitive Science Research Paper).
    PDF, 26 pages, 266 K

    Abstract. This paper questions approaches to computational modelling of neural mechanisms underlying behaviour. It examines "simplifying" (connectionist) models used in computational neuroscience and concludes that, unless embedded within a sensorimotor system, they are meaningless. The implication is that future models should be situated within closed-environment simulation systems: output of the simulated nervous system is then expressed as observable behaviour. This approach is referred to as "computational neuroethology". Computational neuroethology offers a firmer grounding for the semantics of the model, eliminating subjectivity from the result-interpretation process. A number of more fundamental implications of the approach are also discussed, chief of which is that insect cognition should be studied in preference to mammalian cognition.

  3. B.J. Copeland, D. Proudfoot (1996). On Alan Turing's anticipation of connectionism, Synthese, 108:361-377.
    PDF, 17 pages, 1.02 M (see the first sketch after this list)

    Abstract. It is not widely realised that Turing was probably the first person to consider building computing machines out of simple, neuron-like elements connected together into networks in a largely random manner. Turing called his networks 'unorganised machines'. By the application of what he described as 'appropriate interference, mimicking education' an unorganised machine can be trained to perform any task that a Turing machine can carry out, provided the number of 'neurons' is sufficient. Turing proposed simulating both the behaviour of the network and the training process by means of a computer program. We outline Turing's connectionist project of 1948.

  4. M. Frixione, G. Spinelli (1992). Connectionism and functionalism: The importance of being a subsymbolist, Journal of Experimental and Theoretical Artificial Intelligence, 1:3-17.
    PDF, 15 pages, 596 K

    Abstract. In recent years the development of connectionist theories and of various subsymbolic approaches to the study of the mind, and the renewed interest in the relations between the study of the mind and neuroscience, have had significant repercussions on the philosophical foundations of artificial intelligence and cognitive science, and on important questions of the philosophy of mind. Various approaches to the problem of mental representations have been formulated, in some sense alternative to classic approaches of artificial intelligence and cognitive science. We suggest that the problem of modelling the reference of mental symbols from a cognitive point of view requires the abandonment of a purely symbolic approach, and the adoption of a subsymbolic level of representation. Some philosophical consequences of a subsymbolic level of this kind are discussed. After distinguishing between the problem of reference and that of intentionality (which cannot be solved positing a subsymbolic level of representation), we shall see how a subsymbolic approach can be compatible with a functionalist view of the mind, in the wider sense. Finally, some consequences of subsymbolic models of reference regarding the problem of the inverted spectrum are described.

  5. M. Guarini (1996). Tensor Products and Split-Level Architecture: Foundational Issues in the Classicism-Connectionism Debate, Philosophy of Science, 63(Supplement):S239-S247.
    PDF, 9 pages, 228 K (see the second sketch after this list)

    Abstract. This paper responds to criticisms levelled by Fodor, Pylyshyn, and McLaughlin against connectionism. Specifically, I will rebut the charge that connectionists cannot account for representational systematicity without implementing a classical architecture. This will be accomplished by drawing on Paul Smolensky's Tensor Product model of representation and on his insights about split-level architectures.

  6. J. Mira (2008). Symbols versus connections: 50 years of artificial intelligence, Neurocomputing, 71:671–680.
    PDF, 10 pages, 301 K

    Abstract. Artificial intelligence (AI) was born connectionist when in 1943 Warren S. McCulloch and Walter Pitts introduced the first sequential logic model of the neuron. The 1950s saw the passage from numerical to symbolic computation, with the christening of AI in 1956. In 1986 came a rebirth of connectionism, together with a renewed emphasis on knowledge modeling and inference, both symbolic and connectionist. We thus reach the present state, in which different paradigms coexist (symbolic, connectionist, situated and hybrid). In this work we attempt (1) to approach the concept of AI both as a science of the natural and as knowledge engineering (KE); (2) to summarize some of the conceptual, formal and methodological approaches to the development of AI during the last 50 years; (3) to mention some of the constitutive differences between human knowing and machine knowing; and (4) to propose some suggestions that we believe must be adopted to progress in developing AI.

  7. B. Smith (1997). The connectionist mind: A study of Hayekian psychology, in S.F. Frowen (Ed.), "Hayek: Economist and Social Philosopher: A Critical Retrospect".
    PDF, 18 pages, 60 K
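
Copeland and Proudfoot (item 3 above) describe Turing's 1948 "unorganised machines" as networks of simple two-state units wired together largely at random and updated synchronously; on the standard reconstruction, each A-type unit computes the NAND of two inputs. The sketch below simulates such an A-type network. Network size and seed are arbitrary choices here, and the trainable connection modifiers of Turing's B-type machines, the target of his "appropriate interference, mimicking education", are not modeled.

```python
import random

# Sketch of a Turing A-type "unorganised machine" as described by
# Copeland and Proudfoot (item 3 above): two-state units, each taking
# input from two randomly chosen units, computing NAND, all updating
# synchronously. Size and seed are arbitrary illustrative choices.

random.seed(0)
N = 8
inputs = [(random.randrange(N), random.randrange(N)) for _ in range(N)]
state = [random.randint(0, 1) for _ in range(N)]

def step(state):
    # Synchronous update: every unit outputs NAND of its two inputs.
    return [1 - (state[a] & state[b]) for a, b in inputs]

for t in range(6):
    print(t, state)
    state = step(state)
```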
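
The tensor-product representations that Guarini (item 5 above) draws on from Smolensky bind fillers to roles by outer products and sum the results, so that structured propositions receive structure-sensitive vectors, and a filler can be recovered by contracting the tensor with its (orthonormal) role vector. A minimal numpy sketch, with arbitrary illustrative vectors:

```python
import numpy as np

# Sketch of Smolensky's tensor-product binding discussed by Guarini
# (item 5 above). A structured representation is the sum of outer
# products of filler vectors with orthonormal role vectors; a filler
# is recovered by multiplying the tensor with its role vector.

agent   = np.array([1.0, 0.0])        # role: subject of "loves"
patient = np.array([0.0, 1.0])        # role: object of "loves"
john = np.array([1.0, 0.0, 0.0])      # filler vectors
mary = np.array([0.0, 1.0, 0.0])

def bind(bindings):
    # Sum of filler (outer product) role terms.
    return sum(np.outer(f, r) for f, r in bindings)

john_loves_mary = bind([(john, agent), (mary, patient)])
mary_loves_john = bind([(mary, agent), (john, patient)])

# The two propositions get distinct tensors, answering the charge
# that connectionist patterns cannot respect constituent structure.
print(np.array_equal(john_loves_mary, mary_loves_john))  # False

# Unbinding: contracting with a role vector recovers its filler.
print(john_loves_mary @ agent)    # [1. 0. 0.] -> john
print(john_loves_mary @ patient)  # [0. 1. 0.] -> mary
```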


Θ. Representations

Readings

Topics

  1. M.H. Bickhard (1998). Levels of representationality, Journal of Experimental and Theoretical Artificial Intelligence, 10:179-215.
    PDF, 37 pages, 548 K

    Abstract. The dominant assumptions throughout contemporary philosophy, psychology, cognitive science, and artificial intelligence about the ontology underlying intentionality, and its core of representationality, are those of encodings – some sort of informational or correspondence or covariation relationship between the represented and its representation that constitutes the representational relationship. There are many disagreements concerning details and implementations, and even some suggestions about claimed alternative ontologies, such as connectionism (though none that escape what is argued to be the fundamental flaw in these dominant approaches). One assumption that seems to be held by all, however, usually without explication or defence, is that there is one singular underlying ontology to representationality. In this paper, it is argued that there are in fact quite a number of ontologies that manifest representationality – levels of representationality – and that none of them is the standard 'manipulations of encoded symbols' ontology, nor any other variation on the informational approach to representation. Collectively, these multiple representational ontologies constitute a framework for cognition, whether natural or artificial.

  2. A. Chemero (2000). Anti-representationalism and the dynamical stance, Philosophy of Science, 67(4):625-647.
    PDF, 23 pages, 447 K

    Abstract. Arguments in favor of anti-representationalism in cognitive science often suffer from a lack of attention to detail. The purpose of this paper is to fill in the gaps in these arguments, and in so doing show that at least one form of anti-representationalism is potentially viable. After giving a teleological definition of representation and applying it to a few models that have inspired anti-representationalist claims, I argue that anti-representationalism must be divided into two distinct theses, one ontological, one epistemological. Given the assumptions that define the debate, I give reason to think that the ontological thesis is false. I then argue that the epistemological thesis might, in the end, turn out to be true, despite a potentially serious difficulty. Along the way, there will be a brief detour to discuss a controversy from early twentieth century physics.

  3. F. Keijzer (2002). Representation in dynamical and embodied cognition, Cognitive Systems Research, 3:275-288.
    PDF, 14 pages, 100 K

    Abstract. The move toward a dynamical and embodied understanding of cognitive processes initiated a debate about the usefulness of the notion of representation for cognitive science. The debate started when some proponents of a dynamical and embodied approach argued that the use of representations could be discarded in many circumstances. This remained a minority view, however, and there is now a tendency to shove this critique of the usefulness of representations aside as a non-issue for a dynamical and situated approach to cognition. In opposition, I will argue that the representation issue is far from settled, and instead forms the kernel of an important conceptual shift between traditional cognitive science and a dynamical and embodied approach. This will be done by making explicit the key features of representation in traditional cognitive science and by arguing that the representation-like entities that come to the fore in a dynamical and embodied approach are significantly different from the traditional notion of representation. This difference warrants a change of terminology to signal an important change in meaning.

  4. V.C. Müller (2007). Is There a Future for AI Without Representation?, Minds and Machines, 17:101-115.
    PDF, 15 pages, 188 K

    Abstract. This paper investigates the prospects of Rodney Brooks’ proposal for AI without representation. It turns out that the supposedly characteristic features of "new AI" (embodiment, situatedness, absence of reasoning, and absence of representation) are all present in conventional systems: "new AI" is just like old AI. Brooks’ proposal boils down to the architectural rejection of central control in intelligent agents—which, however, turns out to be crucial. Some more recent cognitive science suggests that we might do well to dispose of the image of intelligent agents as central representation processors. If this paradigm shift is achieved, Brooks’ proposal for cognition without representation appears promising for full-blown intelligent agents—though not for conscious agents.

  5. G. Piccinini (2008). Computation without representation, Philosophical Studies, 137(2):205-241.
    PDF, 37 pages, 236 K

    Abstract. The received view is that computational states are individuated at least in part by their semantic properties. I offer an alternative, according to which computational states are individuated by their functional properties. Functional properties are specified by a mechanistic explanation without appealing to any semantic properties. The primary purpose of this paper is to formulate the alternative view of computational individuation, point out that it supports a robust notion of computational explanation, and defend it on the grounds of how computational states are individuated within computability theory and computer science. A secondary purpose is to show that existing arguments for the semantic view are defective.

  6. M. Wheeler (2005). Friends reunited? Evolutionary robotics and representational explanation, Artificial Life, 11:215-231.
    PDF, 17 pages, 155 K

    Abstract. Robotics as practiced within the artificial life community is no longer the bitter enemy of representational explanation in the way that it sometimes seemed to be in the heady, revolutionary days of the 1990s. This rapprochement is, however, fragile, because the field of evolutionary robotics continues to pose two important challenges to the idea that real-time intelligent action must or should be explained by appeal to inner representations. The first of these challenges, the threat from nontrivial causal spread, occurs when extra-neural factors account for the kind of adaptive richness and flexibility normally associated with representation-based control. The second, the threat from continuous reciprocal causation, occurs when the causal contributions made by the systemic components collectively responsible for behavior generation are massively context-sensitive and variable over time. I argue that while the threat from nontrivial causal spread can be resisted, the threat from continuous reciprocal causation provides a stern test for our representational intuitions.


Ι. Autonomy

Readings

Topics

  1. P. Bourgine, J. Stewart (2004). Autopoiesis and Cognition, Artificial Life, 10:327-345.
    PDF, 19 pages, 1137 K

    Abstract. This article revisits the concept of autopoiesis and examines its relation to cognition and life. We present a mathematical model of a 3D tessellation automaton, considered as a minimal example of autopoiesis. This leads us to a thesis T1: "An autopoietic system can be described as a random dynamical system, which is defined only within its organized autopoietic domain." We propose a modified definition of autopoiesis: "An autopoietic system is a network of processes that produces the components that reproduce the network, and that also regulates the boundary conditions necessary for its ongoing existence as a network." We also propose a definition of cognition: "A system is cognitive if and only if sensory inputs serve to trigger actions in a specific way, so as to satisfy a viability constraint." It follows from these definitions that the concepts of autopoiesis and cognition, although deeply related in their connection with the regulation of the boundary conditions of the system, are not immediately identical: a system can be autopoietic without being cognitive, and cognitive without being autopoietic. Finally, we propose a thesis T2: "A system that is both autopoietic and cognitive is a living system."

  2. E.A. di Paolo, H. Iizuka (2007). How (not) to model autonomous behaviour, BioSystems.
    PDF, 15 pages, 818 K

    Abstract. Autonomous systems are the result of self-sustaining processes of constitution of an identity under precarious circumstances. They may transit through different modes of dynamical engagement with their environment, from committed ongoing coping to open susceptibility to external demands. This paper discusses these two statements and presents examples of models of autonomous behaviour using methods in evolutionary robotics. A model of an agent capable of issuing self-instructions demonstrates the fragility of modelling autonomy as a function rather than as a property of a system’s organization. An alternative model of behavioural preference based on homeostatic adaptation avoids this problem by establishing a mutual constraining between lower-level processes (neural dynamics and sensorimotor interaction) and higher-level metadynamics (experience-dependent, homeostatic triggering of local plasticity and re-organization). The results of these models are lessons about how strong autonomy should be approached: neither as a function, nor as a matter of external vs. internal determination.

  3. A. Moreno, A. Etxeberria (2005). Agency in natural and artificial systems, Artificial Life, 11:161-175.
    PDF, 15 pages, 146 K

    Abstract. We analyze the conditions for agency in natural and artificial systems. In the case of basic (natural) autonomous systems, self-construction and activity in the environment are two aspects of the same organization, the distinction between which is entirely conceptual: their sensorimotor activities are metabolic, realized according to the same principles and through the same material transformations as those typical of internal processes (such as energy transduction). The two aspects begin to be distinguishable in a particular evolutionary trend, related to the size increase of some groups of organisms whose adaptive abilities depend on motility. Here a specialized system develops, which, in the sensorimotor aspect, is decoupled from the metabolic basis, although it remains dependent on it in the self-constructive aspect. This decoupling reveals a complexification of the organization. In the last section of the article, this approach to natural agency is used to analyze artificial systems by posing two problems: whether it is possible to artificially build an organization similar to the natural one, and whether this notion of agency can be grounded on different organizing principles.


Κ. Artificial Creativity

Readings

Topics

  1. M.A. Boden (2004). Of humans and hoverflies, Chapter 11 of "The Creative Mind: Myths and Mechanisms", 2nd edition, Routledge.
    PDF, 28 pages, 122 K

  2. J. Forth, G.A. Wiggins, A. McLean (2010). Unifying Conceptual Spaces: Concept Formation in Musical Creative Systems, Minds and Machines, 20:503-532.
    PDF, 30 pages, 488 K

    Abstract. We examine Gärdenfors's theory of conceptual spaces, a geometrical form of knowledge representation (Conceptual spaces: The geometry of thought, MIT Press, Cambridge, 2000), in the context of the general Creative Systems Framework introduced by Wiggins (J Knowl Based Syst 19(7):449-458, 2006a; New Generation Comput 24(3):209-222, 2006b). Gärdenfors's theory offers a way of bridging the traditional divide between symbolic and sub-symbolic representations, as well as the gap between representational formalism and meaning as perceived by human minds. We discuss how both these qualities may be advantageous from the point of view of artificial creative systems. We take music as our example domain, and discuss how a range of musical qualities may be instantiated as conceptual spaces, and present a detailed conceptual space formalisation of musical metre.

  3. K.E. Jennings (2010). Developing Creativity: Artificial Barriers in Artificial Intelligence, Minds and Machines, 20:489–501.
    PDF, 13 pages, 261 K

    Abstract. The greatest rhetorical challenge to developers of creative artificial intelligence systems is convincingly arguing that their software is more than just an extension of their own creativity. This paper suggests that "creative autonomy," which exists when a system not only evaluates creations on its own, but also changes its standards without explicit direction, is a necessary condition for making this argument. Rather than requiring that the system be hermetically sealed to avoid perceptions of human influence, developing creative autonomy is argued to be more plausible if the system is intimately embedded in a broader society of other creators and critics. Ideas are presented for constructing systems that might be able to achieve creative autonomy.

  4. P. van Langen, N. Wijngaards, F. Brazier (2004). Towards designing creative artificial systems, Artificial Intelligence for Engineering Design, Analysis and Manufacturing, 18(4):217-225.
    PDF, 14 pages, 153 K

    Abstract. Can artificial systems be creative? Can they be designed to be creative on their own? And what are the requirements of such creative artificial systems? To be able to support humans who are expected to deliver creative solutions, or to automate part of their tasks, this paper presents a proposal for creativity requirements that provide a basis for designing creative artificial systems.

  5. Q. Zhang, E.R. Miranda (2007). Evolving Expressive Music Performance through Interaction of Artificial Agent Performers, Proceedings European Conference on Artificial Life 2007 (Music Workshop).
    PDF, 12 pages, 844 K

    Abstract. We propose a model of expressive music performance (EMP), focusing on the emergence of EMP under social pressure, including social interaction and generational inheritance. Previously, we have reported a system to evolve EMP using a Genetic Algorithm, exploring the effect of generational inheritance. This paper presents a system that evolves expressive performance profiles through social interaction, with a society of artificial agent performers. Each performer owns a hierarchical pulse set (i.e., hierarchical duration vs. amplitude matrices), representing a performance profile for a given piece. An agent performer evaluates a performance profile with a set of rules derived from the structure of the piece in question, and imitates others’ performances if appropriate. Then it modifies its pulse set accordingly. We demonstrate that suitable performance profiles emerge from social interactions where the diversity and the commonality of evolved performances are observed in the society of agents.

Λ. Miscellaneous

Topics

  1. A. Adam (2000). Deleting the Subject: A Feminist Reading of Epistemology in Artificial Intelligence, Minds and Machines, 10:231-253.
    PDF, 23 pages, 109 K

    Abstract. This paper argues that AI follows classical versions of epistemology in assuming that the identity of the knowing subject is not important. In other words this serves to ‘delete the subject’. This disguises an implicit hierarchy of knowers involved in the representation of knowledge in AI which privileges the perspective of those who design and build the systems over alternative perspectives. The privileged position reflects Western, professional masculinity. Alternative perspectives, denied a voice, belong to less powerful groups including women. Feminist epistemology can be used to approach this from new directions, in particular, to show how women’s knowledge may be left out of consideration by AI’s focus on masculine subjects. The paper uncovers the tacitly assumed Western professional male subjects in two flagship AI systems, Cyc and Soar.

  2. S. Armstrong, A. Sandberg, N. Bostrom (2012). Thinking Inside the Box: Controlling and Using an Oracle AI, Minds and Machines, 22:299–324.
    PDF, 26 pages, 269 K

    Abstract. There is no strong reason to believe that human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed motivation system. Solving this issue in general has proven to be considerably harder than expected. This paper looks at one particular approach, Oracle AI. An Oracle AI is an AI that does not act in the world except by answering questions. Even this narrow approach presents considerable challenges. In this paper, we analyse and critique various methods of controlling the AI. In general an Oracle AI might be safer than unrestricted AI, but still remains potentially dangerous.

  3. M.H. Bickhard, R.L. Campbell (1996). Developmental aspects of expertise: Rationality and generalization, Journal of Experimental and Theoretical Artificial Intelligence, 8:399-417.
    PDF, 19 pages, 284 K

    Abstract. Successful attempts to explain expertise in human beings, or to capture its properties in expert systems, will have to contend with issues of rationality and generalization. Rationality and generalization pose enough difficulties on a purely synchronic basis. But an account of expertise must be diachronic: it must account for the development of rationality and generalization, even in those who are already experts. We describe the obstacles in the path of standard approaches to rationality and generalization, and present an alternative, interactivist treatment of rationality and its development (space forbids us to do likewise for generalization). In the interactivist account, rationality cannot be defined in general as adherence to the rules of a system of formal logic; we propose instead that rationality be understood in terms of the development of negative knowledge: knowing what kinds of errors to avoid. We examine the development of negative knowledge using examples from the history of science, and consider the consequences of an orientation towards negative knowledge for classroom instruction as well as the development of expert systems.

  4. N. Bostrom (2012). The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents, Minds and Machines, 22:71–85.
    PDF, 15 pages, 198 K

    Abstract. This paper discusses the relation between intelligence and motivation in artificial agents, developing and briefly arguing for two theses. The first, the orthogonality thesis, holds (with some caveats) that intelligence and final goals (purposes) are orthogonal axes along which possible artificial intellects can freely vary—more or less any level of intelligence could be combined with more or less any final goal. The second, the instrumental convergence thesis, holds that as long as they possess a sufficient level of intelligence, agents having any of a wide range of final goals will pursue similar intermediary goals because they have instrumental reasons to do so. In combination, the two theses help us understand the possible range of behavior of superintelligent agents, and they point to some potential dangers in building such an agent.

  5. S. Bringsjord, H. Xiao (2000). A refutation of Penrose's Gödelian case against artificial intelligence, Journal of Experimental and Theoretical Artificial Intelligence, 12:307-329.
    PDF, 23 pages, 686 K

    Abstract. Having, as it is generally agreed, failed to destroy the computational conception of mind with the Gödelian attack he articulated in his The Emperor’s New Mind, Penrose has returned, armed with a more elaborate and more fastidious Gödelian case, expressed in Chapters 2 and 3 of his Shadows of the Mind. The core argument in these chapters is enthymematic, and when formalized, a remarkable number of technical glitches come to light. Over and above these defects, the argument, at best, is an instance of either the fallacy of denying the antecedent, the fallacy of petitio principii, or the fallacy of equivocation. More recently, writing in response to his critics in the electronic journal Psyche, Penrose has offered a Gödelian case designed to improve on the version presented in SOTM. But this version is yet again another failure. In falling prey to the errors we uncover, Penrose’s new Gödelian case is unmasked as the same confused refrain J. R. Lucas initiated 35 years ago.

  6. L. Cañamero (2001). Emotions and adaptation in autonomous agents: A design perspective, Cybernetics and Systems, 32:507-529.
    PDF, 23 pages, 207 K

    Abstract. Why would we want to endow artificial autonomous agents with emotions? The main answer to this question seems to rely on what has been called the functional view of emotions, arising from (analytic) studies of natural systems. In this paper, I examine to what extent this hypothesis can be applied to the (synthetic) investigation of artificial emotions and what its implications are for the design of emotional agents, the main approaches that can be appropriately used to model emotions in autonomous agents, and why situated autonomous agents provide a good framework to study the relation between emotion and adaptation.

  7. B. Carsten-Stahl (2004). Information, Ethics, and Computers: The Problem of Autonomous Moral Agents, Minds and Machines, 14:67-83.
    PDF, 17 pages, 99 K

    Abstract. In modern technical societies computers interact with human beings in ways that can affect moral rights and obligations. This has given rise to the question whether computers can act as autonomous moral agents. The answer to this question depends on many explicit and implicit definitions that touch on different philosophical areas such as anthropology and metaphysics. The approach chosen in this paper centres on the concept of information. Information is a multi-faceted notion which is hard to define comprehensively. However, the frequently used definition of information as data endowed with meaning can promote our understanding. It is argued that information in this sense is a necessary condition of cognitivist ethics. This is the basis for analysing computers and information processors regarding their status as possible moral agents. Computers have several characteristics that are desirable for moral agents. However, computers in their current form are unable to capture the meaning of information and therefore fail to reflect morality in anything but a most basic sense of the term. This shortcoming is discussed using the example of the Moral Turing Test. The paper ends with a consideration of which conditions computers would have to fulfil in order to be able to use information in such a way as to render them capable of acting morally and reflecting ethically.

  8. G.L. Chadderdon (2008). Assessing Machine Volition: An Ordinal Scale for Rating Artificial and Natural Systems, Adaptive Behavior, 16(4):246-263.
    PDF, 18 pages, 419 K

    Abstract. Volition, although often poorly defined, is a concept of interest and utility to both philosophers and researchers in artificial intelligence. In this article, a definition of volition is proposed and a functionally defined, physically grounded ordinal scale and a procedure by which volition might be measured are put forward: a type of Turing test for volition, but motivated by an explicit analysis of the concept being tested and providing results that are graded, rather than Boolean, so that candidate systems may be ranked according to their degree of volitional endowment. It is proposed that volition is a functional, aggregate property of certain physical systems and it is defined as the capacity for adaptive decision-making. The scale, similar in scope to Daniel Dennett’s Kinds of Minds scale, is then outlined, as well as a set of progressive "litmus tests" for determining where a candidate system falls on the scale. Such a formulation may be useful for understanding volition and assessing the progress made in engineering intelligent, autonomous artificial organisms.

  9. A. Clark (2003). Artificial Intelligence and The Many Faces of Reason, in "The Blackwell Guide to Philosophy of Mind", S. Stich and T. Warfield (eds).
    PDF, 21 pages, 61 K

  10. B.J. Copeland (2000). Narrow versus wide mechanism: Including a re-examination of Turing's views on the mind-machine issue, Journal of Philosophy, 97(1):5-32.
    PDF, 28 pages, 586 K

  11. D.N. Davis (2008). Linking perception and action through motivation and affect, Journal of Experimental and Theoretical Artificial Intelligence, 20:37-60.
    PDF, 24 pages, 251 K

    Abstract. Research into cognitive architectures is described within a framework spanning major issues in artificial intelligence and cognitive science. Earlier work on motivation is extended with a cognitive model of reasoning which, together with an affective mechanism, enables consistent decision-making across a variety of cognitive and reactive processes. Cognition involves the control of behaviour within both external and internal environments. The control of behaviour is vital to an autonomous system as it acts to further its goals. Except in the most spartan of environments, the potential available information and associated combinatorics in a perception, cognition, and action sequence can tax even the most powerful agents. The affect magnitude concept solves some problems with BDI models, and allows for adaptive decision-making over a number of tasks in different domains. The cognitive and affective components are brought together using motivational constructs. The generic cognitive model can adapt to different environments and tasks as it makes use of motivational models to direct reactive and situated processes.

  12. E. Dietrich, C. Fields (1995). The role of the frame problem in Fodor's modularity thesis: a case study of rationalist cognitive science, Journal of Experimental and Theoretical Artificial Intelligence, 7:279-289.
    PDF, 11 pages, 465 K

  13. E. Dresner (2003). 'Effective memory' and Turing's model of mind, Journal of Experimental and Theoretical Artificial Intelligence, 15:113-123.
    PDF, 11 pages, 120 K

    Abstract. In the first section of his celebrated 1936 paper A. Turing says of the machines he defines that at each stage of their operation they can ‘effectively remember’ some of the symbols they have scanned before. In this paper I explicate the motivation and content of this remark of Turing’s, and argue that it reveals what could be labeled as a connectionist conception of the human mind.

  14. C. Emmeche (2001). Does a robot have an Umwelt? Reflections on the qualitative biosemiotics of Jakob von Uexküll, Semiotica, 134:653-693.
    PDF, 41 pages, 228 K

  15. J.H. Fetzer (1998). People are not computers: (most) thought processes are not computational procedures, Journal of Experimental and Theoretical Artificial Intelligence, 10:371-391.
    PDF, 21 pages, 431 K

    Abstract. The computational conception of the mind that dominates cognitive science assumes that thought processes involve the computation of algorithms or the execution of functions. Human minds turn out to be automatic formal systems or physical syntax-processing systems. The objection has often been posed that systems of this kind do not possess sufficient conditions for mentality, because the syntax they process may be meaningless for those systems. That problem concerns their semantic content. Here an additional objection is posed that systems of this kind, as normatively-directed, problem-solving causal systems, impose conditions that are not necessary for mentality, because many if not most human thought processes violate them. This problem concerns their causal character. The computational conception reflects an overgeneralization about human thought processes based on special kinds of thinking and thus seems to be trivial or false.

  16. J. Friedland (2005). Wittgenstein and the aesthetic robot's handicap, Philosophical Investigations, 28(2):177-192.
    PDF, 16 pages, 72 K

  17. J.S. Hall (2007). Self-improving AI: an Analysis, Minds and Machines, 17:249-259.
    PDF, 11 pages, 140 K

    Abstract. Self-improvement was one of the aspects of AI proposed for study in the 1956 Dartmouth conference. Turing proposed a "child machine" which could be taught in the human manner to attain adult human-level intelligence. In latter days, the contention that an AI system could be built to learn and improve itself indefinitely has acquired the label of the bootstrap fallacy. Attempts in AI to implement such a system have met with consistent failure for half a century. Technological optimists, however, have maintained that such a system is possible, producing, if implemented, a feedback loop that would lead to a rapid exponential increase in intelligence. We examine the arguments for both positions and draw some conclusions.

  18. J. Haugeland (2002). Authentic intentionality, in "Computationalism: New Directions", M. Scheutz (ed.), MIT Press.
    PDF, 16 pages, 770 K

  19. P.J. Hayes, K.M. Ford, J.R. Adams-Webber (1992). Human reasoning about artificial intelligence, Journal of Experimental and Theoretical Artificial Intelligence, 4:247-263.
    PDF, 17 pages, 695 K

    Abstract. Recently, several authors (Searle, Penrose, Rychlak) have suggested that AI is a doomed undertaking. In his recent book, Artificial Intelligence and Human Reasoning, Joseph Rychlak repeats many of the arguments of the other critics, as well as offering several of his own. In this paper, taking Rychlak as symptomatic of this new anti-computational intellectual movement, we respond to these arguments and defend AI and personal construct theory against some of the misunderstandings and confusions which we find there.

  20. M. Kary, M. Mahner (2002). How Would You Know if You Synthesized a Thinking Thing?, Minds and Machines, 12:61-86.
    PDF, 26 pages, 146 K

    Abstract. We confront the following popular views: that mind or life are algorithms; that thinking, or more generally any process other than computation, is computation; that anything other than a working brain can have thoughts; that anything other than a biological organism can be alive; that form and function are independent of matter; that sufficiently accurate simulations are just as genuine as the real things they imitate; and that the Turing test is either a necessary or sufficient or scientific procedure for evaluating whether or not an entity is intelligent. Drawing on the distinction between activities and tasks, and the fundamental scientific principles of ontological lawfulness, epistemological realism, and methodological skepticism, we argue for traditional scientific materialism of the emergentist kind in opposition to the functionalism, behaviourism, tacit idealism, and merely decorative materialism of the artificial intelligence and artificial life communities.

  21. S. Kenaw (2008). Hubert L. Dreyfus's critique of classical AI and its rationalist assumptions, Minds and Machines, 18:227-238.
    PDF, 12 pages, 148 K

    Abstract. This paper deals with the rationalist assumptions behind research in artificial intelligence (AI) on the basis of Hubert Dreyfus’s critique. Dreyfus is a leading American philosopher known for his rigorous critique of the underlying assumptions of the field of artificial intelligence. Artificial intelligence specialists, especially those whose view is commonly dubbed "classical AI", assume that creating a thinking machine like the human brain is not too distant a project, because they believe that human intelligence works on the basis of formalized rules of logic. In contradistinction to classical AI specialists, Dreyfus contends that it is impossible to create intelligent computer programs analogous to the human brain because the workings of human intelligence are entirely different from those of computing machines. For Dreyfus, the human mind functions intuitively and not formally. Following Dreyfus, this paper aims to pinpoint the major flaws classical AI suffers from. The author of this paper believes that pinpointing these flaws would inform inquiries on and about artificial intelligence. Over and beyond this, this paper contributes something indisputably original. It strongly argues that classical AI research programs have, though inadvertently, falsified an entire epistemological enterprise of the rationalists, not in theory as philosophers do but in practice. When AI workers were trying hard to produce a machine that can think like human minds, they were in a way testing, up to the last point, the rationalist assumption that the workings of the human mind depend on logical rules. Result: no computers actually function like the human mind. Reason: the human mind does not depend on the formal or logical rules ascribed to computers. Thus, symbolic AI research has falsified the rationalist assumption that ‘the human mind reaches certainty by functioning formally’ by virtue of its failure to create a thinking machine.

  22. K.B. Korb (1991). Searle's AI program, Journal of Experimental and Theoretical Artificial Intelligence, 1:283-296.
    PDF, 14 pages, 575 K

    Abstract. John Searle has used his Chinese room example to attack the idea of computationally reproducing intelligence. His arguments have variously assumed or (more recently) asserted that consciousness and intelligence are necessarily interdependent. This stance has allowed him to apply intuitive arguments about what could or could not be conscious to the issue of what could or could not be intelligent. I present a variety of arguments, theoretical and intuitive, to show that Searle is conflating mentality and semantics. By maintaining that distinction we can then address how to generate the semantics that intelligence requires. In Stevan Harnad's approach to symbol-grounding we have a plausible candidate for finding referential semantics without taking detours through an unanalysable consciousness. Artificial intelligence as normally construed does not require that philosophical problems about consciousness be resolved, let alone that consciousness should be computationally definable: Searle's arguments against strong AI are irrelevant to real-world AI.

  23. P. Kugel (2002). Computing Machines Can’t Be Intelligent (...and Turing Said So), Minds and Machines, 12:563-579.
    PDF, 17 pages, 106 K

    Abstract. According to the conventional wisdom, Turing (1950) said that computing machines can be intelligent. I don’t believe it. I think that what Turing really said was that computing machines — computers limited to computing — can only fake intelligence. If we want computers to become genuinely intelligent, we will have to give them enough “initiative” (Turing, 1948, p. 21) to do more than compute. In this paper, I want to try to develop this idea. I want to explain how giving computers more “initiative” can allow them to do more than compute. And I want to say why I believe (and believe that Turing believed) that they will have to go beyond computation before they can become genuinely intelligent.

  24. W.J. Rapaport (1998). How minds can be computational systems, Journal of Experimental and Theoretical Artificial Intelligence, 10:403-419.
    PDF, 17 pages, 292 K

    Abstract. The proper treatment of computationalism, as the thesis that cognition is computable, is presented and defended. Some arguments of James H. Fetzer against computationalism are examined and found wanting, and his positive theory of minds as semiotic systems is shown to be consistent with computationalism. An objection is raised to an argument of Selmer Bringsjord against one strand of computationalism, namely, that Turing-Test-passing artifacts are persons; it is argued that, whether or not this objection holds, such artifacts will inevitably be persons.

  25. G. Ritchie (2007). Some empirical criteria for attributing creativity to a computer program, Minds and Machines, 17:67-99.
    PDF, 33 pages, 314 K

    Abstract. Over recent decades there has been a growing interest in the question of whether computer programs are capable of genuinely creative activity. Although this notion can be explored as a purely philosophical debate, an alternative perspective is to consider what aspects of the behaviour of a program might be noted or measured in order to arrive at an empirically supported judgement that creativity has occurred. We sketch out, in general abstract terms, what goes on when a potentially creative program is constructed and run, and list some of the relationships (for example, between input and output) which might contribute to a decision about creativity. Specifically, we list a number of criteria which might indicate interesting properties of a program’s behaviour, from the perspective of possible creativity. We go on to review some ways in which these criteria have been applied to actual implementations, and some possible improvements to this way of assessing creativity.

  26. S.A. Umpleby (1999). "Cyberethics": A panel discussion, Cybernetics and Systems, 30:315-330.
    PDF, 16 pages, 146 K

    Abstract. At the 1997 Annual Meeting of the American Society for Cybernetics there was a panel session on the subject of "Cyberethics", a term suggested by Heinz von Foerster. The speakers were Heinz von Foerster, Philip Lewin, Robert Martin, Herbert Brun, Doreen Steg, and several people in the audience.

  27. P. Wang (2007). Three fundamental misconceptions of artificial intelligence, Journal of Experimental and Theoretical Artificial Intelligence, 19:249-268.
    PDF, 20 pages, 146 K

    Abstract. In discussions on the limitations of Artificial Intelligence (AI), there are three major misconceptions: identifying an AI system with an axiomatic system, a Turing machine, or a system with a model-theoretic semantics. Though these three notions can be used to describe a computer system for certain purposes, they are not always the proper theoretical notions when an AI system is under consideration. These misconceptions are not only the basis of many criticisms of AI from the outside, but also responsible for many problems within AI research. This paper analyses these misconceptions, and points out their common root: treating empirical reasoning as mathematical reasoning. Finally, an example intelligent system called NARS is introduced, which is neither an axiomatic system nor a Turing machine in its problem-solving process, and does not use model-theoretic semantics, but is still implementable in an ordinary computer.

  28. B. Warnick (2004). Rehabilitating AI: Argument loci and the case for artificial intelligence, Argumentation, 18:149-170.
    PDF, 22 pages, 84 K

    Abstract. This article examines argument structures and strategies in pro and con argumentation about the possibility of human-level artificial intelligence (AI) in the near term future. It examines renewed controversy about strong AI that originated in a prominent 1999 book and continued at major conferences and in periodicals, media commentary, and Web-based discussions through 2002. It will be argued that the book made use of implicit, anticipatory refutation to reverse prevailing value hierarchies related to AI. Drawing on Perelman and Olbrechts-Tyteca’s (1969) study of refutational argument, this study considers points of contact between opposing arguments that emerged in opposing loci, dissociations, and casuistic reasoning. In particular, it shows how perceptions of AI were reframed and rehabilitated through metaphorical language, reversal of the philosophical pair ‘artificial/natural’, appeals to the paradigm case, and use of the loci of quantity and essence. Furthermore, examining responses to the book in subsequent arguments indicates the topoi characteristic of the rhetoric of technology advocacy.

  29. T. Wehrle (2001). The grounding problem of modeling emotions in adaptive artifacts, Cybernetics and Systems, 32:561-580.
    PDF, 20 pages, 374 K

    Abstract. A commonality between research in artificial intelligence and synthetic emotion is that it seems in both cases to be rather difficult to give an acceptable definition of the naturally occurring counterpart. One could speculate whether this is due to the multiplicity of the nature of both phenomena or due to a categorical misconception. In this article, I try to briefly outline a number of different motivations for modeling emotions, and to relate those motivations to two different principal design approaches for computational models of emotion. From these two aspects, together with our current assumptions about mechanisms underlying human emotions, I conclude with some speculations about adaptation in affective systems, and some implications of the notion of grounding emotions in adaptive systems.


Last updated 8 February 2015.
Send me mail (etzafestas@phs.uoa.gr)