Contents
141 found
  1. Social Choice for AI Alignment: Dealing with Diverse Human Feedback. Vincent Conitzer, Rachel Freedman, Jobst Heitzig, Wesley H. Holliday, Bob M. Jacobs, Nathan Lambert, Milan Mosse, Eric Pacuit, Stuart Russell, Hailey Schoelkopf, Emanuel Tewolde & William S. Zwicker - manuscript
    Foundation models such as GPT-4 are fine-tuned to avoid unsafe or otherwise problematic behavior, so that, for example, they refuse to comply with requests for help with committing crimes or with producing racist text. One approach to fine-tuning, called reinforcement learning from human feedback, learns from humans' expressed preferences over multiple outputs. Another approach is constitutional AI, in which the input from humans is a list of high-level principles. But how do we deal with potentially diverging input from humans? How (...)
  2. Can AI Abstract the Architecture of Mathematics? Posina Rayudu - manuscript
    The irrational exuberance associated with contemporary artificial intelligence (AI) reminds me of Charles Dickens: "it was the age of foolishness, it was the epoch of belief" (cf. Nature Editorial, 2016; to get a feel for the vanity fair that is AI, see Mitchell and Krakauer, 2023; Stilgoe, 2023). It is particularly distressing; it feels like yet another rerun of Seinfeld, which is all about nothing (pun intended); we have seen it in the 60s and again in the 90s. AI might have had (...)
  3. Private memory confers no advantage. Samuel Allen Alexander - forthcoming - CIFMA.
    Mathematicians and software developers use the word "function" very differently, and yet, sometimes, things that are in practice implemented using the software developer's "function", are mathematically formalized using the mathematician's "function". This mismatch can lead to inaccurate formalisms. We consider a special case of this meta-problem. Various kinds of agents might, in actual practice, make use of private memory, reading and writing to a memory-bank invisible to the ambient environment. In some sense, we humans do this when we silently subvocalize (...)
  4. Computer Simulations, Machine Learning and the Laplacean Demon: Opacity in the Case of High Energy Physics. Florian J. Boge & Paul Grünke - forthcoming - In Andreas Kaminski, Michael Resch & Petra Gehring (eds.), The Science and Art of Simulation II.
    In this paper, we pursue three general aims: (I) We will define a notion of fundamental opacity and ask whether it can be found in High Energy Physics (HEP), given the involvement of machine learning (ML) and computer simulations (CS) therein. (II) We identify two kinds of non-fundamental, contingent opacity associated with CS and ML in HEP respectively, and ask whether, and if so how, they may be overcome. (III) We address the question of whether any kind of opacity, contingent (...)
    5 citations
  5. Intention Reconsideration in Artificial Agents: a Structured Account. Fabrizio Cariani - forthcoming - Special Issue of Philosophical Studies.
    An important module in the Belief-Desire-Intention architecture for artificial agents (which builds on Michael Bratman's work in the philosophy of action) focuses on the task of intention reconsideration. The theoretical task is to formulate principles governing when an agent ought to undo a prior committed intention and reopen deliberation. Extant proposals for such a principle, if sufficiently detailed, are either too task-specific or too computationally demanding. I propose that an agent ought to reconsider an intention whenever some incompatible prospect is (...)
  6. Real Sparks of Artificial Intelligence and the Importance of Inner Interpretability. Alex Grzankowski - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    The present paper looks at one of the most thorough articles on the intelligence of GPT, research conducted by engineers at Microsoft. Although there is a great deal of value in their work, I will argue that, for familiar philosophical reasons, their methodology, ‘Black-box Interpretability’, is wrongheaded. But there is a better way. There is an exciting and emerging discipline of ‘Inner Interpretability’ (also sometimes called ‘White-box Interpretability’) that aims to uncover the internal activations and weights of models in order (...)
  7. Cultural Bias in Explainable AI Research. Uwe Peters & Mary Carman - forthcoming - Journal of Artificial Intelligence Research.
    For synergistic interactions between humans and artificial intelligence (AI) systems, AI outputs often need to be explainable to people. Explainable AI (XAI) systems are commonly tested in human user studies. However, whether XAI researchers consider potential cultural differences in human explanatory needs remains unexplored. We highlight psychological research that found significant differences in human explanations between many people from Western, commonly individualist countries and people from non-Western, often collectivist countries. We argue that XAI research currently overlooks these variations and that (...)
  8. Unjustified Sample Sizes and Generalizations in Explainable AI Research: Principles for More Inclusive User Studies. Uwe Peters & Mary Carman - forthcoming - IEEE Intelligent Systems.
    Many ethical frameworks require artificial intelligence (AI) systems to be explainable. Explainable AI (XAI) models are frequently tested for their adequacy in user studies. Since different people may have different explanatory needs, it is important that participant samples in user studies are large enough to represent the target population to enable generalizations. However, it is unclear to what extent XAI researchers reflect on and justify their sample sizes or avoid broad generalizations across people. We analyzed XAI user studies (N = (...)
    1 citation
  9. Universal Agent Mixtures and the Geometry of Intelligence. Samuel Allen Alexander, David Quarel, Len Du & Marcus Hutter - 2023 - AISTATS.
    Inspired by recent progress in multi-agent Reinforcement Learning (RL), in this work we examine the collective intelligent behaviour of theoretical universal agents by introducing a weighted mixture operation. Given a weighted set of agents, their weighted mixture is a new agent whose expected total reward in any environment is the corresponding weighted average of the original agents' expected total rewards in that environment. Thus, if RL agent intelligence is quantified in terms of performance across environments, the weighted mixture's intelligence is (...)
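    The weighted-mixture operation described in this abstract admits a simple operational reading, sketched below; the episodic setup, constant-reward stand-in agents, and all names are my own illustrative assumptions, not the authors' construction. The idea: at the start of each interaction, sample one base agent with probability proportional to its weight and delegate to it, so by linearity of expectation the mixture's expected total reward is the weighted average of the base agents' expected total rewards.

    ```python
    import random

    def weighted_mixture(agents, weights, seed=None):
        """Return a policy that, per episode, delegates to one base agent
        chosen with probability proportional to its weight.  By linearity
        of expectation, the mixture's expected total reward in any fixed
        environment is the weighted average of the base agents' expected
        total rewards (illustrative sketch, not the paper's formalism)."""
        rng = random.Random(seed)
        total = sum(weights)
        def pick_agent():
            # Sample a base agent once at episode start, then act as it.
            r = rng.uniform(0, total)
            acc = 0.0
            for agent, w in zip(agents, weights):
                acc += w
                if r <= acc:
                    return agent
            return agents[-1]
        return pick_agent

    # Toy check: two constant-reward "agents" mixed 0.25 / 0.75.
    agent_a = lambda: 0.0   # stands in for an agent with expected reward 0
    agent_b = lambda: 4.0   # stands in for an agent with expected reward 4
    pick = weighted_mixture([agent_a, agent_b], [1, 3], seed=0)
    rewards = [pick()() for _ in range(10000)]
    print(round(sum(rewards) / len(rewards), 1))  # ≈ 3.0 = 0.25*0 + 0.75*4
    ```

    The empirical mean approaches the weighted average of the two expected rewards, which is the defining property of the mixture operation the abstract describes.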
  10. The future won’t be pretty: The nature and value of ugly, AI-designed experiments. Michael T. Stuart - 2023 - In Milena Ivanova & Alice Murphy (eds.), The Aesthetics of Scientific Experiments. New York, NY: Routledge.
    Can an ugly experiment be a good experiment? Philosophers have identified many beautiful experiments and explored ways in which their beauty might be connected to their epistemic value. In contrast, the present chapter seeks out (and celebrates) ugly experiments. Among the ugliest are those being designed by AI algorithms. Interestingly, in the contexts where such experiments tend to be deployed, low aesthetic value correlates with high epistemic value. In other words, ugly experiments can be good. Given this, we should conclude (...)
  11. Extended subdomains: a solution to a problem of Hernández-Orallo and Dowe. Samuel Allen Alexander - 2022 - In AGI.
    This is a paper about the general theory of measuring or estimating social intelligence via benchmarks. Hernández-Orallo and Dowe described a problem with certain proposed intelligence measures. The problem suggests that those intelligence measures might not accurately capture social intelligence. We argue that Hernández-Orallo and Dowe's problem is even more general than how they stated it, applying to many subdomains of AGI, not just the one subdomain in which they stated it. We then propose a solution. In our solution, instead (...)
  12. Extending Environments To Measure Self-Reflection In Reinforcement Learning. Samuel Allen Alexander, Michael Castaneda, Kevin Compher & Oscar Martinez - 2022 - Journal of Artificial General Intelligence 13 (1).
    We consider an extended notion of reinforcement learning in which the environment can simulate the agent and base its outputs on the agent's hypothetical behavior. Since good performance usually requires paying attention to whatever things the environment's outputs are based on, we argue that for an agent to achieve on-average good performance across many such extended environments, it is necessary for the agent to self-reflect. Thus weighted-average performance over the space of all suitably well-behaved extended environments could be considered a (...)
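    The notion of an environment that "simulates the agent" can be made concrete with a toy sketch. Everything below is my own illustration, not the paper's formalism: the flipped-observation rule, the reward scheme, and the function names are assumptions chosen to show how an environment's output can depend on the agent's hypothetical behavior.

    ```python
    def extended_env_step(agent_policy, history, action):
        """One step of a toy *extended* environment: unlike an ordinary
        RL environment, it may call the agent's own policy on a
        hypothetical history and condition its output on the answer.
        Here it rewards the agent iff the agent's actual action differs
        from what the agent would do after a flipped first observation.
        (Illustrative only; the paper's formalism is more general.)"""
        hypothetical = [1 - history[0]] + list(history[1:]) if history else []
        would_do = agent_policy(hypothetical)
        return 1.0 if action != would_do else 0.0

    # An agent that ignores its history cannot track its hypothetical self.
    constant_agent = lambda h: 0
    print(extended_env_step(constant_agent, [0, 1], 0))  # 0.0: matches its hypothetical self
    print(extended_env_step(constant_agent, [0, 1], 1))  # 1.0: differs, so rewarded
    ```

    To score well on average in such environments, an agent needs a model of how it would itself behave on counterfactual histories, which is the self-reflection requirement the abstract argues for.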
    2 citations
  13. Pseudo-visibility: A Game Mechanic Involving Willful Ignorance. Samuel Allen Alexander & Arthur Paul Pedersen - 2022 - FLAIRS-35.
    We present a game mechanic called pseudo-visibility for games inhabited by non-player characters (NPCs) driven by reinforcement learning (RL). NPCs are incentivized to pretend they cannot see pseudo-visible players: the training environment simulates an NPC to determine how the NPC would act if the pseudo-visible player were invisible, and penalizes the NPC for acting differently. NPCs are thereby trained to selectively ignore pseudo-visible players, except when they judge that the reaction penalty is an acceptable tradeoff (e.g., a guard might accept (...)
    1 citation
  14. A Fuzzy-Cognitive-Maps Approach to Decision-Making in Medical Ethics. Alice Hein, Lukas J. Meier, Alena Buyx & Klaus Diepold - 2022 - 2022 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE).
    Although machine intelligence is increasingly employed in healthcare, the realm of decision-making in medical ethics remains largely unexplored from a technical perspective. We propose an approach based on fuzzy cognitive maps (FCMs), which builds on Beauchamp and Childress’ prima-facie principles. The FCM’s weights are optimized using a genetic algorithm to provide recommendations regarding the initiation, continuation, or withdrawal of medical treatment. The resulting model approximates the answers provided by our team of medical ethicists fairly well and offers a high degree (...)
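    A minimal sketch of the fuzzy-cognitive-map machinery the abstract mentions may help. The update rule below is a standard Kosko-style FCM iteration; the concept labels and hand-picked weights are illustrative stand-ins of my own (in the paper, the weights are optimized by a genetic algorithm against ethicists' judgments).

    ```python
    import math

    def fcm_step(state, weights, lam=1.0):
        """One synchronous FCM update: each concept's new activation is a
        sigmoid-squashed sum of its old activation plus the weighted
        activations flowing in from all concepts (weights[j][i] is the
        influence of concept j on concept i)."""
        n = len(state)
        sig = lambda x: 1.0 / (1.0 + math.exp(-lam * x))
        return [sig(state[i] + sum(weights[j][i] * state[j] for j in range(n)))
                for i in range(n)]

    def fcm_run(state, weights, steps=50, tol=1e-6):
        """Iterate until the activation vector stabilises (or steps run out)."""
        for _ in range(steps):
            nxt = fcm_step(state, weights)
            if max(abs(a - b) for a, b in zip(nxt, state)) < tol:
                return nxt
            state = nxt
        return state

    # Toy map: concept 0 ("patient wish") reinforces concept 1 ("continue
    # treatment"); concept 2 ("expected harm") inhibits it.
    W = [[0.0,  0.8, 0.0],
         [0.0,  0.0, 0.0],
         [0.0, -0.9, 0.0]]
    final = fcm_run([1.0, 0.5, 0.2], W)
    print([round(v, 2) for v in final])
    ```

    The converged activation of the "continue treatment" concept can then be read off as a (toy) recommendation; tuning `W` is exactly the optimization problem the genetic algorithm solves in the paper's setup.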
    3 citations
  15. Machine learning in scientific grant review: algorithmically predicting project efficiency in high energy physics. Vlasta Sikimić & Sandro Radovanović - 2022 - European Journal for Philosophy of Science 12 (3):1-21.
    As more objections have been raised against grant peer-review for being costly and time-consuming, the legitimate question arises whether machine learning algorithms could help assess the epistemic efficiency of the proposed projects. As a case study, we investigated whether project efficiency in high energy physics can be algorithmically predicted based on the data from the proposal. To analyze the potential of algorithmic prediction in HEP, we conducted a study on data about the structure and outcomes of HEP experiments with the (...)
    1 citation
  16. Concern Across Scales: a biologically inspired embodied artificial intelligence. Matthew Sims - 2022 - Frontiers in Neurorobotics 1 (Bio A.I. - From Embodied Cognition).
    Intelligence in current AI research is measured according to designer-assigned tasks that lack any relevance for an agent itself. As such, tasks and their evaluation reveal a lot more about our intelligence than the possible intelligence of agents that we design and evaluate. As a possible first step in remedying this, this article introduces the notion of “self-concern,” a property of a complex system that describes its tendency to bring about states that are compatible with its continued self-maintenance. Self-concern, as (...)
  17. Can reinforcement learning learn itself? A reply to 'Reward is enough'. Samuel Allen Alexander - 2021 - CIFMA.
    In their paper 'Reward is enough', Silver et al conjecture that the creation of sufficiently good reinforcement learning (RL) agents is a path to artificial general intelligence (AGI). We consider one aspect of intelligence Silver et al did not consider in their paper, namely, that aspect of intelligence involved in designing RL agents. If that is within human reach, then it should also be within AGI's reach. This raises the question: is there an RL environment which incentivises RL agents to (...)
  18. Reward-Punishment Symmetric Universal Intelligence. Samuel Allen Alexander & Marcus Hutter - 2021 - In AGI.
    Can an agent's intelligence level be negative? We extend the Legg-Hutter agent-environment framework to include punishments and argue for an affirmative answer to that question. We show that if the background encodings and Universal Turing Machine (UTM) admit certain Kolmogorov complexity symmetries, then the resulting Legg-Hutter intelligence measure is symmetric about the origin. In particular, this implies reward-ignoring agents have Legg-Hutter intelligence 0 according to such UTMs.
    1 citation
  19. Measuring Intelligence and Growth Rate: Variations on Hibbard's Intelligence Measure. Samuel Alexander & Bill Hibbard - 2021 - Journal of Artificial General Intelligence 12 (1):1-25.
    In 2011, Hibbard suggested an intelligence measure for agents who compete in an adversarial sequence prediction game. We argue that Hibbard’s idea should actually be considered as two separate ideas: first, that the intelligence of such agents can be measured based on the growth rates of the runtimes of the competitors that they defeat; and second, one specific (somewhat arbitrary) method for measuring said growth rates. Whereas Hibbard’s intelligence measure is based on the latter growth-rate-measuring method, we survey other methods (...)
    1 citation
  20. Making AI Meaningful Again. Jobst Landgrebe & Barry Smith - 2021 - Synthese 198 (March):2061-2081.
    Artificial intelligence (AI) research enjoyed an initial period of enthusiasm in the 1970s and 80s. But this enthusiasm was tempered by a long interlude of frustration when genuinely useful AI applications failed to be forthcoming. Today, we are experiencing once again a period of enthusiasm, fired above all by the successes of the technology of deep neural networks or deep machine learning. In this paper we draw attention to what we take to be serious problems underlying current views of artificial (...)
    15 citations
  21. Short-circuiting the definition of mathematical knowledge for an Artificial General Intelligence. Samuel Alexander - 2020 - CIFMA.
    We propose that, for the purpose of studying theoretical properties of the knowledge of an agent with Artificial General Intelligence (that is, the knowledge of an AGI), a pragmatic way to define such an agent’s knowledge (restricted to the language of Epistemic Arithmetic, or EA) is as follows. We declare an AGI to know an EA-statement φ if and only if that AGI would include φ in the resulting enumeration if that AGI were commanded: “Enumerate all the EA-sentences which you (...)
    1 citation
  22. AGI and the Knight-Darwin Law: why idealized AGI reproduction requires collaboration. Samuel Alexander - 2020 - AGI.
    Can an AGI create a more intelligent AGI? Under idealized assumptions, for a certain theoretical type of intelligence, our answer is: “Not without outside help”. This is a paper on the mathematical structure of AGI populations when parent AGIs create child AGIs. We argue that such populations satisfy a certain biological law. Motivated by observations of sexual reproduction in seemingly-asexual species, the Knight-Darwin Law states that it is impossible for one organism to asexually produce another, which asexually produces another, and (...)
    3 citations
  23. The Archimedean trap: Why traditional reinforcement learning will probably not yield AGI. Samuel Allen Alexander - 2020 - Journal of Artificial General Intelligence 11 (1):70-85.
    After generalizing the Archimedean property of real numbers in such a way as to make it adaptable to non-numeric structures, we demonstrate that the real numbers cannot be used to accurately measure non-Archimedean structures. We argue that, since an agent with Artificial General Intelligence (AGI) should have no problem engaging in tasks that inherently involve non-Archimedean rewards, and since traditional reinforcement learning rewards are real numbers, therefore traditional reinforcement learning probably will not lead to AGI. We indicate two possible ways (...)
    1 citation
  24. Post-Turing Methodology: Breaking the Wall on the Way to Artificial General Intelligence. Albert Efimov - 2020 - Lecture Notes in Computer Science 12177.
    This article offers comprehensive criticism of the Turing test and develops quality criteria for new artificial general intelligence (AGI) assessment tests. It is shown that the prerequisites A. Turing drew upon when reducing personality and human consciousness to “suitable branches of thought” reflected the engineering level of his time. In fact, the Turing “imitation game” employed only symbolic communication and ignored the physical world. This paper suggests that by restricting thinking ability to symbolic systems alone Turing unknowingly constructed “the wall” (...)
    3 citations
  25. There is no general AI. Jobst Landgrebe & Barry Smith - 2020 - arXiv.
    The goal of creating Artificial General Intelligence (AGI) – or in other words of creating Turing machines (modern computers) that can behave in a way that mimics human intelligence – has occupied AI researchers ever since the idea of AI was first proposed. One common theme in these discussions is the thesis that the ability of a machine to conduct convincing dialogues with human beings can serve as at least a sufficient criterion of AGI. We argue that this very ability (...)
  26. Ontology and Cognitive Outcomes. David Limbaugh, Jobst Landgrebe, David Kasmier, Ronald Rudnicki, James Llinas & Barry Smith - 2020 - Journal of Knowledge Structures and Systems 1 (1):3-22.
    The term ‘intelligence’ as used in this paper refers to items of knowledge collected for the sake of assessing and maintaining national security. The intelligence community (IC) of the United States (US) is a community of organizations that collaborate in collecting and processing intelligence for the US. The IC relies on human-machine-based analytic strategies that 1) access and integrate vast amounts of information from disparate sources, 2) continuously process this information, so that, 3) a maximally comprehensive understanding of world actors (...)
    3 citations
  27. Gli ominoidi o gli androidi distruggeranno la Terra? Una recensione di Come Creare una Mente (How to Create a Mind) di Ray Kurzweil (2012) (recensione rivista nel 2019). Michael Richard Starks - 2020 - In Benvenuti all'inferno sulla Terra: Bambini, Cambiamenti climatici, Bitcoin, Cartelli, Cina, Democrazia, Diversità, Disgenetica, Uguaglianza, Pirati Informatici, Diritti umani, Islam, Liberalismo, Prosperità, Web, Caos, Fame, Malattia, Violenza, Intellige. Las Vegas, NV, USA: Reality Press. pp. 150-162.
    Some years ago I reached the point where I can usually tell, from the title of a book, or at least from its chapter titles, what kinds of philosophical errors will be made and how frequently. In the case of nominally scientific works, these can be largely confined to certain chapters that are philosophical or that try to draw general conclusions about the meaning or long-term significance of the work. Normally, however, the scientific questions of fact are generously interwoven with philosophical confusions about what (...)
  29. How the Seven Sociopaths Who Rule China are Winning World War Three and Three Ways to Stop Them (2019). Michael Richard Starks - 2020 - In पृथ्वी पर नर्क में आपका स्वागत है: शिशुओं, जलवायु परिवर्तन, बिटकॉइन, कार्टेल, चीन, लोकतंत्र, विविधता, समानता, हैकर्स, मानव अधिकार, इस्लाम, उदारवाद, समृद्धि, वेब, अराजकता, भुखमरी, बीमारी, हिंसा, कृत्रिम बुद्धिमत्ता, युद्ध. Las Vegas, NV, USA: Reality Press. pp. 389-396.
    The first thing we should keep in mind is that when we say that China says this or China does that, we are not speaking of the Chinese people, but of the sociopaths who control the CCP (Chinese Communist Party), that is, the Seven Senile Sociopathic Serial Killers (SSSSK) of the Standing Committee of the CCP, or the 25 members of the Politburo. I recently watched some typical left-wing fake news programs (pretty much all of the same kind (...)
  30. Cosa significano Paraconsistente, Indecifrabile, Casuale, Calcolabile e Incompleto? Una recensione di Godel's Way: sfrutta in un mondo indecidibile (Godel's Way: Exploits into an Undecidable World) di Gregory Chaitin, Francisco A Doria, Newton C.A. da Costa 160p (2012) (rivisto 2019). Michael Richard Starks - 2020 - In Benvenuti all'inferno sulla Terra: Bambini, Cambiamenti climatici, Bitcoin, Cartelli, Cina, Democrazia, Diversità, Disgenetica, Uguaglianza, Pirati Informatici, Diritti umani, Islam, Liberalismo, Prosperità, Web, Caos, Fame, Malattia, Violenza, Intellige. Las Vegas, NV, USA: Reality Press. pp. 163-176.
    In 'Godel's Way', three eminent scientists discuss issues such as undecidability, incompleteness, randomness, computability, and paraconsistency. I approach these issues from the Wittgensteinian viewpoint that there are two fundamental questions with completely different solutions. There are the scientific or empirical questions, which are facts about the world to be investigated observationally, and the philosophical questions about how language can be used intelligibly (which include certain questions in mathematics and logic), which must be decided (...)
  31. The role of robotics and AI in technologically mediated human evolution: a constructive proposal. Jeffrey White - 2020 - AI and Society 35 (1):177-185.
    This paper proposes that existing computational modeling research programs may be combined into platforms for the information of public policy. The main idea is that computational models at select levels of organization may be integrated in natural terms describing biological cognition, thereby normalizing a platform for predictive simulations able to account for both human and environmental costs associated with different action plans and institutional arrangements over short and long time spans while minimizing computational requirements. Building from established research programs, the (...)
    1 citation
  32. Interprétabilité et explicabilité pour l’apprentissage machine : entre modèles descriptifs, modèles prédictifs et modèles causaux. Une nécessaire clarification épistémologique. Christophe Denis & Franck Varenne - 2019 - Actes de la Conférence Nationale en Intelligence Artificielle - CNIA 2019.
    The lack of explainability of machine learning (ML) techniques poses operational, legal, and ethical problems. One of the main objectives of our project is to provide ethical explanations of the outputs generated by an ML-based application, treated as a black box. The first step of this project, presented in this article, consists in showing that the validation of these black boxes differs epistemologically from the validation carried out within a mathematical, causal model of a physical phenomenon. (...)
    1 citation
  33. Chess, Artificial Intelligence, and Epistemic Opacity. Paul Grünke - 2019 - Információs Társadalom 19 (4):7-17.
    In 2017 AlphaZero, a neural network-based chess engine, shook the chess world by convincingly beating Stockfish, the highest-rated chess engine. In this paper, I describe the technical differences between the two chess engines and, based on that, I discuss the impact of the modeling choices on the respective epistemic opacities. I argue that the success of AlphaZero’s approach with neural networks and reinforcement learning is counterbalanced by an increase in the epistemic opacity of the resulting model.
    1 citation
  34. Present Scenario of Fog Computing and Hopes for Future Research. G. K. Soni, B. Hiren Bhatt & P. Dhaval Patel - 2019 - International Journal of Computer Sciences and Engineering 7 (9).
    Forecasts predict that billions of devices will be connected to the Internet by 2020. All these devices will produce a huge amount of data that will have to be handled rapidly and in a feasible manner. It will become a challenge for real-time applications to handle this huge data volume while considering security issues as well as time constraints. The main highlights of cloud computing are on-demand service and scalability; therefore the data generated from IoT devices are generally handled (...)
  35. Data science and molecular biology: prediction and mechanistic explanation. Ezequiel López-Rubio & Emanuele Ratti - 2019 - Synthese (4):1-26.
    In the last few years, biologists and computer scientists have claimed that the introduction of data science techniques in molecular biology has changed the characteristics and the aims of typical outputs (i.e. models) of such a discipline. In this paper we will critically examine this claim. First, we identify the received view on models and their aims in molecular biology. Models in molecular biology are mechanistic and explanatory. Next, we identify the scope and aims of data science (machine learning in (...)
    9 citations
  36. Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence. Carlos Zednik - 2019 - Philosophy and Technology 34 (2):265-288.
    Many of the computing systems programmed using Machine Learning are opaque: it is difficult to know why they do what they do or how they work. Explainable Artificial Intelligence aims to develop analytic techniques that render opaque computing systems transparent, but lacks a normative framework with which to evaluate these techniques’ explanatory successes. The aim of the present discussion is to develop such a framework, paying particular attention to different stakeholders’ distinct explanatory requirements. Building on an analysis of “opacity” from (...)
    53 citations
  37. The Facets of Artificial Intelligence: A Framework to Track the Evolution of AI. Fernando Martínez-Plumed, Bao Sheng Loe, Peter Flach, Seán Ó hÉigeartaigh, Karina Vold & José Hernández-Orallo - 2018 - Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence. pp. 5180-5187.
    We present nine facets for the analysis of the past and future evolution of AI. Each facet also has a set of edges that can summarise different trends and contours in AI. With them, we first conduct a quantitative analysis using the information from two decades of AAAI/IJCAI conferences and around 50 years of documents from AI Topics, an official database from the AAAI, illustrated by several plots. We then perform a qualitative analysis using the facets and edges, locating AI (...)
  38. In 30 Schritten zum Mond? Zukünftiger Fortschritt in der KI. Vincent C. Müller - 2018 - Medienkorrespondenz 20 (05.10.2018):5-15.
    Developments in artificial intelligence (AI) are exciting. But where is this journey heading? I present an analysis according to which exponential growth in computing speed and data has been the decisive factor in progress so far. I then explain under which assumptions this growth will continue to enable progress: 1) intelligence is one-dimensional and measurable, 2) cognitive science is not needed for AI, 3) computation is sufficient for cognition, 4) current techniques and architectures are sufficiently scalable, 5) Technological Readiness Levels (TRL) (...)
  39. A Study on Fog Computing Environment Mobility and Migration.R. J. Pedro - 2018 - 22nd International Conference Electronics 22.
    The Cloud Computing paradigm has reached a high degree of popularity among all kinds of computer users, but it may not be suitable for mobile devices, which need computing power as close as possible to data sources in order to reduce delays. This paper focuses on achieving mathematical models for users moving around and proposes an overlay mobility model for Fog Data Centres based on traditional wireless mobility models, aimed at better allocating edge computing resources to client demands. (...)
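The traditional wireless mobility models that such overlay models build on can be sketched with the classic random waypoint model: a node repeatedly picks a random destination in the area, travels toward it at a random speed, and pauses on arrival. The class name, area size, and parameter values below are illustrative assumptions, not the paper's Fog Data Centre model.

```python
import random

random.seed(42)  # reproducible trace for this sketch

class RandomWaypointNode:
    """One mobile node following the random waypoint model in a rectangle."""

    def __init__(self, width, height, speed_range=(1.0, 5.0), pause_ticks=2):
        self.width, self.height = width, height
        self.speed_range = speed_range
        self.pause_ticks = pause_ticks
        self.x = random.uniform(0, width)
        self.y = random.uniform(0, height)
        self.waiting = 0
        self._pick_waypoint()

    def _pick_waypoint(self):
        # Destination and travel speed are drawn uniformly at random.
        self.dest = (random.uniform(0, self.width), random.uniform(0, self.height))
        self.speed = random.uniform(*self.speed_range)

    def step(self):
        """Advance one time unit; return the new (x, y) position."""
        if self.waiting > 0:              # pausing at a reached waypoint
            self.waiting -= 1
            return self.x, self.y
        dx, dy = self.dest[0] - self.x, self.dest[1] - self.y
        dist = (dx * dx + dy * dy) ** 0.5
        if dist <= self.speed:            # waypoint reached: pause, re-plan
            self.x, self.y = self.dest
            self._pick_waypoint()
            self.waiting = self.pause_ticks
        else:                             # move along the straight line
            self.x += self.speed * dx / dist
            self.y += self.speed * dy / dist
        return self.x, self.y

node = RandomWaypointNode(100, 100)
trace = [node.step() for _ in range(500)]
```

A fog-oriented overlay model would additionally map positions like these onto the coverage areas of Fog Data Centres to drive resource allocation.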
  40. Simple or complex bodies? Trade-offs in exploiting body morphology for control.Matej Hoffmann & Vincent C. Müller - 2017 - In Gordana Dodig-Crnkovic & Raffaela Giovagnoli (eds.), Representation of Reality: Humans, Other Living Organism and Intelligent Machines. Heidelberg: Springer. pp. 335-345.
    Engineers fine-tune the design of robot bodies for control purposes; however, a methodology or set of tools is largely absent, and optimization of morphology (shape, material properties of robot bodies, etc.) is lagging behind the development of controllers. This has become even more prominent with the advent of compliant, deformable or “soft” bodies. These carry substantial potential regarding their exploitation for control, sometimes referred to as “morphological computation”. In this article, we briefly review different notions of computation by physical systems and (...)
  41. Why Build a Virtual Brain? Large-Scale Neural Simulations as Jump Start for Cognitive Computing.Matteo Colombo - 2016 - Journal of Experimental and Theoretical Artificial Intelligence.
    Despite the impressive amount of financial resources recently invested in carrying out large-scale brain simulations, it is controversial what the pay-offs are of pursuing this project. One idea is that from designing, building, and running a large-scale neural simulation, scientists acquire knowledge about the computational performance of the simulating system, rather than about the neurobiological system represented in the simulation. It has been claimed that this knowledge may usher in a new era of neuromorphic, cognitive computing systems. This study elucidates (...)
  42. From human to artificial cognition and back: New perspectives on cognitively inspired AI systems.Antonio Lieto & Daniele Radicioni - 2016 - Cognitive Systems Research 39 (c):1-3.
    We overview the main historical and technological elements characterising the rise, the fall and the recent renaissance of cognitive approaches to Artificial Intelligence, and provide some insights and suggestions about the future directions and challenges that, in our opinion, this discipline needs to face in the coming years.
  43. An expert system for feeding problems in infants and children.Samy S. Abu Naser & Mariam W. Alawar - 2016 - International Journal of Medicine Research 1 (2):79--82.
    Many infants have significant food-related problems, such as spitting up, rejecting new foods, or refusing to eat at specific times. These issues are frequently ordinary and are not a sign that the baby is unwell. According to the National Institutes of Health, 25% of typically developing infants and 35% of babies with neurodevelopmental disabilities are affected by some sort of feeding problem. Some, for example refusing to eat specific foods or being overly finicky, are temporary and (...)
  44. Why Build a Virtual Brain? Large-scale Neural Simulations as Test-bed for Artificial Computing Systems.Matteo Colombo - 2015 - In D. C. Noelle, R. Dale, A. S. Warlaumont, J. Yoshimi, T. Matlock, C. D. Jennings & P. P. Maglio (eds.), Proceedings of the 37th Annual Conference of the Cognitive Science Society. Cognitive Science Society. pp. 429-434.
    Despite the impressive amount of financial resources invested in carrying out large-scale brain simulations, it is controversial what the payoffs are of pursuing this project. The present paper argues that in some cases, from designing, building, and running a large-scale neural simulation, scientists acquire useful knowledge about the computational performance of the simulating system, rather than about the neurobiological system represented in the simulation. What this means, why it is not a trivial lesson, and how it advances the literature on (...)
  45. On a Cognitive Model of Semiosis.Piotr Konderak - 2015 - Studies in Logic, Grammar and Rhetoric 40 (1):129-144.
    What is the class of possible semiotic systems? What kinds of systems could count as such systems? The human mind is naturally considered the prototypical semiotic system. During years of research in semiotics the class has been broadened to include, e.g., living systems such as animals, or even plants. It is suggested in the literature on artificial intelligence that artificial agents are typical examples of symbol-processing entities. It also seems that semiotic processes are in fact cognitive processes. In consequence, it is (...)
  46. Evaluating Artificial Models of Cognition.Marcin Miłkowski - 2015 - Studies in Logic, Grammar and Rhetoric 40 (1):43-62.
    Artificial models of cognition serve different purposes, and their use determines the way they should be evaluated. There are also models that do not represent any particular biological agents, and there is controversy as to how they should be assessed. At the same time, modelers do evaluate such models as better or worse. There is also a widespread tendency to call for publicly available standards of replicability and benchmarking for such models. In this paper, I argue that proper evaluation of models (...)
  47. An Approach to Subjective Computing: a Robot that Learns from Interaction with Humans.Patrick Grüneberg & Kenji Suzuki - 2014 - Ieee Transactions on Autonomous Mental Development 6 (1):5-18.
    We present an approach to subjective computing for the design of future robots that exhibit more adaptive and flexible behavior in terms of subjective intelligence. Instead of encapsulating subjectivity into higher order states, we show by means of a relational approach how subjective intelligence can be implemented in terms of the reciprocity of autonomous self-referentiality and direct world-coupling. Subjectivity concerns the relational arrangement of an agent’s cognitive space. This theoretical concept is narrowed down to the problem of coaching a reinforcement (...)
  48. A General Structure for Legal Arguments About Evidence Using Bayesian Networks.Norman Fenton, Martin Neil & David A. Lagnado - 2013 - Cognitive Science 37 (1):61-102.
    A Bayesian network (BN) is a graphical model of uncertainty that is especially well suited to legal arguments. It enables us to visualize and model dependencies between different hypotheses and pieces of evidence and to calculate the revised probability beliefs about all uncertain factors when any piece of new evidence is presented. Although BNs have been widely discussed and recently used in the context of legal arguments, there is no systematic, repeatable method for modeling legal arguments as BNs. Hence, where (...)
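The kind of belief revision a BN supports can be sketched with the smallest possible legal-evidence network: a hypothesis node H ("defendant is the source") with a single evidence node E ("DNA match reported") and the arc H → E. All probabilities below are invented for illustration; they are not taken from the paper.

```python
# Exact inference on a two-node Bayesian network H -> E via Bayes' rule.

def posterior_h_given_e(prior_h, p_e_given_h, p_e_given_not_h):
    """P(H | E=true) for a two-node network H -> E."""
    joint_h = prior_h * p_e_given_h                 # P(H, E)
    joint_not_h = (1 - prior_h) * p_e_given_not_h   # P(not H, E)
    return joint_h / (joint_h + joint_not_h)

# Illustrative numbers: a weak prior of guilt, a near-certain match if
# guilty, and a small random-match probability otherwise.
prior_h = 0.01
p_e_given_h = 0.99
p_e_given_not_h = 0.001

print(round(posterior_h_given_e(prior_h, p_e_given_h, p_e_given_not_h), 4))
# 0.9091 -- observing the match raises P(H) from 1% to about 91%
```

In a realistic legal BN many such nodes are chained, and the same mechanics propagate each new piece of evidence through the whole graph.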
  49. Dealing with Concepts: from Cognitive Psychology to Knowledge Representation.Marcello Frixione & Antonio Lieto - 2013 - Frontiers of Psychological and Behavioural Science 2 (3):96-106.
    Concept representation is still an open problem in the field of ontology engineering and, more generally, of knowledge representation. In particular, the issue of representing “non-classical” concepts, i.e. concepts that cannot be defined in terms of necessary and sufficient conditions, remains unresolved. In this paper we review empirical evidence from cognitive psychology, according to which concept representation is not a unitary phenomenon. On this basis, we sketch some proposals for concept representation, taking into account suggestions from psychological research. In (...)
  50. A lesson from subjective computing: autonomous self-referentiality and social interaction as conditions for subjectivity.Patrick Grüneberg & Kenji Suzuki - 2013 - AISB Proceedings 2012:18-28.
    In this paper, we model a relational notion of subjectivity by means of two experiments in subjective computing. The goal is to determine to what extent a cognitive and social robot can be regarded as acting subjectively. The system was implemented as a reinforcement learning agent with a coaching function. We analyzed the robotic agent with the method of levels of abstraction, describing it at four such levels. At one level the agent is described (...)
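A minimal sketch of what "a reinforcement learning agent with a coaching function" can mean: tabular Q-learning on a toy 5-state corridor (goal at state 4), where a coach adds a small shaping reward whenever the agent moves toward the goal. The task, rewards, and coaching rule are invented for illustration; the architecture described in the paper is far richer.

```python
import random

random.seed(0)  # deterministic training run for this sketch

N_STATES, ACTIONS = 5, (-1, +1)  # actions: move left / move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

def coach_feedback(state, action):
    """Coaching signal: small bonus for progress toward the goal."""
    return 0.2 if action == +1 else -0.2

for _ in range(200):                      # training episodes
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action selection.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        r += coach_feedback(s, a)         # environment reward + coaching
        best_next = max(Q[(s_next, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

# Greedy policy for the non-terminal states after training.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

The coaching term lets the agent discriminate good from bad moves long before the sparse goal reward would, which is the practical point of adding a human coach to the learning loop.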