Artificial Intelligence: Systems of Intentionality and Human-Centred Values

A Scouting Report 2023

 

Executive Summary

by Nishant Shah with Fangyu Qing and Longhan Wei

In collaboration with the Professorship in Music-based Therapies and Interventions, and facilitated by the Stichting Doubleyoutee

Overview:

The report offers material and narrative frameworks and educational exercises to understand, critique, assess, influence and shape the future of Artificial Intelligence. The need for reimagining current paradigms of AI proliferation stems from the opaque and alienating practices conducted in and by these systems, which need urgent unpacking to make our bodies, relations and futures safer and more transparent. Through the strength of Humanities, Arts, Social Humanities (HASH) as a crucial mediator between human values and AI systems, we propose human-centred systems to build towards sustainable, equitable and resilient futures for both humans and AI.

 

Download the report PDF here

Artificial Intelligence-Syetems of Intentionality and Human-Centred Values.pdf (984.1 KB)

Problems:

In order to address techno-social issues and impacts, HASH scholarship is key to undoing the threads that tangle the current approach to AI proliferation. It aids in seeing through the opacities of these technologies to build more ethical, safer and more sustainable practices that reshape and instil intention and value in technological development. We need to address the biases, falsehoods, subjective opinions and manipulative intentions that are instilled in AI systems, and thereby reproduced in our everyday lives, causing polarisation, power imbalances, unaccountability of stakeholders, erosion of agency, lack of data protection and data sovereignty, and harms to well-being. The relevance of this work lies in drawing on the strength of HASH scholarship as a way to enter the debate, and to offer a lens that goes beyond efficacy and bridges the knowledge gaps in the current discourse.

The proliferation of AI systems and their consistent sense of newness, along with the lack of a clear definition and explanation of their decision-making processes, instils the perception that technology is neutral. The fetishisation of AI technologies fails to ask the questions of sustainability, intentionality, danger and ethics that are crucial to addressing the imbalances and harms caused by emerging and unregulated AI systems. Not understanding the risks and stakes that come with AI systems, and their non-transparency, gives rise to anxieties around AI translation, autonomous and opaque intelligence, automation and replacement, AI deployment for extraction, machine ethics, and the capacity to mimic human emotions, among others. These anxieties have embodied consequences that need more specialised attention, because AI systems do not adapt and change in tandem with changing societal realities and values.

Solutions:

To address these problems, this report invites human, social and political ways of thinking and doing. Through HASH, we offer frameworks and values that make the field of AI accessible and that unpack the trends, anxieties and problematics of AI systems and their operations in our lives. We offer four HASH entry points, namely autonomy and authority; crises and critique; agents and agency; and safeguards and security, to think through this, and we call for the development of non-extractive and non-exploitative ways of building responsible and ethical AI futures.

We propose seven layers of AI to demystify it, and hence to aid in building policies, practices and processes grounded in intention and human-centred values. These layers, which frame AI as social, technological, embodied, affective, political, tacit and material, help in unpacking these systems and raise crucial questions of authorship, and of who a digital author is. They allow for critiquing, evaluating, assessing and reformulating, so that we can contribute towards building safe and happy futures.

We propose five values for human-centred systems, namely relationality, authority, state of being, inalienability, and future-proofing. We offer educational exercises that address the affordances and extend the capacities of both humans and AI. The educational exercises we formulate in this report comprise six modules that focus on narratives and intentions; the aforementioned HASH entry points; contextual maps, narratives and visuals for creative interventions; technology blocks; imaginaries of AI; and, finally, research and evaluation that aid in creating assessment frameworks for new AI futures.

Key Learnings:

  • How to develop non-extractive and non-exploitative ways of being with AI, for sustainable and responsible futures, through multidisciplinary and creative modes of HASH scholarship and practice.
  • How to establish AI questions as sociotechnical questions that are tangible and material, and that frame intentions for the present and the future.
  • How to build community driven practices of critique, evaluation and action—through social, political and cultural values—to reclaim agency.
  • How to situate narratives and intentionality as the ontology of AI, to bring to light its non-neutrality, in order to reclaim AI systems as cultural artefacts that can be unpacked beyond the logics of machine probability.
  • How to establish conditions of equality, equity and autonomy through gaining tools and knowledge to examine power dynamics in decision-making, between AI and humans.
  • How to build strategies to address the materiality of AI anxieties, to recentre the human actor and build trust, through the inculcation of human-centred values, within algorithmically driven realities.
  • How to shift focus from harms of AI towards reimagining alternatives for new architectures of AI systems that align with human-centred values, to reinforce more resilient and equitable societies.
  • How to radically redefine authorship in the age of contemporary digital cultures, by unpacking and situating the new author’s function in questions of responsibility rather than creativity, as it shapes how we live, work, create and connect with each other.
  • How to generate speculative design, fictional storytelling, artistic and social interventions to visualise new ethical and human-friendly frameworks and models of human-AI symbiosis.
  • How to build new paradigms of human-AI symbiotic systems with five human-centred values:
    • Relationality: how to recognise AI systems as relational, in order to examine the power dynamics they create when they fail to acknowledge the complexity and diversity of human life.
    • Authority: how to reclaim human agency through human, social, cultural and artistic intervention; ethics by design; precautionary measures; and new governance frameworks for reliable, accountable and secure AI systems.
    • State of being: how to theorize new kinds of humans, that emerge from AI-human symbiosis, and safeguards for the well-being of communities that live in and with AI systems.
    • Inalienability: how to evaluate decisions of harm, extraction and exploitation by AI systems that intentionally alienate human users from their embodied data rights, and thus to create strategies of technological literacy and power reclamation.
    • Future-proofing: how to serve the values of communities and uphold the importance of sustainability and safety in ethical, responsible and robust AI systems, and the futures they generate.
  • How to conceptualise alternative models that prioritize care, redress, well-being and inclusivity, not by abolishing AI systems, but by making them accountable.
  • How to apply human-centred principles and the abilities of an AI Translator (as a mediator between human values and AI), to ethically enhance and extend human-AI conditions, capacities and affordances, and to pave the way for fresh visions of the future.

 

Value:

A multidisciplinary, speculative and creative reimagining of new lexicons, of the architectures of AI systems, and of human agency and safety is essential to resist and negotiate the ways we choose to move forward with the development, deployment and governance of AI systems. This report offers ways to rebuild trust, autonomy, responsibility, intention, values, ethics, well-being and collective action through the study of AI systems as techno-social artefacts.

We value relationality with AI systems, which are not merely tools, and the inherent intra-actions that entangle and nurture human-AI symbiosis. AI holds immense power and is intertwined with human society and bodies, individual and collective. We are thus committed to generating new ways of imagining collective ownership, human welfare, accountable stakeholders and systems, moral and value-based negotiations of AI proliferation, and non-alienating ways of being with AI.

Our intentions are rooted in the core desire to speculate, remould and perform future-making practices by highlighting the importance of intentions and human-centred values as the way forward. AI currently claims to be a holder of universal truth, validated by machine logic. We urge bringing the 'human' into these logics to build viable and just ways of living with these systems. The technocratic vision needs rethinking and reformulating; we therefore urge shifting focus from the harms of AI systems towards reimagining alternatives for new infrastructures, and the promises of a more humane future.

Conclusion:

Our focus on systems of intentionality and human-centred values in Artificial Intelligence argues for algorithmic fairness, innovation, ethics by design, and the need for transparent, reliable, accountable and secure AI that prioritizes well-being and inclusivity. We approach these emerging technologies not through resistance but through multidisciplinary, alternative models of human-AI symbiosis that care about human, social, political and environmental urgencies.

Through this scholarship, we intend to redirect our energies towards improving algorithmic literacy and to encourage paradigms that are human-friendly and do not bypass the right to agency, autonomy and equitable societies. Our future challenges can be unpacked through the values, frameworks and approaches offered in this work, which contribute to turning the tide of extraction and exploitation towards ethical and intentional networks of co-existence with AI.