An intensive course combining lectures on the history of AI, technology and society, science fiction and pop culture, and current applications of AI with practical exercises in making sense of the complex discourses on technology.
This website is part of the HUGOD project and of the New Media and Technosemiotics modules at Palacký University Olomouc.
16–19 January 2023, Palacký University Olomouc
We will look at the formation of the signifier ‘artificial intelligence’ (AI) and the kinds of objects and technologies it has signified and sought to designate. Today, AI usually signifies a form of machine learning and/or a combination of software, hardware, and data. The concept of AI is also associated with autonomy and decision-making. From a broader perspective, AI can be understood as a technical practice, a set of methods, a discursive practice, or a cultural and socio-political phenomenon. In the public consciousness, AI is understood mainly through specific objects and functions. The fields of cybernetics, behavioural psychology, and economic theory have also contributed to the historical understanding and development of intelligent systems and functions.
John von Neumann is best known for his theory of self-reproducing automata, now regarded as the beginning of the A-Life programme. In the history of AI, his name is usually mentioned only in connection with the architecture of modern computers and his rivalry with Alan Turing. However, von Neumann’s role in the history of AI is more significant than that.
AI is all around us and affects many aspects of our everyday lives. Yet it is not as prevalent and pervasive as one might think: much AI discourse contains sensationalist and misleading information about the technology. How can we recognise the hype and the myths? In this workshop, we will learn some simple techniques and methods that make it easier to navigate the “tech talk” in the media and help reveal the “mythical” layers in the discourse.
The popularity of conversational AI (chatbots, voicebots) is on the rise. We encounter bots in various situations of daily life: when shopping online, logging into an account, calling a provider… We will talk about how bots are “taught” to communicate and what place linguistics and communication theory have in the birth of a bot.
LOD (Linked Open Data) is one of the means by which the ideas of the Semantic Web and the development of artificial intelligence can be realized. The lecture will present the basic assumptions behind LOD as well as the standard model of data exchange used on the web for its implementation: RDF (Resource Description Framework). It will also discuss appropriate data storage and the creation of datasets in the form of open ontologies that can be used to enrich the Semantic Web. The discussion will be illustrated with selected examples of European LOD resources provided by libraries, museums, and public administration units.
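To make the RDF data model concrete, here is a minimal sketch in Python using the rdflib library; the example.org resource names are hypothetical illustrations, not terms from any real vocabulary. It shows how an RDF statement is a subject-predicate-object triple and how a small graph can be serialized in the Turtle syntax.

# A minimal RDF sketch using the Python rdflib library.
# The example.org resource names are hypothetical illustrations.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/")

g = Graph()
g.bind("ex", EX)

# Every RDF statement is a subject-predicate-object triple.
g.add((EX.MonaLisa, RDF.type, EX.Painting))
g.add((EX.MonaLisa, RDFS.label, Literal("Mona Lisa")))
g.add((EX.MonaLisa, EX.creator, EX.LeonardoDaVinci))

# Serialize the graph in Turtle, a common RDF exchange syntax.
print(g.serialize(format="turtle"))

Because the resources are identified by URIs, datasets published this way can be interlinked across institutions, which is what the “linked” in Linked Open Data refers to.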
In human-machine interaction, people sometimes attribute greater authority to the machine than it deserves. Automation bias causes problems in several fields; elsewhere, anthropomorphic design is seen as a positive choice that improves usability. We will look at cases of automation bias and the lessons learned from machine mistakes.
Technologies have always been developed partly with military purposes and support, and AI is no exception. This entanglement has shaped a vocabulary that is now common in the discourse on intelligent or autonomous machine functions. We will look at such concepts (unmanned, human-in-the-loop, etc.). We will also touch upon the debate on weapons autonomy and the positions of EU and US policy on military AI.
It is often assumed that humans live in a “meaningful world”, but what exactly is this “meaningfulness”, and how does it come to exist? In this lecture, we will tackle this question from the point of view of semiotics, and we will do so in two steps. First, we will explore John Deely’s notions of “subjectivity” and “objectivity”, explain how they relate to each other, and show how their relation gives rise to a given meaningful reality. Second, we will analyze what it entails for meaning to appear through the interplay of subjectivity and objectivity (i.e. is meaning simply subjective, or does it come solely from objects?); in other words, is the meaning imbuing the world in which humans live arbitrary or motivated? This question will lead us to consider critique as among the main objectives of semiotic inquiry and as the main goal of a general theory of signs.
How do we make sense of the world? What does it take for our perception to become more than reference? What is the place of language in the construction of our perception of the world? In this lecture, we will explore different ways of understanding language and how it may (or may not) ground the world around us so that it makes sense as perceived experience.
The concept ‘robot’ originates with the Czech writer Karel Čapek. However, automata have been common throughout history, both in the real world and in stories. We will look at the ‘living statues’ of Ancient Greek legends and the role of mechanical temple marvels in the everyday culture and religion of Ancient Greece. Moving on to medieval Europe, the automata and artifice of stories merge with the concept of Natura artifex, inspiring the 17th-century notions of mechanical nature and the clockwork world, which have in turn contributed to our contemporary models of rationality, underlying the conceptualisation of current technologies.
Until recently, language was the exclusive domain of human beings. New language technologies now challenge this assumption and enable the production of synthetically generated content. In addition, deepfake algorithms enable the imitation of human voices and faces. It is increasingly difficult to recognise synthetic content as such. What does this mean for our culture(s)? How do text generators such as GPT-3 and deepfake technologies interfere with our traditional understanding of communication?
Semiotic models and theories have been implicitly coded into the AI field since the very beginning, while explicit calls for a semiotic theory of computers are more recent. Computer systems can be considered inherently semiotic because they function as extensions of human symbolic communication and signification. At the same time, computer systems remain assemblages of interactivity led by semiotic agents (humans). This lecture will focus on the vocabulary and implications of such entanglements and outline the semiotic parts of these supersystems.
Is technology good, bad, or neutral? Can technology be neutral at all? These questions still puzzle many researchers. The lecture will give an overview of the broad currents of technological determinism and social constructionism and introduce some of the authors who shaped the understanding of technology in the 20th century (Mumford, Ellul, Winner, Latour) and earlier (La Mettrie). Building on the previous lectures, we will look at the current state of the debate on the neutrality of technology.
In contemporary science fiction, robots are often depicted as perfect humans. We will look at the spectrum of artificial, non-human, or half-human creatures in fiction and their construction as the ‘human mirror’. We will also inquire into the role of fiction in relation to reality: how it criticizes contemporary society and how sci-fi imagery is used in technologists’ discourse.
Everyone (else) interested and still in Olomouc is welcome to join!