Abstracts of the talks at SoLitAI Colloquium (Society, Literature, AI) on 25th September 2025.
Deeper Talks
Michaela Liegertová: Exploring literary creation through multi-LLM simulations - Insights from a non-expert's perspective
This presentation introduces SIMPLEX (Simulated Iterative Multi-Persona Literary EXperimentation), an experimental framework developed by a non-literature professional to explore what happens when AI systems simulate the entire literary ecosystem - from creation through criticism to translation.
As someone outside the literary field, I approached this as a systems engineering problem: Could we model the literary process through multi-agent AI simulation? The framework orchestrates multiple LLMs to create fictional personas (writers, critics, translators), who then generate stories, evaluate them through simulated literary juries, and produce professional translations - essentially creating a "literary laboratory" for controlled experimentation.
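The orchestration described in the abstract - personas drafting stories, a simulated jury evaluating them, a translator producing the final text - can be sketched as a simple pipeline. This is an illustrative sketch only, not SIMPLEX's actual code: the llm() helper, the persona prompts, and the pipeline structure are all assumptions standing in for real LLM API calls.

```python
# Illustrative sketch of a multi-persona literary pipeline (not SIMPLEX itself).
# llm() is a hypothetical stand-in for a call to a real language-model API.
def llm(prompt: str) -> str:
    """Placeholder for a language-model call; returns a dummy string."""
    return f"[model output for: {prompt[:40]}...]"

def run_pipeline(premise: str, n_writers: int = 3) -> dict:
    """Simulate writer personas, a jury, and a translator for one premise."""
    # 1. Each writer persona drafts a story from the shared premise.
    drafts = [llm(f"As writer persona #{i}, write a story about: {premise}")
              for i in range(n_writers)]
    # 2. A simulated literary jury critiques each draft.
    critiques = [llm(f"As a literary juror, evaluate this story: {d}")
                 for d in drafts]
    # 3. One draft (the first, in this stub) is passed to a translator persona.
    translation = llm(f"Translate this story into Czech: {drafts[0]}")
    return {"drafts": drafts, "critiques": critiques, "translation": translation}

result = run_pipeline("a village where books write their readers")
```

In a real system each llm() call would go to a (possibly different) model with a persona-specific system prompt, which is what makes the "literary laboratory" controllable.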
Przemysław Kordos: Slop Story
The aim of my short presentation is to examine the notion of slop, understood as AI-
generated content - or, more specifically, content that is obviously AI-generated. This issue
arises in the context of growing evidence that AI-detection tools are highly unreliable. For
instance, while ZeroGPT advertises an accuracy rate of 98%, empirical testing showed it
reached only about 62.5% sensitivity and 50% specificity (Bellini et al., 2024). Another study
demonstrated that, although AI-generated texts are often identified correctly, a large
amount of human-written material is falsely flagged (Dik et al., 2025). Furthermore, these
tools exhibit clear biases against non-English writers (Liang et al., 2023). In short, current
detectors not only produce too many false positives but are also easily circumvented by
“humanizing” algorithms, making them unsuitable as fair and reliable arbiters in educational
contexts. Moreover, newer LLM versions seem to be more fluent and less susceptible to detection, while the detectors themselves fail to catch up (along with the scholars who research the topic!).
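The sensitivity and specificity figures cited above follow the standard confusion-matrix definitions. The sketch below shows the arithmetic; the counts are hypothetical, chosen only to reproduce the rates reported for ZeroGPT by Bellini et al. (2024), not taken from that study's data.

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple:
    """Sensitivity = share of AI-generated texts correctly flagged;
    specificity = share of human-written texts correctly cleared."""
    sensitivity = tp / (tp + fn)  # true positives over all AI texts
    specificity = tn / (tn + fp)  # true negatives over all human texts
    return sensitivity, specificity

# Hypothetical counts that yield the reported 62.5% sensitivity, 50% specificity:
# of 16 AI texts, 10 flagged; of 16 human texts, 8 cleared (and 8 falsely flagged).
sens, spec = sensitivity_specificity(tp=10, fn=6, tn=8, fp=8)
```

A 50% specificity means the detector clears human-written text no better than a coin flip, which is what makes the false-positive problem so serious in educational settings.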
In this situation, one is perhaps left to rely on intuition (AI-tuition) - on a kind of readerly
“gut feeling” - when trying to determine whether a text is AI-generated. Indeed, some
pathways toward recognizing slop are already being sketched out (see, for example, Warzel,
2024; Hern & Milmo, 2024). Yet because definitive proof seems unattainable - or even
conceptually impossible - these attempts to define and detect slop often take metaphorical
or aesthetic forms. My presentation will therefore focus on these descriptive strategies:
how slop is being characterized, as well as on the cultural backlash against it, and possible
ways of avoiding it - even while continuing to engage with generative AI content.
References:
Bellini, V., Semeraro, F., Montomoli, J., Cascella, M., & Bignami, E. (2024). Between human
and AI: assessing the reliability of AI text detection tools. Current Medical Research and
Opinion, 40(3), 353–358. https://doi.org/10.1080/03007995.2024.2310086
Dik, S., Erdem, O., & Dik, M. (2025). Assessing GPTZero's Accuracy in Identifying AI vs.
Human-Written Essays. arXiv preprint arXiv:2506.23517.
Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased
against non-native English writers. Patterns, 4(7).
Warzel, C. (2024, August 21). The MAGA aesthetic is AI slop: Far‑right influencers are
flooding social media with a new kind of junk. The Atlantic.
Hern, A., & Milmo, D. (2024, May 19). Spam, junk … slop? The latest wave of AI behind the
‘zombie internet’. The Guardian.
Rudolf Rosa and Tomáš Musil: Authorship in the AI era: Proposing a publisher-centered authorship conceptualization
The talk addresses the current situation of AI involvement in the authorship of texts and proposes the concept of publisher-centered authorship.
We look at past and present perspectives on authorship in literature, discussing the subtasks and actors involved in the process. We then examine how the advent of AI complicates the situation, although we argue that a similar complication appeared some time ago with the practice of ghostwriting. We review some possible views of the customary concept of authorship, both within the field of literature and in other fields that also operate with the concept of an author (architecture, film, theatre, music), drawing on the works of Barthes, Foucault, Latour, and Piorecký.
We propose our central idea of publisher-centered authorship. We argue that the act of publishing a work is a key event in the life of the work: it is rather easy to identify and is by definition observable from the outside, unlike most of the other processes that lead to the emergence of the work. Similarly, the publisher is a single and easy-to-identify entity that can represent the work to the outside world, based on interactions of and with the actors within the author-network that produces the work. Following these interactions, the publisher may oversee or enter into various formal or informal contracts with the other actors, which may include delegating rights and duties to them. This includes answering any inquiries about the authorship of the work from the outside world, where the answer may differ depending on the exact nature and function of authorship being inquired about.
We assert that definitive answers to a new conceptualization of authorship in the era of AI involvement are as yet unattainable, as the situation is rapidly evolving and has not yet stabilized into one or a few typical patterns. Only once several more or less stable and usual practices emerge from the current flux can we study them, describe them, and, based on them, propose a new conceptualization of authorship.
Krzysztof Skonieczny: Should Social Critique be Automated?
In late 2024, the philosopher Jianwei Xun published the book “Hypnocracy. Trump, Musk and the Architecture of Reality”, which was soon translated into a number of languages and received some recognition, including an interview with Xun in Le Figaro and, apparently, praise from Emmanuel Macron himself. A few months later, it was revealed that Xun does not exist - he was fabricated by the Italian philosopher Andrea Colamedici, and the book was created with the use of ChatGPT and Claude. It is unclear how much of the book's conceptual content was generated, but the fact remains that AI is now capable of co-creating potentially significant social or cultural critique.
The question I will ask during my talk is: if it were indeed possible for generative AI tools to produce social critique at a level on par with or above that of humans, should we still be doing it the “old-fashioned way”? To answer, I will (1) compare social critique with other activities that have been, or could soon be, automated (chess, creative writing, theoretical physics); (2) ask about the status of authorship in cultural critique, especially in comparison to literature; and (3) consider which social circumstances of the creation of gAI tools are especially conducive to producing trustworthy social critique.
Tomáš Musil: Rethinking Understanding: From Logocentrism to the Author-Function in the Age of LLMs
This talk presents a theoretical provocation on the evolving role of authorship and understanding in the age of large language models (LLMs). We build on David J. Gunkel’s recent argument that generative AI materially advances the poststructuralist critique of authorship, particularly the tradition of logocentrism—the privileging of a central, authoritative source of meaning. While we agree that LLMs present a profound challenge to this tradition, we argue that the critique can and must go further. Specifically, we suggest that it is necessary to rethink the concept of understanding itself in order to fully grasp the epistemological implications of machine-generated text.
Current frameworks tend to reserve understanding—and thus the capacity to co-construct meaning—for human agents. However, if LLMs are capable of producing language that is legible, relevant, and engaged with human discourse, then restricting the category of understanding to human cognition may serve to reassert precisely the metaphysical boundary that logocentrism depends on. Instead, we propose a broadened, operational conception of understanding, which allows non-human entities to occupy roles in the production and circulation of meaning.
In support of this position, we turn to Michel Foucault’s concept of the "author-function," which positions the author not as an originator of meaning but as a discursive function that regulates the status and interpretation of texts. We argue that LLMs, even in the absence of consciousness or intent, increasingly fulfill aspects of this function. They generate, structure, and influence text in ways that are institutionally and culturally legible, thereby participating—at least partially—in the work of meaning-making.
This intervention speaks directly to the conference’s themes of reconfiguring creativity, knowledge, and collaboration in the wake of generative AI. Our goal is to contribute to a theoretical framework that moves beyond questions of human-machine mimicry or tool use, and instead interrogates the ontological and epistemic shifts introduced when machines begin to function not just as mediums, but as participants in creative discourse.
Lightning Talks
- Rudolf Rosa: AI literary competition
- Monika Stobiecka: Early summary of the most important topics concerning authorship raised in expert interviews by artists who use generative AI in their work
- Dita Malečková: Digital Daimons: How AI Mediates Between Worlds
- Rudolf Rosa: Conceptualizing Writing as Reading