Program
Keynote speaker: Simon McCallum
Simon McCallum is a Senior Lecturer at Victoria University of Wellington. His research focuses on mobile apps and computer games applied to the societal challenges of mental health, climate change, and education.
He has been lecturing in Computer Science since 1999 and created the first university-level game development courses in New Zealand in 2004. He spent 11 years in Norway working as a game developer and teaching game development.
Accepted papers
Multi-Agent Differential Testing for the Game of Go
- Jiaxue Song, Beijing University of Chemical Technology
- Xiao-Yi Zhang, University of Science and Technology Beijing
- Paolo Arcaini, National Institute of Informatics
- Fuyuki Ishikawa, National Institute of Informatics
- Yong Liu, Beijing University of Chemical Technology
- Bin Du, Beijing University of Chemical Technology
Artificial intelligence (AI) techniques have been successfully applied in various domains, especially in complex games like Go. Indeed, although Go has simple rules, it has countless variations of possible positions, making it an extremely difficult game. It has therefore been taken as a benchmark to assess the abilities of AI agents (called Go agents). However, assessing the quality of a Go agent's decision-making is challenging: since Go agents play much better than humans, it is difficult to establish intuitive rules or to rely on human experts to determine whether a Go agent's decision is correct. While professional players can do so, they cannot evaluate large numbers of moves. To tackle these issues, in this paper we propose a differential testing approach, called Multi-Agent Differential testing For the game of Go (MAD4Go), to identify interesting test cases in which a Go agent may perform a non-optimal move. Specifically, we run different Go agents on the same tests and check whether they agree with each other. In case of disagreement, we assess the level of disagreement: if two agents strongly disagree, it is more likely that at least one of them made a wrong decision. We conducted experiments evaluating Go agents from Leela Zero, using as tests different Go positions obtained from real games of professional players. Results revealed diverse disagreements among agents, showing that MAD4Go can identify valuable test cases.
Enhancing Emotional Realism in Games: An Optimized Generative AI Framework for Dynamic 3D Facial Animation
- Jonas de Araújo Luz Junior, Universidade de Fortaleza
- Rafael Fonseca Pessoa, Universidade de Fortaleza
- Guadalupe P. Saldanha Ribeiro, Universidade de Fortaleza
- João Vitor Vieira Lira, Universidade de Fortaleza
- Maria Andréia Formico Rodrigues, Universidade de Fortaleza
Facial expressiveness is essential for immersive gaming, yet generating emotionally responsive 3D characters remains challenging. This paper presents an optimized generative AI framework that integrates OpenAI’s LLMs with OpenFace to improve blendshape-AU mapping for dynamic facial animation. The system, embedded in an original game scene, generates facial expressions based on interactive player dialogues. From a software engineering perspective, it features modular, scalable AI-driven animation pipelines that support adaptive emotion modeling and game engine integration. User evaluations demonstrate high realism, clear emotion recognition, and strong engagement, confirming the system’s potential to enhance player interaction. Further refinements in blendshape mappings and real-time adjustments can enhance the differentiation of subtle emotions like Disgust and Contempt, improving the system’s overall expressiveness.
Incorporating Multiple Self-Adaptive Agents in Games
- Steven Streasick, Grand Valley State University
- Erik Fredericks, Grand Valley State University
- Byron DeVries, Grand Valley State University
- Ira Woodring, Grand Valley State University
A self-adaptive system (SAS) is capable of modifying its behavior at run-time to address uncertainty. For games, these self-adaptations can present a more dynamic experience (e.g., changing difficulty, optimizing performance), thereby enabling run-time updates to mitigate potential issues experienced during gameplay. For example, a self-adaptation may result in emergent behaviors that keep the player engaged or optimize performance to support a multitude of device configurations. Notably, games that leverage a run-time feedback loop have previously demonstrated success in optimizing a game’s frame rate. However, multi-agent systems that incorporate self-adaptation remain largely unexplored in the video games domain. This paper demonstrates a novel approach for using multiple goal models with competing metrics for expressing optimal behavior in balancing and mitigating video game uncertainties. To support this goal, we adapt an existing browser-based game to a new framework that incorporates two distinct self-adaptive agents with potentially competing objectives.