
Search Results


  • Dimitris Menexopoulos

    Dimitris Menexopoulos
    Queen Mary University of London
    iGGi PG Researcher

    Dimitris Menexopoulos is a versatile music composer, sound designer, audio technologist, and multi-instrumentalist from Thessaloniki, Greece. With an academic background in Geoscience, Electronic Production, and Information Experience Design, he draws on a wide spectrum of knowledge across Art, Science, and Technology to carry out his work. He has released two solo albums under his name (Perpetuum Mobile, 2017; Phenomena, 2014), two EPs (Modern Catwalk Music, 2022; 40 EP, 2020), and two published soundtracks (Iolas Wonderland, 2021; The Village, 2019), and has performed internationally. His collaborations include work with electronic musician Robert Rich (Vestiges, 2016), director Shekhar Kapur (Brides of the Well, 2018), and film composer George Kallis (The Last Warrior: Root of Evil, 2021; Cliffs of Freedom, 2019), among others. As a designer, he has presented work at prominent venues including the Barbican Centre (Nesta FutureFest, 2019, with Akvile Terminaite), Somerset House (24 Hours in Uchronia with Helga Schmid, 2020), and Christie's London (Christie's Lates, 2023, with Scarlett Yang). His current research focuses on graphics-based procedural sound design for games as well as innovative music composition and performance systems. His original scientific publications and devices have been presented at prestigious events in Japan (AES 6th International Conference on Audio for Games, 2024), Spain (AES Europe, 2024), the UK (Iklectik, 2020), France (IRCAM, 2020 and 2019), and the USA (Mass MoCA, 2019).

    A description of Dimitris's research: Procedural content generation supports the creation of rich and varied games, but audio design has not kept pace with this innovation. Often every visual asset in a scene can be procedurally rendered, yet audio developers still rely mostly on pre-recorded samples to carry out their tasks. However, much of the information required to determine smooth audiovisual interactions is already there: the size, shape, material and movement of assets are all data that can drive audio algorithms directly. This topic explores how animation information available in the game engine can be used to generate, in real time, the sound effects produced when objects interact.

    Email: d.menexopoulos@qmul.ac.uk
    Mastodon: https://menex.bandcamp.com
    Other links: https://linktr.ee/menexmusic
    LinkedIn: https://www.linkedin.com/in/dimitris-menexopoulos/
    Twitter: https://twitter.com/DimitrisMenex
    Github: https://github.com/dmenex

    Supervisors: Dr Josh Reiss, Dr Tom Collins

    Using texture maps to procedurally generate sound in virtual environments
    The State of the Art in Procedural Audio

    Themes: Creative Computing, Game Audio
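The idea of letting asset properties drive audio algorithms directly can be sketched in miniature. The example below is a hypothetical illustration, not the method from the research itself: the material presets, mode frequencies, and size/impulse scaling rules are all invented assumptions, standing in for parameters a game engine would supply from its physics and animation data.

```python
import numpy as np

SAMPLE_RATE = 44100

# Hypothetical material presets: modal frequencies (Hz) and a decay rate (1/s).
MATERIALS = {
    "wood":  {"modes": [220.0, 560.0, 1180.0], "decay": 8.0},
    "metal": {"modes": [330.0, 990.0, 2640.0], "decay": 2.5},
}

def impact_sound(material, size, impulse, duration=0.5):
    """Synthesize a toy modal impact: a sum of damped sinusoids whose
    frequencies scale inversely with object size and whose amplitude
    scales with collision impulse -- properties a game engine already
    knows about every asset in the scene."""
    t = np.linspace(0.0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    preset = MATERIALS[material]
    out = np.zeros_like(t)
    for i, f in enumerate(preset["modes"]):
        freq = f / size            # bigger objects ring lower
        amp = impulse / (i + 1)    # higher modes are quieter
        out += amp * np.sin(2 * np.pi * freq * t) * np.exp(-preset["decay"] * t)
    return out / np.max(np.abs(out))  # normalize to [-1, 1]

# A large wooden crate hit hard vs. a small metal can tapped lightly:
crate = impact_sound("wood", size=2.0, impulse=1.0)
can = impact_sound("metal", size=0.5, impulse=0.2)
```

In a real pipeline the collision event (material, size, impulse) would arrive from the physics engine each frame, so no pre-recorded sample is needed for any of the combinatorially many object pairings.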

  • Intrinsic Motivation in Computational Creativity Applied to Videogames. PhD Thesis. 306 pages.

    Author(s): C Guckelsberger
    Abstract: TBA

  • How virtual and mechanical coupling impact bimanual tracking

    Author(s): N Pena-Perez, J Eden, E Ivanova, I Farkhatdinov, E Burdet
    Abstract: TBA

  • A data-driven approach for examining the demand for relaxation games on Steam during the COVID-19 pandemic

    Author(s): M Croissant, M Frister
    Abstract: TBA

  • Memo Akten

    Dr Memo Akten
    Goldsmiths
    iGGi Alum

    Real-time, interactive, multi-modal media synthesis and continuous control using generative deep models for enhancing artistic expression.

    This research investigates how the latest developments in Deep Learning can be used to create intelligent systems that enhance artistic expression. These are systems that learn, both offline and online, and that people interact with and gesturally 'conduct' to expressively produce and manipulate text, images and sounds. The desired relationship between human and machine is analogous to that between an art director and a graphic designer, or a film director and a video editor: a visionary communicates their vision to a 'doer' who produces the output under the visionary's direction, shaping it with their own vision and skills. Crucially, the desired human-machine relationship also draws inspiration from that between a pianist and piano, or a conductor and orchestra: again a visionary communicates their vision to a system that produces the output, but this communication is real-time, continuous and expressive, an immediate response to everything produced so far, creating a closed feedback loop.

    The key area the research tackles is as follows. Given a large corpus (e.g. thousands or millions) of example data, we can train a generative deep model. That model will hopefully contain some kind of 'knowledge' about the data and its underlying structure. The questions are: (i) How can we investigate what the model has learnt? (ii) How can we do this interactively and in real time, expressively exploring the knowledge the model contains? (iii) How can we use this to steer the model to produce not just anything that resembles the training data, but what *we* want it to produce, *when* we want it to produce it, again in real time and through expressive, continuous interaction and control?

    Memo Akten is an artist and researcher from Istanbul, Turkey. His work explores the collisions between nature, science, technology, ethics, ritual, tradition and religion. He studies and works with complex systems, behaviour, algorithms and software, and collaborates across many disciplines spanning video, sound, light, dance, software, online works, installations and performances. Akten received the Prix Ars Electronica Golden Nica in 2013 for 'Forms', his collaboration with Quayola. Exhibitions and performances include the Grand Palais, Paris; the Victoria & Albert Museum, London; the Royal Opera House, London; the Garage Center for Contemporary Culture, Moscow; La Gaîté lyrique, Paris; the Holon Design Museum, Israel; and the EYE Film Institute, Amsterdam.

    Email: memo@memo.tv

    Featured Publication(s):
    Top-Rated LABS Abstracts 2021
    Deep visual instruments: realtime continuous, meaningful human control over deep neural networks for creative expression
    Deep Meditations: Controlled navigation of latent space
    Learning to see: you are what you see
    Calligraphic stylisation learning with a physiologically plausible model of movement and recurrent neural networks
    Mixed-initiative creative interfaces
    Learning to see
    Real-time interactive sequence generation and control with Recurrent Neural Network ensembles
    Collaborative creativity with Monte-Carlo Tree Search and Convolutional Neural Networks
    Sequence generation with a physiologically plausible model of handwriting and Recurrent Mixture Density Networks
    Deepdream is blowing my mind
    All watched over by machines of loving grace: Deepdream edition
    Realtime control of sequence generation with character based Long Short Term Memory Recurrent Neural Networks

    Themes: Game AI
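The real-time, continuous 'conducting' of a generative model that the profile describes can be sketched in miniature. This is a hypothetical illustration under stated assumptions: `decode` is a stand-in random linear map rather than any trained model, and the `steer` function, its renormalization rule, and all dimensions are invented for the sketch.

```python
import numpy as np

LATENT_DIM = 8

def decode(z):
    """Stand-in for a trained generative model's decoder: a fixed
    (seeded) random linear map plus tanh, so the sketch is self-contained."""
    rng = np.random.default_rng(0)
    W = rng.standard_normal((LATENT_DIM, LATENT_DIM))
    return np.tanh(W @ z)

def steer(z, direction, amount):
    """One 'conducting' step: nudge the latent code along a control
    direction (which a real system might map from a gesture), then
    rescale so the code stays near the shell of the Gaussian prior."""
    z = z + amount * direction
    return z * np.sqrt(LATENT_DIM) / np.linalg.norm(z)

rng = np.random.default_rng(42)
z = rng.standard_normal(LATENT_DIM)
direction = rng.standard_normal(LATENT_DIM)

# Closed feedback loop in miniature: each input frame moves the latent
# code a little, and the decoder's output responds immediately.
frames = [decode(z := steer(z, direction, 0.1)) for _ in range(30)]
```

Because each step is a small, continuous move rather than a fresh sample, the output evolves smoothly in response to the controller, which is the property that makes expressive, instrument-like interaction possible.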

  • Turning Zeroes into Non-Zeroes: Sample Efficient Exploration with Monte Carlo Graph Search

    Author(s): M Tot, M Conserva, DP Liebana, S Devlin
    Abstract: TBA

  • Janet Gibbs

    Janet Gibbs
    Goldsmiths
    iGGi PG Researcher

    Janet is exploring how multi-modal perceptual feedback contributes to a player's sense of presence in the virtual world. Jaron Lanier described Virtual Reality (VR) as the substitution of the interface between a person and their physical environment with an interface to a simulated environment. This interface is of particular significance in understanding how presence depends on the nature, extent and veridicality of our sensorimotor interaction with the virtual environment, and how that relates to our normal engagement with the real world. In practice, only selected parts of the interface are substituted: we are never fully removed from our physical environment.

    Our perceptual apparatus evolved to make sense of changing sensations in multiple modalities originating naturally and coherently from the same event or percept. By contrast, in VR, individually crafted feedback signals, produced by different technologies for each modality, are coordinated to appear as if from a single source. VR benefits from a long history of visual and audio technologies, developed in harness for virtual experiences from cinema to computer games. Haptics is a relative newcomer that must be blended with them to create coherent multimodal perceptual experiences. Additionally, haptics is closely related to proprioception, and to the wide range of tactile senses (texture, heat, pain, etc.) that current VR systems do not address. Building on the sensorimotor theory of perception, Janet aims to establish how our perceptual system responds to multi-modal feedback that almost, but not quite, matches what we are used to, in making sense of the simulated environment of VR.

    Email: JGIBB016@gold.ac.uk

    Featured Publication(s):
    Investigating Sensorimotor Contingencies in the Enactive Interface
    A comparison of the effects of haptic and visual feedback on presence in virtual reality
    Novel Player Experience with Sensory Substitution and Augmentation

  • Portfolio search and optimization for general strategy game-playing

    Author(s): A Dockhorn, J Hurtado-Grueso, D Jeurissen, L Xu, D Perez-Liebana
    Abstract: TBA

  • PAGAN for Character Believability Assessment

    Author(s): C Pacheco
    Abstract: TBA

  • Trends in organizing philosophies of game jams and game hackathons

    Author(s): A Fowler, G Lai, F Khosmood, R Hill
    Abstract: TBA

  • An appraisal-based chain-of-emotion architecture for affective language model game agents

    Author(s): M Croissant, M Frister, G Schofield, C McCall
    Abstract: TBA

  • Game state and action abstracting monte carlo tree search for general strategy game-playing

    Author(s): A Dockhorn, J Hurtado-Grueso, D Jeurissen, L Xu, D Perez-Liebana
    Abstract: TBA
