
Search Results


  • Memo Akten

    Dr Memo Akten, Goldsmiths, iGGi Alum

    Real-time, interactive, multi-modal media synthesis and continuous control using generative deep models for enhancing artistic expression

    This research investigates how the latest developments in deep learning can be used to create intelligent systems that enhance artistic expression. These are systems that learn, both offline and online, and that people interact with and gesturally ‘conduct’ to expressively produce and manipulate text, images and sounds. The desired relationship between human and machine is analogous to that between an art director and a graphic designer, or a film director and a video editor: a visionary communicates their vision to a ‘doer’, who produces the output under the visionary’s direction while shaping it with their own vision and skills. Crucially, the desired human-machine relationship also draws inspiration from that between a pianist and a piano, or a conductor and an orchestra: again a visionary communicates their vision to a system which produces the output, but this communication is real-time, continuous and expressive; it is an immediate response to everything that has been produced so far, creating a closed feedback loop.

    The key area the research tackles is as follows. Given a large corpus (e.g. thousands or millions) of example data, we can train a generative deep model. That model will hopefully contain some kind of ‘knowledge’ about the data and its underlying structure. The questions are: i) how can we investigate what the model has learnt? ii) how can we do this interactively and in real-time, expressively exploring the knowledge that the model contains? iii) how can we use this to steer the model to produce not just anything that resembles the training data, but what *we* want it to produce, *when* we want it to produce it, again in real-time and through expressive, continuous interaction and control? (A minimal illustrative sketch of this kind of latent-space steering appears after this profile.)

    Memo Akten is an artist and researcher from Istanbul, Turkey. His work explores the collisions between nature, science, technology, ethics, ritual, tradition and religion. He studies and works with complex systems, behaviour, algorithms and software, and collaborates across many disciplines spanning video, sound, light, dance, software, online works, installations and performances. Akten received the Prix Ars Electronica Golden Nica in 2013 for his collaboration with Quayola, ‘Forms’. Exhibitions and performances include the Grand Palais, Paris; Victoria & Albert Museum, London; Royal Opera House, London; Garage Center for Contemporary Culture, Moscow; La Gaîté lyrique, Paris; Holon Design Museum, Israel; and the EYE Film Institute, Amsterdam.
    Email: memo@memo.tv

    Featured Publication(s):
      • Top-Rated LABS Abstracts 2021
      • Deep visual instruments: realtime continuous, meaningful human control over deep neural networks for creative expression
      • Deep Meditations: Controlled navigation of latent space
      • Learning to see: you are what you see
      • Calligraphic stylisation learning with a physiologically plausible model of movement and recurrent neural networks
      • Mixed-initiative creative interfaces
      • Learning to see
      • Real-time interactive sequence generation and control with Recurrent Neural Network ensembles
      • Collaborative creativity with Monte-Carlo Tree Search and Convolutional Neural Networks
      • Sequence generation with a physiologically plausible model of handwriting and Recurrent Mixture Density Networks
      • Deepdream is blowing my mind
      • All watched over by machines of loving grace: Deepdream edition
      • Realtime control of sequence generation with character based Long Short Term Memory Recurrent Neural Networks

    Themes: Game AI
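A minimal, hypothetical sketch of the latent-space steering idea described above, written in PyTorch: a latent point is nudged in a user-chosen direction each frame and decoded by a generative model, giving continuous, real-time control over the output. The tiny untrained generator, the latent size and the control mapping are assumptions made for illustration; this is not Akten's published system.

import torch
import torch.nn as nn

LATENT_DIM = 64          # assumed latent dimensionality
OUT_DIM = 3 * 32 * 32    # assumed output: a small RGB image, flattened

# Stand-in for a pretrained generative model (e.g. a GAN or VAE decoder)
# whose weights would encode the 'knowledge' learnt from the corpus.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256),
    nn.ReLU(),
    nn.Linear(256, OUT_DIM),
    nn.Tanh(),
)
generator.eval()

def steer(z, control, strength=0.1):
    """Nudge the current latent point along a user-chosen direction.

    Repeated small nudges give continuous, expressive control; the point is
    re-projected onto the shell where standard-normal latents concentrate.
    """
    z = z + strength * control
    return z / z.norm() * LATENT_DIM ** 0.5

z = torch.randn(LATENT_DIM)        # current position in latent space
control = torch.randn(LATENT_DIM)  # stands in for a live gestural input
control = control / control.norm()

with torch.no_grad():
    for frame in range(5):                       # stands in for a real-time loop
        z = steer(z, control)
        image = generator(z).view(3, 32, 32)     # synthesised output this frame
        print(f"frame {frame}: output range [{image.min():.2f}, {image.max():.2f}]")

In a real ‘deep visual instrument’, the control direction would come from a gestural controller and the generator would be a large pretrained model, so every nudge of the latent point produces an immediate, visible or audible change, closing the feedback loop described above.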

  • Turning Zeroes into Non-Zeroes: Sample Efficient Exploration with Monte Carlo Graph Search

    Author(s): M Tot, M Conserva, DP Liebana, S Devlin
    More info: TBA

  • Janet Gibbs

    Janet Gibbs, Goldsmiths, iGGi PG Researcher

    Janet is exploring how multi-modal perceptual feedback contributes to a player's sense of presence in the virtual world.

    Jaron Lanier described Virtual Reality (VR) as the substitution of the interface between a person and their physical environment with an interface to a simulated environment. This interface is of particular significance in understanding how presence depends on the nature, extent and veridicality of our sensorimotor interaction with the virtual environment, and how that relates to our normal engagement with the real world. In practice, only selected parts of the interface are substituted; we are never fully removed from our physical environment. Our perceptual apparatus evolved to make sense of changing sensations in multiple modalities originating naturally and coherently from the same event or percept. By contrast, in VR, individually crafted feedback produced by different technologies for each modality is coordinated to appear as if it came from a single source. VR benefits from a long history of visual and audio technologies, developed in harness for virtual experiences from cinema to computer games. Haptics is a relative newcomer that must be blended with them to create coherent multimodal perceptual experiences. Additionally, haptics is closely related to proprioception and to the wide range of tactile senses (texture, heat, pain, etc.) that current VR systems do not address. Building on the sensorimotor theory of perception, Janet aims to establish how our perceptual system responds to multi-modal feedback that almost, but not quite, matches what we are used to, in making sense of the simulated environment of VR.

    Email: JGIBB016@gold.ac.uk

    Featured Publication(s):
      • Investigating Sensorimotor Contingencies in the Enactive Interface
      • A comparison of the effects of haptic and visual feedback on presence in virtual reality
      • Novel Player Experience with Sensory Substitution and Augmentation
      • Investigating sensorimotor contingencies in the enactive interface

  • Portfolio search and optimization for general strategy game-playing

    Author(s): A Dockhorn, J Hurtado-Grueso, D Jeurissen, L Xu, D Perez-Liebana
    More info: TBA

  • PAGAN for Character Believability Assessment

    Author(s): C Pacheco
    More info: TBA

  • Trends in organizing philosophies of game jams and game hackathons

    Author(s): A Fowler, G Lai, F Khosmood, R Hill
    More info: TBA

  • An appraisal-based chain-of-emotion architecture for affective language model game agents

    Author(s): M Croissant, M Frister, G Schofield, C McCall
    More info: TBA

  • Game state and action abstracting monte carlo tree search for general strategy game-playing

    Author(s): A Dockhorn, J Hurtado-Grueso, D Jeurissen, L Xu, D Perez-Liebana
    More info: TBA

  • Evaluating generalisation in general video game playing

    Author(s): M Balla, SM Lucas, D Perez-Liebana
    More info: TBA

  • Adam Katona

    Dr Adam Katona, University of York, iGGi Alum

    Adam did his MSc in mechatronics at Budapest University of Technology and Economics. After graduation, he spent two years working on automated driving at Robert Bosch GmbH, during which he was exposed to both the classical and the machine-learning approaches to creating intelligent agents.

    Evolutionary computation continues to surprise us by producing creative and efficient designs. However, despite our best efforts, artificial evolution has not produced anything as complex and interesting as natural evolution. As our hardware becomes faster and the number of cores in our chips increases, the lack of computational power is becoming less of an excuse. It is becoming more and more obvious that some fundamental component of natural evolution is missing from our simulations. One possible candidate is the evolution of evolvability: evolution seems to produce organisms which are well suited for further evolution. The goal of my research is to find mechanisms which allow evolution to increase evolvability, and to incorporate these in the design of more efficient neuroevolution algorithms (a minimal illustrative sketch follows after this profile). This research is at the intersection of evolutionary computation, evolutionary developmental biology and neural networks.

    Email: mail.adamkatona@gmail.com
    Website: https://adamkatona.net/
    Twitter: https://twitter.com/adamkat0na
    Github: https://github.com/adam-katon

    Featured Publication(s):
      • Illuminating Game Space Using MAP-Elites for Assisting Video Game Design
      • Complex computation from developmental priors
      • Utilizing the Untapped Potential of Indirect Encoding for Neural Networks with Meta Learning
      • Quality Evolvability ES: Evolving Individuals With a Distribution of Well Performing and Diverse Offspring
      • Growing 3d artefacts and functional machines with neural cellular automata
      • Time to die: Death prediction in dota 2 using deep learning

    Themes: Game AI
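As a rough illustration of the evolvability idea sketched in the profile, here is a minimal, hypothetical NumPy sketch of an evolutionary loop in which parents are ranked by a blend of their own fitness and how spread out their mutated offspring are, one simple reading of selecting for evolvability. The toy objective, the diversity proxy and the weighting are assumptions for illustration, not Katona's published algorithms.

import numpy as np

rng = np.random.default_rng(0)
DIM, POP, OFFSPRING, SIGMA = 10, 16, 8, 0.1

def fitness(x):
    # Toy objective (negated sphere function); a real setup would evaluate
    # e.g. a neural-network game-playing agent.
    return -np.sum(x ** 2)

def offspring_stats(parent):
    # Evolvability proxy: how varied the parent's mutated offspring are,
    # measured as the mean per-dimension standard deviation.
    kids = parent + SIGMA * rng.standard_normal((OFFSPRING, DIM))
    return kids.std(axis=0).mean(), kids

population = rng.standard_normal((POP, DIM))
for gen in range(20):
    scored = []
    for individual in population:
        diversity, kids = offspring_stats(individual)
        score = fitness(individual) + 0.5 * diversity   # assumed weighting
        scored.append((score, kids))
    scored.sort(key=lambda s: s[0], reverse=True)
    # Next generation: offspring of the best-scoring parents only.
    population = np.vstack([kids for _, kids in scored[:POP // OFFSPRING]])

best = max(population, key=fitness)
print("best fitness:", float(fitness(best)))

The point of the sketch is the design choice: selection pressure can reward lineages whose offspring stay diverse rather than rewarding fitness alone, which is one way a simulation might favour the evolution of evolvability.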

  • Bridging Generative Deep Learning and Computational Creativity

    Author(s): S Berns, S Colton
    More info: TBA

  • Understanding and Strengthening the Computational Creativity Community: A Report From The Computational Creativity Task Force.

    Author(s): JM Cunha, S Harmon, C Guckelsberger, A Kantosalo, PM Bodily, K Grace
    More info: TBA
