
Search Results


  • Memo Akten

    Dr Memo Akten — Goldsmiths, iGGi Alum

    Real-time, interactive, multi-modal media synthesis and continuous control using generative deep models for enhancing artistic expression

    This research investigates how the latest developments in deep learning can be used to create intelligent systems that enhance artistic expression. These are systems that learn – both offline and online – and that people interact with and gesturally ‘conduct’ to expressively produce and manipulate text, images and sounds. The desired relationship between human and machine is analogous to that between an art director and graphic designer, or a film director and video editor – i.e. a visionary communicates their vision to a ‘doer’ who produces the output under the direction of the visionary, shaping the output with their own vision and skills. Crucially, the desired human–machine relationship here also draws inspiration from that between a pianist and piano, or a conductor and orchestra – again, a visionary communicates their vision to a system which produces the output, but this communication is real-time, continuous and expressive; it is an immediate response to everything that has been produced so far, creating a closed feedback loop.

    The key problem the research tackles is as follows: given a large corpus (e.g. thousands or millions) of example data, we can train a generative deep model, which will hopefully capture some kind of ‘knowledge’ about the data and its underlying structure. The questions are: i) how can we investigate what the model has learnt? ii) how can we do this interactively and in real-time, expressively exploring the knowledge that the model contains? iii) how can we use this to steer the model to produce not just anything that resembles the training data, but what *we* want it to produce, *when* we want it to produce it – again in real-time and through expressive, continuous interaction and control?

    Memo Akten is an artist and researcher from Istanbul, Turkey. His work explores the collisions between nature, science, technology, ethics, ritual, tradition and religion. He studies and works with complex systems, behaviour, algorithms and software, and collaborates across many disciplines spanning video, sound, light, dance, software, online works, installations and performances. Akten received the Prix Ars Electronica Golden Nica in 2013 for ‘Forms’, his collaboration with Quayola. Exhibitions and performances include the Grand Palais, Paris; the Victoria & Albert Museum, London; the Royal Opera House, London; the Garage Center for Contemporary Culture, Moscow; La Gaîté lyrique, Paris; the Holon Design Museum, Israel; and the EYE Film Institute, Amsterdam.

    Please note: updating of profile text is in progress.

    Email: memo@memo.tv

    Featured publication(s) (Top-Rated LABS Abstracts 2021):
    - Deep visual instruments: realtime continuous, meaningful human control over deep neural networks for creative expression
    - Deep Meditations: Controlled navigation of latent space
    - Learning to see: you are what you see
    - Calligraphic stylisation learning with a physiologically plausible model of movement and recurrent neural networks
    - Mixed-initiative creative interfaces
    - Learning to see
    - Real-time interactive sequence generation and control with Recurrent Neural Network ensembles
    - Collaborative creativity with Monte-Carlo Tree Search and Convolutional Neural Networks
    - Sequence generation with a physiologically plausible model of handwriting and Recurrent Mixture Density Networks
    - Deepdream is blowing my mind
    - All watched over by machines of loving grace: Deepdream edition
    - Realtime control of sequence generation with character based Long Short Term Memory Recurrent Neural Networks

    Themes: Game AI
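The real-time steering described above can be pictured as navigating the latent space of a trained generative model: each control gesture nudges the latent vector, and the decoded output responds immediately, closing the feedback loop. The sketch below is a minimal illustration under stated assumptions, not Akten's actual system — `decoder` is a hypothetical stand-in (a fixed random projection) for a trained generative model, and all names are invented for this example.

```python
import numpy as np

class LatentExplorer:
    """Navigate a generative model's latent space with continuous control.

    `decoder` stands in for a trained generative model (hypothetical here);
    each call to `nudge` moves the latent vector a small step along a
    control direction and decodes the result, one frame per gesture.
    """

    def __init__(self, decoder, latent_dim, step=0.1):
        self.decoder = decoder
        self.z = np.zeros(latent_dim)   # current position in latent space
        self.step = step                # how far one gesture moves us

    def nudge(self, direction):
        """Step along a (normalised) control direction; return the decoded frame."""
        d = np.asarray(direction, dtype=float)
        self.z += self.step * d / (np.linalg.norm(d) + 1e-8)
        return self.decoder(self.z)

# Toy 'decoder': a fixed random projection from an 8-D latent space to a
# 64-D 'image' vector, bounded by tanh. Untrained, purely illustrative.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 8))
decoder = lambda z: np.tanh(W @ z)

explorer = LatentExplorer(decoder, latent_dim=8)
frame = explorer.nudge([1, 0, 0, 0, 0, 0, 0, 0])  # one control gesture
```

In a real system the decoder would be a trained network and `nudge` would be driven by a continuous input device, but the control loop has this shape.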

  • Dr William Smith

    Dr William Smith — University of York, Supervisor

    William Smith is a Reader in the Computer Vision and Pattern Recognition research group in the Department of Computer Science at the University of York. He is currently a Royal Academy of Engineering / The Leverhulme Trust Senior Research Fellow and an Associate Editor of the journal Pattern Recognition. His research interests span vision, graphics and machine learning: specifically, physics-based and 3D computer vision, shape and appearance modelling, and the application of statistics and machine learning to these areas. The application areas in which he most commonly works are face/body analysis and synthesis, surveying and mapping, object capture and inverse rendering. A wide variety of tools and areas of maths are often useful in his research, such as convex optimisation, nonlinear optimisation, manifold learning, learning/optimisation on manifolds, computational geometry and low-level computer vision (e.g. features and correspondence). He leads a team of five PhD students and one postdoc and has published over 100 papers, many in the top conferences and journals in the field. He was General Chair for the ACM SIGGRAPH European Conference on Visual Media Production in 2019 and Program Chair for the British Machine Vision Conference in 2020.

    Research themes: Game AI; Game Design; Computational Creativity; Graphics and rendering; Content creation

    Email: william.smith@york.ac.uk
    Website: https://www-users.cs.york.ac.uk/wsmith/
    LinkedIn: https://www.linkedin.com/in/william-smith-b5421a70/
    Github: https://github.com/waps101

    Themes: Creative Computing; Design & Development; Game AI; Player Research

  • David Gundry

    Dr David Gundry — University of York, iGGi Alum

    Using Applied Games to Motivate Speech Without Bias (industry placement: Lightspeed Research)

    Eliciting linguistic data faces several difficulties, such as the investment of researcher time and the scarcity of available participants. Because of this, many language elicitation studies have to make do with few subjects and coarse sampling rates (measured in months). It would be ideal if a game could crowd-source relevant linguistic data through frequent, short game sessions. To this end, David’s research looks into how games shape and elicit players’ linguistic behaviour. The established design patterns of gamification do not apply to a domain that lacks a ‘correct’ answer, such as language or personal beliefs and attitudes. David’s research shows how a player’s strategic goals will systematically bias data collection, and how to design around this. The conclusion: the player’s choice of how to express a given datum must be strategically irrelevant in the game.

    David can remember the halcyon days when he had the free time to play games. Now he is doing a PhD and has a one-year-old. He has a background in linguistics. He loves writing expressive code and designing clever little games. He wants to show that research games can be fun, not just effective.

    Featured publication(s):
    - Trading Accuracy for Enjoyment? Data Quality and Player Experience in Data Collection Games
    - Designing Games to Collect Human-Subject Data
    - Validity threats in quantitative data collection with games: A narrative survey
    - Busy doing nothing? What do players do in idle games?
    - Intrinsic elicitation: A model and design approach for games collecting human subject data

    Themes: Applied Games
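The conclusion above — that the choice of expression must be strategically irrelevant — can be read as a checkable property of a game's scoring rule: for any game state, the score must not depend on which expression of the datum the player picks. The sketch below is a toy illustration of that property, not code from the research; the scoring rules and names are invented for this example.

```python
def is_strategically_irrelevant(score, expressions, game_states):
    """Check that a scoring rule never depends on which expression of a
    datum the player chooses: in every state, all expressions must tie.
    (Toy formulation of the design criterion; names are illustrative.)
    """
    return all(
        len({score(e, s) for e in expressions}) == 1
        for s in game_states
    )

# A scoring rule that rewards speed of play but ignores word choice...
fair = lambda expr, state: state["time_bonus"]
# ...and one that leaks bias: longer words score higher, so players would
# strategically prefer them, skewing the language data the game collects.
biased = lambda expr, state: state["time_bonus"] + len(expr)

states = [{"time_bonus": b} for b in (0, 5, 10)]
words = ["cat", "feline"]

ok_fair = is_strategically_irrelevant(fair, words, states)      # passes
ok_biased = is_strategically_irrelevant(biased, words, states)  # fails
```

The `biased` rule is exactly the failure mode the research describes: a strategic incentive attached to the datum itself.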

  • Prof Richard Bartle

    Prof. Richard Bartle — University of Essex, iGGi Co-Investigator, Supervisor

    Richard Bartle is a renowned pioneer in game design and research. He co-wrote the first virtual world, MUD (“Multi-User Dungeon”), in 1978, and has thus been at the forefront of the online games industry from its very inception. He is an influential writer on all aspects of virtual world design, development and management. As an independent consultant, he has worked with many of the major online game companies in the U.K. and the U.S. over the past 30 years. His 2003 book, Designing Virtual Worlds, has established itself as a foundation text for researchers and developers of virtual worlds alike. His Player Type theory is taught in game design programmes worldwide (he appears in examination questions!). His interests are directed mainly at virtual worlds, particularly Massively Multiplayer Online Role-Playing Games (MMORPGs, or MMOs), but cover all aspects of game design. He is keen to see AI used for non-player characters in MMOs (his PhD is in AI), and his current work considers the long-term moral and ethical implications of this. They’re maybe not what you might think they were at first glance…

    Email: rabartle@essex.ac.uk
    Website: https://mud.co.uk/richard/
    LinkedIn: https://www.linkedin.com/in/richardbartle/

    Themes: Design & Development; Game AI; Player Research

  • Erin Robinson

    Erin Robinson — University of York, iGGi PG Researcher

    Erin Robinson is a multimedia artist, experimental musician and PhD researcher from London. Her work primarily involves the design of interactive installations, where she takes a participatory approach to evolving visual-scapes, but it also takes form in fixed media, sound art, free improvisation, live visuals and immersive experiences. Her work critically engages with the concepts of posthumanism and postmodernism, exploring notions of authenticity and existence in the digital anthropocene by blurring the lines between organic and non-organic entities, reality and virtuality, self and otherness. She is a founding member of SubPhonics, an experimental music and sound art collective based in London. Recent works include ‘Flora_Synthetica’, shown at Peckham Digital 2024, and ‘Pluriversal Perspectives: Moss’, shown at the South London Botanical Institute and at the Conference for Designing Interactive Systems (Copenhagen) in 2024.

    A description of Erin’s research: “My research adopts a practice-based approach to exploring participant-contributed materials, a technique positioned at the intersection of participatory and new media arts. This interactive technique enables participants to contribute aesthetic and semiotic materials to new media artworks through open forms of interaction, including but not limited to text input, drawing and video feed. Although both participatory and new media artistic practices involve audience engagement, traditional interactive media often impose restrictive computational frameworks. In contrast, participatory practices, typically conducted in person, allow participants greater freedom, resulting in deeper engagement and more diverse, unexpected outcomes that reflect the audience’s perspectives and behaviours. This research underscores the potential of digital artworks to provide more expansive and identity-reflecting experiences by incorporating participant-contributed materials. By drawing on the strengths of participatory practices, digital artworks can achieve a richer, more personalised form of interaction and more meaningful engagement with audiences.”

    Email: erin.robinson@york.ac.uk
    Github: https://github.com/erinrrobinson
    Supervisor(s): Prof. Sebastian Deterding

    Themes: Design & Development; Immersive Technology

  • Helen Tilbrook

    Helen Tilbrook — University of York, iGGi Administrator

    Helen Tilbrook is the iGGi Administrator at York.

    Email: helen.tilbrook@york.ac.uk

  • Dr Anna Bramwell-Dicks

    Dr Anna Bramwell-Dicks — University of York, Supervisor

    Anna Bramwell-Dicks has an interdisciplinary background which started in Electronics and Music Technology before a sideways move into the field of Human-Computer Interaction research. She likes to combine her underlying interest in sound and music with applied psychology and creativity. She is very interested in research involving multimodal interaction (e.g. using audio, haptics, smell and/or proprioception as well as visuals within interfaces), particularly where audio is used to affect users’ behaviour or experiences. She is also very interested in accessibility research and any research in the application area of mental health and mental illness. As a lecturer in Web Development and Interactive Media, based in TFTI, Anna is always interested in work that involves designing and evaluating novel and interesting user experiences, particularly where that leads to the option to create fun, engaging, accessible experiences. She likes to work across a range of application areas, from learning environments to e-commerce to escape rooms and cultural exhibits! Anna is keen to work with students who want to design and develop gamified systems to support people with disabilities or physical or mental illness, or those who are interested in multimodal experiences.

    Research themes: Accessibility; Multimodal and multisensory systems; Research methods

    Email: anna.bramwell-dicks@york.ac.uk
    LinkedIn: https://www.linkedin.com/in/anna-bramwell-dicks-2b941a28/

    Themes: Accessibility; Applied Games; Design & Development; Game Audio; Player Research

  • Daniel Berio

    Dr Daniel Berio — Goldsmiths, iGGi Alum

    AutoGraff: A Procedural Model of Graffiti Form (industry placement: Media Molecule)

    The purpose of this study is to investigate techniques for the procedural and interactive generation of synthetic instances of graffiti art. Considering graffiti as a special case of the calligraphic tradition, I propose a “movement centric” alternative to traditional curve generation techniques, in which a curve is defined through a physiologically plausible simulation of the (human) movement underlying its production rather than by an explicit definition of its geometry. In my thesis, I consider both single traces left by a brush (in a series of strokes) and the extension to 2D shapes (representing deformed letters in a large variety of artistic styles). I demonstrate how this approach is useful in a number of settings, including computer-aided design (CAD), procedural content generation for virtual environments in games and movies, computer animation, and the smooth control of robotic drawing devices.

    Daniel Berio is a researcher and artist from Florence, Italy. From a young age Daniel was actively involved in the international graffiti art scene. In parallel he developed a professional career, initially as a graphic designer and later as a graphics programmer in video games, multimedia and audio-visual software. In 2013 he obtained a Master’s degree from the Royal Academy of Art in The Hague (Netherlands), where he developed drawing machines and installations materialising graffiti-inspired procedural forms. Today Daniel is continuing his research into the procedural generation of graffiti within the iGGi (Intelligent Games and Game Intelligence) PhD programme at Goldsmiths, University of London.

    Featured publication(s):
    - Optimality Principles in the Procedural Generation of Graffiti Style
    - SURFACE: Xbox Controlled Hot-wire Foam Cutter
    - The role of image characteristics and embodiment in the evaluation of graffiti
    - Emergence in the Expressive Machine
    - The CyberAnthill: A Computational Sculpture
    - Sketch-Based Modeling of Parametric Shapes
    - Artistic Sketching for Expressive Coding
    - Calligraphic stylisation learning with a physiologically plausible model of movement and recurrent neural networks
    - Sequence generation with a physiologically plausible model of handwriting and Recurrent Mixture Density Networks
    - AutoGraff: Towards a computational understanding of graffiti writing and related art forms
    - Kinematics reconstruction of static calligraphic traces from curvilinear shape features
    - Interactive generation of calligraphic trajectories from Gaussian mixtures
    - Sketching and Layering Graffiti Primitives
    - Kinematic Reconstruction of Calligraphic Traces from Shape Features
    - Expressive curve editing with the sigma lognormal model
    - Dynamic graffiti stylisation with stochastic optimal control
    - Computer aided design of handwriting trajectories with the kinematic theory of rapid human movements
    - Generating calligraphic trajectories with model predictive control
    - Learning dynamic graffiti strokes with a compliant robot
    - Computational models for the analysis and synthesis of graffiti tag strokes
    - Towards human-robot gesture recognition using point-based medialness
    - Transhuman Expression: Human-Machine Interaction as a Neutral Base for a New Artistic and Creative Practice

    Themes: Game AI
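The “movement centric” idea builds on the kinematic theory of rapid human movements, in which a stroke's speed profile is lognormal in time and its direction turns smoothly as the stroke completes (the sigma-lognormal model named in the publication list above). The sketch below generates a single such stroke; the parameter values are illustrative defaults, not values from the thesis.

```python
import math
import numpy as np

def sigma_lognormal_stroke(D=1.0, t0=0.0, mu=-1.5, sigma=0.3,
                           theta_s=0.0, theta_e=math.pi / 4,
                           n=500, T=2.0):
    """Generate one 2-D stroke from a sigma-lognormal speed profile.

    The speed |v(t)| is a lognormal bump of total displacement D, and the
    pen direction interpolates from theta_s to theta_e in proportion to
    the fraction of the stroke completed (the lognormal CDF).
    Parameter values are illustrative only.
    """
    t = np.linspace(t0 + 1e-3, t0 + T, n)
    dt = np.maximum(t - t0, 1e-9)

    # Lognormal speed profile: integrates (approximately) to D over [t0, t0+T].
    speed = D / (sigma * dt * math.sqrt(2 * math.pi)) * np.exp(
        -(np.log(dt) - mu) ** 2 / (2 * sigma ** 2))

    # Fraction of the stroke completed so far (lognormal CDF, via erf).
    frac = np.array([
        0.5 * (1 + math.erf((math.log(d) - mu) / (sigma * math.sqrt(2))))
        for d in dt])
    theta = theta_s + (theta_e - theta_s) * frac

    # Integrate velocity to obtain the pen trajectory.
    step = T / n
    x = np.cumsum(speed * np.cos(theta)) * step
    y = np.cumsum(speed * np.sin(theta)) * step
    return np.stack([x, y], axis=1)

trace = sigma_lognormal_stroke()
```

Summing several such strokes with different parameters yields smooth, handwriting-like trajectories, which is what makes the approach attractive for both curve design and robotic drawing.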

  • Ryan Spick

    Dr Ryan Spick — University of York, iGGi Alum

    Deep Learning for Procedural Content Generation in Virtual Environments

    Ryan Spick is a PhD student with a computer science background, working on methods to improve how content (models, terrain, assets, etc.) is created autonomously, with a main focus on generative deep learning: augmenting real-world data through a series of neural network layers to learn the underlying properties of those data. Ryan has published a variety of papers around his main topic of generating content – such as terrain generation using generative adversarial networks and 3D voxel coloured model generation – as well as collaborations on other topics using deep learning, such as death prediction in a multiplayer online game and applying a recent MAP-Elites algorithm. He has also worked with several leading industry researchers and games companies to further develop his research skills. If you have any ideas or collaboration opportunities, please get in contact through any of the mediums below.

    Email: ryan.spick@hotmail.co.uk
    Website: https://www.rjspick.com/
    LinkedIn: https://www.linkedin.com/in/ryan-spick-505b63131/

    Featured publication(s):
    - System and Method for Point Cloud Generation
    - System and method for training a machine learning model
    - Robust Imitation Learning for Automated Game Testing
    - Behavioural Cloning in VizDoom
    - Illuminating Game Space Using MAP-Elites for Assisting Video Game Design
    - Utilising VIPER for Parameter Space Exploration in Agent Based Wealth Distribution Models
    - Human Point Cloud Generation using Deep Learning
    - Naive mesh-to-mesh coloured model generation using 3D GANs
    - Realistic and textured terrain generation using GANs
    - Procedural Generation using Spatial GANs for Region-Specific Learning of Elevation Data
    - Deep Learning for Wave Height Classification in Satellite Images for Offshore Wind Access
    - Time to die: Death prediction in Dota 2 using deep learning

    Themes: Game AI
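The spatial-GAN terrain work listed above relies on a structural property worth making concrete: a fully convolutional generator maps a *grid* of latent vectors to a heightmap, so a larger latent grid yields a larger terrain without retraining. The sketch below illustrates only that property, with untrained random weights and a naive single-channel convolution; it is not the published model, and all names are invented for this example.

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive single-channel 'valid' 2-D convolution, for illustration only."""
    kh, kw = k.shape
    H, W = x.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def spatial_generator(z_grid, kernel):
    """Stand-in for a fully convolutional (spatial GAN) generator: because it
    is convolutional, a bigger latent grid yields a bigger heightmap, which
    is what makes arbitrarily sized terrain generation possible."""
    return np.tanh(conv2d_valid(z_grid, kernel))  # heights bounded in [-1, 1]

rng = np.random.default_rng(1)
kernel = rng.standard_normal((3, 3)) * 0.5   # untrained weights, illustrative
small = spatial_generator(rng.standard_normal((8, 8)), kernel)    # 6x6 map
large = spatial_generator(rng.standard_normal((16, 16)), kernel)  # 14x14 map
```

In the trained setting the adversarial loss is what makes the output resemble real elevation data; the point here is only the size-agnostic generator architecture.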

  • Luiza Gossian

    Luiza Gossian — Queen Mary University of London, iGGi PG Researcher (available for placement)

    Luiza is a multidisciplinary researcher, game designer and developer interested in translating real-world concepts into engaging game mechanics. She is passionate about creating games that can encourage an understanding of ourselves and the socially connected world we live in. Luiza is also an experienced painter, graphic designer and photographer, and uses her visual skills and psychology background to prototype experimental game designs, design game documentation and craft atmospheric experiences.

    A description of Luiza’s research: How can a subject as serious as genocide be successfully and respectfully translated into a casual game? Difficult subjects are often implemented with polar opposite approaches in games: either they are made into highly emotional, socially conscious games that portray the gravity of a situation, yet are only played by those already informed and aware; or they are pure entertainment games that turn these subjects into wild amusement parks that appeal to broader gamer audiences yet do nothing to appropriately address the themes they glorify. Within this polarity there exists the potential to create games that tackle more serious subjects in a way that is more lighthearted and entertaining, and therefore more likely to reach the audiences who stand to gain the most. In her research, Luiza is exploring how to design games about genocide that break away from traditional approaches and embrace the ludic potential of games. Drawing on theories of intergroup and cultural psychology, as well as her own experiences, she is exploring how these difficult themes can be treated in engaging, effective and informative ways.

    Currently, she is developing a hypercasual game that abstracts the ten stages of genocide to be used as an educational primer, a Tetris-esque game that uses social media and government sources to present the realities of refugees fleeing their homes, and a cosy mystery-adventure game which enables players to uncover historical crimes in a faraway land.

    Email: l.gossian@outlook.com
    Website: http://www.gossianblurs.com/
    LinkedIn: https://www.linkedin.com/in/lu-goss/
    BlueSky: https://bsky.app/profile/lugossian.bsky.social
    Supervisors: Prof. Sebastian Deterding; Dr Anne Hsu

    Themes: Applied Games; Design & Development

  • James Goodman

    Dr James Goodman — Queen Mary University of London, iGGi Alum

    James has picked up degrees in Chemistry, History, Mathematics, Business Administration and Machine Learning. After a career in consultancy and IT project management he is now finally doing the research he always wanted to do. James is interested in opponent modelling, theory of mind and strategic communication in multi-player games, and in how statistical forward planning can be used in modern tabletop board games (or other turn-based environments). With a constrained budget, how much time should an agent spend thinking about its own plan versus thinking about what other players might be doing to get in the way? How does this balance vary across different games? His secondary research interests are in using AI playtesting as a tool for game balancing and game design.

    Email: james.goodman@qmul.ac.uk
    Website: https://www.tabletopgames.ai/
    LinkedIn: https://www.linkedin.com/in/james-goodman-b388791/
    Supervisors: Dr Diego Pérez-Liébana; Prof. Simon Lucas

    Featured publication(s):
    - Seeding for Success: Skill and Stochasticity in Tabletop Games
    - From Code to Play: Benchmarking Program Search for Games Using Large Language Models
    - Skill Depth in Tabletop Board Games
    - Measuring Randomness in Tabletop Games
    - A case study in AI-assisted board game design
    - Following the leader in multiplayer tabletop games
    - PyTAG: Challenges and Opportunities for Reinforcement Learning in Tabletop Games
    - MultiTree MCTS in Tabletop Games
    - Visualizing Multiplayer Game Spaces
    - TAG: Terraforming Mars
    - Fingerprinting tabletop games
    - AI and Wargaming
    - Metagame Autobalancing for Competitive Multiplayer Games
    - Does it matter how well I know what you’re thinking? Opponent Modelling in an RTS game
    - Weighting NTBEA for game AI optimisation
    - Re-determinizing MCTS in Hanabi
    - Noise reduction and targeted exploration in imitation learning for abstract meaning representation parsing
    - UCL+Sheffield at SemEval-2016 Task 8: Imitation learning for AMR parsing with an alpha-bound

    Themes: Design & Development; Game AI
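The budget trade-off described above — thinking about your own plan versus modelling the opponent — can be made concrete with a toy forward-planning loop. The sketch below is an illustration of the trade-off only, not an algorithm from James's papers: `payoff`, the move sets, and the empirical opponent model are all invented for this example.

```python
import random

def plan_with_budget(total_iters, opponent_share, my_moves, opp_moves, payoff):
    """Split a fixed simulation budget between modelling the opponent and
    evaluating our own moves. `payoff(my, opp)` scores an outcome for us;
    the opponent is assumed to minimise it. Purely illustrative.
    """
    opp_iters = int(total_iters * opponent_share)
    my_iters = total_iters - opp_iters

    # Phase 1: spend part of the budget estimating the opponent's preference.
    opp_scores = {o: 0.0 for o in opp_moves}
    for _ in range(opp_iters):
        m = random.choice(my_moves)
        o = random.choice(opp_moves)
        opp_scores[o] -= payoff(m, o)        # opponent likes what hurts us
    predicted_opp = max(opp_scores, key=opp_scores.get)

    # Phase 2: spend the rest evaluating our moves against that prediction.
    my_scores = {m: 0.0 for m in my_moves}
    for _ in range(my_iters):
        m = random.choice(my_moves)
        my_scores[m] += payoff(m, predicted_opp)
    return max(my_scores, key=my_scores.get)

# Toy game: move 'a' is great unless the opponent plays 'y', in which case
# the safe move 'b' is better. Modelling the opponent changes our choice.
payoff = lambda m, o: {('a', 'x'): 1.0, ('a', 'y'): -1.0,
                       ('b', 'x'): 0.0, ('b', 'y'): 0.5}[(m, o)]
random.seed(0)
choice = plan_with_budget(2000, 0.5, ['a', 'b'], ['x', 'y'], payoff)
```

With half the budget on opponent modelling, the agent predicts the spoiling move 'y' and hedges with 'b'; with `opponent_share=0`, the prediction degrades and the choice can flip — which is exactly the balance question posed above.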

  • Dr Patrik Huber

    Dr Patrik Huber — University of York, Supervisor

    Patrik Huber is a researcher, developer and entrepreneur working on 3D face reconstruction and face analysis in images and videos using 3D face models. He is a Lecturer (Assistant Professor) in Computer Vision in the Department of Computer Science at the University of York, UK, and the founder of 4dface.io, a small start-up specialising in 3D face models and realistic 3D face avatars for professional applications. His research is focused on computer vision; in particular, he is interested in the question of how to robustly obtain a metrically accurate, pose-invariant 3D representation of a face from 2D images and videos. He is interested in face tracking, 3D face modelling, analysis and synthesis, metrically accurate 3D face shape reconstruction, inverse rendering, and combining deep learning with 3D face models. Patrik is particularly interested in supervising students with a strong background and interest in computer vision, machine learning, computer graphics, and modern C++/Python, on topics related to creating 3D face avatars of players for immersive playing and social experiences, and to using face analytics for professional e-sports.

    Research themes: 3D face avatars for games; AR/VR; Serious games and social interaction; Immersive 3D player experiences; Game Analytics; Games with a Purpose; E-Sports

    Email: patrik.huber@york.ac.uk
    Website: https://www.patrikhuber.ch/
    LinkedIn: https://www.linkedin.com/in/patrik-huber/
    Github: https://github.com/patrikhuber

    Themes: Applied Games; Esports; Game Data; Immersive Technology; Player Research


Copyright © 2023 iGGi

Privacy Policy

The EPSRC Centre for Doctoral Training in Intelligent Games and Game Intelligence (iGGi) is a leading PhD research programme aimed at the Games and Creative Industries.
