
Peyman Hosseini

Queen Mary University of London

iGGi PG Researcher

Available for placement

Peyman is interested in using his computer science knowledge to support society's well-being. Raised in a family where almost everyone's work is in some way related to mathematics and its applications, he became passionate about algorithms and combinatorics from an early age. This prompted him to pursue an undergraduate degree in computer engineering with a focus on IT and AI. That background led him to start his PhD at iGGi on building more powerful yet efficient natural language processing models for analysing textual data, a rich and abundant source of gaming feedback.



A description of Peyman's research:


Peyman's research focuses on advancing deep learning architectures for natural language processing and on building tools on top of state-of-the-art models. To contribute to both the fundamental understanding and the practical application of deep learning in NLP, with a focus on efficiency and effectiveness, he pursues two main objectives:

  1. Designing more efficient models that match or surpass state-of-the-art performance with fewer parameters. 

  2. Systematically analyzing language models to develop solutions that enhance their effectiveness for end-users, such as game studios.  

His recent accomplishments towards these goals include: 

  1. Developing novel attention mechanisms:

     • Optimized Attention: 25% parameter reduction

     • Efficient Attention: 50% parameter reduction

     • Super Attention: 25% parameter reduction with significant performance improvements in language and vision tasks

     All three mechanisms demonstrate comparable or superior performance to standard attention across various inputs.

  2. Designing and training Hummingbird, a proof-of-concept small language model using Efficient Attention, available on HuggingFace.

  3. Conducting a study of large language models' limitations in analysing lengthy reviews for basic NLP tasks. His proposed solutions offer substantial performance improvements while cutting API costs by more than 90%.
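The parameter-reduction figures above can be made concrete with a back-of-the-envelope count. The sketch below is only an illustration of the arithmetic, assuming square d_model × d_model projection matrices and ignoring biases; it does not reproduce the actual Optimized, Efficient, or Super Attention mechanisms, just the accounting behind figures like "25%" and "50%".

```python
def attention_param_count(d_model, n_proj):
    """Parameter count for an attention block built from n_proj square
    d_model x d_model projection matrices (biases ignored)."""
    return n_proj * d_model * d_model

d = 512

# A standard attention block uses four projections: W_Q, W_K, W_V, W_O.
standard = attention_param_count(d, 4)

# Removing one projection (a hypothetical simplification) cuts parameters
# by 25%; removing two cuts them by 50% -- the same ballpark as the
# reductions quoted above.
three_proj = attention_param_count(d, 3)
two_proj = attention_param_count(d, 2)

print(1 - three_proj / standard)  # 0.25
print(1 - two_proj / standard)    # 0.5
```

Since each projection contributes the same d_model² parameters, every matrix removed or shared shaves off a fixed 25% of the block, which is why the quoted reductions come in such round numbers.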

Mastodon

Other links

Website

LinkedIn

Twitter

Github

