About me
Today, we work.
👋 Hello there! I am Soham (so-hum). Pleased to meet you!
I am a graduate student at the Language Technologies Institute (LTI), Carnegie Mellon University.
My interests and past work lie in
- Large Language Models
- Multimodal Machine Learning
- Audio-based interactions
- Natural Language Processing
- Reinforcement Learning
- Artificial Social Intelligence
I have presented my research at workshops and conferences such as ACL, ISLS, APSIPA, and NeurIPS.
My work at CMU focuses on interactive intelligence: I am building an on-device, multimodal virtual teaching assistant, advised by Prof Carolyn Rose.
My internship project at Apple investigated how to run LLMs on-device. I built a JAX-based online model-distillation framework for distilling LLMs across distributed GPUs and identified limitations of the popular TinyBERT distillation algorithm when applied to decoder-style LLMs.
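For readers unfamiliar with distillation, here is a minimal sketch of the general idea (not the internship code): the student model is trained to match the teacher's temperature-softened output distribution via a KL-divergence loss. The function and variable names are hypothetical placeholders.

```python
import jax
import jax.numpy as jnp

def distill_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions."""
    t_logprobs = jax.nn.log_softmax(teacher_logits / temperature, axis=-1)
    s_logprobs = jax.nn.log_softmax(student_logits / temperature, axis=-1)
    # Soft-target KL; scaling by T^2 keeps gradient magnitudes comparable
    # across temperatures (standard Hinton-style distillation).
    kl = jnp.sum(jnp.exp(t_logprobs) * (t_logprobs - s_logprobs), axis=-1)
    return (temperature ** 2) * jnp.mean(kl)

# Toy example: a batch of 4 tokens over a 10-way vocabulary.
teacher = jax.random.normal(jax.random.PRNGKey(0), (4, 10))
student = jax.random.normal(jax.random.PRNGKey(1), (4, 10))
print(distill_loss(student, teacher))
```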
I have also worked with Prof Chng Eng Siong at NTU Singapore on improving speech and audio representations and on a curriculum-learning approach to improve model convergence.
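As a rough illustration of curriculum learning (a sketch of the general technique, not the NTU project code), training examples are ordered from "easy" to "hard" and the pool the sampler may draw from grows over training. Here audio duration is used as a hypothetical difficulty proxy.

```python
import jax
import jax.numpy as jnp

def curriculum_subset(difficulties, step, total_steps, min_frac=0.2):
    """Indices of examples the sampler may draw from at this training step."""
    order = jnp.argsort(difficulties)                    # easiest examples first
    frac = min_frac + (1.0 - min_frac) * (step / total_steps)
    cutoff = max(1, int(frac * difficulties.shape[0]))   # pool grows over training
    return order[:cutoff]

# Toy example: 10 utterances with random durations as the difficulty score.
durations = jax.random.uniform(jax.random.PRNGKey(0), (10,), minval=1.0, maxval=20.0)
print(curriculum_subset(durations, step=100, total_steps=1000))
```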
My current aim is to keep learning, deepen and broaden my expertise, and use that knowledge to research and build applications that deliver pleasant end-user experiences. I also value teamwork and collaborative progress, which I believe are essential to my goal of creating meaningful technology that makes everyone’s lives better ☀️.
Happiness can be found, even in the darkest of times, if one only remembers to turn on the light.