Samyadeep Basu · Research Scientist
Adobe Research · San Jose, CA

Samyadeep Basu.

Research scientist working on multimodal language models, vision-language models, and model controllability. Currently a key contributor to the post-training stack for SLMs and VLMs at Adobe Research, shipping models that power grounding and retrieval in Adobe products.

1000+ citations · h-index 14 · 15+ top-tier publications · 2 granted patents
01 / About · A note on the work
Bio

I'm a Research Scientist at Adobe Research, where I'm a key researcher on the post-training stack for small language models and vision-language models. My team's models serve grounding and retrieval use cases across Adobe products. I also contribute to continuous image-editing models; see SliderEdit.

I completed my PhD at the University of Maryland with Soheil Feizi in the Center for Machine Learning. Before that, I spent two years at Microsoft AI as an Applied Scientist, with research stints at Microsoft Research (Cambridge & Redmond) and prior internships at Adobe.

My research sits at the intersection of understanding and control: how knowledge is stored and transferred inside multimodal models, and how to steer or edit those models with minimal intervention. Recent work spans mechanistic interpretability, post-training for VLMs, and large-scale reinforcement learning.

15+
Top-tier papers
1000+
Citations
14
h-index
7
Patents (2 granted)
02 / News · What's recent
Mar 2026
Oral · CVPR · SliderEdit accepted as an Oral at CVPR 2026: continuous image editing with fine-grained instruction control.
Feb 2026
EACL · Decomposition-Enhanced Training for Post-Hoc Attributions accepted to EACL 2026.
Nov 2025
EMNLP · Paper on bias and chain-of-thought faithfulness in large vision-language models accepted to EMNLP 2025.
Sep 2025
NeurIPS · Localizing Knowledge in Diffusion Transformers accepted to NeurIPS 2025.
Jun 2025
Adobe · Joined Adobe Research as a full-time Research Scientist working on language modeling and multimodal projects.
May 2025
COLM · Paper on Mechanistic Circuits for Extractive Question-Answering accepted to COLM 2025.
Jan 2025
ICLR · Rethinking Copyright Infringements in the Era of Text-to-Image Models accepted to ICLR 2025.
Sep 2024
NeurIPS · Two papers on mechanistic interpretability accepted to NeurIPS 2024; two more at EMNLP 2024.
03 / Selected Publications · Recent work
2026
SliderEdit: Continuous Image Editing with Fine-Grained Instruction Control
A. Zarei, S. Basu, M. Pournemat, S. Nag, S. Feizi
CVPR 2026 · Oral
2026
Decomposition-Enhanced Training for Post-Hoc Attributions
S. Balasubramaniam, S. Basu, K. Goswami, S. Feizi, R. Rossi, V. Manjunatha, N. Lipka
EACL 2026
2026
W. Wei, H. Yang, T. Yang, S. Basu, H. Chen, R. Rossi, H. Eldardiry
Under review · COLM
2025
Localizing Knowledge in Diffusion Transformers
A. Zarei, S. Basu, K. Rezaei, S. Nag, S. Feizi
NeurIPS 2025
2025
On Mechanistic Circuits for Extractive Question-Answering
S. Basu, V. Morariu, R. Rossi, N. Zhao, Z. Wang, S. Feizi, V. Manjunatha
COLM 2025
2025
Rethinking Copyright Infringements in the Era of Text-to-Image Models
M. Moayeri, S. Basu, S. Balasubramaniam, P. Kattakinda, R. Brauneis, S. Feizi
ICLR 2025
2024
S. Basu, M. Grayson, C. Morrison, B. Nushi, S. Feizi, D. Massiceti
NeurIPS 2024
2024
S. Balasubramaniam, S. Basu, S. Feizi
NeurIPS 2024
2024
S. Basu, K. Rezaei, V. Morariu, C. Zhao, R. Rossi, V. Manjunatha, S. Feizi
ICML 2024
2024
S. Basu, V. Morariu, C. Zhao, S. Feizi, V. Manjunatha
ICLR 2024
2024
S. Basu, M. Sanjabi, S. Hu, D. Massiceti, S. Feizi
EMNLP 2024
2024
S. Basu, S. Hu, D. Massiceti, S. Feizi
AAAI 2024
2023
S. Basu, M. Stanley, J. Bronskill, S. Feizi, D. Massiceti
ICLR 2023
2021
S. Basu, P. Pope, S. Feizi
ICLR 2021
· · ·

Full publication list on Google Scholar

04 / Experience · Trajectory
2025 — Present
Research Scientist
Adobe Research, San Jose
Key researcher on the mid-training and post-training stack for SLMs and vision-language models. Shipped models powering grounding and retrieval in Adobe products. Developed fast-inference techniques that cut latency by 10×.
2022 — 2025
Ph.D., Computer Science
University of Maryland, College Park · with Prof. Soheil Feizi
Reliable deep learning: understanding models through data, and controlling generative and discriminative models with lightweight model editing and fine-tuning.
May — Nov 2024
Research Intern
Adobe Research, San Jose
Extracted mechanistic circuits for context-augmented language models.
Jan — May 2024
Research Intern
Microsoft Research, Redmond
Designed interpretability and model editing methods for multimodal language models.
May — Dec 2023
Research Intern
Adobe Research, Maryland
Designed interpretability and fast editing methods for text-to-image models.
Jun — Aug 2022
Research Intern
Microsoft Research Cambridge, UK
Developed FastDiffSel, an algorithm for extracting difficult few-shot tasks from large datasets.
2020 — 2022
Applied Scientist
Microsoft AI
Large-scale language model training with Azure AI and MSAI. Worked on the Language Science team, building, training, and deploying LMs across enterprise scenarios.
2018 — 2020
M.S., Computer Science
University of Maryland, College Park · with Prof. Soheil Feizi
Early research on influence functions in deep learning, leading to ICLR 2021 and ICML 2020 publications.
05 / Contact · Get in touch
Reach out

Always happy to discuss research on multimodal models, post-training, interpretability, and the systems work that supports them. Reach out for collaborations, mentorship, or just to chat about ideas.

→ samyadeepb@gmail.com → Google Scholar → GitHub → Twitter / X → Curriculum Vitae