(pronounced: Geet-aan-juh-lee)
I work on problems at the intersection of Natural Language Processing, Trust & Safety, and Explainable AI. My goal is to contribute to reliable and interpretable AI frameworks for safer digital spaces.
I am currently a Visiting Scholar in the School of Applied and Creative Computing at Purdue, working on computational modeling of exploitative discourse. My focus is understanding how language models encode context and uncertainty when handling such discourse. I also maintain active research collaborations with the AKRaNLU Lab and the GAURD research group.
I completed my Ph.D. in Summer 2025, where my research focused on language models for detecting online child grooming. I also explored broader challenges in language model reasoning, including bias amplification from alignment, Clever Hans phenomena, and word sense disambiguation.
You can learn more about my work on my Google Scholar profile.
Recent Updates