Page Not Found
Page not found. Your pixels are in another canvas.
A list of all the posts and pages found on the site. For the robots out there, an XML version is available for digesting as well.
This is a page not in the main menu.
Published:
This post will show up by default. To disable scheduling of future posts, edit config.yml and set future: false.
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Short description of portfolio item number 1
Short description of portfolio item number 2
In this work we develop a speech inversion system to predict vocal tract parameters from the cortical features of acoustic speech. We demonstrate that the cortical features are correlated with the vocal tract parameters, highlighting that the auditory theory of speech perception is linked to the motor theory of speech production.
Recommended citation: Parikh, R., Seneviratne, N., Sivaraman, G., Shamma, S., Espy-Wilson, C. (2022) Acoustic To Articulatory Speech Inversion Using Multi-Resolution Spectro-Temporal Representations Of Speech Signals. Proc. Interspeech 2022, 4681-4685, doi: 10.21437/Interspeech.2022-10926 https://www.isca-archive.org/interspeech_2022/parikh22b_interspeech.pdf
In this work we demonstrate that deep neural network based end-to-end speech segregation models rely on the harmonic structure of speech to group and segregate sources. We show that these networks completely fail to separate inharmonic sources, and that they are unable to learn to segregate speech when trained on mixtures of inharmonic speech.
Recommended citation: R. Parikh, I. Kavalerov, C. Espy-Wilson and S. Shamma, "Harmonicity Plays a Critical Role in DNN Based Versus in Biologically-Inspired Monaural Speech Segregation Systems," ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, Singapore, 2022, pp. 536-540, doi: 10.1109/ICASSP43922.2022.9747314. https://ieeexplore.ieee.org/abstract/document/9747314
In this work we demonstrate a white-box model inversion attack on Natural Language Understanding models. We show that an adversary can obtain sensitive information from the training data if given access to the model parameters.
Recommended citation: Rahil Parikh, Christophe Dupuy, and Rahul Gupta. 2022. Canary Extraction in Natural Language Understanding Models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, pages 552–560, Dublin, Ireland. Association for Computational Linguistics. https://aclanthology.org/2022.acl-short.61.pdf
This work explores the relationship between Acoustic Event Tagging (AET) and Acoustic Scene Classification (ASC) in a multi-task learning framework. Through extensive empirical analysis, we demonstrate that using AET as an auxiliary task improves ASC performance through regularization, regardless of the event types or dataset size.
Recommended citation: Parikh, R., Sundar, H., Sun, M., Wang, C., Matsoukas, S. (2022) Impact of Acoustic Event Tagging on Scene Classification in a Multi-Task Learning Framework. Proc. Interspeech 2022, 4192-4196, doi: 10.21437/Interspeech.2022-10905 https://www.isca-archive.org/interspeech_2022/parikh22_interspeech.pdf
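The multi-task setup described above can be sketched as a joint objective: a primary scene-classification loss plus a weighted event-tagging auxiliary term. This is a minimal illustrative sketch; the function names and the 0.5 auxiliary weight are assumptions for illustration, not the values used in the paper.

```python
import math

def cross_entropy(probs, label):
    """Negative log-likelihood of the true class label."""
    return -math.log(probs[label])

def multitask_loss(asc_probs, asc_label, aet_probs, aet_labels, weight=0.5):
    """Joint loss for ASC (primary, single-label) with AET as an
    auxiliary task. AET is multi-label, so its term sums binary
    cross-entropies over the event predictions."""
    primary = cross_entropy(asc_probs, asc_label)
    auxiliary = sum(
        -math.log(p) if y else -math.log(1 - p)
        for p, y in zip(aet_probs, aet_labels)
    )
    # The auxiliary term acts as a regularizer on the shared encoder.
    return primary + weight * auxiliary
```

In this framing, the auxiliary AET loss constrains the shared representation rather than being optimized for its own sake, which is consistent with the paper's finding that the improvement behaves like regularization.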
This work introduces a prompt-tuning method to control the extraction of memorized content from LLMs, demonstrating both attack and defense strategies. Using GPT-Neo models, the authors show that their attack increases extraction rates by 9.3 percentage points, while their defense reduces extraction by up to 97.7% with minimal impact on model utility.
Recommended citation: Mustafa Ozdayi, Charith Peris, Jack FitzGerald, Christophe Dupuy, Jimit Majmudar, Haidar Khan, Rahil Parikh, and Rahul Gupta. 2023. Controlling the Extraction of Memorized Data from Large Language Models via Prompt-Tuning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 1512–1521, Toronto, Canada. Association for Computational Linguistics. https://aclanthology.org/2023.acl-short.129/
This work, in collaboration with NASWA, leverages generative AI to extract real-time labor market data from online job postings, helping policymakers better understand trends and characteristics like education, remote work, and benefits across occupations. The findings aim to fill data gaps and inform labor, workforce, and economic policy.
Recommended citation: Mark Howison, William O. Ensor, Suraj Maharjan, Rahil Parikh, Srinivasan H. Sengamedu, Paul Daniels, Amber Gaither, Carrie Yeats, Chandan K. Reddy, and Justine S. Hastings. 2024. Extracting Structured Labor Market Information from Job Postings with Generative AI. Digit. Gov.: Res. Pract. Just Accepted (July 2024). https://dl.acm.org/doi/abs/10.1145/3674847
Published:
This is a description of your talk, which is a markdown file that can be formatted like any other post. Yay markdown!
Published:
This is a description of your conference proceedings talk; note the different value in the type field. You can put anything in this field.
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.