Linguistics faculty and graduate students recently participated in UGA's AI Research Day 2022 on November 17. The day consisted of a keynote address, lightning presentations by members of the UGA community engaged in AI research, a discussion panel, and research posters describing the work of UGA students. Linguistics graduate students Austin Brailey-Jones and Sara E. Miller (with Peggy Renwick) won first place with their poster "Application of Deep Learning to the Classification of Palatalized [t] in UK English," and Donald Dunagan and Shulin Zhang (with Maximin Coavoux, Shohini Bhattasali, Jixing Li, Jonathan Brennan, and John Hale) won third place with their poster "Long-distance linguistic dependencies in Chinese and English brains." Congratulations to all of the winners in this year's AI Research Day!

Application of Deep Learning to the Classification of Palatalized [t] in UK English
Austin Brailey-Jones, Sara E. Miller, Margaret Renwick
Department of Linguistics, University of Georgia, USA

Feed-forward neural networks can classify acoustic data and offer a possible alternative to forced alignment, which can overlook phonetic variation. Here we explore the application of deep learning to the classification of phonetic variation. Model training data were taken from the Audio British National Corpus and annotated to produce gold-standard tokens. MFCC features were used to represent palatalization of [t] to [t͡ʃ] in naturalistic speech across word boundaries preceding /ju/, "you". The model performs at 90% accuracy relative to the gold-standard judgments. On novel test data, the model achieves 70%-100% success.
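To make the first poster's approach concrete, here is a minimal sketch of classifying MFCC feature vectors with a feed-forward network. The synthetic feature vectors, the 13-coefficient dimensionality, and the scikit-learn architecture below are all illustrative assumptions, not the poster's actual corpus, features, or model.

```python
# Sketch: feed-forward classification of MFCC-style feature vectors.
# The random "MFCC" vectors are placeholders for real features taken
# from annotated speech tokens; corpus and architecture are assumed.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_tokens, n_mfcc = 400, 13  # 13 MFCCs per token is a common choice

# Two synthetic classes, plain [t] vs. palatalized [t͡ʃ], separated by
# a shift in the mean feature vector so the toy task is learnable.
X_t  = rng.normal(loc=0.0, scale=1.0, size=(n_tokens, n_mfcc))
X_tS = rng.normal(loc=1.5, scale=1.0, size=(n_tokens, n_mfcc))
X = np.vstack([X_t, X_tS])
y = np.array([0] * n_tokens + [1] * n_tokens)  # 0 = [t], 1 = [t͡ʃ]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)  # held-out classification accuracy
```

In a real pipeline the feature matrix would come from MFCC extraction over hand-annotated gold-standard tokens rather than random draws, but the train/evaluate structure is the same.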
Long-distance linguistic dependencies in Chinese and English brains
Donald Dunagan, Maximin Coavoux, Shulin Zhang, Shohini Bhattasali, Jixing Li, Jonathan Brennan, John Hale
University of Georgia, Université Grenoble Alpes, University of Toronto Scarborough, City University of Hong Kong, University of Michigan

Words can occur arbitrarily far from where they contribute their meaning in a sentence. Two examples are WH questions (WHQs), which begin with a WH-word like 'what', and object-extracted relative clauses (ORCs), in which a noun is modified by a sentence-like grammatical unit. While these long-distance dependencies have been extensively studied, their brain bases have never before been examined from a multilingual, naturalistic perspective. This study fills that gap by analyzing WHQs and ORCs in fMRI data collected while 35 Chinese participants (15 female) and 49 English participants (30 female) listened to translation-equivalent stories. These languages exhibit radical typological differences in word order in these constructions, and it remains unknown whether the brain basis for comprehension in them is similar or different. Separate general linear model analyses were performed for each language, and voxel-level intersections were calculated between the results to identify common regions of selectively increased activation during comprehension of these constructions. Further Bayesian region-of-interest analyses probed whether the common increases were truly similar. We found remarkable cross-linguistic commonality for both constructions. WHQs were associated with increased activation in the left middle and superior temporal lobe, left temporoparietal junction, left inferior frontal gyrus, and bilateral medial frontal lobe. ORCs were associated with increased activation in the left middle temporal lobe, left inferior frontal gyrus, bilateral angular gyrus, bilateral posterior cingulate, bilateral precuneus, and left medial frontal lobe.
These results support the hypothesis that, regardless of form, the brain bases of higher-level language processing are uniform across languages.
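The voxel-level intersection step the abstract describes can be sketched in a few lines: threshold each group's statistical map, then keep the voxels that survive in both. The toy grid, random z-maps, and threshold below are invented placeholders, not the study's data or its actual correction procedure.

```python
# Sketch of a voxel-level intersection analysis: find voxels where
# both the Chinese-group and English-group statistical maps show
# supra-threshold activation. All values here are toy placeholders.
import numpy as np

rng = np.random.default_rng(1)
shape = (4, 4, 4)                   # toy voxel grid, not a real brain
z_chinese = rng.normal(size=shape)  # stand-in group-level z-maps
z_english = rng.normal(size=shape)

z_thresh = 1.65  # e.g. one-sided p < .05, uncorrected (an assumption)

# Boolean masks of supra-threshold voxels, then their intersection:
mask_zh = z_chinese > z_thresh
mask_en = z_english > z_thresh
common = mask_zh & mask_en  # voxels selectively active in both groups

n_common = int(common.sum())
```

The resulting mask identifies candidate common regions; the further Bayesian region-of-interest analyses mentioned above would then test whether the magnitude of the increases is genuinely similar across the two groups, rather than merely both exceeding a threshold.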