JWLLP-34 (2025/09): The 34th Joint Workshop on Linguistics and Language Processing
in part concurrently with
NextEdu 207: The 207th meeting of the Association for Next-Generation Higher Education

Announcements

tentative workshop program outline [as of 2025/07/15: subject to change]

day one:
2025/09/05
Friday
JWLLP-34
Concurrently
with
NextEdu-207
14:00-14:10 preparations and registration
14:10-14:15 opening remarks for NextEdu-207
Yasunari Harada
14:15-14:45 Exploring Diverse Instructional Designs for Developing Non-Cognitive Skills
Eriko Uematsu
14:45-15:15 LOGIGLISH: A new approach to concurrently acquire critical thinking and English fluency
Kaho Fujikawa
15:15-15:45 Redefining Critical Thinking in English Education
Yuya Akatsuka
15:45-15:55 break and registration
15:55-16:05 opening remarks for JWLLP-34
Chu-Ren Huang, Jong-Bok Kim and Yasunari Harada
16:05-16:40 [Featured Talk]
LLM, GenAI and Autonomous Mutual Learning of English
Miwa Morishita, Yasunari Harada and Jason S. Chang
16:40-17:15 Does 'All' Mean 'Every'? Exploring Scope Preferences of Universal Quantifiers in Italian and Mandarin Chinese
Lavinia Salicchi and Yu-Yin Hsu
17:15-17:50 Investigating the Representation of Semantic Relations of Chinese Noun-Noun Compound in Transformer-based Language Models
He Zhou, Emmanuele Chersoni and Yu-Yin Hsu
18:00-20:30 informal reception [details to be announced]
day two:
2025/09/06
Saturday
JWLLP-34
10:10-10:30 preparations and registration
10:30-11:30 [Invited Talk, online]
Revisiting the Complementizer 'that' in the Age of AI
Xian Wang
11:30-12:10 [Featured Talk]
AI-aid for Japanese Justice and Legal systems
Hiroaki Yamada
12:10-13:40 lunch at faculty restaurant
13:40-14:40 [Invited Talk]
From Sensory Modalities to Embodied Cognition: Sensory Lexicon and Corpus-Driven Studies
Chu-Ren Huang
14:40-15:40 [Invited Talk]
Linking Language Development Research to Broader Language Sciences
Reiko Mazuka
15:40-15:50 break
15:50-16:50 [Invited Talk]
On the uses of multiple wh-questions in discourse: A big data-driven approach
Jong-Bok Kim
16:50-17:50 [Invited Talk]
Over-counting of number of times?:
The vague image of modifying construction in Japanese and Taiwan Mandarin
Toshiyuki Sadanobu and Ya-Yun Cheng
18:45-20:30 dinner at a local restaurant [details to be announced]
day three:
2025/09/07
Sunday
JWLLP-34
10:00-10:15 preparations and registration
10:15-10:50 Can GPT-4 Grasp Metaphors in Political Discourse? A Case Study on ELECTION Metaphors
Jing Chen, Winnie Zeng, Emmanuele Chersoni, Kathleen Ahrens & Chu-Ren Huang
10:50-11:25 On the encoding of verb aspect in language models
Yuxi Li, Emmanuele Chersoni and Yu-Yin Hsu
11:25-12:25 [Invited Talk]
Across-Brain Networks Revealed Through Face-to-Face Social Interactions Using Hyperscanning fMRI: Eye Contact, Joint Attention, and Goal-Directed Conversation
Norihiro Sadato
12:25-12:35 closing remarks for JWLLP-34
Chu-Ren Huang, Jong-Bok Kim and Yasunari Harada

NextEdu 207: The 207th meeting of the Association for Next-Generation Higher Education

JWLLP-34 (2025/09): The 34th Joint Workshop on Linguistics and Language Processing

tentative workshop program in detail [as of 2025/07/15: subject to change]

day one:
2025/09/05
Friday
JWLLP-34
Concurrently
with
NextEdu-207
14:00-14:10 preparations and registration
14:10-14:15 opening remarks for NextEdu-207
Yasunari Harada
https://researchmap.jp/HaradaYasunari
14:15-14:45 Exploring Diverse Instructional Designs for Developing Non-Cognitive Skills
Eriko Uematsu
abstract for the talk:
This study analyzes a case of educational practice aimed at fostering non-cognitive skills, including collaboration, self-regulation, and media and financial literacy, and examines its pedagogical impact and potential. In Japan, the post-COVID-19 period has seen a rapid enhancement of ICT infrastructure in schools through the GIGA School Initiative, alongside the gradual introduction of AI-supported learning environments. In response to these shifts, the importance of non-cognitive skills, as emphasized by Heckman (2006), has been increasingly acknowledged, necessitating a redefinition of core educational competencies. The findings of this case study suggest that the cultivation of non-cognitive skills contributes to the development of leadership and dialogic communication abilities, underscoring their relevance in contemporary education.
short bio:
Dr. Eriko Uematsu, Ph.D., currently serves as a Specially Appointed Professor at Niigata University of Rehabilitation and a Research Fellow at the Institute for Information Education, Waseda University. She previously held the position of Senior Visiting Research Fellow at the Research Center for Advanced Science and Technology, the University of Tokyo, until March. Dr. Uematsu holds a doctorate in education and completed the Executive MBA program at Waseda Business School. Her research is grounded in a practice-oriented approach to educational science, with a particular focus on the application of advanced ICT in learning environments and the development of evidence-based educational innovations.
profile: https://researchmap.jp/eriko-u
14:45-15:15 LOGIGLISH: A new approach to concurrently acquire critical thinking and English fluency
Kaho Fujikawa
link to profile: requesting
abstract for the talk: requesting
15:15-15:45 Redefining Critical Thinking in English Education
Yuya Akatsuka
abstract for the talk:
Definitions of critical thinking vary widely. In the context of English education, critical reading and critical writing are often expected in secondary and higher education. However, the term “critical thinking” has frequently been misunderstood or misapplied in EFL contexts. It is commonly assumed that critical thinking simply means entertaining opposing viewpoints, but does this truly fulfil the definitional requirements of 'critical thinking'? This study aims to redefine critical thinking in EFL contexts from a constructivist educational perspective.
profile: https://researchmap.jp/7000017408
15:45-15:55 break and registration
15:55-16:05 opening remarks for JWLLP-34
Chu-Ren Huang and Yasunari Harada
profile for Chu-Ren Huang: https://www.churenhuang.com/
profile for Yasunari Harada: https://researchmap.jp/HaradaYasunari
16:05-16:40 [Featured Talk]
LLM, GenAI and Autonomous Mutual Learning of English
Miwa Morishita, Yasunari Harada and Jason S. Chang
abstract for the talk:
profile for Miwa Morishita: https://researchmap.jp/miwamorishita?lang=en
profile for Yasunari Harada: https://researchmap.jp/HaradaYasunari
profile for Jason S. Chang:
16:40-17:15 Does 'All' Mean 'Every'? Exploring Scope Preferences of Universal Quantifiers in Italian and Mandarin Chinese
Lavinia Salicchi and Yu-Yin Hsu
abstract for the talk:
Quantifiers indicate the number of entities referenced in an utterance. Doubly quantified sentences can be ambiguous in expressing either surface or inverse scope. While previous studies - mostly in English - investigated the preferred interpretation of quantifiers, cross-linguistic investigations remain limited. In this study, we examine the scope preferences in two typologically distinct languages, Italian and Mandarin Chinese, focusing on the universal quantifiers “all” (IT: “tutti”, ZH: “所有”) and “every” (IT: “ogni”, ZH: “每只”/“每个”).
Using a cumulative-window, word-by-word, self-paced reading paradigm, we collected reading times (RTs) from 30 Chinese and 40 Italian participants. Participants read sentence pairs and rated, on a 1-to-5 scale, how well the second sentence matched an interpretation of the first. The subject of the first (context) sentence contained a quantifier (ALL/EVERY), and the second (target) sentence had either a collective (COLL) or distributive (DIST) reading.
In Chinese, significant differences were found for mean RTs between EVERY-COLL and EVERY-DIST, with EVERY-DIST being processed faster (p = 0.023). In Italian, significant differences were found between all the pair-wise comparisons (ps < 0.0001), except between ALL-DIST and EVERY-DIST (p = 0.057). The ALL quantifier generally facilitated the COLL interpretation, while EVERY primed the DIST interpretation.
Interpretation ratings show that Italian participants ranked EVERY-DIST highest, followed by ALL-DIST, ALL-COLL, and EVERY-COLL. All combinations significantly differed from each other (ps < 0.0001), except ALL-DIST and ALL-COLL (p = 0.05). Similarly, in Chinese, EVERY-DIST was rated highest, followed by ALL-COLL, ALL-DIST, and EVERY-COLL. All combinations showed significant differences (ps < 0.0001).
Our investigation revealed that both languages consistently preferred EVERY-DIST. While Chinese RTs and ratings confirmed a preference for ALL-COLL over ALL-DIST, in Italian, ALL facilitates COLL, but speakers find both interpretations compatible.
short bio for Lavinia Salicchi:
short bio for Yu-Yin Hsu:
17:15-17:50 Investigating the Representation of Semantic Relations of Chinese Noun-Noun Compound in Transformer-based Language Models
He Zhou, Emmanuele Chersoni and Yu-Yin Hsu
abstract for the talk: abstract-HeZhou.pdf
A noun-noun compound consists of two nouns but functions as a single noun that denotes an entity. However, the semantic relation between the two nouns can vary significantly, as the meaning of a noun-noun compound is not merely the combination of the meanings of its parts. For example, in [aiqing gushi], where [aiqing] (love) modifies [gushi] (story), the compound means a story that is ABOUT love, while in [minjian gushi], [minjian] (folk) modifies [gushi] (story) to denote a story that is FROM the folk. The semantic relations between the modifier and the head noun differ, even though both compounds share the same head noun. In this study, we investigate the semantic compositionality of Chinese noun-noun compounds from a computational linguistics perspective. Specifically, we examine how Transformer-based language models represent the latent semantic relation knowledge within Chinese noun-noun compounds. To initiate our study, we constructed a dataset of Chinese noun-noun compounds with semantic relation annotations. The annotation process involved two steps: first, identifying noun-noun compounds among nouns, and second, identifying the semantic relation(s) between the modifier and the head noun. The resulting dataset comprises 2,083 noun-noun compounds, each labeled with one or more possible semantic relations. To explore how Transformer-based language models process noun-noun compounds and encode their semantic relations, we constructed groups of compounds with shared lexical or semantic features. We then examined whether representations extracted from the language models can differentiate noun-noun compounds according to whether they share the same semantic relation. We found: (1) Transformer-based language models do encode semantic relations within compounds, but to a limited extent, and this encoding is more pronounced in the middle layers of the models.
(2) Regarding compound representations, although mean-pooled embeddings of the modifier and the head noun encoded relatively more semantic relation information, encoder-only models such as BERT, RoBERTa, and multilingual BERT tend to encode more such information in the modifier noun alone, whereas decoder-only models such as LLaMA3, Qwen3, and DeepSeek-LLM tend to encode more in the head noun alone. This can be attributed to the architectural differences between the two types of models. (3) Looking more closely at the encodings of different semantic relations, the models capture more information about semantic relations denoting agentive actions, such as MAKE. They also performed modestly on the purposive relation FOR and the locative relation IN. However, their capability to encode the more general relation ABOUT and the essive relation BE is comparatively limited.
18:00-20:30 informal reception [details to be announced]
day two:
2025/09/06
Saturday
JWLLP-34
10:10-10:30 preparations and registration
10:30-11:30 [Invited Talk, online]
Revisiting the Complementizer 'that' in the Age of AI
Xian Wang
abstract for the talk:
The complementizer ‘that’ has long been a subject of extensive study in linguistic disciplines such as functional grammar, discourse analysis, and language acquisition. More recently, it has attracted significant attention in translation studies through examinations of the explicitation hypothesis. Building on this body of work, our study investigates the extent to which LLM-powered chatbots can accurately pinpoint the factors contributing to the omission of the complementizer ‘that’. The goal is to evaluate the ability of these AI models to make nuanced linguistic judgments.
profile for Xian Wang: https://fah.umac.mo/staff/staff-english/wang-xian-vincent/
11:30-12:10 [Featured Talk]
AI-aid for Japanese Justice and Legal systems
Hiroaki Yamada
abstract for the talk:
Recent advances in language technologies have created numerous opportunities for legal information processing. This talk explores the study, development, implementation, and application of natural language processing in the Japanese legal domain. We will discuss applications beneficial to various stakeholders: for legal professionals, we examine relevant case search capabilities, RAG-based legal library services, and tools for organizing conflicting claims; for scholars, we consider large-scale case analysis and automated annotation systems; and for the general public, we explore the possibilities to provide more accessible legal resources and online dispute resolution platforms. The talk will highlight how AI can provide easier access to legal information while addressing practical implementation challenges within Japan's legal framework.
profile for Hiroaki Yamada: https://researchmap.jp/hiroaki_yamada
12:10-13:40 lunch at faculty restaurant
13:40-14:40 [Invited Talk]
From Sensory Modalities to Embodied Cognition: Sensory Lexicon and Corpus-Driven Studies
Chu-Ren Huang
https://www.churenhuang.com/
14:40-15:40 [Invited Talk]
Linking Language Development Research to Broader Language Sciences
Reiko Mazuka
abstract for the talk:
Human infants acquire language from the speech they are exposed to without specific instruction or training. Considering how difficult it is to learn a new language later in life, research into the mechanisms of infant language acquisition could offer significant insights into the fields of linguistics and language processing.
In this presentation, results from a series of experimental studies on Japanese infants' phonological development will be presented, along with cross-linguistic studies comparing infants learning Japanese and other languages (e.g., English, French, Dutch, Korean). Analysis of speech input from which infants learn language will also be discussed. We will also discuss how these findings in language development research could impact linguistics and language processing research.
short bio:
Dr. Reiko Mazuka received her PhD in developmental psychology from Cornell University in 1990 and joined the Department of Psychology at Duke University. In 2004, she joined RIKEN Brain Science Institute (currently RIKEN Center for Brain Science) and opened the Laboratory for Language Development. Her investigations center on how infants learn the sound system of languages using behavioral, computational, and imaging techniques. Since March 2023, she has been a senior visiting researcher at RIKEN CBS, where she continues to carry out her research, as well as a Project Professor at the International Research Center for Neurointelligence at the University of Tokyo, a Senior Visiting Researcher and Visiting Professor at Waseda University, and a Research Professor at Duke University.
https://researchmap.jp/reiko_mazuka
15:40-15:50 break
15:50-16:50 [Invited Talk]
On the uses of multiple wh-questions in discourse: A big data-driven approach
Jong-Bok Kim
abstract for the talk:
Languages typically observe the so-called Law of the Coordination of Likes (LCL), which requires that only constituents of identical syntactic category be coordinated. However, there are several environments where the LCL can be violated: one intriguing case is multiple wh-coordination, as illustrated by the following English and Korean examples, respectively.

  (1)   a.   When and what did John eat?
     b.   Mimi-ka encey kuliko mwues-ul mek-ess-ni?
          Mimi-NOM when and what-ACC eat-PST-QUE
        ‘When and what did Mimi eat?’

In both examples, the adverbial expression 'when' is coordinated with the nominal expression 'what', violating the LCL. In this study, we investigate large corpora such as COCA (the Corpus of Contemporary American English) and Sejong to figure out how multiple wh-coordination constructions are used in discourse.
16:50-17:50 [Invited Talk]
Over-counting of number of times?:
The vague image of modifying construction in Japanese and Taiwan Mandarin
Toshiyuki Sadanobu and Ya-Yun Cheng
https://researchmap.jp/sadanoburesearchmap
18:45-20:30 dinner at a local restaurant [details to be announced]
day three:
2025/09/07
Sunday
JWLLP-34
10:00-10:15 preparations and registration
10:15-10:50 Can GPT-4 Grasp Metaphors in Political Discourse? A Case Study on ELECTION Metaphors
Jing Chen, Winnie Zeng, Emmanuele Chersoni, Kathleen Ahrens, & Chu-Ren Huang
abstract for the talk: Metaphors are crucial communicative tools that express complex ideas and influence audience perception by emphasizing certain aspects while concealing others (Lakoff & Johnson, 1980/2003). In political discourse, metaphorical framing effects have been widely observed (Reijnierse et al., 2015; Burgers, 2016), yet their analysis traditionally relies on labor-intensive expert annotations, which limits the scalability of large-scale studies. With the rise of large language models (LLMs) showing human-like performance across tasks (Gilardi et al., 2023; Srivastava et al., 2022), it remains an open question how effectively LLMs can assist metaphor analysis, particularly with novel metaphors, which are notoriously difficult for computational systems (Tong et al., 2024). To answer this question, this case study investigates to what extent GPT-4 can identify metaphors related to the concept of ELECTION in political discourse. We use a dataset developed by Zeng et al. (forthcoming), which includes 1,047 instances from government speeches, classified by experts into metaphorical use and literal use. We tested six prompting strategies varying in the inclusion of linguistic theory, metaphorical keywords, source domain cues, and illustrative examples. For each prompt, we collected three independent runs using temperature 0 to obtain deterministic outputs. Our results show that theory-informed prompts with examples achieved the highest accuracy (80.32%). Prompts without linguistic theory but enriched with metaphor-related keywords and source domain cues also performed well (79.08%), though with distinct error profiles: the former tends to miss some metaphors, while the latter more often mislabels literal expressions as metaphors. When examining output consistency across 18 independent runs, we found that 26.36% of metaphorical instances were consistently correctly identified, 10.60% were consistently misclassified, and 63.04% showed fluctuations across runs.
This suggests that LLMs show some uncertainty in identifying metaphors from actual data, despite near-expert identification accuracy in some runs. Our future directions are thus to explore LLMs' understanding of metaphor by identifying source domains, to compare mapping principles between humans and models, and to evaluate the coherence and acceptability of GPT-4's paraphrasing of metaphorical expressions.
References:
  • Burgers, C. (2016). Conceptualizing change in communication through metaphor. Journal of Communication, 66(2), 250-265.
  • Gilardi, F., Alizadeh, M., & Kubli, M. (2023). ChatGPT outperforms crowd workers for text-annotation tasks. Proceedings of the National Academy of Sciences, 120(30), e2305016120.
  • Lakoff, G., & Johnson, M. (1980/2003). Metaphors we live by. Chicago, IL: The University of Chicago Press.
  • Reijnierse, W. G., Burgers, C., Krennmayr, T., & Steen, G. J. (2015). How viruses and beasts affect our opinions (or not): The role of extendedness in metaphorical framing. Metaphor and the Social World, 5(2), 245-263.
  • Srivastava, A., Rastogi, A., Rao, A., Shoeb, A. A. M., Abid, A., Fisch, A., ... & Wang, G. (2022). Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615.
  • Tong, X., Choenni, R., Lewis, M., & Shutova, E. (2024). Metaphor understanding challenge dataset for LLMs. arXiv preprint arXiv:2403.11810.
short bio for Winnie Zeng:
Dr. Winnie Zeng is a Research Assistant Professor in the Department of Chinese and Bilingual Studies (CBS) at the Hong Kong Polytechnic University. Her research specializes in corpus linguistics, cognitive linguistics, and metaphor analysis in discourse domains such as politics, media, health, and environmental communication. Following her PhD graduation, she was awarded the Hong Kong RGC Post-doctoral Fellowship and worked as a Post-doctoral Fellow at CBS, where she conducted projects focusing on corpus-based metaphor analysis and computational approaches to discourse analysis (e.g., topic modeling, sentiment analysis). Her work has been published in journals including Lingua, Journal of Pragmatics, Metaphor and Symbol, Linguistics Vanguard, etc.
profile or short bio for Jing Chen:
profile or short bio for Emmanuele Chersoni:
profile or short bio for Kathleen Ahrens:
profile for Chu-Ren Huang:
    https://www.churenhuang.com/
10:50-11:25 On the encoding of verb aspect in language models
Yuxi Li, Emmanuele Chersoni and Yu-Yin Hsu
abstract for this talk:
Aspect, a linguistic category describing how actions and events unfold over time, is traditionally characterized by three semantic properties: stativity, durativity and telicity. It is important for language models to handle verb aspect properly, since the three semantic properties affect human semantic inferences about events and situations. We propose a study to investigate to what extent the three aspect properties are encoded in two popular language models for English, BERT and GPT-2. First, we use the technique of semantic projections to examine whether the values of the embedding dimensions of annotated verbs for stativity, durativity and telicity reflect human linguistic distinctions; second, we use distributional similarities to test whether language models are sensitive enough to capture subtle contextual nuances of verb telicity, by replicating the well-known Imperfective Paradox described by Dowty (1977). Our results show a robust encoding of aspect features, although durativity proves to be more challenging than the other two, and a better alignment of the BERT model with human patterns.
profile or short bio for Yuxi Li:
profile or short bio for Emmanuele Chersoni:
profile or short bio for Yu-Yin Hsu:
11:25-12:25 [Invited Talk]
Across-Brain Networks Revealed Through Face-to-Face Social Interactions Using Hyperscanning fMRI: Eye Contact, Joint Attention, and Goal-Directed Conversation
Norihiro Sadato
abstract for the talk:
Face-to-face communication influences the mental state of others by facilitating the exchange of information, ideas, and attitudes. Its foundation lies in the mother-infant relationship, characterized by the core elements of 'bi-directionality' and 'simultaneity,' which enable mutual predictability. While the prediction processes are subjective and occur independently within individuals, the shared processes and interactions between individuals are referred to as intersubjectivity. As such, a specialized methodology is essential to investigate intersubjectivity and understand the neural mechanisms underlying face-to-face communication. In this talk, I will share findings that explore the neural basis of intersubjectivity, an emergent phenomenon beyond individual-level analysis, through the application of hyperscanning fMRI to scenarios involving eye contact, joint attention, and goal-directed conversation.
short bio: Norihiro Sadato, MD, PhD
I graduated from the Kyoto University School of Medicine in 1983 and earned my PhD in Medical Sciences from the same university in 1994. From 1993 to 1995, I worked as a Visiting Research Fellow at the National Institute of Neurological Disorders and Stroke (NINDS), part of the NIH. In 1995, I was appointed as a Lecturer at Fukui Medical University, and in 1998, I was promoted to Associate Professor. In 1999, I assumed the position of Professor at the National Institute for Physiological Sciences (NIPS). Since 2023, I have served as a Professor at Ritsumeikan University while concurrently holding an adjunct professorship at NIPS. One of the key achievements during my time at NIH was discovering plastic changes in the primary visual cortex of the blind during Braille reading. This was accomplished through a multimodal approach using PET, fMRI, and TMS. Since 2003, my research has centered on the development of social cognition and its associated neural mechanisms, utilizing hyper-scanning fMRI techniques.
https://researchmap.jp/sadato
12:25-12:35 closing remarks for JWLLP-34
Jong-Bok Kim, Chu-Ren Huang and Yasunari Harada
profile for Chu-Ren Huang: https://www.churenhuang.com/
profile for Yasunari Harada: https://researchmap.jp/HaradaYasunari

notices