Disentangling language understanding and reasoning structures in cross-lingual chain-of-thought prompting

Research output: Contribution to conference › Paper

Abstract

Cross-lingual chain-of-thought prompting techniques have proven effective for eliciting diverse reasoning paths in Large Language Models (LLMs), especially for low-resource languages. Despite these empirical gains, the mechanisms underlying cross-lingual improvements remain poorly understood. This study therefore addresses whether the benefits of cross-lingual prompting arise from reasoning structures intrinsic to each language, or simply from improved comprehension through cross-linguistic exposure. We employ neuron intervention and perturbation techniques to analyze and deactivate language-specific reasoning neurons during cross-lingual prompting, which produces performance disparities of up to 27.4% across languages. Our findings show that these neurons are essential for reasoning in their respective languages but have minimal effect on reasoning in other languages, providing evidence for language-specific local reasoning structures and guiding the development of more interpretable and effective multilingual AI systems.
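
As a rough illustration of what deactivating individual neurons can look like in practice, the sketch below zeroes out selected hidden units of a transformer-style feed-forward block via a PyTorch forward hook. The module, layer sizes, and neuron indices are hypothetical placeholders; the paper's actual procedure for identifying language-specific reasoning neurons is not reproduced here.

# Minimal sketch of neuron ablation via a forward hook (PyTorch).
# All names and indices below are hypothetical, for illustration only.
import torch
import torch.nn as nn

class MLPBlock(nn.Module):
    """Stand-in for one transformer feed-forward block."""
    def __init__(self, d_model=16, d_ff=64):
        super().__init__()
        self.up = nn.Linear(d_model, d_ff)
        self.act = nn.GELU()
        self.down = nn.Linear(d_ff, d_model)

    def forward(self, x):
        return self.down(self.act(self.up(x)))

def make_ablation_hook(neuron_ids):
    """Return a hook that zeroes the given hidden units post-activation."""
    def hook(module, inputs, output):
        output = output.clone()
        output[..., neuron_ids] = 0.0  # deactivate selected neurons
        return output                  # replaces the module's output
    return hook

block = MLPBlock()
# Hypothetical set of neurons treated as "language-specific".
handle = block.act.register_forward_hook(make_ablation_hook([3, 7, 42]))

x = torch.randn(2, 5, 16)   # (batch, sequence, d_model)
out_ablated = block(x)      # forward pass with the neurons silenced
handle.remove()             # detach the hook to restore normal behavior
out_normal = block(x)
print(torch.allclose(out_ablated, out_normal))  # False: ablation changed the output

In an actual LLM the same hook would be registered on the chosen feed-forward layer of the pretrained model, and downstream task accuracy would be compared with and without the ablation, per language.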
Original language: English
Pages: 12200-12206
DOIs
Publication status: Published - 2025
Event: EMNLP 2025 - Suzhou, China
Duration: 4 Nov 2025 - 9 Nov 2025
https://2025.emnlp.org/

Conference

Conference: EMNLP 2025
Period: 4/11/25 - 9/11/25
Internet address: https://2025.emnlp.org/

Keywords

  • Cross-lingual chain-of-thought prompting techniques
  • Large Language Models (LLMs)
