Abstract
Cross-lingual chain-of-thought prompting has proven effective for eliciting diverse reasoning paths in Large Language Models (LLMs), especially for low-resource languages. Despite these empirical gains, the mechanisms underlying cross-lingual improvements remain poorly understood. This study therefore asks whether the benefits of cross-lingual prompting arise from reasoning structures intrinsic to each language, or are simply a consequence of improved comprehension through cross-linguistic exposure. We employ neuron intervention and perturbation techniques to analyze and deactivate language-specific reasoning neurons during cross-lingual prompting, which causes performance drops of up to 27.4% across languages. Our findings show that these neurons are essential for reasoning in their respective languages but have minimal effect on reasoning in other languages, providing evidence for language-specific local reasoning structures and guiding the development of more interpretable and effective multilingual AI systems.
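The neuron-deactivation intervention described above can be illustrated with a minimal sketch: zero out a chosen set of hidden units in a toy feed-forward layer and compare outputs before and after. The layer sizes, weight initialization, and the ablated neuron set here are illustrative assumptions, not the authors' actual setup or model.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 16))   # input -> hidden weights (toy layer)
W2 = rng.normal(size=(16, 4))   # hidden -> output weights

def forward(x, ablate=()):
    """Forward pass that zeroes (deactivates) the hidden units in `ablate`."""
    h = np.maximum(x @ W1, 0.0)     # ReLU hidden activations
    h[:, list(ablate)] = 0.0        # intervention: deactivate selected neurons
    return h @ W2

x = rng.normal(size=(1, 8))
baseline = forward(x)
ablated = forward(x, ablate={2, 5, 11})   # hypothetical "language-specific" units

# Any output change is attributable solely to the deactivated units,
# which is the logic behind measuring per-language performance drops.
print(np.abs(baseline - ablated).max())
```

In the paper's setting the same comparison is run per language: ablating one language's reasoning neurons degrades reasoning in that language while leaving other languages largely unaffected.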
| Original language | English |
|---|---|
| Pages | 12200-12206 |
| DOIs | |
| Publication status | Published - 2025 |
| Event | EMNLP 2025, Suzhou, China, 4 Nov 2025 → 9 Nov 2025, https://2025.emnlp.org/ |
Conference
| Conference | EMNLP 2025 |
|---|---|
| Period | 4/11/25 → 9/11/25 |
| Internet address | https://2025.emnlp.org/ |
Keywords
- Cross-lingual chain-of-thought prompting techniques
- Large Language Models (LLMs)