
Yongjiang Mathematics Forum, Lecture 559 (Mingli Mathematics Lecture Hall, Mathematics Lecture No. 4 of 2026) - Constant Bit-size Transformers are Turing Complete
Posted: 2026-03-17 14:26

Time: April 1, 2026, starting at 16:00

Speaker: Qian Li (Shenzhen Research Institute of Big Data)

Venue: Room 9-113

Title: Constant Bit-size Transformers are Turing Complete

Abstract: The empirical success of Transformer-based LLMs on complex reasoning tasks has motivated research on their expressiveness. A central result in this line of research is that Transformers are Turing complete; that is, Transformers possess sufficient expressive power to compute any computable function. In this talk, I will first show that even constant bit-size Transformers are Turing complete. In particular, greater reasoning power requires only longer context windows and longer chains of thought (CoT), while all learnable parameters—including parameter count and numerical precision—may remain fixed constants. Moreover, the required context-window length scales with the space complexity, rather than the time complexity, of the simulated computation. I will then discuss how log-sparse attention preserves this efficient universality, providing a formal explanation for its practical success. These results help bridge modern neural architectures with classical models of computation and contribute to a deeper theoretical understanding of the computational power of LLMs.
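The abstract's central analogy—a constant-size set of parameters acting on a growing context window, with memory (not parameter count) bounding what can be computed—can be illustrated with a toy Turing-machine simulation. This is only a hedged sketch, not the speaker's actual construction: the fixed transition table `DELTA` stands in for the constant bit-size parameters, and the tape stands in for the context window, which grows only with the space the computation uses.

```python
# Illustrative sketch (NOT the talk's construction): a constant-size
# transition table plays the role of a Transformer's fixed parameters,
# while the tape plays the role of the context window. The table never
# grows; only the tape (space usage) does.

# Constant-size "parameters": binary-increment Turing machine.
# (state, symbol) -> (new symbol, head move, new state)
DELTA = {
    ("carry", "0"): ("1", -1, "done"),
    ("carry", "1"): ("0", -1, "carry"),
    ("carry", "_"): ("1", -1, "done"),   # overflow into a blank cell
}

def increment(bits: str) -> str:
    """Increment a binary number; the tape is the only unbounded resource."""
    tape = ["_"] + list(bits)            # context window = tape, one spare cell
    head, state = len(tape) - 1, "carry" # start at the least-significant bit
    while state != "done":
        sym, move, state = DELTA[(state, tape[head])]
        tape[head] = sym
        head += move
    return "".join(tape).lstrip("_")

print(increment("1011"))   # -> 1100
print(increment("111"))    # -> 1000 (tape's blank cell absorbs the carry)
```

The point of the sketch is that making the computation "harder" (longer inputs) never changes `DELTA`; it only lengthens the tape—mirroring the abstract's claim that reasoning power scales with context length and CoT length while parameters stay fixed.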

Speaker Bio: Qian Li is a research scientist at the Shenzhen International Center for Industrial and Applied Mathematics, Shenzhen Research Institute of Big Data. Prior to joining SRIBD, he was an assistant professor at the Institute of Computing Technology, Chinese Academy of Sciences. His research interests span theoretical computer science, machine learning, and artificial intelligence.
