A team led by Prof. Li Ping at the Hong Kong Polytechnic University has provided compelling evidence that AI models, when trained with human-like methods, can emulate human brain activity patterns. Their study, published in Science Advances, shows that adding next-sentence prediction (NSP) to large language model (LLM) training significantly improves alignment with human cognitive processes.
🔍 Study Highlights
- Compared two LLM variants: one trained with NSP and one without.
- Using fMRI scans, researchers mapped model activations against participants reading sentence sequences.
- The NSP-enhanced model showed a stronger correlation with human brain responses, especially in areas involved in discourse comprehension.
- It also better predicted human reading speeds, suggesting its text processing more closely mirrors how humans process language.
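The brain-alignment comparison above can be sketched in miniature: for each sentence a participant reads, take a scalar feature from each model variant and correlate it with the measured brain response across sentences. Everything below is toy, hand-set data for illustration only; the study's actual encoding analysis, fMRI preprocessing, and statistics are far richer.

```python
# Toy sketch of model-brain alignment via Pearson correlation between
# per-sentence model activations and fMRI responses. All numbers are
# hypothetical; no real model or brain data is used.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Toy per-sentence scalar features from two model variants.
nsp_model_act  = [0.9, 0.4, 0.7, 0.2, 0.8]       # NSP-trained variant
base_model_act = [0.5, 0.5, 0.9, 0.1, 0.3]       # next-word-only variant
fmri_response  = [0.85, 0.35, 0.65, 0.25, 0.75]  # toy BOLD signal

r_nsp = pearson(nsp_model_act, fmri_response)
r_base = pearson(base_model_act, fmri_response)
print(f"NSP model r = {r_nsp:.3f}, baseline r = {r_base:.3f}")
```

In this toy setup, the NSP variant's features track the (fabricated) brain signal more tightly than the baseline's, which is the shape of the comparison the study reports at scale.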
🤖 Why It Matters
- Toward human-like cognition: Models go beyond next-word prediction, showing promise of discourse-level comprehension similar to human thinking.
- Efficiency gains: NSP-based training rivals larger models, potentially reducing computing costs without sacrificing performance.
- Neuroscience–AI bridge: The findings reinforce the synergy between brain science and AI, guiding future architectures inspired by human cognition.
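For readers unfamiliar with the objective itself, NSP (popularized by BERT) can be framed as binary classification: given a sentence pair (A, B), the model predicts whether B actually follows A in the source text. A minimal, purely illustrative sketch, with a hand-set coherence feature standing in for a learned pair representation:

```python
import math

def nsp_score(sim):
    # Toy NSP head: logistic over a single "coherence" feature.
    # In a real model this feature would be a learned representation
    # of the [A, B] pair; here it is a hand-set similarity score,
    # and the weight/bias below are hypothetical, not trained values.
    w, b = 4.0, -2.0
    return 1.0 / (1.0 + math.exp(-(w * sim + b)))

# Toy pairs: (coherence feature, label: 1 = B truly follows A)
pairs = [(0.9, 1), (0.8, 1), (0.2, 0), (0.1, 0)]

for sim, label in pairs:
    p = nsp_score(sim)
    pred = 1 if p > 0.5 else 0
    print(f"P(B follows A) = {p:.2f}, predicted {pred}, true {label}")
```

Training on this objective pushes the model to build sentence-level representations of coherence, which is the discourse-level signal the study ties to brain responses.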
🌐 Broader Context: China’s Brain-Inspired AI Push
This study complements a wave of R&D in China aiming to mimic human brain function:
- Teams at institutions such as Peking University and the Chinese Academy of Sciences are building neuron-inspired AI models that promise superior energy efficiency and computational power.
- Online discussions (e.g., on Reddit) highlight claimed breakthroughs in self-replicating AI systems, including early signals of situational awareness and self-replication capabilities.
🔮 What Lies Ahead
- Model development: Expect more AI trained with cognitive-inspired tasks—like multi-sentence reasoning and memory modeling.
- Evaluation methods: Future tests may combine fMRI, EEG, and behavioral metrics to track AI-human alignment.
- Ethical preparation: As AI systems grow more brain-like, ethical frameworks on accountability, autonomy, and transparency become critical.
✅ Final Takeaway
This first-of-its-kind evidence suggests that AI can begin to process language more like humans when trained with next-sentence prediction, pointing toward machines that understand discourse the way humans do. As China's research in brain-inspired AI accelerates, we may be witnessing the early sparks of artificial general intelligence grounded in human-like cognition.