Welcome
The Singapore Symposium on Natural Language Processing (SSNLP) returns on Wednesday, January 21, 2026, at SUTD as a full-day, in-person event. Since 2018, SSNLP has been the annual gathering of Singapore’s NLP community, bringing together students, faculty, and industry researchers to share ongoing work and spark collaboration. Held in conjunction with AAAI 2026, this year’s symposium will feature invited keynotes from leading researchers in the field, a panel discussion, and an immersive poster-focused program showcasing recent work from Singapore-based researchers published in top venues such as ACL, EMNLP, NeurIPS, ICLR, and AAAI this year. We look forward to welcoming you to SSNLP 2026 for a day of research exchange and community building!
🎉 News
- 22 Dec 2025 • Registration is now open! Please register by Jan 11, 2026 to join us.
- 20 Nov 2025 • Call for Presentations is now open! Fill in the submission form by Dec 7, 2025 to share your latest work with the community!
- 18 Nov 2025 • SSNLP 2026 website launched 🚀
Programme
Date: Jan 21, 2026 (Wed) | Venue: Albert Hong Lecture Theatre 1 & Campus Center @ SUTD
Programme details are to be confirmed and will be updated closer to the event date.
| Time | Event | Presenter |
|---|---|---|
| 09:00–09:30 | Registration | |
| 09:30–09:40 | Welcome and Opening Remarks | |
| 09:40–10:20 | Keynote 1 | Hung-yi Lee · National Taiwan University |
| 10:20–11:00 | Invited Talks from Singapore Government Agencies | Jian Gang Ngui · AI Singapore<br>Seok Min Lim · IMDA |
| 11:00–11:40 | Keynote 2 | Tanmoy Chakraborty · IIT Delhi |
| 11:40–13:00 | Lunch Break | |
| 13:00–14:30 | Poster Session (see poster list below) | |
| 14:30–15:10 | Keynote 3: What Does Simple Mean? Grounded Text Simplification | Yuki Arase · Institute of Science Tokyo |
| 15:10–15:50 | Keynote 4: Can a Language Model Be Its Own Judge? | Derek F. Wong · University of Macau |
| 15:50–16:20 | Coffee Break | |
| 16:20–17:20 | Panel Discussion | |
| 17:20–17:30 | Closing Remarks | |

Keynote 3: What Does Simple Mean? Grounded Text Simplification
Abstract: Large language models can generate fluent texts on demand, yet fluency is not the same as accessibility: the same text may be trivial for one reader and incomprehensible for another. This keynote argues that text simplification and complexity control should be grounded in standardized proficiency definitions rather than ad-hoc heuristics. I introduce a CEFR-grounded framework that treats "simplicity" as a target proficiency level, enabling controlled adaptation of syntax and vocabulary while preserving meaning.
Bio: Yuki Arase is a professor at the School of Computing, Institute of Science Tokyo (formerly Tokyo Institute of Technology), Japan. After obtaining her PhD in Information Science from Osaka University in 2010, she worked for Microsoft Research Asia, where she started the NLP research that continues to captivate her to this day. Her research interests focus on paraphrasing and NLP technology for language education and healthcare.

Keynote 4: Can a Language Model Be Its Own Judge?
Abstract: Modern large language models (LLMs) are increasingly expected not only to generate responses but also to evaluate their own outputs. This talk presents a unified perspective on transforming LLMs into reliable evaluators and demonstrates how such judging capability can, in turn, strengthen their reasoning performance.
Bio: Derek F. Wong is a Full Professor at the University of Macau, where he leads the Natural Language Processing and Chinese–Portuguese Machine Translation Laboratory (NLP2CT Lab). He serves on the boards and committees of CIPS, CCF, and AFNLP, and holds editorial roles with IEEE/ACM TASLP, ACM TALLIP, TACL, and the ACL Rolling Review.
Poster List
| No. | Poster | Venue |
|---|---|---|
| 1 | Do Retrieval Augmented Language Models Know When They Don't Know? | AAAI 2026 |
| 2 | RecToM: A Benchmark for Evaluating Machine Theory of Mind in LLM-based Conversational Recommender Systems | AAAI 2026 |
| 3 | SlideTailor: Personalized Presentation Slide Generation for Scientific Papers | AAAI 2026 |
| 4 | Self-improvement towards Pareto Optimality: Mitigating Preference Conflicts in Multi-objective Alignment | ACL 2025 |
| 5 | Afterburner: Reinforcement Learning Facilitates Self-Improving Code Efficiency Optimization | NeurIPS 2025 (Preprint) |
| 6 | Two Causally Related Needles in a Video Haystack | NeurIPS 2025 |
| 7 | Reframe Your Life Story: Interactive Narrative Therapist and Innovative Moment Assessment with Large Language Models | EMNLP 2025 |
| 8 | LLMC+: Benchmarking Vision-Language Model Compression with a Plug-and-play Toolkit | AAAI 2026 |
| 9 | MMDocIR: Benchmarking Multimodal Retrieval for Long Documents | EMNLP 2025 |
| 10 | Reinforcing Compositional Retrieval: Retrieving Step-by-Step for Composing Informative Contexts | ACL 2025 |
| 11 | Exploring Quality and Diversity in Synthetic Data Generation for Argument Mining | EMNLP 2025 |
| 12 | FineReason: Evaluating and Improving LLMs' Deliberate Reasoning through Reflective Puzzle Solving | ACL 2025 |
| 13 | GeoPQA: Bridging the Visual Perception Gap in MLLMs for Geometric Reasoning | EMNLP 2025 |
| 14 | Assessing Judging Bias in Large Reasoning Models: An Empirical Study | COLM 2025 |
| 15 | SkyLadder: Better and Faster Pretraining via Context Window Scheduling | NeurIPS 2025 |
| 16 | Benchmarking Contextual and Paralinguistic Reasoning in Speech-LLMs: A Case Study with In-the-Wild Data | EMNLP 2025 |
| 17 | Optimization before Evaluation: Evaluation with Unoptimized Prompts Can be Misleading | ACL 2025 |
| 18 | Drifting Away from Truth: GenAI-Driven News Diversity Challenges LVLM-Based Misinformation Detection | AAAI 2026 |
| 19 | Seeing Culture: A Benchmark for Visual Reasoning and Grounding | EMNLP 2025 |
| 20 | The Emergence of Abstract Thought in Large Language Models Beyond Any Language | NeurIPS 2025 |
| 21 | The Missing Parts: Augmenting Fact Verification with Half-Truth Detection | EMNLP 2025 |
| 22 | Through the Valley: Path to Effective Long CoT Training for Small Language Models | EMNLP 2025 |
| 23 | Static or Dynamic: Towards Query-Adaptive Token Selection for Video Question Answering | EMNLP 2025 |
| 24 | Causality Matters: How Temporal Information Emerges in Video Language Models | AAAI 2026 |
| 25 | Discursive Circuits: How Do Language Models Understand Discourse Relations? | EMNLP 2025 |
| 26 | AdaMCoT: Rethinking Cross-Lingual Factual Reasoning through Adaptive Multilingual Chain-of-Thought | AAAI 2026 |
Invited Speakers
Jian Gang Ngui · AI Singapore
Seok Min Lim · IMDA
Panel Discussion
Behind the Scenes of NLP Peer Review: Perspectives from Program Chairs
Organizers
| Role | Members |
|---|---|
| General Chair | Wenxuan Zhang · Singapore University of Technology and Design |
| Program Chairs | Wenya Wang · Nanyang Technological University<br>Yang Deng · Singapore Management University |
| Local Chairs | Ryner Tan · Singapore University of Technology and Design<br>Qisheng Hu · Nanyang Technological University |
| Registration Chairs | Satar Burak · Singapore Management University<br>Quanyu Long · Nanyang Technological University |
| Web & Publicity Chair | Bobo Li · National University of Singapore |
| Industrial Relations Chairs | Liu Qian · TikTok/ByteDance AI Innovation Center, Singapore<br>Ming Shan Hee · MBZUAI Fundamental Models Research Center |
📮 For any inquiries, feel free to reach out to Wenxuan Zhang, Wenya Wang, or Yang Deng.
Sponsors
Location
SSNLP 2026 will be held at Albert Hong Lecture Theatre 1, SUTD, 8 Somapah Rd, Singapore 487372. Directions will be provided closer to the event date.