Basic Information
I am an ideal match if you are seeking students with a strong foundation in:
NLP (LLMs), human cognition, and education (including computer science and STEM).
I am passionate about developing human-like agents to support underprivileged populations. Specifically, my work modifies current black-box language models (LMs) to emulate human understanding and reasoning processes. Humans develop sophisticated intelligence through explicit structured knowledge and symbolic systems (Tulving, 1985). In contrast, a significant limitation of current LMs, including the state-of-the-art GPT-4o, is their inability to automatically construct such structured, symbolic representations within their neural networks, as humans do. This deficiency, the lack of intermediate symbolic representations bridging neurons and language outputs, leads to the widely criticized unreliability of LM reasoning.
To address this, my research directs LMs to construct and reason with structured, symbolic representations. My research trajectory began with event extraction via question answering and synthetic data augmentation (Proj.4), followed by constructing event schemas through human-computer interaction (Proj.3). These efforts help models derive structured knowledge graphs from unstructured text. I then investigated translating natural language into agent-executable symbolic language, guided by the Zone of Proximal Development (ZPD), a theory of human cognition (Proj.2). This work strengthens the model's reliable reasoning capabilities via symbolic representations. Currently, I am exploring pretraining LMs with a reconstruction loop that integrates both natural language and knowledge graphs (Proj.1). This approach embeds encoding-decoding skills, akin to human learning processes, into LMs during the pretraining phase. Collectively, my research aims to advance artificial agents with human-like thinking abilities.
To learn more about my experience, please see my CV and SOP.
To learn more about my research, please see my research projects. [slides] [video recording]
If you are interested in my work, feel free to email me.
Research Interests:
- Reasoning in Natural/Symbolic Language
- Language Model Pretraining
- Interdisciplinary Work across NLP, CV, and Robotics
Expertise:
- Learning Science and Cognitive Science (6 years of experience, B.S., M.Ed)
- Human Learning and Knowledge Storage
- Learning Task Design
- Natural Language Processing (4 years of experience, MSE)
- Event Extraction, Schema Induction
- Reasoning in Natural and Symbolic Language
Publications
- Pretraining Language Models with NL-KG-NL Reconstruction Loop. 2025
Zhang, T., Mai, F., Flek, L.
paper
- PROC2PDDL: Open-Domain Planning Representations from Texts. NLRSE@ACL 2024
Zhang, T.*, Zhang, L.*, Hou, Z., Wang, Z., Gu, Y., Clark, P., Callison-Burch, C., and Tandon, N.
paper poster oral
- PDDLEGO: Iterative Planning in Textual Environments. *SEM 2024
Zhang, L., Jansen, P., Zhang, T., Clark, P., Callison-Burch, C., Tandon, N.
paper oral
- WorldWeaver: Procedural World Generation for Text Adventure Games. Wordplay@ACL 2024
Jin, M., Kaul, M., Ramakrishnan, S., Jain, H., Chandrawat, S., Agarwal, I., Zhang, T., Zhu, A., Callison-Burch, C.
paper
- Human-in-the-Loop Schema Induction. ACL Demo 2023
Zhang, T.*, Tham, I.*, Hou, Z.*, Ren, J., Zhou, L., Xu, H., Zhang, L., Martin, L., Dror, R., Li, S., Ji, H., Palmer, M., Brown, S., Suchocki, R., and Callison-Burch, C.
paper poster oral
- Question-Answering Data Augmentation for Argument Role Labeling. 2022
Zhang, T., Sulem, E., Roth, D.
paper
Education
- MSE in Data Science, Jan. 2021 - Dec. 2022
University of Pennsylvania, Philadelphia, USA
- M.Ed in Learning Science and Technology, Sept. 2018 - Dec. 2019
University of Pennsylvania, Philadelphia, USA
- B.S. in Educational Technology, Sept. 2014 - Jun. 2018
Beijing Normal University, Beijing, China
Research Experience
- Research Assistant: NLP Group at UPenn, May 2022 - Jul. 2023
- Research Assistant: Cognitive Computation Group at UPenn, Mar. 2020 - Dec. 2022