Zhiying (Gin) Jiang

"If the human brain were so simple that we could understand it, we would be so simple that we couldn't."

@Quinn drew this Don't Starve character of me


Hi, I’m Gin, a researcher with a passion for understanding and improving both machine learning and human learning.

I’m the co-founder of AFAIK.io (NextAI 2024 cohort). AFAIK is a personalized learning platform that aims to address inequalities in access to higher education. Our platform enables anyone to systematically learn anything in depth at their own level, without concerns about hallucinations or misinformation.

Before founding AFAIK, I earned my PhD from the University of Waterloo, where I completed my degree in about 3.5 years under the mentorship of Professor Jimmy Lin and in collaboration with Professor Ming Li. Prior to that, I spent four years at Rensselaer Polytechnic Institute, conducting research in the Blender lab under the guidance of Professor Heng Ji.

My research focuses on the interpretability and generalizability of machine learning models, with a strong interest in the intersection of information theory and learning. I’m deeply inspired by the idea that compression lies at the heart of both human and machine learning, and this idea guides my exploration of fundamental, theory-driven approaches.
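As a concrete illustration of the compression-as-learning idea, here is a minimal sketch (not the exact implementation from the ACL 2023 paper listed below) of parameter-free text classification: gzip estimates information content, normalized compression distance (NCD) measures similarity, and a k-nearest-neighbor vote assigns the label.

```python
import gzip

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance: how much extra information y adds
    # given x, normalized to roughly [0, 1] (0 = identical, ~1 = unrelated).
    cx = len(gzip.compress(x))
    cy = len(gzip.compress(y))
    cxy = len(gzip.compress(x + b" " + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def classify(query: str, train: list[tuple[str, str]], k: int = 3) -> str:
    # Majority vote among the k training texts nearest to the query
    # under NCD -- no trained parameters, no model weights.
    q = query.encode()
    dists = sorted((ncd(q, text.encode()), label) for text, label in train)
    top_labels = [label for _, label in dists[:k]]
    return max(set(top_labels), key=top_labels.count)
```

The appeal of this approach is that the compressor stands in for a learned representation: texts that share structure compress well together, so similarity falls out of compressed lengths alone.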

Beyond machine learning, I love neuroscience, physics, and food science.

Selected Publications

  1. ACL2023
    “Low-Resource” Text Classification: A Parameter-Free Classification Method with Compressors
    Jiang, Zhiying, Yang, Matthew, Tsirlin, Mikhail, Tang, Raphael, Dai, Yiqin, and Lin, Jimmy
    In Findings of the Association for Computational Linguistics (ACL), 2023
  2. ACL2023
    What the DAAM: Interpreting Stable Diffusion Using Cross Attention
    Tang, Raphael, Liu, Linqing, Pandey, Akshat, Jiang, Zhiying, Yang, Gefei, Kumar, Karun, Stenetorp, Pontus, Lin, Jimmy, and Ture, Ferhan
    In Proceedings of the Association for Computational Linguistics (ACL), 2023. Best Paper Award
  3. NeurIPS2022
    Few-Shot Non-Parametric Learning with Deep Latent Variable Model
    Jiang, Zhiying, Dai, Yiqin, Xin, Ji, Li, Ming, and Lin, Jimmy
    In Proceedings of the 36th Conference on Neural Information Processing Systems (NeurIPS), Spotlight, 2022
  4. BlackBoxNLP
    How Does BERT Rerank Passages? An Attribution Analysis with Information Bottlenecks
    Jiang, Zhiying, Tang, Raphael, Xin, Ji, and Lin, Jimmy
    In Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, 2021
  5. EMNLP2020
    Inserting Information Bottleneck for Attribution in Transformers
    Jiang, Zhiying, Tang, Raphael, Xin, Ji, and Lin, Jimmy
    In Findings of the Association for Computational Linguistics: EMNLP, 2020
  6. EMNLP2020
    Document Ranking with a Pretrained Sequence-to-Sequence Model
    Nogueira, Rodrigo*, Jiang, Zhiying*, Pradeep, Ronak, and Lin, Jimmy
    In Findings of the Association for Computational Linguistics: EMNLP, 2020