Feb 03, 2026

Probing Memes in LLMs: A Paradigm for the Entangled Evaluation World

Authors:
Luzhou Peng, Zhengxin Yang*, Honglu Ji, Yikang Yang, Fanda Fan, Wanling Gao, Jiayuan Ge, Yilin Han, Jianfeng Zhan
Corresponding:
Zhengxin Yang*
Published at:
Preprint
Abstract:
Current evaluation paradigms for large language models (LLMs) characterize models and datasets separately, yielding coarse descriptions: items in datasets are treated as pre-labeled entries, and models are summarized by overall scores such as accuracy, which together ignore the diversity of population-level model behaviors across items with varying properties. To address this gap, this paper conceptualizes LLMs as composed of memes, a notion introduced by Dawkins for cultural genes that replicate knowledge and behavior. Building on this perspective, the Probing Memes paradigm reconceptualizes evaluation as an entangled world of models and data. It centers on a Perception Matrix that captures model-item interactions, from which Probe Properties characterize items and Meme Scores depict models' behavioral traits. Applied to 9 datasets and 4,507 LLMs, Probing Memes reveals hidden capability structures and quantifies phenomena invisible under traditional paradigms (e.g., elite models failing on problems that most models answer easily). It not only supports more informative and extensible benchmarks but also enables population-based evaluation of LLMs.
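To make the core constructs concrete, here is a minimal sketch in Python of how a Perception Matrix could be assembled and queried. The binary-correctness encoding, the "easiness" Probe Property, and the "easy-miss" Meme Score below are illustrative assumptions for exposition, not the paper's exact definitions.

```python
import numpy as np

# Perception Matrix: rows are models, columns are dataset items.
# P[m, i] = 1 if model m answers item i correctly, else 0.
# (Toy random data as a stand-in; the paper builds this from real runs.)
rng = np.random.default_rng(0)
P = (rng.random((4507, 900)) > 0.4).astype(int)

# Probe Property (per item): easiness = fraction of models that solve it.
easiness = P.mean(axis=0)                  # shape (n_items,)

# Meme Scores (per model): overall accuracy, plus a behavioral trait --
# the rate at which a model fails items that most models answer easily.
accuracy = P.mean(axis=1)                  # shape (n_models,)
easy_items = easiness > 0.9                # items solved by >90% of models
easy_miss_rate = 1.0 - P[:, easy_items].mean(axis=1)

# Flag "elite" models (top-decile accuracy) that still miss a non-trivial
# share of easy items -- the phenomenon highlighted in the abstract.
elite = accuracy >= np.quantile(accuracy, 0.9)
flagged = np.where(elite & (easy_miss_rate > 0.05))[0]
print(f"{flagged.size} elite models miss >5% of easy items")
```

The point of the sketch is that both item-level properties and model-level traits fall out of the same matrix of model-item interactions, rather than being fixed labels or single aggregate scores.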

TLDR: This paper introduces Probing Memes, a paradigm for evaluating large language models (LLMs) that conceptualizes them as composed of memes. Through a Perception Matrix of model-item interactions, with derived Probe Properties and Meme Scores, Probing Memes reveals hidden capability structures and quantifies phenomena invisible under traditional evaluation paradigms.