Pre-trained Language Models Can be Fully Zero-Shot Learners

Published at the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023)

Cite: Xuandong Zhao, Siqi Ouyang, Zhiguo Yu, Ming Wu, and Lei Li. 2023. Pre-trained Language Models Can be Fully Zero-Shot Learners. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15590–15606, Toronto, Canada. Association for Computational Linguistics. https://aclanthology.org/2023.acl-long.869/