Towards Trustworthy AI
Dr. Qiongkai Xu is a lecturer at Macquarie University. He earned his PhD from the Australian National University and previously served as a research fellow at the University of Melbourne. His research focuses on Natural Language Processing, Privacy & Security, Machine Learning, and Data Mining. Recently, he has focused on auditing machine learning models in two areas: 1) identifying and addressing privacy and security issues in ML/NLP models and their applications, and 2) developing comprehensive evaluation theory and methods for ML/NLP models from various perspectives.
If you’re passionate about these research topics, I would love to hear your thoughts and ideas. Email: qiongkai.xu[at]mq.edu.au
Latest News
- [Sep, 2024] “Seeing the Forest through the Trees: Data Leakage from Partial Transformer Gradients” is accepted to EMNLP 2024. Congratulations to Weijun and team!
- [Sep, 2024] I am fortunate to have won the Research Pitching Session at Macquarie University for the second time!
- [Sep, 2024] Our paper: “Generative Models are Self-Watermarked: Declaring Model Authentication through Re-Generation” is accepted to Transactions on Machine Learning Research (TMLR)! Congrats to Aditya and all co-authors!
- [Aug, 2024] Our Tutorial on “A Copyright War: Authentication for Large Language Models” in IJCAI 2024 is available (pdf).
- [Jun, 2024] We are grateful for multiple internal and external funding awards! Opportunities will soon be available for PhD candidates and postdoctoral fellows. We look forward to collaborating with talented and enthusiastic individuals on challenges related to AI safety!
- [Jun, 2024] I am honored to serve as a Publication Chair on the Steering Committee for ACL 2025!
- [May, 2024] Two papers on “Embedding-as-a-Service Watermarking” and “LLM backdoor defense” are accepted to ACL!
- [May, 2024] We are grateful for the support from the Gemma Academic Program's Google Cloud Platform Credit Award!
- [Apr, 2024] Our paper: “SEEP: Training Dynamics Grounds Latent Representation Search for Mitigating Backdoor Poisoning Attacks” is accepted to Transactions of the Association for Computational Linguistics (TACL)! Congrats to Xuanli and all co-authors!
- [Apr, 2024] We are grateful for the support from OpenAI’s Researcher Access Program!
- [Apr, 2024] Our tutorial: “A Copyright War: Authentication for Large Language Models” is accepted to IJCAI 2024 (Tutorial). See you in Korea!
- [Mar, 2024] “Backdoor Attacks on Multilingual Machine Translation” is accepted to the NAACL main conference. Congratulations to Jun!
- [Mar, 2024] I am honored to have been appointed as an honorary fellow at the University of Melbourne! PhD and MRes applicants to both MQ and UoM are welcome.
- [Mar, 2024] “Attacks on Third-Party APIs of Large Language Models” is accepted to SeT-LLM@ICLR 2024.
- [Feb, 2024] I am honored to serve as an Action Editor / Area Chair for ACL Rolling Review!
- [Dec, 2023] Our EMNLP’23 tutorial on Security Challenges in NLP Models is now available!
- [Nov, 2023] I am fortunate to have won the Research Pitching Session at Macquarie University!
- [Nov, 2023] I am honored to serve as an Area Chair for LREC-COLING 2024!
- [Oct, 2023] I am excited to become a lecturer (a.k.a. assistant professor) at Macquarie University!
- [Oct, 2023] “Mitigating Backdoor Poisoning Attacks through the Lens of Spurious Correlation” is accepted to EMNLP 2023. Congratulations to Xuanli!
- [Oct, 2023] “Boot and Switch: Alternating Distillation for Zero-Shot Dense Retrieval” is accepted to Findings of EMNLP 2023. Congratulations to Fan!
- [Sep, 2023] I am honored to be awarded a DAAD Net-AI fellowship, visiting the University of Bonn and Leuphana University of Lüneburg.
- [Jul, 2023] “Fingerprint Attack: Client De-Anonymization in Federated Learning” is accepted to ECAI 2023 (acceptance rate: 24%).
Selected Publications
- Security Challenges in Natural Language Processing Models. Qiongkai Xu, Xuanli He. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP): Tutorial Abstracts, Dec 2023.
- Rethinking Round-Trip Translation for Machine Translation Evaluation. Terry Yue Zhuo, Qiongkai Xu, Xuanli He, Trevor Cohn. In Findings of the Association for Computational Linguistics: ACL (Findings-ACL), Jul 2023.
- Humanly Certifying Superhuman Classifiers. Qiongkai Xu, Christian Walder, Chenchen Xu. In Proceedings of the Eleventh International Conference on Learning Representations (ICLR), May 2023.
- CATER: Intellectual Property Protection on Text Generation APIs via Conditional Watermarks. Xuanli He, Qiongkai Xu, Yi Zeng, Lingjuan Lyu, Fangzhao Wu, Jiwei Li, Ruoxi Jia. In Proceedings of the 36th Conference on Neural Information Processing Systems (NeurIPS), Nov 2022.
- Student Surpasses Teacher: Imitation Attack for Black-Box NLP APIs. Qiongkai Xu, Xuanli He, Lingjuan Lyu, Lizhen Qu, Gholamreza Haffari. In Proceedings of the 29th International Conference on Computational Linguistics (COLING), Oct 2022.
- Personal Information Leakage Detection in Conversations. Qiongkai Xu, Lizhen Qu, Zeyu Gao, Gholamreza Haffari. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Nov 2020.
- Adhering, Steering, and Queering: Treatment of Gender in Natural Language Generation. Yolande Strengers, Lizhen Qu, Qiongkai Xu, Jarrod Knibbe. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI), Apr 2020.
- Deep Neural Networks for Learning Graph Representations. Shaosheng Cao, Wei Lu, Qiongkai Xu. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), Feb 2016.
- GraRep: Learning Graph Representations with Global Structural Information. Shaosheng Cao, Wei Lu, Qiongkai Xu. In Proceedings of the 24th ACM International Conference on Information and Knowledge Management (CIKM), Oct 2015.