TrustKDD2023 | International Workshop on Trustworthy Knowledge Discovery and Data Mining

The 1st International Workshop on Trustworthy Knowledge Discovery and Data Mining (TrustKDD)

In conjunction with the IEEE International Conference on Data Mining 2023 (ICDM2023)

December 1-4, 2023, Shanghai, China


Notice: All ICDM workshops will be held on December 1st, 2023!

Welcome to TrustKDD2023!

The rapid growth of data and the proliferation of data sources have created a significant demand for advanced knowledge discovery and data mining (KDD) techniques. The trustworthiness of KDD results is essential for reliable downstream decision making. However, ensuring that trustworthiness has become a major challenge, as the accuracy and reliability of KDD outcomes are often compromised by factors such as data quality, model bias, and privacy issues. This workshop provides a forum for researchers and practitioners to present and discuss innovative approaches and solutions for ensuring the trustworthiness of KDD results, as well as theoretical and conceptual insights for understanding it.
The 1st International Workshop on Trustworthy Knowledge Discovery and Data Mining (TrustKDD2023) will be held in conjunction with the IEEE International Conference on Data Mining (ICDM2023) on December 1-4, 2023.
We warmly welcome participation and contributions from all related fields!

Topics of Interest

Our workshop aims to bring together leading researchers, practitioners and entrepreneurs to exchange and share their experiences and latest research/application results on all aspects of Trustworthy Knowledge Discovery and Data Mining. It will provide a premier interdisciplinary forum to discuss the most recent trends, innovations, applications as well as the real-world challenges encountered, and the corresponding data-driven solutions in relevant domains.
The topics of interest include, but are not limited to:

  • Trustworthy data preprocessing and cleaning
  • Privacy-preserving KDD
  • Fairness and accountability in KDD
  • Explainability and interpretability of KDD results
  • Robustness and resilience of KDD models
  • Security and privacy of KDD systems
  • Ethics and social implications of KDD
  • Trustworthy KDD on spatio-temporal data, healthcare data, social networks, streaming data, text data and graph data
  • Real-world applications of trustworthy KDD, including trustworthy recommendation, trustworthy search, trustworthy outlier detection, trustworthy clustering, and trustworthy graph learning.

Submissions & Publications

Authors are invited to submit original papers that have not been published elsewhere and are not currently under consideration by another journal, conference, or workshop. Papers already available on the Web (including arXiv) do not qualify for submission, as their author information is already public.
Submissions should be formatted in the double-column IEEE conference template and should not exceed 10 pages, including the bibliography and any appendices. Submissions longer than 10 pages will be rejected without review. All submissions will be peer-reviewed by the Program Committee on the basis of technical quality, relevance to the scope of the workshop, originality, significance, and clarity.
For more information on how to prepare your submission, please refer to ICDM2023 Guidelines.
Kindly note that manuscripts must be submitted via the TrustKDD 2023 Submission link; we do not accept email submissions. All manuscripts are submitted as full papers and are reviewed based on their scientific merit. The reviewing process is confidential, and there is no separate abstract submission step.
All accepted papers will be included in the IEEE ICDM 2023 Workshops Proceedings (ICDMW) volume published by IEEE Computer Society Press, will be included in the IEEE Xplore Digital Library, and will be indexed by EI. Therefore, papers must not have been accepted for publication elsewhere or be under review for any other workshop, conference, or journal.

Important Dates

Paper Submission Deadline: September 23, 2023
Notification of Acceptance: September 24, 2023
Camera-ready Deadline and Copyright Forms: October 1, 2023
Workshop Date: December 1, 2023

Registration

All accepted papers, including workshop papers, must have at least one “FULL” registration. Registration information will be released as soon as ICDM2023 registration is announced. The registration fee for a workshop paper is the same as that for a main conference paper; please refer to the registration webpage of the main conference for fee details. There is no extra page fee for workshop papers.
For registration information, please refer to the Registration link.

Workshop Organization

Organizers

  • Enhong Chen, University of Science and Technology of China
  • Le Wu, Hefei University of Technology
  • Hongzhi Yin, The University of Queensland
  • Jundong Li, University of Virginia
  • Defu Lian, University of Science and Technology of China

For more information, refer to Organizers.

Keynote Invited Speakers

  • Xiting Wang, Renmin University of China

Title: Model Interpretation and Alignment for Trustworthy AI
Abstract: In the era of large models, interpretability and model alignment have become critically important. Large models have an increasingly significant impact on people's work and lives, yet they are becoming ever more difficult to understand and control. Interpretability and model alignment are two of the seven major research directions supported by OpenAI. How can we make deep learning models more transparent, understandable, and easier to train, debug, and optimize, while ensuring their alignment with human intent? This talk will delve into these questions and introduce our recent research on explainable artificial intelligence (XAI) and on methods for learning from human feedback with reinforcement learning (RLHF), published at ICML, NeurIPS, and KDD.
Bio: Xiting Wang is a tenure-track assistant professor at Renmin University of China. She was previously a principal researcher at Microsoft Research Asia and obtained her Bachelor's degree and Ph.D. from Tsinghua University. Her research interest is explainable and trustworthy AI, and the technologies she has developed have been applied in multiple products, including Microsoft Bing and Microsoft News. Xiting is an area chair of IJCAI and AAAI and the archive chair of IEEE VIS, and was awarded Best SPC by AAAI 2021. Two of her papers were selected as spotlight articles by IEEE TVCG (one spotlight per issue). She was invited to give keynote speeches at the SIGIR Workshop on Explainable Recommendation in 2020 and 2022, and she is an IEEE Senior Member.

  • Zhenhua Dong, Huawei Noah’s Ark Lab

Title: Two perspectives about biases in recommender system: OoD and unfairness
Abstract: The goal of a recommender system is to get the right information to the right people. Most recommender system studies focus on optimizing accuracy, which is not enough for a trustworthy recommender system. There are many topics in trustworthy recommendation; in this talk, I will focus on bias studies from two perspectives: out-of-distribution (OoD) generalization and unfairness. From the OoD perspective, there is a gap between the expected user preferences and the observed user behaviors. This gap leads to many bias problems, such as position bias, exposure bias, and trust bias. The talk will introduce several causally inspired methods to mitigate these biases, such as intervention technologies and counterfactual learning. From the unfairness perspective, I will introduce two kinds of fairness based on two stakeholders in recommender systems: users and content providers. For user fairness, we propose counterfactual data augmentation methods to generate counterfactual samples and achieve a fair data distribution. For provider fairness, we propose provider max-min fairness for ranking. Finally, the talk will briefly discuss the remaining challenges of trustworthiness.
Bio: Zhenhua Dong is a technology expert and project manager at Huawei Noah’s Ark Lab, where he leads a research team focused on recommender systems and causal inference. His team has launched significant improvements to recommender systems for several applications, such as news feeds, an app store, instant services, and advertising. With more than 40 applied patents and 60 research articles in TKDE, SIGIR, RecSys, KDD, WWW, AAAI, CIKM, etc., he is known for research on recommender systems, causal inference, and counterfactual learning. He serves as a PC or SPC member of SIGKDD, SIGIR, RecSys, WSDM, and CIKM, and as industry chair of RecSys 2024. He translated the book “The Singularity Is Near” into Chinese (《奇点临近》). He received his BEng degree from Tianjin University in 2006 and his PhD from Nankai University in 2012, and was a visiting scholar at the GroupLens lab at the University of Minnesota during 2010-2011.

Program Schedule

Time | Moderator | Content
14:30-14:35 | Organizers | Opening Remarks
14:35-15:15 | Xiting Wang (Renmin University of China) | Model Interpretation and Alignment for Trustworthy AI
15:15-15:55 | Zhenhua Dong (Huawei Noah’s Ark Lab) | Two perspectives about biases in recommender system: OoD and unfairness
15:55-16:15 | Meghdad Mirabi, René Klaus Nikiel, and Carsten Binnig | SafeML: A Privacy-Preserving Byzantine-Robust Framework for Distributed Machine Learning Training
16:15-16:35 | Akito Yamamoto and Tetsuo Shibuya | A Joint Permute-and-Flip and Its Enhancement for Large-Scale Genomic Statistical Analysis
16:35-16:55 | Yi Hu, Hanchi Ren, Chen Hu, Jingjing Deng, and Xianghua Xie | An Element-Wise Weights Aggregation Method for Federated Learning
16:55-17:15 | Yifan Li and Chengxiang Zhai | An Exploration of Large Language Models for Verification of News Headlines
17:15-17:20 | Organizers | Closing Remarks

If you have any questions or inquiries, please contact the workshop organizers at liandefu@ustc.edu.cn