Presentation at XAI 2024

Image credit: Unsplash

Abstract

With the rapid development of machine learning, explainability has gained increasing significance: it plays a crucial role in building trust among clients in the results generated by AI systems. Traditionally, researchers have relied on feature importance to explain why an AI model produces a certain outcome, but this approach has limitations. Even when documentation provides example values and the formulas used to compute a feature, its implicit meaning remains hard to grasp, so establishing a clear and understandable connection between the features and the underlying data can be daunting. In this paper, we introduce a novel method for explaining time-series classification that leverages ChatGPT to improve the interpretability of the results and to foster a deeper understanding of how features contribute within time-series data.
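
The abstract stays at this high level; purely as an illustration of the general pattern (not the authors' actual method), pairing feature importances with an LLM-generated explanation might look like the sketch below. It assumes a fitted scikit-learn RandomForestClassifier over hand-crafted statistical features and the OpenAI Python SDK (openai>=1.0) with an API key in the environment; the feature names, prompt, helper functions, and model name are all hypothetical.

```python
"""Illustrative sketch only: explain a feature-based time-series classifier
by asking an LLM to verbalize its feature values and importance scores."""

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from openai import OpenAI  # assumes the `openai` package (>=1.0) and OPENAI_API_KEY

# Hypothetical statistical features extracted from each time-series window.
FEATURE_NAMES = ["mean", "std", "max", "min", "dominant_freq_bin", "autocorr_lag1"]

def extract_features(window: np.ndarray) -> np.ndarray:
    """Map one raw time-series window to a fixed-length feature vector."""
    fft_mag = np.abs(np.fft.rfft(window))
    return np.array([
        window.mean(),
        window.std(),
        window.max(),
        window.min(),
        float(np.argmax(fft_mag[1:]) + 1),                   # dominant frequency bin
        float(np.corrcoef(window[:-1], window[1:])[0, 1]),   # lag-1 autocorrelation
    ])

def explain_prediction(model: RandomForestClassifier, window: np.ndarray) -> str:
    """Turn feature values and importances into a plain-language explanation.

    Assumes `model` was already fitted on vectors produced by extract_features().
    """
    features = extract_features(window)
    prediction = model.predict(features.reshape(1, -1))[0]
    ranked = sorted(
        zip(FEATURE_NAMES, features, model.feature_importances_),
        key=lambda t: t[2],
        reverse=True,
    )
    lines = [f"- {name}: value={val:.3f}, importance={imp:.3f}" for name, val, imp in ranked]
    prompt = (
        f"A time-series classifier predicted class '{prediction}'.\n"
        "These statistical features (with values and importance scores) drove the decision:\n"
        + "\n".join(lines)
        + "\nExplain in plain language what these features say about the signal "
        "and why they support this prediction."
    )
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

The point of this pattern is that the language model only verbalizes importance scores computed by a conventional attribution method; it does not produce the attributions itself.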

Date
Jul 17, 2024 — Jul 19, 2024
Location
Valletta, Malta
Click on the Slides button above to view the built-in slides feature.

Slides can be added in a few ways:

  • Create slides with Hugo Blox Builder’s Slides feature and link them with the slides parameter in the front matter of the talk file
  • Upload an existing slide deck to static/ and link it with the url_slides parameter in the front matter of the talk file (see the front-matter sketch after this list)
  • Embed your slides (e.g. Google Slides) or presentation video on this page using shortcodes.
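
For example, the first two options might look like the following front-matter sketch; the slides identifier and the uploaded filename are placeholders, not files that necessarily exist on this site:

```yaml
---
title: "Presentation at XAI 2024"
# Option 1: reference a deck built with the theme's Slides feature
# (i.e. a page under content/slides/xai-2024/)
slides: xai-2024
# Option 2: link a deck uploaded to the static/ folder
# (the path below is a placeholder)
url_slides: uploads/xai-2024-slides.pdf
---
```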

Further event details, including page elements such as image galleries, can be added to the body of this page.

黄逸然
Academic Associate

My research interests include Data Mining, XAI and Human Activity Recognition.