Added paper #2

Open · wants to merge 1 commit into master
Conversation

@hschuff commented Nov 4, 2022

@M-Nauta (Collaborator) commented Nov 14, 2022

Thanks for trying to add a paper to our collection! However, the database entry is not in the right format. Please use our template at https://utwente-dmb.github.io/xai-papers/#/add-paper to generate one. After you fill in (at least) the required fields, a generated JSON entry will appear on the right of the page; it can simply be copied and pasted into db.json (see e.g. this PR for an example). Filling in the information for your paper gave the following:

,{
  "Title": "F1 is Not Enough! Models and Evaluation Towards User-Centered Explainable Question Answering.",
  "url": "https://doi.org/10.18653/v1/2020.emnlp-main.575",
  "Year": "2020",
  "Venue": {
    "isOld": true,
    "value": "ACL"
  },
  "Authors": [
    "Hendrik Schuff",
    "Heike Adel",
    "Ngoc Thang Vu"
  ],
  "Type of Data": [
    "Text"
  ],
  "Type of Problem": [
    "Outcome Explanation"
  ],
  "Type of Model to be Explained": [
    "(Deep) Neural Network",
    "Bayesian or Hierarchical Network"
  ],
  "Type of Task": [
    "Question Answering"
  ],
  "Type of Explanation": [
    "Text"
  ],
  "Method used to explain": [
    "Supervised explanation training",
    "Interpretability built into the predictive model"
  ],
  "Abstract": "Explainable question answering systems predict an answer together with an explanation showing why the answer has been selected. The goal is to enable users to assess the correctness of the system and understand its reasoning process. However, we show that current models and evaluation settings have shortcomings regarding the coupling of answer and explanation which might cause serious issues in user experience. As a remedy, we propose a hierarchical model and a new regularization term to strengthen the answer-explanation coupling as well as two evaluation scores to quantify the coupling. We conduct experiments on the HOTPOTQA benchmark data set and perform a user study. The user study shows that our models increase the ability of the users to judge the correctness of the system and that scores like F1 are not enough to estimate the usefulness of a model in a practical setting with human users. Our scores are better aligned with user experience, making them promising candidates for model selection.",
  "Date": "2022-11-14T10:12:40.910Z"
}

So please update your PR :) After acceptance, the paper will show up on our website.
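
For reference, a minimal sanity check of the edited file, as a sketch: it assumes db.json holds a top-level JSON array of entries like the one above (the file name comes from this thread; the array layout and the script itself are assumptions, not part of the repository).

import json

# Hypothetical check: after pasting the generated entry into db.json,
# make sure the file still parses and the newest entry carries some of the
# required fields. The field names are taken from the generated entry above;
# treating the top level as a JSON array is an assumption about the repo.
REQUIRED_FIELDS = {"Title", "url", "Year", "Venue", "Authors"}

with open("db.json", encoding="utf-8") as f:
    entries = json.load(f)  # raises ValueError if the paste broke the syntax

missing = REQUIRED_FIELDS - entries[-1].keys()
if missing:
    raise SystemExit(f"Last entry is missing required fields: {sorted(missing)}")
print(f"db.json parses; last entry: {entries[-1]['Title']!r}")

Note that the generated entry begins with a comma (",{") so it can be appended directly after the previous entry in the file; strip or keep it as the surrounding JSON requires.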
