I was wondering whether the Kappa statistic reported by WEKA is an inter-annotator agreement metric. Is it similar to Cohen's kappa or Fleiss' kappa?
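For context, my understanding of Cohen's kappa is that it corrects the observed agreement between two raters for the agreement expected by chance, using the formula κ = (p_o − p_e)/(1 − p_e). A minimal sketch of that computation from a confusion matrix (the function name and matrix values here are purely illustrative, not taken from WEKA):

```python
def cohens_kappa(confusion):
    """confusion[i][j] = count of items with actual class i, predicted class j."""
    total = sum(sum(row) for row in confusion)
    # Observed agreement p_o: fraction of items on the diagonal.
    po = sum(confusion[i][i] for i in range(len(confusion))) / total
    # Chance agreement p_e: sum over classes of the product of the
    # actual-class marginal and the predicted-class marginal.
    pe = sum(
        (sum(confusion[i]) / total) *                # actual-class marginal
        (sum(row[i] for row in confusion) / total)   # predicted-class marginal
        for i in range(len(confusion))
    )
    return (po - pe) / (1 - pe)

# Illustrative 2x2 confusion matrix: 85 of 100 items on the diagonal.
matrix = [[40, 10],
          [5, 45]]
print(cohens_kappa(matrix))  # → 0.7
```

My question is whether WEKA's Kappa is computed this way, treating the classifier's predictions and the gold labels as the two "raters".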
This slide is from the Chapter 5 slides accompanying Witten et al.'s textbook: https://www.cs.waikato.ac.nz/ml/weka/book.html (Because I have modified many of the slides, the slide number will differ from the original.)