My job is to keep up with research on interpretable machine learning. But I fail. It’s not me, it’s arXiv. The daily flood of papers is too much. If interpretability researchers can’t keep up, how can a data scientist or machine learning engineer? I’ve written dozens of in-depth chapters on ML interpretation techniques and read hundreds of papers. Over time, I have found some useful categories to map the space of interpretation approaches.
The flood of machine learning research really does make it hard to separate the signal from the noise. Many papers I read get more attention as preprints than they ever would in a journal. On the other hand, plenty of preprints end up being a decent read but do not advance the field or make a solid scholarly contribution.