i_umit's comments

I think this is a fair concern. I agree that ML approaches can cause explainability and traceability issues, but not all of them do. We also support ML approaches like decision trees, which can be considered more "debuggable". However, you might not be able to solve every single problem with decision trees just because they are "debuggable". Besides, many researchers are working on ML explainability. Our vision is simply to build self-adaptive tuning systems that may not require constant development effort to optimize OS and storage components for ever-changing workloads and new devices. You can find more discussion of this topic in the paper as well.
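For a concrete sense of what "debuggable" means here, a minimal sketch (using scikit-learn; the workload features and the tuned knob are hypothetical illustrations, not taken from the paper) of inspecting a decision-tree tuner's learned policy as plain if/else rules:

    # Minimal sketch: why a decision-tree tuner is "debuggable".
    # Assumes scikit-learn; the workload features and the tuned
    # knob (queue depth) are hypothetical, for illustration only.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor, export_text

    features = ["read_ratio", "avg_io_size_kb", "queue_wait_us"]

    # Hypothetical observed workload samples and the queue depth
    # that performed best for each (measured, in practice).
    X = np.array([
        [0.9, 4,   50],
        [0.1, 512, 900],
        [0.5, 64,  300],
        [0.8, 8,   120],
    ])
    y = np.array([32, 256, 128, 64])  # best-performing queue depth

    tree = DecisionTreeRegressor(max_depth=3).fit(X, y)

    # Unlike an opaque model, the learned policy can be printed
    # and audited as a set of threshold rules:
    print(export_text(tree, feature_names=features))

The printed tree shows exactly which feature thresholds lead to which tuning decision, which is why decision trees are often preferred over opaque models when traceability matters.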


You can find the related paper on arXiv: https://arxiv.org/abs/2111.11554

