purplecat wrote, 2025-09-03 07:43 pm
Eliciting Explainability Requirements for Safety-Critical Systems: A Nuclear Case Study
One of my post-docs (I was previously on the supervisory team for her PhD) has been spending part of her time generating publications from her thesis. This is one such. She's addressing the question of what people actually want when they ask that an autonomous system provide explanations. In particular, though she doesn't really get into this in the paper, most explainability research has focused on neural networks that classify things into groups, not on robotic systems that take decisions about what to do next.
Eliciting Explainability Requirements for Safety-Critical Systems: A Nuclear Case Study talks through her approach and tries to categorise her results into groups. There is also some formalisation of the requirements into logic, though via structured English to make it more comprehensible. Lastly she reports on some lessons learned.