I’m sure you have seen this interesting rift between programmers: it goes beyond the friendly competition over who favors which IDE or which programming language has the nicer syntax, and extends to the very core of how we navigate the systems in front of us.
In essence, there are two kinds of people when it comes to computer navigation: those who rely on the mouse and can’t understand why anyone would rather type text, and, on the other side, the few of us who have seen the light and prefer using the keyboard as much as possible.
Deep neural networks (DNNs) are an indispensable machine learning tool for achieving human-level performance on many learning tasks. Yet, due to their black-box nature, it is inherently difficult to understand which aspects of the input data drive the decisions of the network. There are various real-world scenarios in which humans need to make actionable decisions based on the output of DNNs. Such decision support systems can be found in critical domains such as legislation and law enforcement. It is important that the humans making high-level decisions can be sure that the DNN's decisions are driven by combinations of data features that are appropriate in the context in which the decision support system is deployed, and that the decisions made are legally or ethically defensible. Due to the incredible pace at which DNN technology is being developed, the development of new methods and studies on explaining the decision-making process of DNNs has blossomed into an active research field. A practitioner beginning to study explainable deep learning may be intimidated by the plethora of orthogonal directions the field is taking. This complexity is further exacerbated by the general confusion over what it means to explain the actions of a deep learning system and how to evaluate a system's "ability to explain". To alleviate this problem, this article offers a "field guide" to deep learning explainability for those uninitiated in the field. The field guide i) discusses the traits of a deep learning system that researchers enhance in explainability research, ii) places explainability in the context of other related deep learning research areas, and iii) introduces three simple dimensions defining the space of foundational methods that contribute to explainable deep learning. The guide is designed as an easy-to-digest starting point for those just entering the field.
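To make the core question concrete for newcomers, the sketch below shows one of the simplest ways to ask "which parts of the input drive this decision?": input-gradient saliency. This is an illustrative example only, assuming PyTorch and a hypothetical toy classifier; it is not a method proposed by the field guide itself.

```python
# Minimal sketch (not from the article): input-gradient saliency, one simple way
# to probe which input features a prediction is most sensitive to.
# Assumes PyTorch; the toy model, feature count, and tensors are hypothetical.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))  # toy classifier
model.eval()

x = torch.randn(1, 4, requires_grad=True)  # one example with 4 input features
logits = model(x)
predicted_class = logits.argmax(dim=1).item()

# Gradient of the predicted class score with respect to the input:
# larger magnitudes mark features the decision depends on more strongly.
logits[0, predicted_class].backward()
saliency = x.grad.abs().squeeze(0)
print(saliency)  # per-feature attribution scores
```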