Imagine you oversee granting citizens access to top-secret information pertaining to national security. You must strike a balance: issue clearances quickly enough to support the mission, yet never grant access to nefarious actors. To support your efforts, you deploy a sophisticated deep-learning solution that ingests thousands of disparate data sources in real time and recommends whether to grant an individual access. After nearly a year of seemingly proper operation, a citizen challenges your decision to deny them clearance. After some research, you discover that you understand very little about how and why that individual was denied. Worse, after running some system reports, you discover that the system is denying clearance to women twenty percent more often than to similarly situated men.
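
The kind of disparity described above is typically surfaced by a simple rate comparison across demographic groups. The following is a minimal sketch of such a check, using made-up data and hypothetical column names (gender, granted); a real audit would also control for qualifications and other covariates rather than compare raw rates.

```python
import pandas as pd

# Hypothetical adjudication log: one row per clearance decision.
# Column names and values are illustrative assumptions, not real data.
decisions = pd.DataFrame({
    "gender":  ["F"] * 5 + ["M"] * 6,
    "granted": [0, 0, 0, 1, 1,  0, 0, 0, 1, 1, 1],
})

# Denial rate per group, then the relative gap between groups.
denial_rates = 1 - decisions.groupby("gender")["granted"].mean()
gap = denial_rates["F"] / denial_rates["M"] - 1

print(denial_rates)
print(f"Women are denied {gap:.0%} more often than men")
```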

The above example highlights the new frontier of challenges we face with artificial intelligence and machine learning. We have proven, beyond doubt, that these technologies are beneficial to government and business decisioning, but we are now facing the reality that our decisions, whether made by humans, machines, or both, must be defensible. That defensibility requires us to strive for technologies that are trackable, auditable, free of bias, and compliant with privacy and security best practices. This maturing area of focus and study is often called explainable AI, or XAI, and it is central to everything we do at Torch.AI.

Torch.AI’s mission is to create trust at scale. Achieving trust at scale requires advanced machine learning (ML) and AI to keep up with the speed of modern business. The word “trust,” as used in our mission statement, has two equally essential meanings. First, we want to aid our customers in making the best and most trusted business decisions, given the real-time context, data, and information available. Second, we want our customers to trust that they can explain the algorithms, processes, data, and models used in making decisions.

As more and more decisions are made by ML/AI, scrutiny of those determinations has become necessary and commonplace. When decisions are made wholly or in part by algorithms, a growing body of litigation has successfully challenged the results when they cannot be audited, explained, or articulated. For example, if a lender denies a loan based on the recommendation of a custom deep-learning solution and that decision is challenged, what are the consequences of being unable to explain the justification for that decision? Further, what if bias were alleged? Could the lender produce enough evidence to defend itself and explain the decision?

If you start pulling on the thread of unexplainable decisioning, you quickly uncover a host of legal, ethical, and practical pitfalls. Was your model trained on biased data? Is your model still being used to solve the same problem it was trained for? Are you complying with privacy regulations?

As a result of these challenges, there is an emerging consensus that functional ML/AI is no longer good enough; it must be both functional and explainable. Explainable ML/AI has several benefits: your analysts gain access to smarter solutions, the feedback loop speeds up, and you can better audit why and how decisions are made. You also reduce compliance-related risk by having better insight into your decision-making processes.
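
To make the auditing benefit concrete, here is a minimal sketch of one common post-hoc explanation technique, permutation feature importance, using scikit-learn on synthetic data. The model and data are illustrative assumptions, not a description of Torch.AI's implementation.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a decisioning dataset (purely illustrative).
X, y = make_classification(n_samples=2000, n_features=8,
                           n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much
# the model's score drops. Large drops flag the features decisions depend on.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```

A report like this gives auditors a starting point for asking why a particular feature drives decisions, which is exactly the question regulators and litigants tend to ask.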

At Torch.AI, we embrace the challenge of creating more explainable ML/AI and pride ourselves on the fact that our mission statement and corporate ethos require us to be at the forefront of developing the frameworks and technologies necessary to do so. Torch.AI is committed to building and deploying solutions that are state-of-the-art in explaining how and why ML/AI led an analyst to a specific conclusion, and to giving customers the information they need to verify that their solutions are free of bias and compliant with laws, regulations, and ethical best practices.

Torch.AI is encouraged that industry leaders across academic, commercial, and government sectors are working to create more explainable and auditable ML and AI.

DARPA

One of the clear frontrunners in the race to develop an explainable approach to machine learning and artificial intelligence is the Defense Advanced Research Projects Agency (DARPA). DARPA is executing a multiyear Explainable Artificial Intelligence program, with 12 participating teams focused on creating interpretation tools and techniques for deep learning and neural networks. The teams are composed of representatives from private corporations and from public and private universities, both national and international. DARPA's XAI program is by far the most organized and advanced effort in the world of explainable machine learning and artificial intelligence.
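
As one illustration of what such interpretation techniques can look like in practice (a generic, hypothetical sketch in PyTorch, not a description of DARPA's or Torch.AI's tooling), a gradient-based saliency map scores how strongly each input feature influences a neural network's output.

```python
import torch
import torch.nn as nn

# Tiny stand-in classifier; in practice this would be a trained model.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

x = torch.randn(1, 16, requires_grad=True)  # one illustrative input record
score = model(x)[0, 1]                      # logit of the class being explained
score.backward()

# The gradient magnitude per feature is a crude attribution signal:
# larger values mean the output is more sensitive to that feature.
saliency = x.grad.abs().squeeze()
print(saliency)
```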

In addition to tracking the innovations and best practices coming out of the academic and commercial sectors, Torch.AI is participating in a cross-sector think tank that is developing an ML/AI framework to support XAI. Further, Torch.AI's entire design and development process applies what we call "Auditability by Design." This process stands alongside "privacy by design" and "security by design" and is mission-critical to everything we do at Torch.AI. At a high level, Auditability by Design is an approach and framework that keeps us at the forefront of XAI and ensures our customers deploy the most explainable solutions that still accomplish their stated business objectives. Please look out for future publications on XAI and Torch.AI's Auditability by Design.