Transparent vs. Black-Box AI, New Software Development
April 07, 2022 | 5 minute read
While artificial intelligence has become an umbrella term covering a number of different technologies, most applications of AI in the world today fall into one of two categories: opaque or transparent AI. Put simply, opaque AI systems do not easily reveal why they have arrived at a particular solution or decision, while transparent AI systems offer a straightforward explanation for every solution or decision they make. As with any technology, opaque and transparent AI systems have different uses and can be implemented in a number of different configurations.
What is Opaque Artificial Intelligence?
Opaque artificial intelligence, also known as black-box AI, refers to AI systems that solve problems and make decisions according to inputs and operations that are not made visible to the general public. Many black-box AI systems are built on neural networks and machine learning algorithms: software developers train these systems on massive amounts of training data, gradually enabling them to make decisions by recognizing patterns, features, and similarities within that data. Common examples of black-box AI include self-driving cars, as well as popular voice assistants such as Alexa and Siri.
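To make this concrete, here is a minimal, hypothetical sketch in Python (using scikit-learn, with synthetic data standing in for a real training set; the task itself is a placeholder). The point is the shape of the result: after training, the model's "knowledge" is a stack of numeric weight matrices, not human-readable rules.

```python
# A minimal sketch of a black-box model. The data is synthetic and the
# task is a placeholder, not any real product's pipeline.
import numpy as np
from sklearn.neural_network import MLPClassifier

X_train = np.random.rand(1000, 20)                 # 1,000 examples, 20 features
y_train = (X_train.sum(axis=1) > 10).astype(int)   # placeholder labels

model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
model.fit(X_train, y_train)

print(model.predict(np.random.rand(1, 20)))  # e.g. [1] -- but why?
print(model.coefs_[0].shape)                 # (20, 64): raw learned weights
```

Inspecting model.coefs_ yields nothing but matrices of floating-point numbers; the decision logic is distributed across all of them at once.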
Through advancements made in machine learning algorithms and neural networks over the past decade, these forms of artificial intelligence can handle very specific tasks with an incredible degree of accuracy and efficiency. For example, the very notion of self-driving cars was limited to the realm of science fiction only twenty years ago. However, while these algorithms are extremely powerful and innovative, it can be very difficult to understand why an algorithm has made a particular decision or arrived at a particular solution, even for the software developers who created it. As such, while the opacity of a voice assistant such as Siri might be negligible in the overall scheme of things, black-box AI that is deployed more broadly can raise moral and ethical concerns.
To illustrate this point further, consider a machine learning algorithm used to grant or deny a mortgage loan request. Because the decision to approve or deny a loan can be influenced by any number of factors, including an individual’s salary, credit history, payment history, and length of employment, consumers must be able to trust that the processes used to determine their eligibility for a loan are fair and unbiased. However, a mortgage lender using a black-box AI system to make that determination would struggle to explain why the system approved or denied a particular loan, as the billions of parameters and the enormous volume of training data behind the decision make transparency impractical at best.
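As a hedged illustration (the features, thresholds, and data below are all invented for the example), here is what that situation looks like in code: the lender receives a verdict, but no attribute of the model contains a plain-language reason.

```python
# Hypothetical mortgage model: synthetic history stands in for real loan data.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Invented, normalized features: [salary, credit_history, payment_history, tenure]
X_history = np.random.rand(5000, 4)
y_history = np.random.randint(0, 2, 5000)      # past approve/deny outcomes

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=300)
model.fit(X_history, y_history)

applicant = np.array([[0.55, 0.70, 0.40, 0.25]])
print("approved" if model.predict(applicant)[0] == 1 else "denied")
# Nothing the lender can print answers "why?"; the nearest thing is
# model.coefs_, which is just more weight matrices.
```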
What is Transparent AI?
On the other end of the spectrum, transparent or rule-based AI refers to any model or system based purely on predetermined rules developed and implemented by a software engineer. In contrast to neural networks and machine learning algorithms, these rule-based systems make decisions and arrive at solutions according to a fixed set of rules and facts. Software developers create such systems from “if-then” coding statements, where 1 + 1 will always equal 2. Prior to the development of neural networks and machine learning algorithms, as well as the computational power needed to support them, the vast majority of AI was built on rule-based models.
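A minimal sketch of the earlier mortgage decision, expressed as explicit rules (the thresholds are invented for illustration, not real lending criteria), shows the difference: every outcome traces back to a specific line of code.

```python
# Rule-based loan decision: thresholds are illustrative assumptions.
# Every verdict comes with the exact rule that produced it.
def approve_loan(salary, credit_score, years_employed):
    if credit_score < 600:
        return "denied", "credit score below 600"
    if salary < 30_000:
        return "denied", "salary below $30,000"
    if years_employed < 2:
        return "denied", "fewer than 2 years of employment"
    return "approved", "all rules satisfied"

decision, reason = approve_loan(salary=45_000, credit_score=640, years_employed=3)
print(decision, "-", reason)   # approved - all rules satisfied
```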
Some common examples of rule-based artificial intelligence include fraud detection, engineering fault analysis, and medical diagnosis. Because these examples touch aspects of society that are pivotal to almost every citizen, from healthcare to the very buildings we live in, software developers must be able to easily determine why and how these systems arrived at a particular decision or solution. For example, if an AI system were used to diagnose an individual with cancer, that patient would rightly expect a detailed and nuanced explanation from their healthcare provider concerning how the cancer developed, how fast it is spreading, and what steps should be taken next, among other relevant details.
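For instance, a rule-based fraud check can report exactly which rule flagged a transaction. The sketch below is hypothetical (the rules, thresholds, and field names are invented), but it captures why these systems suit high-stakes domains.

```python
# Hypothetical rule-based fraud detection: every flag carries its reason.
RULES = [
    ("amount over $10,000",       lambda t: t["amount"] > 10_000),
    ("card/ATM country mismatch", lambda t: t["card_country"] != t["atm_country"]),
    ("more than 5 tx per minute", lambda t: t["tx_per_minute"] > 5),
]

def flag_transaction(tx):
    fired = [name for name, rule in RULES if rule(tx)]
    return ("fraud" if fired else "ok"), fired

status, reasons = flag_transaction(
    {"amount": 12_500, "card_country": "US", "atm_country": "US", "tx_per_minute": 1}
)
print(status, reasons)   # fraud ['amount over $10,000']
```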
However, while rule-based AI systems provide a greater level of understanding and transparency to the general public than black-box AI, they are far more limited in their capabilities and scope. For instance, many forms of rule-based AI essentially function as more advanced forms of Robotic Process Automation (RPA), software programs used to automate mundane and tedious tasks. As such, while these systems are very reliable and consistent, they do not point the way toward so-called artificial general intelligence, AI systems that can make human-like decisions on their own without the need for human guidance or interference.
As artificial intelligence and machine learning algorithms continue to evolve and reach new heights, these systems will undoubtedly be implemented in more aspects of human life and society. It is therefore imperative that these systems and models be built with equality and fairness in mind, as the decisions and solutions they arrive at could have lasting implications and consequences for individuals around the world. Because everyone holds some sort of bias, whether conscious or unconscious, software developers and engineers must consider the potential flaws that could be embedded within the algorithms they create, irrespective of any technological feats those algorithms may achieve.