Post by account_disabled on Mar 11, 2024 2:04:49 GMT -6
We often hear that artificial intelligence algorithms are "black boxes": they provide answers but no explanation of how they reached them. Knowledge graphs could supply that explanation and make algorithms more reliable. Let's see how!

Is AI a "black box"? Knowledge graphs can make it testable
By Alessio Pomaro, May 26, 2022 · 5 min read

Why do we often hear that artificial intelligence is a "black box" that gives answers but no explanation of how it obtained those answers? Let's try to explain it in a simple way. Consider a simple artificial neural network. From a mathematical point of view, a neural network represents a function that transforms inputs (i1 and i2) into an output (o).
[Figure: a simple artificial neural network with two inputs and one output]

The connections between input and output are called "weights" (p1 and p2). What role do the weights play within the function? They represent, as the word itself says, the weight (importance) of each input. So we can imagine that the inputs entering the function are multiplied by the weights. The weights are precisely the famous parameters of the neural network, i.e. the values that are varied during the training phase so that the output of the function matches the expected prediction.

How does an artificial neural network work? A simple explanation

As long as we reason about such a simple neural network, everything is very easy to understand, and the weights can even be determined "with pen and paper".
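The weighted-sum idea above can be sketched in a few lines of Python. This is a hypothetical illustration, not the author's code: it assumes a single neuron with no bias and an identity activation, mirroring the two-input diagram (i1, i2 are the inputs, p1, p2 the weights).

```python
def neuron(i1, i2, p1, p2):
    # Each input is multiplied by its weight; the sum is the output o.
    return i1 * p1 + i2 * p2

# With weights determined "with pen and paper":
o = neuron(2.0, 3.0, 0.5, 0.25)  # 2*0.5 + 3*0.25 = 1.75
print(o)  # 1.75
```

With only two weights, you can read the function's behavior directly: doubling p1 doubles the contribution of i1, and nothing is hidden.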
But if the network becomes very large, with many intermediate layers of outputs (which in turn become inputs for other nodes), and with billions of parameters that change with each training example to optimize the output... it becomes truly unthinkable to understand how the algorithm generates its prediction, or which training data influenced the output. This is why AI is so often associated with a "black box". Furthermore, today we are moving towards solutions that autonomously create their own training examples, as if a "chess algorithm" learned by playing millions of games against itself instead of learning from known strategies, automatically updating the parameters of its neural network.
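To make "parameters that vary with each training example" concrete, here is a hedged sketch of one gradient-descent update on the single-neuron model from above. The function name, learning rate, and training values are illustrative assumptions, not part of the article.

```python
def train_step(i1, i2, target, p1, p2, lr=0.1):
    # Forward pass: weighted sum, as in the diagram.
    o = i1 * p1 + i2 * p2
    # Squared-error loss (o - target)^2; its gradient with respect to
    # each weight is 2 * (o - target) * input.
    err = o - target
    p1 -= lr * 2 * err * i1
    p2 -= lr * 2 * err * i2
    return p1, p2

# Illustrative training loop: start from zero weights and fit one example.
p1, p2 = 0.0, 0.0
for _ in range(50):
    p1, p2 = train_step(1.0, 2.0, 5.0, p1, p2)
print(round(1.0 * p1 + 2.0 * p2, 3))  # 5.0
```

With two weights, each update is easy to inspect. The "black box" problem appears when billions of such updates, spread over billions of weights, interact across many layers: no single weight's trajectory explains the final prediction.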