Suddenly, everyone wants to know about AI. Is it hardware? Is it software? Will it hurt us? I recently watched speeches about AI from two politicians. One said that AI was all about the hardware. The other talked about the software and the data feeding it. The media immediately latched onto hardware as the right answer. After all, it is the hardware companies that are in the news, with skyrocketing stock valuations. Although hardware is required, it is not the essence of AI or the factor that enables it. The correct answer is software and data. Hardware can make any computer algorithm run faster, AI included.
AI is a type of machine learning algorithm, and most implementations are deep neural nets. I asked ChatGPT what a neural net is, and it said, “A neural network is a computational model inspired by the human brain, designed to recognize patterns and solve complex problems. It consists of layers of interconnected nodes (neurons) that process input data and learn from it over time.” Neural nets have been around for a long time; I used to play around with them back in the 1990s, when I had the pleasure of working on decision logic for U.S. missile defense systems. We specifically decided not to use neural nets because you can’t see what is going on under the hood. For missile defense, we needed to know not only what decisions the system made, but why it made them.
Deep neural nets have many layers of interconnected nodes (neurons), including an input layer and an output layer. Each node receives signals from nodes in the previous layer and sends signals to nodes in the next layer. Each connection has a numerical weight that determines how strongly it influences the next layer. The weights are set during training by tracing errors backwards through the network (backpropagation). Each node applies a nonlinear function to its weighted input to decide whether it “fires” and sends a signal on to the next layer.
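To make those mechanics concrete, here is a minimal sketch in Python of one forward pass and one backpropagation weight update for a tiny network. The layer sizes, learning rate, and sigmoid activation are illustrative choices only, not a description of any particular production system.

```python
import numpy as np

def sigmoid(x):
    # Nonlinear activation: squashes the weighted input into (0, 1),
    # controlling how strongly each node "fires".
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative sizes: 4 input nodes, 3 hidden nodes, 1 output node.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # weights from input layer to hidden layer
W2 = rng.normal(size=(1, 3))   # weights from hidden layer to output layer

x = rng.normal(size=4)         # one input example
target = 1.0                   # desired output for that example

# Forward pass: each layer weights its inputs and applies the nonlinearity.
hidden = sigmoid(W1 @ x)
output = sigmoid(W2 @ hidden)

# Backward pass: trace the error backwards and nudge the weights.
learning_rate = 0.1
error = output - target
grad_out = error * output * (1 - output)              # sigmoid derivative at output
grad_hidden = (W2.T @ grad_out) * hidden * (1 - hidden)

W2 -= learning_rate * np.outer(grad_out, hidden)
W1 -= learning_rate * np.outer(grad_hidden, x)
```

Repeating that forward/backward loop over many examples is what “training” means: the weights gradually settle into values that reduce the error.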
The real enabling factor for the recent hubbub about AI is the availability of very large, interconnected data sets, also called big data, made possible by the growth of the internet. These large data sets are required for training. For example, many pictures of cats could be used to train a neural net to recognize a cat. Each pixel in a picture could be fed to a node in the input layer. The next layer might identify edges and textures. The layer after that could identify shapes and morphology. The next layer might start to identify ears and eyes, and the output layer would put it all together and decide whether the picture contains a cat. Deep neural nets generally have more than three internal layers. The training could be further refined using pictures showing only portions of a cat, rather than whole cats.
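As an illustration of that layered picture, here is a rough sketch of a small image classifier in PyTorch. The cats-versus-not-cats setup, the layer sizes, and the stand-in data are assumptions for illustration; real systems are far larger, and the “edges, shapes, ears” interpretation of the internal layers is a simplification.

```python
import torch
from torch import nn

# A toy "is this a cat?" classifier; layer sizes are illustrative only.
model = nn.Sequential(
    # Early layers tend to respond to edges and textures.
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    # Middle layers combine those into shapes and larger structures.
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    # Later layers respond to object parts (ears, eyes, ...).
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    # The output layer puts it together: one score for "cat".
    nn.Linear(64, 1),
)

images = torch.randn(8, 3, 64, 64)            # a batch of 8 RGB images (stand-in data)
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = cat, 0 = not cat (stand-in labels)

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# One training step: forward pass, loss, backpropagation, weight update.
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```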
AI draws from existing data, which so far has been created by humans. It is generally not creative; it is a tool that echoes human creativity. So, how can it go off the rails? One way it could do harm is through an inappropriate hardware connection. I asked ChatGPT how many parameters need to be adjusted during its training. It replied that GPT-3 has 175 billion parameters, and GPT-4 is rumored to have over a trillion in its largest versions. With that many parameters, there is no way to determine all the failure modes of a system that includes such a large-scale AI implementation. Therefore, these systems should not be connected to things that can cause us harm, such as the power grid, water treatment plants, or nuclear weapon delivery systems.
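To see how parameter counts balloon, here is a small back-of-the-envelope calculation in Python for a fully connected network. The layer sizes are made up for illustration and are nothing like GPT’s actual architecture; the point is only that even a toy network accumulates parameters quickly.

```python
# Count the trainable parameters of a fully connected network:
# each layer contributes (inputs x outputs) weights plus one bias per output.
def count_parameters(layer_sizes):
    return sum(
        n_in * n_out + n_out
        for n_in, n_out in zip(layer_sizes, layer_sizes[1:])
    )

# Illustrative sizes only: 784 inputs, two hidden layers, 10 outputs.
tiny_net = [784, 128, 64, 10]
print(count_parameters(tiny_net))  # 109386 parameters for even this toy network
```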
Another potentially bad scenario involves iteration. Suppose one AI implementation creates another AI implementation, which creates another, and so on. We are already using AI to design computer chips and write code. Is there some point where an implementation will become self-aware, develop feelings, and turn downright hostile to humans? I don’t think so. First, entropy will increase from one iteration to the next as the systems get more complex. Entropy has two meanings: one is information content, which will surely increase; the other is randomness or disorder. I think the implementations will become more unwieldy and increasingly difficult to train, even by an AI.
What if a powerful AI fell into the wrong hands? I find it amazing that all civilizations around the world have a very similar set of moral beliefs. This only makes sense if humans were created by a moral God. There may be some terrorist attacks in the future that utilize AI, but I would expect the U.S. and other “western” countries to prevail, and I have faith that God will protect us in this life and the next.
I believe there is an omnipotent, omniscient, and omnipresent God. He has provided us with His word in the Bible, and specific prophecy in the book of Revelation. I don’t see anything like AI mentioned in the Bible. God looks after those of us who believe Jesus died for our sins, and He offers us eternal life. With that eternal outlook, we can experience more joy in life and worry less about potential disasters such as an AI apocalypse.