Relax. This is going to be a neutral post. Hear me out.
The acronym "AI" commonly refers to a sophisticated type of information processing method. The tool resolves a given instruction by generating insights and approximations in lightspeed, based on available information. The intent of its activity is ultimately defined by a person, just like any other technology.
The danger originates from our use of the term "artificial intelligence" to refer to the analytical tool. The term suggests that the technology is some form of intelligence, which is misleading.
Granted, the definition of intelligence does relate to the processing of information. We have even tried to measure our own intelligence through information-processing exercises, such as standardized tests.
It's important to recognize that natural intelligence is found only in living things. As a system, intelligent life is able to sustain and reproduce itself, and achieves this by processing information about its environment. Single-celled organisms, such as algae, operate this way, and are therefore intelligent.
If we were to invent this type of intelligence, the ability to process information would not be enough. The invention would also need to physically sustain and replicate itself entirely on its own.
If we built an intelligent machine today, its physical body would consist of a few fully automated factories and its own energy-generating infrastructure. It would need a swarm of drones, powered by those same generators, to give it mobility for essential functions such as maintenance and repairs. It would also need a server the size of a house to process all of its information independently. Keep in mind that any software with AI features today actually requires access to servers maintained by the software provider.
For this hypothetical intelligent machine, energy sustainability would not require our planet's biosphere. And it would likely identify humans as a resource rather than a collaborator in its existential purpose.
The difference between a single algae cell and this intelligent machine is staggering. This machine would be the size of a town, and its impact on our well-being would be highly destructive.
If the technology can't even compare to one of the simplest known forms of life, then why are we still referring to it as a form of intelligence?
The term "Artificial Intelligence" was popularized from modern science fiction stories - books, movies, and other forms of published media. At some point, tech startups began offering software tools that transform text-based queries into interpreted solutions. These companies needed to promote their product to their uneducated market. They all eventually used the term "AI", since the term is already associated with published content that can give their potential buyer an idea of what their product is.
The danger is that these companies are now urging us to trust AI as something "intelligent and wise", when it is in fact a tool controlled by people. There are countless attempts to portray AI as something god-like, and they are getting away with it because of our lack of understanding.
If we are trying to invest in AI for the public good, then we need to recognize that the capacity for AI tools to do anything good depends entirely on the existing infrastructure they have access to.
If we treat our entire built environment as an example of the hypothetical intelligent machine, then we would need to invest in the development of its physical components. This would be a long list of development goals, beginning with tangible outcomes such as cleaner energy sources or biodegradable product alternatives. Effective information processing also needs to be part of this investment, but as a byproduct rather than the focus of the initiative. Eventually we would need to address the non-physical aspects of our infrastructure, which get complicated, multi-disciplinary, and highly academic. Regardless, AI developers should remain consultants rather than the leading voice for this type of initiative.
This is how we move beyond our "information age". Currently, we are moving backwards.
Rather than algae, AI tools are better represented as neurons: brain cells that interact with one another during an intellectual task. Neurons are a dependent part of a biological life form and cannot survive outside of their biological system. Just like AI tools, neurons require biological input in order to generate their own activity. Otherwise they remain relatively stagnant until their physical properties decay into the elements.
To extend the neuron analogy, over-investing in AI tools can introduce unhealthy "neural" conditions into our existing infrastructure. There are already many documented cases in which predictive AI tools introduced prejudice or other forms of bias into non-technical organizational decision-making. A major problem with AI-powered predictive models is that we cannot audit the basis of their judgment. Extrapolations of human behavior are fundamentally not quantitative and thus cannot be automated with any acceptable level of confidence. A city that accepts this kind of infrastructure would effectively be "dissociated" from its own behavior.
There are better ways for the public to invest specifically in AI technology. We could develop vision systems that prevent injuries on hazardous work sites. We could integrate surveillance systems into sidewalk fixtures to enhance security in pedestrian spaces. We could introduce network coverage on hiking trails, expanding our capacity for emergency response in remote areas. We could develop self-diagnosis tools that reduce waiting times in hospitals. Far more lives would be saved by these ideas. None of them would eliminate jobs either; they would simply add data coordination to current roles, so that management can rely on existing expertise for data auditing. All of these ideas have a more direct and fruitful impact for the benefit of the public.
No prediction is needed here. We're still struggling to accurately predict the weather. Let's improve that one first.
In a data-focused era, it's a good time to consider network-wide data security programs. Today's computational infrastructure is virtual, meaning that much of our processing and storage happens "in the cloud". Your personal data is now practically ubiquitous. If we're not careful, our infrastructure could condemn a crime fiction novelist based on the data that informed their work, regardless of their genuine community service. It might be a good time to extend our malware prevention and information security practices beyond our personal devices, and this must be done as a community.
Finally, the acronym "AI" is probably fine to use. I would have preferred "IA" to denote "intelligence augmentation", since all the tool does for us is augment our own intelligence. We just need our current AI champions to stop disinforming, and to start educating. Not just us. Probably even themselves.