AI: A Threat, a Tool, or a New Hope?
Artificial intelligence is not a new concept; computer scientists have been working on it since Alan Turing’s studies. The AI journey mostly began with rule-based algorithms aimed at detecting images and sounds and at processing and understanding language. Artificial neural networks, the backbone of today’s AI, are as old as the concept of AI itself.
Rule-based AI achieved a few successes, such as IBM’s Deep Blue and Honda’s ASIMO, but the field mostly experienced periods of stagnation. Artificial neural networks existed throughout, yet they were not a remedy for the stagnation either. The story began to change when Web 2.0 reached every device and produced extraordinary amounts of data. Neural networks were taken off the dusty shelves and developed further, and with the rise of deep learning models we started talking about what is now called AI once again. Algorithms that learn from data (machine learning) are an alternative to rule-based algorithms, and learning gives better results with large amounts of data. AI, machine learning, and artificial neural networks are not new concepts, but it must be acknowledged that there have been numerous advances, research efforts, and achievements in these fields in recent years.
After this brief history, we can turn to the impacts of AI and the discussions surrounding it. These discussions have two extremes: the over-pessimists and the over-optimists. It is not easy to generalize about either group, because the motivations behind their pessimism or optimism differ. Some pessimists question the reliability of AI in decision-making, some focus on ethical concerns, while others worry about future impacts such as unemployment. Similarly, on the optimistic side, some value AI’s contribution to making everyday tasks easier, some hope it will solve humanity’s biggest problems, and some dream of building a one-employee, million-dollar company.
There are two extremes not only about AI’s potential but also about its real value. The first group thinks AI is exaggerated hype, just an improved search engine; this view rests on the fact that most of the hyped AI products are generative language models, which find connections within existing knowledge better than search engines do. The other group believes that today’s AI (mostly based on deep learning) is a genuinely disruptive technology capable of driving a great transformation.
Should Threats Be Taken Seriously?
If we try to find a balance between the extremes, humanity’s experience offers guidance: the truth is rarely at either extreme, but somewhere in between. Today, few doubt that AI can successfully imitate some of our intellectual functions, such as analyzing, comparing, and correlating. However, it still struggles with reasoning and understanding causality. Researchers continue to work on these abilities, and progress may certainly come one day.
The intellectual capabilities of generative large language models (what we refer to as AI today) are producing truly fascinating results, especially in coding. So why is the use of AI in coding discussed so much? Because coding is the domain closest to the business world, where the potential to benefit from AI is most immediately exciting. Of course, AI is also good at doing homework, recommending meal recipes, or answering whatever you’re curious about at any time, but these are not very relevant to the real business world. Coding, and the IT industry in general (digital image and video generation, front-end design, system engineering, etc.), is a relatively new field whose working rules and policies have not been fully established yet. Doctors, lawyers, and even mechanical and civil engineers have been “intellectual” workers for centuries and have had far more time to establish their policies, rights, and rules. Replacing those professions with AI may be technically possible, but it is not easy from a legal and regulatory perspective. The IT workforce, on the other hand, is the most expensive. All of this explains why it is not surprising that the tech workforce is AI’s first target.
The same holds for AI’s reliability. It is easier to verify the output of IT work than the outputs of other white-collar professions: trusting an AI decision over a doctor’s or a judge’s is far harder, and those professions are in any case protected by more regulatory, legal, and ethical obstacles than the IT workforce. As for blue-collar work, AI’s physical capabilities are still in their infancy compared to its intellectual ones. Robotics is a challenging field to implement and test, and it remains much more expensive than human labor.
Will No One’s Professional Life Change Except for IT Staff?
It is clear that AI adopters aim to reduce dependence on the IT workforce by reducing complexity and enhancing automated management; many major tech CEOs have said as much multiple times. The industrial revolutions aimed at the same thing for blue-collar labor, and now this revolution has set its sights on the newest and most expensive labor class. But what about other intellectual labor? Digitalization has been progressing for decades, and many people already use software applications in their professional lives. AI will bring these people more tools and improve the applications they already use. Although the commercial returns of AI applications have brought some disappointments, the IT industry aims to increase their number and variety.
Can AI Solve Big Problems?
Many studies show AI achieving increasingly successful results across fields, and it is clearly strong at analysis, retrieving encyclopedic knowledge, and uncovering correlations and relationships in large datasets. When it comes to big problems, however, the type of problem matters greatly. A problem can remain unsolved for several reasons: a lack of observations, a lack of data, difficulties in testing, and so on. If missing data can be filled in with observations made using advanced tools, then yes, the thing we call AI may bring a solution. But it should not be forgotten that humankind also faces many problems rooted in optimization difficulty, and for some problems there may be no chance to observe or create data at all.
Some argue that AI is transforming modern scientific methodology, shifting it from deduction toward induction. However, the hardest problems about the universe and our own nature still demand deductive approaches, so perhaps humankind is entering a hybrid era of scientific methodology.
Conclusion
Technological shifts like AI (perhaps we will come to call it a revolution) shape our lives for decades. Some raise concerns, while others bring more comfort. With AI, prediction is genuinely hard; positive and negative impacts seem to arrive together. Workforce dependency and the way we create knowledge and beliefs may change for better and for worse. Today we need more insight to understand whether AI will be just another technological tool or something entirely different. Perhaps the best approach is to treat the excitement created by the hype with caution and to keep trying to understand and learn from what is happening.