Over the past decade, artificial intelligence (AI) has been the subject of study across countless academic and business fields. From medicine and computer science to e-commerce, entire areas of knowledge and industry have flirted, directly or indirectly, with this powerful technology in its many forms and applications (digital signal processing, image analysis, data mining, etc.).
A wave of small but significant advances across different areas has shown how powerful AI could be once taken seriously. I say “seriously” because, at the beginning of the pandemic, the only AI solutions available were Machine Learning (ML) and Deep Learning (DL). Although they were neither off-the-shelf products nor easy to implement in different business scenarios, both technologies proved enormously helpful to those who took the risk of adopting them in their organizations to uncover the knowledge hidden within large batches of data. Even then, ML & DL were more akin to a luxury that allowed some companies to get ahead.
Great advances were made in AI during the COVID-19 pandemic; advances that shook the market quite abruptly and made it clear that ML & DL were but the foundations of what some technologists have heralded as the biggest tech revolution of this century. It should not be forgotten that AI is an innovation driven by neural network technology, a scientific concept that has circulated in the academic and research community for over 50 years.
That said, what were the external factors that led to the take-off of AI, a technology that has come to be both feared and loved by the tech industry and the broader business community?
Those of us dedicated to technological development have observed that Moore’s Law, under which the CPU sat at the heart of software development, is gradually dying. The semiconductor industry was among the worst hit during the pandemic, forcing developers to seek new ways to obtain the computational power needed to build their solutions.
Human curiosity has been the driving force behind our ability to solve problems, whether intellectual or technical, since ancient times. The new capabilities derived from cloud technology allowed us to solve most of the challenges presented during the pandemic, powerfully accelerating the long-awaited transition from physical infrastructure to the cloud. IoT, AI and decentralized data transmission protocols were essential for this great technological transition, which seeks to strengthen data security and speed up how we build and deliver technology solutions.
NVIDIA was the most relevant player in this transition. In most cases where its technologies were implemented, the performance leap was dramatic.
The new “Huang’s Law”, named after NVIDIA’s CEO, is displacing Moore’s Law at an accelerated pace. Today, more than 200 new AI-based solutions are presented to the market every day. It is estimated that by 2024 more than 90% of business solutions used by companies will be driven by AI. Many of the business solutions used today will be replaced by others that serve the same purpose but do it far better thanks to AI.
For organizations, the risks of implementing AI across different business verticals have become minimal compared to the level of risk that existed before GPT. No longer depending on highly specialized personnel or on investment in high-end hardware are but two of the many benefits that come with adopting these sorts of solutions.
Nevertheless, complications still exist, the main ones being:
- Privacy risks: AI can collect and analyze large amounts of personal data, which can put people’s privacy and security at risk.
- Security risks: AI can also be used to develop autonomous weapons, which can pose a risk to global, national and cyber security.
- Unemployment risks: AI-driven automation could eliminate jobs in a wide range of sectors, which could lead to economic and social inequality.
- Risks of bias and discrimination: If not properly designed, AI systems can be biased and discriminatory, which could perpetuate discrimination and inequality.
- Risks of misuse: If used for malicious purposes, AI can be used to spread misinformation, manipulate opinions and carry out cyber attacks.
These risks are inherent to the development of AI as a technology, and those of us who work with AI will have to deal with them. In some niches, the challenges of shaping the technology will be greater than in others.
Risks will have to be turned into opportunities for individuals and organizations; it all depends on the lens through which the circumstances are viewed. As I see it, collaboration between machines and humans can solve most of the problems currently affecting humanity, and it has the potential to change the ways in which we live and develop professionally and personally.
In subsequent publications, we will address other risks that the widespread implementation of AI might bring with it. It is always worth understanding the threats posed by malpractice in the construction of AI.