Power of Artificial Intelligence in Software Development: 5 Myths and Realities
Artificial intelligence offers real value, but recognizing its limitations is critical for actually capitalizing on that value.
Artificial intelligence, or AI, is one of the most intriguing topics in software development today. It is also one of the most widely misunderstood. For software developers and IT teams, AI offers an array of tantalizing possibilities for making applications faster, more scalable and more efficient. In many cases, however, the hype surrounding AI doesn't line up with what is actually possible or practical. To that end, here's a look at five common AI myths related to software development and deployment.
1. Artificial intelligence is a new technology.
AI has gained major attention from the media in just the last few years. Likewise, most software products that pitch AI as a key feature (like AIOps-driven monitoring tools) are still very new. But AI is not a new technology at all. The concept of machine learning, and of artificial intelligence in general, stretches back many centuries (see, for example, the Brazen Head). And software applications have been using AI to do things like play checkers since the 1950s.
Thus, if AI seems like a relatively new technology in the software world, or one that has only become practically usable in the past few years, that is only because it took the media and marketers a long time to catch up. The reality is that AI has been an established field of computer science for more than half a century.
2. AI is smarter than humans.
Some AI advocates would have you believe that AI-powered applications are “smarter” than humans, in the sense that they can solve problems or develop ideas more creatively and effectively than human minds. But the reality is that AI-powered software applications don’t outthink humans. They simply think faster than humans.
And when it comes to use cases that require nuanced reasoning and an understanding of human expression, AI fares particularly poorly, as IBM's recent experiment with AI-powered debate software showed.
There is a chance that this could change in the future. Someday, AI might become so sophisticated that AI-driven applications are genuinely smarter than humans. But that day remains beyond the horizon.
3. AI will lead to smaller IT teams.
Many marketers of AI-powered software tools sell their products as a way for companies to reduce the size (and, by extension, cost) of their IT teams. By using AI to automate IT decision-making and operations, they say, companies can do more with fewer staff members.
Some observers go so far as to claim that AI, combined with other innovations, is edging us closer to a world of "NoOps," wherein IT operations teams are fully replaced by software.
It's certainly true that AI can help to increase automation and reduce the manual effort required to perform certain tasks. However, the idea that AI will remove the need for human engineers entirely is fantastical. Someone still has to set up and manage the AI-powered tools that do the operations work.
Plus, there is an argument to be made that AI is not making IT operations simpler; it is merely helping IT Ops teams keep up with the ever-increasing complexity of new software and infrastructure. Deploying and managing containers and microservices requires much more work than dealing with virtual machines or bare-metal servers. In this sense, AI is simply helping IT teams to maintain the status quo; it is not empowering them to gain new ground.
4. AI software is "set and forget."
At first glance, AI tools can seem like a kind of "set it and forget it" wonder. If data-powered algorithms and machine learning allow AI tools to make all the decisions they need, then humans don't have to do any work beyond the initial setup and data training, right?
Well, no. There are many reasons why even the best-designed AI tools need active, continuous management. They must be regularly retrained on up-to-date data in order to make accurate decisions about ever-changing conditions. The quality of the data they rely on must be carefully managed to ensure it delivers the level of accuracy and clarity the tools require. And humans may need to provide ethical guidance for AI algorithms.
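To make that concrete, here is a minimal sketch of what that ongoing management might look like in practice. Everything in it is illustrative: the function names, the drift threshold and the scikit-learn model are assumptions made for the example, not a description of any particular product.

```python
# Illustrative sketch: periodic retraining gated by a basic data-quality check.
# All names (data_quality_ok, DRIFT_THRESHOLD, etc.) are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

DRIFT_THRESHOLD = 0.15  # assumed tolerance for per-feature mean drift


def data_quality_ok(features: np.ndarray) -> bool:
    """Reject batches with missing values, a sign of a broken data pipeline."""
    return not np.isnan(features).any()


def drift_detected(baseline: np.ndarray, recent: np.ndarray) -> bool:
    """Crude drift check: compare per-feature means against the baseline."""
    shift = np.abs(recent.mean(axis=0) - baseline.mean(axis=0))
    return bool((shift > DRIFT_THRESHOLD).any())


def maybe_retrain(model, baseline_X, recent_X, recent_y):
    """Retrain only when fresh data passes the quality gate and has drifted."""
    if not data_quality_ok(recent_X):
        raise ValueError("Bad batch: fix the data pipeline before retraining")
    if drift_detected(baseline_X, recent_X):
        model.fit(recent_X, recent_y)  # a human still decides the schedule
    return model


# Hypothetical usage: run this on a schedule, with humans reviewing results.
rng = np.random.default_rng(0)
baseline_X = rng.normal(size=(500, 4))
recent_X = rng.normal(loc=0.3, size=(500, 4))  # drifted features
recent_y = (recent_X[:, 0] > 0.3).astype(int)
model = LogisticRegression().fit(baseline_X, (baseline_X[:, 0] > 0).astype(int))
model = maybe_retrain(model, baseline_X, recent_X, recent_y)
```

Even in this toy setup, notice how much remains a human's job: choosing the drift threshold, deciding the retraining cadence and fixing the pipeline whenever a batch fails the quality check.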
5. AI will destroy the world.
The four AI myths that I have discussed above involve hype or an excess of confidence in the abilities of AI and machine learning. Now, I'd like to approach things from the opposite perspective by pointing out that AI is not at all a bad or useless technology.
Sure, AI has many shortcomings, and AI tools in many cases are not likely to live up fully to the promises behind them. But that doesn't mean that AI is the bane of our existence, or that software teams should not use it at all.
This is important to note because the conversation surrounding AI has so far tended to be polarized. On one side are technologists and futurists promising us that AI will lead us into utopia. On the other are fierce AI critics warning of an AI-driven dystopia marked by all manner of dehumanizing, unethical automation.
Neither of these views represents reality. AI will not fully replace humans, but it will make their jobs easier. AI won't completely remove the need to perform manual tasks, but it will reduce it. AI won't prove smarter than human beings, but it can provide insights that help them make smarter decisions.
The Future of IoT Devices Is in Question
While IoT devices and technology were all you heard about for a while, the buzz has dimmed. Here's why, and what developers need to know about the future of IoT.

Until very recently, the future of IoT seemed golden. But if you feel like you've been hearing less and less about the internet of things, you're probably right. Is it just a matter of the future of IoT becoming the mainstream present of IoT, or is the fervor around the promise of IoT devices dimming? That's a question that many IT pros have likely been asking lately. To answer it, here's a look at current trends in the IoT ecosystem, as well as the challenges that IoT developers need to solve, pronto, if IoT is to keep growing as fast as analysts promised a few years ago.
In fact, according to Google Trends, interest in IoT peaked toward the end of 2016. Interest has ebbed and flowed since then, while slowly regressing toward pre-2016 levels.