Some People Don’t Trust AI for Software Testing — Should You?


AI has come a long way when it comes to earning the trust of humans. While many have never had a problem trusting AI to interpret speech or use GPS to guide a friend to their house, some may have never thought they’d climb inside a self-driving car. But that’s what countless people are doing every day, and, for many, trusting AI has become part and parcel of everyday life.

Thursday, 12 August 2021
Baufest

At the same time, some are still wary, particularly when it comes to software testing. The trepidation is understandable: Getting your software testing wrong could result in expensive — even physically dangerous — malfunctions. However, as with any cutting-edge tool, it helps to understand both what AI is good at and where it falls short in software testing.

Here are some areas where AI excels in the testing arena and situations in which it may not live up to expectations, so you can decide whether you want to trust it with the software testing process.

Where AI Does a Good Job With Software Testing

AI is an ideal fit for several types of testing functions. Here are some of the most compelling.

Choosing the Right Test Scripts for Each Application

In addition to running the test scripts themselves, AI can be an effective solution for choosing which test scripts you should run. A software testing team may devise a number of test scripts and keep them in a repository, but how do you know:

  • Which scripts can be run on multiple applications?
  • Which scripts should be used to test each function of the app?
  • In what order the scripts should be run?

An AI system could be programmed to make these decisions for you. Not only would this save you considerable time, but it would help eliminate the possibility of your team neglecting to run a crucial test.
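The selection logic described above can be sketched in a few lines. This is a minimal illustration, not a real tool's API: the `TestScript` fields and the repository contents are hypothetical, and a production system would likely learn priorities from test history rather than hard-code them.

```python
from dataclasses import dataclass

@dataclass
class TestScript:
    # A hypothetical repository entry; the field names are illustrative.
    name: str
    apps: set        # applications the script can run against
    feature: str     # the app function it exercises
    priority: int    # lower numbers run earlier

def select_scripts(repo, app, features):
    """Pick every script that applies to `app` and covers a requested
    feature, ordered by priority so prerequisite checks run first."""
    chosen = [s for s in repo if app in s.apps and s.feature in features]
    return sorted(chosen, key=lambda s: s.priority)

repo = [
    TestScript("login_flow", {"web", "mobile"}, "auth", priority=0),
    TestScript("checkout_total", {"web"}, "checkout", priority=1),
    TestScript("push_notify", {"mobile"}, "notifications", priority=2),
]

# Build a test plan for the web app's auth and checkout functions.
plan = select_scripts(repo, "web", {"auth", "checkout"})
```

Because the selection is exhaustive over the repository, a script that matches the app and feature can never be silently skipped, which is exactly the "neglected crucial test" risk the automation removes.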

Automatically Performing Repetitive Operations

One of the most compelling reasons why AI is such an effective tool for software testing is its ability to perform repetitive operations automatically. In practice, an application's components already act as checks on one another: a failure in one part tends to surface in the parts that depend on it.

For example, suppose you have an e-commerce app that enables the business to change prices on items by entering new amounts in a database. The checkout page pulls the price from the database, showing your customers exactly what they have to pay. If a human software tester navigates to the checkout page and no price shows up, it’s clear that something went wrong. That kind of problem can also ripple into other elements of the application, such as its “Buy Now” button. So in many cases, malfunctions reveal themselves.

But here’s the problem: It may take extensive research — and time — to drill down to the exact cause of the issue. For example, if you first detect the pricing issue while testing the Buy Now button, you may have to do some digging before you figure out the problem stems from a database connection issue.

This is where AI is extremely helpful. An AI system can test every facet of an application to ensure that all interdependent processes are functioning well and communicating effectively with each other. Also, an AI software testing solution can do this in a matter of minutes.
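The pricing example above can be made concrete. The sketch below uses hypothetical layer functions (none of them are a real framework's API) to show why probing every layer, not just the top-level button, pins the failure to its root cause: when the price record is missing, the report names the database layer directly instead of only showing a broken Buy Now button.

```python
# Hypothetical e-commerce layers, mirroring the article's pricing example.
def get_price(db, sku):
    # Database layer: returns None when the price record is missing.
    return db.get(sku)

def render_checkout(db, sku):
    # Checkout page depends on the database layer.
    price = get_price(db, sku)
    return f"Pay ${price:.2f}" if price is not None else "ERROR"

def buy_now_enabled(db, sku):
    # The Buy Now button depends on the checkout rendering correctly.
    return render_checkout(db, sku) != "ERROR"

def diagnose(db, sku):
    # Test every layer, not just the top one, so the report
    # identifies the deepest failing dependency.
    return {
        "database_has_price": get_price(db, sku) is not None,
        "checkout_renders": render_checkout(db, sku) != "ERROR",
        "buy_now_enabled": buy_now_enabled(db, sku),
    }

broken = diagnose({}, "sku-1")               # empty database: root cause is the data layer
healthy = diagnose({"sku-1": 9.99}, "sku-1") # price present: every layer passes
```

An automated suite that emits this kind of layered report replaces the manual "digging" the article describes: the first `False` in the report is the place to start.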

Hunting Bugs

AI-based software testing really excels when it comes to bug hunting. This is because, with most applications, the process of finding bugs comes down to testing the ways in which applications generate and communicate data. These processes are very predictable and, in many situations, binary. Either you get the result you want, or you don’t.

Due to the underlying simplicity of the bug-hunting process, artificial intelligence systems have been an ideal solution — and will likely continue to be.
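That binary quality is easy to see in code. The checks below are invented examples, but each one reduces to a single expected-versus-actual comparison, which is precisely the kind of decision an automated system can make reliably.

```python
def check(name, actual, expected):
    # A bug check is binary: the output either matches or it doesn't.
    return {"check": name, "passed": actual == expected}

results = [
    check("21% tax applied to $100", round(100 * 1.21, 2), 121.0),
    check("empty cart totals zero", sum([]), 0),
]
failures = [r for r in results if not r["passed"]]
```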

It’s not that AI has an inherent weakness when it comes to hunting bugs, but sometimes programmers or users may label something a “bug” when it’s something else.

For example, if a web application gets a lot of traffic all of a sudden and malfunctions, that’s not a “bug,” necessarily. It could be a throughput issue — the same kind of problem most apps would face if hit with a lot of traffic, such as with a distributed denial-of-service (DDoS) attack. And, yes, an AI-based testing procedure meant to sniff out bugs may miss this kind of issue. This raises an important question: Where does AI fall short when it comes to software testing?

The Limitations of AI for Software Testing

In some ways, skepticism when it comes to using AI for software testing is more than justified. As with any tool, it can be tempting to ask more of an AI-based testing solution than it’s capable of delivering.

For example, using AI to perform smoke tests makes a lot of sense. With smoke testing, you simply test some of the more basic functions of an application. These are some of the most straightforward testing procedures a team will undertake during the development life cycle.
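A smoke suite in this spirit might look like the sketch below. The routes and the `fetch()` helper are stand-ins (a real suite would make actual HTTP calls with a library such as `requests`); the point is that each check is a quick pass/fail probe of a basic function, and any failure means the build is not worth deeper testing.

```python
def fetch(path):
    # Stand-in for an HTTP call; returns a status code for known routes.
    fake_routes = {"/": 200, "/login": 200, "/cart": 200}
    return fake_routes.get(path, 404)

# The handful of basic functions a smoke test covers.
SMOKE_CHECKS = ["/", "/login", "/cart"]

def run_smoke_suite():
    # Map each route to a binary verdict: did it respond successfully?
    return {path: fetch(path) == 200 for path in SMOKE_CHECKS}

results = run_smoke_suite()
```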

But even though AI can do an excellent job of smoke testing, it may struggle to execute a truly comprehensive performance test. There are so many factors to consider in performance testing that it may be difficult to check every box.

This is particularly true with applications that depend on internet or intranet connections, where the quality of the connection can impact performance. It may be difficult, time-consuming, or infeasible to design an AI system that can perform a thorough performance test in a variety of connectivity scenarios, especially if you have to mimic constantly changing signal strengths.

The Big Question: What’s in the Box?

For some software testers, the hesitancy to trust AI is very simple: They don’t trust something they don’t understand, especially with something as important as app testing. Often, a software tester is presented with an AI solution without being privy to how it works or its track record. Trusting that kind of solution would be like asking a random person off the street to test-drive a car you’re considering buying and deliver a full report of its issues.

Yeah, they might notice a weird vibration or a tendency for the vehicle to drift a little, but will they be able to detect a 15% decrease in compression in the third cylinder? Maybe not. On the other hand, if the person were a seasoned mechanic and you knew where they went to school and what they had been taught, you might feel more comfortable.

Similarly, when a tester is presented with a black-box AI solution, it can be understandably hard to hand over the keys.

The experienced professionals at Baufest know how, when, and whether to use AI during the testing process in a way that best benefits the end user. At Baufest, it’s less a matter of trust, and more a matter of leveraging a systematic, dependable development and deployment solution. To learn more about the potential of a partnership with Baufest, reach out today.