Three ways to leverage machine learning for test automation

By Jonathan Zaleski

In recent years, software development has largely shifted to Agile and DevOps methodologies, with the aim of finally making a mature continuous integration (CI) / continuous delivery (CD) pipeline a reality. As part of this leap forward, organisations have automated several processes, including coding, monitoring and, of course, testing.

Done right, this is a huge advancement for DevOps teams, who can move more quickly to meet buyers' needs. The flip side, however, is that failing test scripts and frameworks account for many of the issues DevOps teams face.

DevOps processes involve a wide range of practitioners, including product managers, product owners, developers, test automation engineers, business testers and operations engineers. This means that test data originates from different tools and personas and needs to be normalised before it can be analysed. To succeed in a complex DevOps journey, teams must adopt automated continuous testing that is reliable, self-maintained (as much as possible) and brings value with each test execution cycle.
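As a purely illustrative sketch, that normalisation step might look something like the Python below; the tool names, field names and record shapes are assumptions for the example, not any specific product's format.

    from dataclasses import dataclass

    @dataclass
    class TestResult:
        """Common schema that results from different tools are mapped onto."""
        suite: str
        name: str
        status: str        # "passed" | "failed" | "skipped"
        duration_s: float
        source_tool: str

    def from_junit_case(case: dict) -> TestResult:
        """Map a JUnit-style record (key names assumed here) onto the schema."""
        return TestResult(
            suite=case["classname"],
            name=case["name"],
            status="failed" if case.get("failure") else "passed",
            duration_s=float(case["time"]),
            source_tool="junit",
        )

    def from_api_tool_record(record: dict) -> TestResult:
        """Map a hypothetical API-testing tool's record onto the same schema."""
        return TestResult(
            suite=record["collection"],
            name=record["request"],
            status=record["outcome"].lower(),
            duration_s=record["elapsed_ms"] / 1000.0,
            source_tool="api-tool",
        )

    print(from_junit_case({"classname": "Checkout", "name": "pays_with_card",
                           "time": "12.4"}))

Once every tool's output lands in one schema like this, trend analysis and ML models can treat the whole pipeline's data as a single dataset.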

Organisations that implement continuous testing within Agile and DevOps execute a large variety of testing types multiple times a day. With each test execution, the amount of test data that’s being created grows significantly, making the decision-making process harder.

With artificial intelligence (AI) and machine learning (ML), executives should be able to better slice and dice test data, understand trends and patterns, quantify business risks, and make decisions faster and continuously. 

Machine learning is vital for DevOps 

Scaling test automation and managing it over time remains a challenge for DevOps teams. Development teams can utilise ML both in the test automation authoring and execution phases and in the post-execution test analysis, which includes looking at trends, patterns and impact on the business.

Before going any further, it’s important to understand the root causes of why test automation is so unstable when these technologies are not utilised:

  • The testing stability of both mobile and web apps is often impacted by elements within them that are either dynamic by definition (e.g. React Native apps) or that were changed by the developers.
  • Testing stability can also be impacted when changes are made to the data that the test depends on or, more commonly, when changes are made directly to the app (e.g. new screens, buttons, user flows or user inputs are added).
  • Non-ML test scripts are static, so they cannot automatically adapt to and overcome the above changes. This inability to adapt results in test failures, flaky/brittle tests, build failures, inconsistent test data and more, as the sketch below illustrates.
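To make that brittleness concrete, here is a minimal, self-contained Python sketch that stands in for a real browser session with a toy in-memory "DOM"; it shows how a static locator fails the moment a developer renames an element ID.

    # Toy "DOM" snapshots: the developer renames the button ID in build 43.
    build_42 = [{"id": "login-btn", "text": "Log in", "tag": "button"}]
    build_43 = [{"id": "signin-btn", "text": "Log in", "tag": "button"}]

    def find_by_id(dom, element_id):
        """A static locator: all it knows is the hard-coded ID."""
        for element in dom:
            if element["id"] == element_id:
                return element
        raise LookupError(f"element '{element_id}' not found")

    print(find_by_id(build_42, "login-btn"))   # passes
    try:
        find_by_id(build_43, "login-btn")      # the script cannot adapt
    except LookupError as exc:
        print(f"test failed: {exc}")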

Let’s look at three ways ML can help your DevOps organisation with test automation. 

Make sense of high volumes of test data

As noted above, organisations that implement continuous testing within Agile and DevOps execute a wide variety of testing types multiple times a day, including unit, API, functional, accessibility and integration tests.

With each execution cycle, the pool of test data grows significantly, making decisions harder to reach. From pinpointing the key issues in the product to visualising the most unstable test cases and other areas that need attention, ML in test reporting and analysis makes life easier for executives.

AI/ML systems let executives slice and dice this data, understand trends and patterns, quantify business risks and make decisions faster and more continuously: for example, learning which CI jobs are the most valuable or the most lengthy, or which platforms under test (mobile, web, desktop) are faultier than others.
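The aggregation layer that such an AI/ML reporting system would build on can be sketched in a few lines of Python with pandas; the execution history below is invented for illustration.

    import pandas as pd

    # Hypothetical execution history: one row per test run.
    runs = pd.DataFrame({
        "test":       ["checkout", "checkout", "login", "login", "search", "search"],
        "platform":   ["web", "mobile", "web", "web", "mobile", "mobile"],
        "passed":     [True, False, True, True, False, True],
        "duration_s": [12.4, 15.1, 3.2, 3.0, 8.7, 9.1],
    })

    summary = (
        runs.groupby("test")
            .agg(failure_rate=("passed", lambda s: 1 - s.mean()),
                 mean_duration_s=("duration_s", "mean"),
                 executions=("passed", "size"))
            .sort_values("failure_rate", ascending=False)
    )
    print(summary)  # the least stable (and slowest) tests float to the top

An ML layer would sit on top of summaries like this one, surfacing trends and flagging anomalies rather than leaving the slicing to a human.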

Without the help of AI or ML, this work is manual, error prone and sometimes impossible. With AI/ML, practitioners of test data analysis have the opportunity to add capabilities around test impact analysis, security holes and platform-specific defects.

Make actionable decisions around quality for specific releases

With DevOps, feature teams or squads deliver new pieces of code and value to customers almost daily. Understanding the quality, usability and other characteristics of the code behind each feature is a huge benefit to developers.

By utilising AI/ML to automatically scan new code, analyse security issues and identify test coverage gaps, teams can advance their maturity and deliver better code faster. As an example, Code Climate can automatically review code changes on each pull request, spot quality issues and optimise the entire pipeline. In addition, many DevOps teams today leverage the feature-flag technique to gradually expose new features and hide them when issues arise.
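The feature-flag technique itself is simple to sketch. The following self-contained Python snippet, with an invented function name and rollout scheme rather than any particular vendor's API, buckets users deterministically so a feature can be exposed to a gradually increasing percentage of them.

    import hashlib

    def flag_enabled(feature: str, user_id: str, rollout_pct: int) -> bool:
        """Deterministically bucket a user so exposure can grow gradually."""
        digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
        bucket = int(digest, 16) % 100  # stable bucket in the range 0-99
        return bucket < rollout_pct

    # Expose the new checkout flow to 10% of users; raise the percentage as
    # quality signals come in, or drop it to 0 to hide the feature again.
    print(flag_enabled("new-checkout", "user-123", rollout_pct=10))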

With AI/ML algorithms, such decision making, for instance whether to expand or roll back a flagged feature, could be made easier by automatically validating and comparing specific releases against predefined datasets and acceptance criteria.
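A minimal sketch of such a release gate, with invented metrics and acceptance criteria, might look like this:

    # Hypothetical acceptance criteria and per-release quality metrics.
    criteria = {"pass_rate_min": 0.98, "p95_latency_ms_max": 400, "coverage_min": 0.80}
    releases = {
        "2.4.0": {"pass_rate": 0.99, "p95_latency_ms": 350, "coverage": 0.83},
        "2.5.0": {"pass_rate": 0.96, "p95_latency_ms": 410, "coverage": 0.81},
    }

    def gate(metrics: dict) -> list[str]:
        """Return the list of criteria that a release candidate violates."""
        problems = []
        if metrics["pass_rate"] < criteria["pass_rate_min"]:
            problems.append("pass rate below threshold")
        if metrics["p95_latency_ms"] > criteria["p95_latency_ms_max"]:
            problems.append("p95 latency above threshold")
        if metrics["coverage"] < criteria["coverage_min"]:
            problems.append("coverage below threshold")
        return problems

    for version, metrics in releases.items():
        problems = gate(metrics)
        print(f"{version}: {'ship' if not problems else 'hold (' + ', '.join(problems) + ')'}")

An ML system would go further, learning the thresholds and comparison baselines from historical releases instead of hard-coding them.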

Enhance test stability over time 

In traditional test automation projects, test engineers often struggle to continuously maintain the scripts each time a new build is delivered for testing or new functionality is added to the app under test.

In most cases, these events break the test automation script, either because an element ID was introduced or changed since the previous app version, or because a new platform-specific capability or popup interferes with the test execution flow. In the mobile landscape specifically, new OS versions typically change the UI and add new alerts or security popups on top of the app. These kinds of unexpected events break a standard test automation script.

With AI/ML and self-healing abilities, a test automation framework can automatically identify a change made to an element locator (ID), or a screen or flow that was added between predefined test automation steps, and either fix it quickly on the fly or alert the developers and suggest the quick fix. With such capabilities, test scripts that are embedded into CI/CD schedulers will run much more smoothly and require less intervention by developers.
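Reusing the toy "DOM" from the earlier sketch, a deliberately simplified, non-ML illustration of the self-healing idea is to fall back on the element's other known attributes when its recorded ID disappears; production frameworks use far richer signals, but the shape of the fix is the same.

    def self_healing_find(dom, element_id, known_attrs):
        """Try the recorded ID first; if it is gone, fall back to the
        element's other known attributes and suggest the updated locator."""
        for element in dom:
            if element["id"] == element_id:
                return element
        # The ID changed: score candidates by how many attributes still match.
        def score(element):
            return sum(element.get(k) == v for k, v in known_attrs.items())
        best = max(dom, key=score)
        if score(best) > 0:
            print(f"healed: '{element_id}' appears to be '{best['id']}' now")
            return best
        raise LookupError(f"no plausible match for '{element_id}'")

    build_43 = [{"id": "signin-btn", "text": "Log in", "tag": "button"}]
    self_healing_find(build_43, "login-btn", {"text": "Log in", "tag": "button"})

The same test that failed against build 43 earlier now recovers on the fly and reports the suggested locator fix back to the developers.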

There is no doubt that ML will shape the next generation of software defects, with new categories and classifications of issues. Most importantly, it will increase the quality and efficiency of releases.

By Jonathan Zaleski, Head of Applause Labs
