Companies are starting to embrace new thinking around release quality by adopting a “shift left” mindset that brings testing earlier into the delivery cycle.

Purposeful testing and feedback efforts early in the delivery cycle lead to a better understanding of what’s to be built. The “Bringing UI Automation Into CI/CD” whitepaper by modernization strategist Jim Holmes discusses some of the key challenges and choices QA professionals, test engineers and leads face on their journey to implementing automated UI testing.

Continuous Integration has been around for decades and has made a great name for itself by bringing tremendous value to businesses. Most organizations have already benefited greatly from continually building their source code and mitigating the risk of introducing breaking errors at the syntax or linking stage. Continuous delivery and deployment, however, are much newer to the industry, and many organizations still haven’t realized how critical they are to a smoothly running, quality-focused development cycle.

Going forward, companies are starting to embrace a “shift left” mindset that treats release quality as key and brings testing earlier into the delivery cycle. Solid conversations and purposeful efforts to address issues early in the game lead to a better understanding of what’s to be built, and ensure the team has the right coverage and collaboration instruments for all forms of automation across the end-to-end testing story.

Bringing UI Automation Into CI/CD

In this whitepaper you will learn:

•    Why it is important to adopt the "shift left" mindset as part of your CI/CD pipeline
•    How to make the most of immediate CI/CD benefits
•    How to bring UI testing into continuous integration and continuous delivery
•    What the most important considerations are when adding UI automation to CI/CD
•    How to increase the value of your pipeline with UI testing
•    How to ensure continuous improvement

"Cover all the bases, aggressively pursue automation, and avoid duplicating effort." 

—reads a recent Forrester “Now Tech” report, pointing this out as one of the key takeaways that matter to the success of implementing Continuous Functional Test Automation Suites.

It sounds like it would make a lot of sense for any organization, but let’s look behind each of these imperatives and try to understand their real meaning.

Cover All the Bases

Bringing UI automation into the delivery process doesn’t mean that teams should automate every test they have ever introduced. Not all software is created equal, and different applications require different testing approaches to ensure the system’s stability and usability.

Automating everything is neither feasible nor required. However, stopping at the unit or integration testing level, although it helps make sure the code does what it should, might leave room for some of the most common testing failures—especially for end-user-facing applications.

The user doesn’t care about unit or integration tests but wants a UI that leads to the outcome promised by the software or application. Covering all important scenarios with the right amount and variety of testing types is mission critical to delivering quality user interfaces and experiences.

“Cover all the bases” is not a “one size fits all” formula but should be understood as selecting a healthy test mix and designing a testing approach that is part of the development pipeline from the very beginning and is fit for the intended use of the software or application.

Automation doesn’t have to be long and heavy, but it sometimes is. If it doesn’t bring value, though, the whole point is missed. Unit or API tests sit at the bottom of the testing pyramid and, for many teams, are the only means of testing. This might work well for some organizations. Others need to make sure their UI can withstand any storm at the frontend surface—be that a certain number of users hitting the website at the same time or simply delivering a flawless, uninterrupted user experience.

Adding just a few more verification steps to the UI automation might help, but not necessarily. Adding the right amount of verifications in the places where they bring value, making sure the UI returns the desired result, is what matters to the success of automation.
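To make that concrete, here is a minimal sketch of such a targeted verification, written with Selenium WebDriver in Python rather than any particular tool mentioned here. The URL and element IDs are hypothetical placeholders; the point is that the only assertion checks the outcome the user actually cares about.

```python
# Minimal sketch: verify the user-visible outcome of a search flow.
# The URL and element IDs below are hypothetical; substitute the selectors
# of the flow that matters to your users.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    # Exercise the flow the way a user would: search for a product.
    driver.get("https://shop.example.com")
    driver.find_element(By.ID, "search-box").send_keys("running shoes")
    driver.find_element(By.ID, "search-button").click()

    # One targeted verification where it brings value: results actually appear.
    results_count = WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.ID, "results-count"))
    )
    assert "result" in results_count.text.lower(), "Search returned no visible results"
finally:
    driver.quit()
```

A handful of outcome-level checks like this, placed at the steps that matter, usually tells the team more than dozens of low-value assertions sprinkled across every screen.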

Here we come to the second imperative. Building a solid testing base for the software delivery process takes a lot of development time and resources to create and maintain, so how do you avoid the “carrying water to the sea” effect?

Aggressively Pursue Automation

Can almost anything be automated? Understanding what to test at each stage of the development cycle is a great advancement for the team and the product—especially with regard to UI testing, which is often the bottleneck creating pressure across the board. Aggressively pursuing automation doesn’t mean everything in the UI should be automated, as a significant number of scenarios could already be covered via unit and integration tests or at the component or system level.

Instead, UI test automation should focus on what’s critical when looking “through the user’s eyes”—an all-important paradigm for many applications in today’s demanding digital end-user reality. The “aggressiveness” of testing should rather be concerned with how to engage the entire team in the quality efforts, how not to compromise delivery efficiency, and ultimately—with what the end user gets.

The skillset of the team and the capabilities of the test automation tooling will always be what decides the success of automation, so choose smart, automate wisely and, most importantly, make sure you have integrated feedback loops as early as your pipeline allows.
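One way to get that early feedback, sketched below under the assumption that a small UI smoke suite is written with pytest, is to run it as a gating step right after the build. The directory name and marker are hypothetical; the non-zero exit code is what makes the pipeline stage fail fast.

```python
# Minimal sketch of an early feedback gate: run the quick UI smoke checks and
# propagate the result as the step's exit code so the pipeline fails fast.
# "tests/ui_smoke" and the "smoke" marker are hypothetical names.
import sys
import pytest

exit_code = pytest.main(["tests/ui_smoke", "-m", "smoke", "--maxfail=1", "-q"])
sys.exit(int(exit_code))
```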

Avoid Duplicating Effort

What does this mean in an agile environment, where the aim is to have builds and integrations as frequently as possible, if not continuously? Despite being extremely critical to the quality of releases, regression testing is typically where the most manual, repetitive tasks take place, resulting in two major challenges:

  1. Not being able to provide sufficient coverage of the testing scenarios and make sure critical app functionality remains operational after changes have been applied to the UI during development
  2. Handling human-error-prone tasks while under pressure to deliver on time and without last-minute surprises

Automation can greatly support continuous testing efforts as part of the agile delivery cycle because it speeds up the feedback loop. That is essential, especially when the product increment cannot afford to wait for week-long regression tests, performed by two-thirds of the team, to be run as part of each release.
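One common way to keep that loop fast, sketched here under the assumption that the automated regression checks are written with pytest, is to tag tests so every build runs a quick subset while the full suite runs on a schedule. The marker names, URL and checks below are hypothetical placeholders.

```python
# Minimal sketch: split automated checks into a fast "smoke" subset and the
# broader "regression" suite. Register the markers in pytest.ini to avoid warnings.
import pytest

BASE_URL = "https://app.example.com"  # hypothetical application under test

@pytest.mark.smoke
def test_home_page_is_reachable():
    # Quick check run on every commit; a real test would drive the UI
    # as in the earlier Selenium sketch.
    assert BASE_URL.startswith("https://")

@pytest.mark.regression
def test_full_checkout_flow():
    # Longer end-to-end journey reserved for the scheduled regression run.
    assert True  # placeholder for the full UI scenario

# Typical CI invocations:
#   pytest -m smoke        -> minutes of feedback on every build
#   pytest -m regression   -> full automated regression, e.g., nightly
```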

Test automation suites such as Test Studio provide a set of tools to perform the majority of the regression tests in an automated, productivity-enhancing manner, freeing up a lot of space for new feature testing or other important team tasks.

We touched upon a few baseline aspects of adopting continuous testing as part of the CI/CD cycle. The important question, though, is how not to fail with automation, regardless of whether the automated tests are integrated into the delivery pipeline or not.

If they are, though, this raises new concerns the team should take into consideration while implementing UI automation in CI/CD. After all, the ultimate goal is to succeed instead of letting some of the most well-known automation flaws kill motivation and turn your team off.

If you are planning to implement UI test automation and make your pipeline more effective with UI testing, or if you need help with getting your organization in the right mindset for automation by proving ROI, make sure you check out the “Bringing UI Automation Into CI/CD” Whitepaper.

Download Whitepaper


About the Author

Asya Ivanova

Asya Ivanova is the Product Marketing Manager for Telerik Test Studio. A passionate technology enthusiast, she has product experience within BIM, Master Data Management and pro audio. Together with the Test Studio team she keeps an eye on ways to make test automation easier for the QA engineer. Asya is a hi-fi geek, spending her time with some good old and new album records. Connect on LinkedIn.

