
Automated testing of a desktop application: expediency and frameworks overview

Introduction

You have certainly heard about regression and acceptance testing. But do you know how much time is actually spent on acceptance testing in a project?
We can quickly get an answer to this with the help of a time tracking system like TMetric.
On our project, acceptance-testing a desktop application of around 100 assemblies took more than two person-weeks. New QA specialists who didn't know the application well had the greatest difficulties: compared to more experienced colleagues, they spent much more time on each test case.
However, in my opinion, the most unpleasant part was this: if any critical errors are found before the release, acceptance testing must be performed again after those errors are fixed.
The unit tests we had written helped a little, but they mostly reduced the time spent on regression testing. So, when the amount of manual testing reached a critical level, we started moving towards automation.

ROI

Before writing automated UI tests, we needed to assess how profitable our investment would be. We did this with the help of ROI (Return on Investment, https://en.wikipedia.org/wiki/Return_on_investment).
Calculating the ROI of UI testing turned out to be an interesting task with multiple unknowns:

ROI = Profit / Expenses
or
ROI = (Profit – Expenses) / Expenses

At this stage, we needed a small prototype that would help us estimate all the necessary expenses. It showed a rather peculiar result: performing acceptance testing manually takes about the same amount of time as automating the process. At first, this looked questionable, but when we investigated further, the reasons became clear:

  • New QA specialists may have a limited understanding of the steps described in test cases. When this happens, several people get pulled into acceptance testing to help clear up the situation. Here, we should also keep in mind how up to date our information about environment settings and requirements actually is.
  • Sometimes the people involved in acceptance testing spend time studying technical documentation.
  • The app itself interacts with a specific set of services. If one of them is unavailable, less experienced QA specialists will spend time describing bugs that developers will then investigate. As a result, time is wasted simply because a required service didn't come back up properly after a blackout, a hardware update, or a reboot.
  • QA testers' computers aren't very powerful. If there's no SSD, you will notice it already during installation. Also, if the app is working under heavy load, a slow paging file may come into play.
  • To be honest, we got a bit carried away here and forgot that we're talking about automation. By the way, have you closed the YouTube tab in your browser?

Now, let's return to ROI. To keep things simple, the calculations were performed in terms of time. We'll treat profit as the savings on manual testing, and the period we'll look at is one year:

Profit = (X – Y) * N = (60 – 1) * 8 = 472 days

X – time spent on one manual testing pass (60 days)
Y – time spent on one automated test run (1 day)
N – the number of times acceptance testing is performed per year (8)

Next, we’ll look at the expenses:

Expenses = A + B + C + D + E + F = 0 + 10 + 5 + 50 + 7 + 8 = 80 days

A – The cost of the automation tool license. In our case, a free tool was used.
B – Training a QA specialist (10 days)
C – Preparing the infrastructure (5 days)
D – Developing tests (50 days)
E – Running tests and describing bugs discovered in the process (7 days)
F – Test maintenance (8 days)

Total:

ROI = Profit / Expenses = 472 / 80 = 5.9

Of course, some of these figures are estimates. To sanity-check our own calculations, we spent some time investigating the capabilities offered by paid solutions and various ROI calculators. Using them, we got average ROI values of around 2 or 3, which is still a great result.
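
If you'd like to replay the arithmetic, here is the same estimate expressed as a small C# snippet; the values are the person-day figures from above, and the variable names are ours and purely illustrative.

// A minimal sketch of the ROI estimate above; all values are in person-days.
using System;

class RoiEstimate
{
    static void Main()
    {
        double manualPassDays = 60;   // X: one manual acceptance-testing pass
        double automatedPassDays = 1; // Y: one automated pass
        int passesPerYear = 8;        // N: acceptance passes per year

        double profit = (manualPassDays - automatedPassDays) * passesPerYear; // 472

        double expenses = 0    // A: tool license (a free tool was used)
                        + 10   // B: QA training
                        + 5    // C: infrastructure preparation
                        + 50   // D: test development
                        + 7    // E: running tests and reporting bugs
                        + 8;   // F: test maintenance -> total: 80

        Console.WriteLine($"ROI = {profit / expenses:F1}"); // ROI = 5.9
    }
}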

Existing Frameworks

Having dealt with the organizational questions, let's focus on the technical ones. The most important of these was choosing a framework for automating the testing of our desktop application. Based on our project's specifics, we had the following requirements:

  • Tests will be developed and run on Windows machines
  • The framework should be adapted for testing desktop applications
  • UI testing must integrate into the CI process. We were already using Jenkins, so support for it was preferable
  • The ability to write tests in a user-friendly IDE – it has to have syntax highlighting, test script navigation, and IntelliSense-style code completion
  • Minimal expenses on QA training. For certain reasons, our QA specialists didn’t want to write tests in Brainfuck
  • A community on Stack Overflow, MSDN, etc. is preferable

TestComplete

This platform initially appealed to us due to its maturity, which gave us hope regarding the technical side of things.
The first thing we encountered was an unstable and rather outdated IDE. The environment handled syntax highlighting more or less decently, but there were significant issues with navigation (Go to definition), search, and code autocompletion: roughly 60% of the time this functionality simply didn't work. The built-in recorder and the Inspect analogue worked fine. In the end, the IDE gave us an unpleasant surprise when it started passing its own arguments to the application under test. This, expectedly, caused the application to misbehave:

--no-sandbox
program files (x86)\smartbear\testcomplete12\x64\bin\Extensions\tcCrExtension\tcCEFHost.dll;application/x-testcomplete12-0-chrome-browser-agent

At this stage, we brought TestComplete's support into the picture, both to try to save time and to evaluate the quality of technical support before potentially buying a license. After a few letters, we got an answer: we should ignore the arguments passed to the application. Weird, isn't it? Investigating further, we found out that these arguments are required to test applications that use CEF. In our next letter, we stated that we do use CEF, and the support specialists told us not to ignore the arguments. When we asked how exactly to use them, the answer changed back to "ignore the arguments".
Leaving the conversation with technical support behind, we turned to the documentation (without much hope). It had more information, but nothing pertaining to the case in question. Also, according to that same documentation, the IDE should have behaved differently from the start.
It is assumed that tests in TestComplete will be written in VBScript.

If you look at it long enough, you can hear it. Microsoft suggests converting this "marvel" into PowerShell scripts. As an alternative, JavaScript and Python can be used, which helps the situation.

As a free tool, TestComplete would be bearable, but their site has a pricing page, and the prices are per user. So this is what we'd get after purchasing the tool:

  • An IDE you want to close
  • Compatibility with scripts from 1996
  • A recorder so that we don’t write everything manually
  • Another Inspect, but with bells and whistles
  • Two types of tech support answers
  • Documentation that doesn’t represent reality

Worst trade deal ever, let’s move on.

Coded UI

A tactical retreat, regrouping, and we flank the issue. On the one hand, we already knew how to work with Visual Studio as an IDE. On the other hand, our approach was shaped by the DevExpress UI components we use. As a result, we found some interesting information about the Coded UI framework, which DevExpress officially uses to automate its UI testing. The framework is so deeply integrated into their internal testing process that there's even a Visual Studio extension for it.
There was an extensive community, Microsoft promoted the tool on their website, and the product had also been mentioned on the "Microsoft Visual Studio" channel. Long story short, everything looked promising, and we started setting up the framework.
The first requirement we encountered was Visual Studio Enterprise. Moreover, this edition of Visual Studio was needed not only for writing tests, but also for running them. This means that the mstest used to launch the tests in CI must also come from the Enterprise edition.
All the necessary Coded UI tools can be installed by enabling the corresponding checkboxes when VS is installed or modified.

The approach to writing tests was rather pleasant: commands integrated into the IDE made it possible to quickly launch a recorder that generates a unit test and a "map" class describing the UI. Additional tools integrated into VS made it possible to create separate test classes without writing the code by hand.
The only peculiarity we noticed was a partial class that held the description of the controls and was split into two parts. Along with many other things, this is covered in the documentation, which is sufficient for comfortable work: code examples and screenshots make all the technical information readily accessible and easy to understand. To put it simply, when the recorder describes the UI, a "Designer.cs" file is generated; this file is responsible for reusing the code that describes the user interface. Everything the recorder couldn't handle should be written manually and kept outside of the autogenerated part of the class. This is very similar to the partial classes written by the VS designers when creating controls. The sequence of operations performed on controls, and the checks of their state, is described in a method to which the recorder helpfully adds the standard TestMethod attribute.
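
To give an idea of what a hand-written Coded UI test looks like without the recorder, here is a minimal sketch. It assumes the standard Microsoft.VisualStudio.TestTools.UITesting assemblies; the application path, window name, and button caption are made up for the example.

using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UITesting.WinControls;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[CodedUITest]
public class SmokeTests
{
    [TestMethod]
    public void ClicksExecuteButton()
    {
        // Launch the application under test (the path is illustrative)
        ApplicationUnderTest.Launch(
            @"C:\Program Files\UiAutomatedTestApplication\TestApplication.exe");

        // Describe the main window by its search properties,
        // the same way the generated "map" class does
        var mainWindow = new WinWindow();
        mainWindow.SearchProperties[WinWindow.PropertyNames.ControlName] = "MainForm";

        // Describe the button relative to its parent window and click it
        var executeButton = new WinButton(mainWindow);
        executeButton.SearchProperties[WinButton.PropertyNames.Name] = "Execute";
        Mouse.Click(executeButton);
    }
}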
The clouds began to gather over the framework when we started looking into what the recorder actually generated. First of all, it masked some of the application's own issues: the Name property wasn't set on some controls, and instead of flagging this violation, the recorder quietly fell back to searching for those controls by their text. It also handled complex controls very inefficiently: for example, TreeView nodes were located by index, which made the generated "map" class unusable as soon as the interface was extended. At that point the recorder's value dropped significantly in our eyes – what's the point of autogenerating code if you have to review it all afterwards?
We could have made peace with all of this and worked out a reasonable solution, but suddenly thunder struck: Microsoft declared the technology deprecated, making VS 2019 the last version of Visual Studio to support Coded UI. Being tied to VS 2019 now and for a couple of years ahead didn't seem that scary, but our project is quite large, so difficulties could start somewhere down the line (in 2025, for example).
Let’s summarize. With Coded UI, we’ll have:

  • A powerful paid IDE
  • All infrastructure already created for tests: both on the side of the IDE and of our CI
  • The ability to ask any developer from our project for help because we’re writing tests in C# in the same IDE
  • An extensive amount of good-quality documentation
  • A few sad QA specialists who placed their code in the autogenerated part of the class and then lost it during the autogeneration process
  • A lot of generated code that kind of works and that you need to subject to strict review
  • A completely transparent approach to CI: you can write the code for launching tests through mstest with your eyes closed
  • A slowly-dying red giant of automation which constantly grows from new tests and is dangerously close to turning either into a fading white dwarf represented by an absolutely isolated machine with irreversibly obsolete software or into a supernova blast when the project implodes under the pressure of new updates.

Everything sounded good except for the last point. This is why we needed to continue our search.

TestStack.White

In parallel with investigating Coded UI, we were prototyping tests with the help of White.
White itself is a wrapper around Microsoft's UI Automation libraries, which looked very promising, and White is conceptually similar to Coded UI. However, on closer examination, we found it to be much more austere, and you can see it everywhere – from the absence of a recorder to the actual test structure. For example, running the app, finding a window, and pressing the "Execute" button looks like this:

using System.Windows.Automation;
using TestStack.White;
using TestStack.White.Factory;
using TestStack.White.UIItems.Finders;

// Launch the application under test
var appPath = @"C:\Program files\UiAutomatedTestApplication\TestApplication.exe";
var app = Application.Launch(appPath);

// Find the main window by its AutomationId
var windowSearchCriteria = SearchCriteria.ByAutomationId("MainForm");
var window = app.GetWindow(windowSearchCriteria, InitializeOption.NoCache);

// Find the "Execute" button and invoke it through the UIA InvokePattern
var execute = window.GetElement(SearchCriteria.ByText("Execute"));
var invokePattern = (InvokePattern)execute.GetCurrentPattern(InvokePattern.Pattern);
invokePattern.Invoke();

app.WaitWhileBusy();

While there are no complaints about launching the application, the need to work with the InvokePattern class directly is very questionable. The InitializeOption class also looks strange: it publicly exposes the WithCache static member, which is supposed to be used strictly internally:

public class InitializeOption {
    //
    // Summary:
    //     This option should not be used as this is only for internal white purposes
    public static InitializeOption WithCache { get; }
    public static InitializeOption NoCache { get; }
    public virtual bool Cached { get; }
    public virtual string Identifier { get; }
    public virtual bool NoIdentification { get; }

    //
    // Summary:
    //     Specify the unique identification for your window. White remembers the location
    //     of UIItems inside a window as you find them. Next time when items inside the
    //     same window is found they are located first based on position which is faster.
    //
    // Parameters:
    //   identifier:
    public virtual InitializeOption AndIdentifiedBy(string identifier);
    public virtual void NonCached();
    public override string ToString();
}

Strange decisions like this are everywhere, and on the whole the framework turns out to be too abstract for QA specialists.

The documentation is of decent quality and left a good overall impression. The project's source code is hosted on GitHub, but the latest commit was dated January 8th, 2016.
Summing up the information about White, we would have:

  • Decent documentation
  • Access to the source code
  • A small community
  • The necessity to explain to all QA specialists that the control’s behaviour is implemented through the Pattern class
  • An old repository from which we would definitely need to fork

The most unpleasant part was the need to develop our own framework on top of it, which we wanted to avoid. So we had to move on.

Appium

We had come across Appium earlier in our search, but only started considering it seriously after Microsoft deprecated Coded UI.
At first glance, testing with Appium looks like a slot machine with three reels. The first reel is the language: there is a client API for interacting with the driver, which lets us write tests in any familiar language – Python, C#, Java, and so on. The second reel is the driver application that serves as an intermediate layer between the tests and the product under test. As described in the documentation, the tests communicate with the driver over the JSON Wire Protocol – this is what actually makes it possible to write tests in any language. The third reel is the object under test: it doesn't really matter whether it's a website, a mobile app, or a desktop app, as long as the corresponding driver is running. As you can see, the components are elegantly interchangeable.
The package also looked up to date – the GitHub page showed that the repository had fresh commits. While examining the WinAppDriver repository, we learned that it even includes a recorder.
We started noticing some issues while writing a prototype. For example, because the framework is so multi-purpose, the WindowsElement class that represents a desktop control exposes a FindElementByCssSelector method which throws the following exception when called: "Unexpected error. Unimplemented Command: css selector locator strategy is not supported". In the next article, we'll discuss the issues we encountered while working with this framework in more detail; for now, I'll just say that we managed to handle them all.
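
To illustrate the approach, here is a minimal sketch of a WinAppDriver-based test written with the C# Appium client. It assumes the Appium.WebDriver 4.x NuGet package and a WinAppDriver instance listening on its default http://127.0.0.1:4723 endpoint; the application path and AutomationId are made up for the example.

using System;
using OpenQA.Selenium.Appium;
using OpenQA.Selenium.Appium.Windows;

class AppiumSmokeTest
{
    static void Main()
    {
        // Tell WinAppDriver which application to launch (the path is illustrative)
        var options = new AppiumOptions();
        options.AddAdditionalCapability("app",
            @"C:\Program Files\UiAutomatedTestApplication\TestApplication.exe");

        // The session talks to the driver over the JSON Wire Protocol
        using (var session = new WindowsDriver<WindowsElement>(
                   new Uri("http://127.0.0.1:4723"), options))
        {
            // Locate the button by its AutomationId and click it
            var execute = session.FindElementByAccessibilityId("Execute");
            execute.Click();
        }
    }
}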

As a summary, here’s what we’ll have while using Appium:

  • The ability to test application functionality that requires interaction with a browser (opening the feedback page, online activation, checking email delivery) in the scope of one infrastructure and one test
  • The ability to work with any edition of Visual Studio
  • The ability to test a desktop application that uses a browser to render UI. A good example of this would be Azure Data Studio
  • All advantages we get with Coded UI
  • A free framework that Microsoft recommends using
  • Infrastructure that is familiar to QA specialists who worked with Selenium
  • A repository updated with fresh commits
  • A decently large community that is, however, not as large as Coded UI’s community
  • A recorder with limited functionality
  • The necessity to run a driver application for testing. Not very convenient, but it has its own logging functionality
  • A lot of opportunities for shooting yourself in the foot due to WindowsElement's unfortunate inheritance from AppiumWebElement

After going through all of our framework requirements and comparing the issues found in each of these frameworks, we finally chose Appium.

Conclusion

It was interesting to work with all of these frameworks, because each of them is built on its own philosophy of automated testing. Some of them are only beginning their journey, while others are past their prime or have already faded away. You can avoid getting lost among the many available solutions by drawing up a list of specific requirements for the tool and by having a responsible team with well-established communication between its members. And don't forget that future tests are as much of a project as regular code is, with a backlog, boards, CI, refactoring, and everything else.
