
Mobile Compatibility Testing: How to perform On Real Devices?

How many times have you heard the sentence that starts with “This software is compatible with Windows 8 and above” or “Only compatible with Android version 6.0 and above”? I am sure this would ring a bell for all gamers as games are highly sensitive to the platform they are run on.

But even for non-gamers, this is the first thing to check when they know they are running an older version of the operating system. This raises a question in our minds as developers and testers: if end-users are so concerned about and rely so much on the compatibility of the software, we should be extra cautious, and confident, when we write those OS or version numbers.

The compatibility of software with a device is a very broad term; it essentially covers everything that can run the software. However, considering how dominant mobile devices are today, when we narrow this down to mobile devices it is called mobile compatibility, and this post is all about managing that. From deriving the target devices to defining those version numbers, the journey is explored here with practical examples. But before everything, let’s clear some air around mobile compatibility and its relevance in today’s world.

What is mobile compatibility?

As of today, there are 16 billion mobile devices in the world, ranging from the first iPhones to devices running the latest Android version. They differ in hardware specifications, screen resolutions, and almost anything else we can imagine.


When we witness such a wide range of devices, organizations tend to state clearly which devices their software can be used on (sometimes adding the word “optimal” to hint at the best performance). So we often find ourselves reading headlines or discussions such as Samsung’s commitment to provide four major OS upgrades for its newest device, the Samsung Galaxy S23. In other words, Samsung commits that the device will be “compatible” with at least four major OS updates.

The word “compatible” assures us that the device will be able to run the OS smoothly through at least four future upgrades. This is an important point, because the more compatible a device is, the newer the third-party software it will be able to run, from what is available today to what will be released over at least the next four years.

This is where mobile compatibility plays a role. When we say a device is compatible with a piece of software, or vice versa, we indicate that the software will work at its best on that device. This, however, is a synchronized affair of hardware, operating system processes, OS APIs, software APIs, and a lot more.

For a developer, this is more than just writing an operating system version number in the release notes. In fact, an individual developer has surprisingly little say in this attribute.

What decides the mobile compatibility of software?

So the question is, what decides the mobile compatibility of a piece of software? Surprisingly, it starts with the first email or meeting that establishes the requirement for a new version of the software.

Consider the process from the requirement to release as follows (customer-centric approach):

  • First, customers raise the need for a certain feature (it could be something new or something that competitor software already offers).

  • Appropriate teams convey this to the developers.

  • Developers decide to implement the feature.

  • After working out the solution, developers finally implement the requirement.

  • The newer version is released to the end-users.

As we can observe, even though the implementation happens at the second-to-last step, its effect is chained to a lot of people, including the customer. This is also the point at which mobile compatibility is decided.

For a simple example, consider that we already have the software in the market and are currently working on releasing a new version. At present, our application requires a minimum Google Chrome version of 55. But the new requirement can only be satisfied if we use the font-display property of CSS, which is supported only in Google Chrome 60 and above. If we need to use it (and assuming no fallbacks are available or used), we cannot let users open the web app on versions before 60, as they would not be able to use the feature. So, for this particular release, our minimum compatible Google Chrome version moves from 55 to 60.
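To make that threshold visible in the product, the release can surface the gate directly. The TypeScript sketch below is only an illustration under assumptions: the requiredChromeVersion constant and the warning UX are hypothetical, and in practice feature detection (or a CSS fallback font) is usually preferred over user-agent sniffing.

```ts
// Minimal sketch: surface the new minimum Chrome version introduced by the
// font-display requirement. User-agent sniffing is used only for illustration.
const requiredChromeVersion = 60; // font-display is supported from Chrome 60

function detectChromeMajorVersion(userAgent: string): number | null {
  const match = /Chrome\/(\d+)/.exec(userAgent);
  return match ? Number(match[1]) : null;
}

const detected = detectChromeMajorVersion(navigator.userAgent);
if (detected !== null && detected < requiredChromeVersion) {
  // The exact UX (banner, blocking page, silent fallback) is a product choice.
  console.warn(
    `Chrome ${detected} detected; this release requires Chrome ${requiredChromeVersion} or newer.`
  );
}
```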

A similar thing happens when features get deprecated. If we use RTCPeerConnection.getStreamById() and it is deprecated in Google Chrome 62, we need to replace it with new code for newer versions. If the replacement is available in earlier versions too, mobile compatibility may not change much. However, if RTCPeerConnection.getStreamById() has to be replaced by a new API, the mobile compatibility changes.
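A hedged sketch of how such a deprecation is typically absorbed: branch on whether the old method still exists, and otherwise fall back to the track-based APIs. The reconstruction below is a simplification for illustration, not a drop-in equivalent of getStreamById().

```ts
// Sketch: tolerate the removal of the deprecated getStreamById() by falling
// back to RTCPeerConnection.getReceivers(). The fallback rebuilds a stream
// from all received tracks, which is only an approximation.
type LegacyPeerConnection = RTCPeerConnection & {
  getStreamById?: (streamId: string) => MediaStream | null;
};

function findStream(pc: RTCPeerConnection, streamId: string): MediaStream | null {
  const legacy = pc as LegacyPeerConnection;
  if (typeof legacy.getStreamById === 'function') {
    // Older Chrome builds: the deprecated API is still available.
    return legacy.getStreamById(streamId);
  }
  // Newer builds: derive a stream from the track-based API instead.
  const tracks = pc.getReceivers().map(receiver => receiver.track);
  return tracks.length > 0 ? new MediaStream(tracks) : null;
}
```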

Testing mobile compatibility

The previous section makes it clear what mobile compatibility is and how we decide which compatibility to record for the end-user before release. This is where the developer’s task finishes, and up to now everything has been based on a theoretical configuration match. The next step is to test whether the mobile compatibility we claim actually holds. In mobile compatibility testing, we verify that the software works across browsers, operating systems, devices, versions, and everything in between. The devices we test are, naturally, those that lie above the compatibility threshold given by the developers. In the end, our aim in performing mobile compatibility testing is to confirm that all features work correctly on all target devices; along the way, we encounter bugs and glitches to rectify.

Importance of mobile compatibility testing – The most affected elements

When we perform mobile compatibility testing successfully, we ensure that whatever we have developed works as intended, which means we are aiming for the best possible user experience. However, if we leave mobile compatibility to luck, which elements can we expect to be most affected?

Orientation

Mobile devices are generally rectangular, taller than they are wide in their default portrait orientation. While the transition from desktop (which has a landscape orientation) to a mobile device is relatively easy to achieve (thanks to responsiveness and automatic scaling), it is not as easy when the switch happens on the same device. For example, once testing is done in the mobile’s portrait orientation, the user may still find UI glitches when the device is turned 90 degrees. Since the viewport (the mobile screen) stays the same, elements may scale beyond it (or fall short of it). This is a common problem on devices that are not tested properly.

[Screenshot: a search result shown in portrait and landscape orientation]

Orientation-based bugs are not always obvious; they are not something anyone, especially a less technical user, can immediately point out. For instance, the image above shows the search results for “flutter app for assistant”. When we turn the device to landscape mode, notice how the notification bar extends to match the viewport but the application does not. As a result, a black strip appears between the viewport and the application boundary. This is a severe bug, and one that mobile compatibility testing can catch.
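When testing or debugging such issues, it helps to hook into orientation changes explicitly. A minimal sketch in TypeScript, assuming a hypothetical relayout() hook in the application:

```ts
// Sketch: log viewport dimensions whenever the orientation flips so layout
// issues like the black strip above can be reproduced and inspected.
const landscapeQuery = window.matchMedia('(orientation: landscape)');

function onOrientationChange(isLandscape: boolean): void {
  console.log(
    isLandscape ? 'landscape' : 'portrait',
    `viewport: ${window.innerWidth}x${window.innerHeight}`
  );
  // relayout(); // hypothetical application hook that re-measures the viewport
}

onOrientationChange(landscapeQuery.matches);
landscapeQuery.addEventListener('change', event => onOrientationChange(event.matches));
```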

Pre-production bugs identification

Bug rectification could easily be cited as one of the most expensive processes in the software development cycle. A bug will eventually be uncovered; what matters is the stage at which it is found and handed over to the developers. The later that happens, the more it costs us. This graph from NIST shows how the cost of bug rectification increases with the stage at which the bug is found:

[Graph: cost of bug rectification by the stage at which the bug is found (NIST)]

As the last bar in the graph shows, the cost of correcting a bug found after production is 30x the original cost. Mobile compatibility testing ensures this does not happen and that we catch the bug in the testing phase. It will still cost more than it would at the coding phase, but substantially less than after production.

Identifying bugs pre-production also saves us from rushed fixes, which risk affecting another part of the code and introducing yet another bug. And when bugs are found at the user’s end, they lead to bad publicity and mistrust among users.

Navigation handles and path flow

Different mobile devices have different screen sizes. Making everything explicitly responsive is not recommended for developers, so a lot is left to auto-scaling through meta tags and media queries. Surprisingly, these work really well, especially when combined with front-end frameworks like Bootstrap. However, auto-scaling poses one small problem: it does not know which elements need to shrink, and by how much. As a result, every element is scaled down by a constant ratio, which leads to a poor experience.

For example, typography is an important element of a web application. We want users to read our content and connect with the business instantly. However, if we rely on auto-scaling, the font size may shrink to 14, 12, or sometimes as low as 10 pts. This becomes unreadable and forces the user to pinch-zoom. As an organization, this should always be avoided.

Similarly, navigation buttons and other path flows are affected. Sometimes an icon becomes so small that when the user tries to tap it, they press an adjacent button instead. Likewise, menu items or sidebar lists shrink in font size and their clickable area is reduced, hindering the user’s navigation.
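Checks like these can be partially automated. The sketch below scans a page for text and tap targets that fall below illustrative thresholds (12 px text, 48 px targets); the numbers are assumptions for the example, not universal rules.

```ts
// Sketch: flag elements that auto-scaling may have shrunk too far.
const MIN_FONT_PX = 12; // illustrative threshold
const MIN_TAP_PX = 48;  // illustrative threshold

document.querySelectorAll<HTMLElement>('a, button').forEach(element => {
  const fontSize = parseFloat(getComputedStyle(element).fontSize);
  const { width, height } = element.getBoundingClientRect();

  if (fontSize < MIN_FONT_PX) {
    console.warn('Text may be unreadable on this viewport:', element, fontSize);
  }
  if (width < MIN_TAP_PX || height < MIN_TAP_PX) {
    console.warn('Tap target may be too small to press accurately:', element, width, height);
  }
});
```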

Customer satisfaction

All the points discussed in this section eventually boil down to the experience a user gets from the website, because a good experience is what ultimately generates revenue and retains customers.

When mobile compatibility testing is left to luck, a user may hit many small bugs that make their experience rocky. The user could report the problem so the company can rectify it and release a patch within a day or two, but only 1 out of 26 people go this far and let the business know about their experience. As a business, we can never rely on users reaching back to tell us; far more often, they simply abandon the website (or app) and never come back.

The second scenario is when users spread the word to people they know. The people on the receiving end know them, so they naturally trust the opinion. A staggering 52% of users do this after a bad experience. So even if we assume each person influences only 5 people, 100 users with a bad experience translate into 260 additional lost users. That is something that cannot be ignored.

The final scenario is posting a review online. This has the most damaging effect on a business and, unsurprisingly, is the most popular choice among users. 97% of people are influenced by customer reviews, and this affects around $400 billion in revenue in just one segment of eCommerce. Once a negative review about a web application spreads, it takes a lot of work to neutralize its effect.

These three scenarios highlight the importance of customer satisfaction and the damage that can follow if the application is not tested for mobile compatibility.

Type of mobile compatibility tests

The next thing we need to focus on once we have decided to conduct mobile compatibility testing is the type of mobile compatibility tests. In other words, what makes mobile compatibility testing complete?

Hardware

Everything that runs on an electronic device such as a mobile phone does so thanks to hardware capabilities. In popular terms we know a few of these as RAM and ROM, but hundreds of hardware components contribute to making an application work. For instance, you might have to test the camera or the microphone if the application uses them. In mobile compatibility testing, the first task is to test the application against the device’s hardware: work out the minimum (threshold) specifications and what type of hardware the application works best on.
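For a web app, part of this hardware check can be scripted. A minimal sketch, assuming the application depends on a camera and a microphone:

```ts
// Sketch: verify that the camera and microphone the app depends on are present
// and actually usable on this device, not just listed.
async function checkCameraAndMicrophone(): Promise<void> {
  const devices = await navigator.mediaDevices.enumerateDevices();
  const hasCamera = devices.some(device => device.kind === 'videoinput');
  const hasMicrophone = devices.some(device => device.kind === 'audioinput');
  console.log({ hasCamera, hasMicrophone });

  if (hasCamera && hasMicrophone) {
    // Requesting a stream proves the hardware works (and permission is granted).
    const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
    stream.getTracks().forEach(track => track.stop()); // release the hardware again
  }
}

checkCameraAndMicrophone().catch(error => console.error('Hardware check failed:', error));
```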

Software

Once the hardware is tested, we need to make sure the application works with the software that helps it run. Everything beyond hardware is mostly software, and software compatibility is one of the major issues that surfaces when testing is weak.

The software includes operating systems, browsers, kernel software, mobile manufacturers’ software (applications) that run third-party applications, etc.

Network

Mobile devices are, well, mobile in nature. They are not fixed in one place, so if your application consumes bandwidth, that needs to be tested. When testing mobile compatibility across varying network bandwidths, the most important check is whether anything loads at all on a weak network. When an application shows only a loader (or nothing), users generally abandon it; they are far more patient when at least something is visible on the screen.
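One hedged way to act on this inside a web app is the Network Information API, which has limited browser support and should be treated as best effort; the lightweight-placeholder hook below is hypothetical.

```ts
// Sketch: detect a weak connection and prefer lightweight content over a bare
// loader. navigator.connection is non-standardized and not available everywhere.
type NetworkInformation = { effectiveType?: string; saveData?: boolean };

const connection = (navigator as Navigator & { connection?: NetworkInformation }).connection;
const isWeakNetwork =
  connection?.saveData === true ||
  connection?.effectiveType === 'slow-2g' ||
  connection?.effectiveType === '2g';

if (isWeakNetwork) {
  // Hypothetical hook: render text-only placeholders before heavy assets.
  console.log('Weak network detected; loading lightweight content first.');
}
```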

Versions

Finally, whatever we have covered in these test types needs to be repeated across the various versions of each respective area. For instance, once browser compatibility testing has been done to check for general browser-based anomalies, we should repeat it on other browsers and their versions. This matters because the version number is the first thing a user looks at when judging mobile compatibility. An application will definitely work on Android if it is available on Google Play, but from which version onward? A mobile web app will definitely open in a browser if the URL is correct, but which versions will render the web elements correctly and at their best?
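In practice, this repetition is usually expressed as a matrix of target configurations that every test runs against. A minimal sketch, where the configurations and the runTest() helper are placeholders rather than any specific tool’s API:

```ts
// Sketch: repeat the same checks across browser/OS versions.
interface TargetConfig {
  browser: string;
  browserVersion: string;
  osVersion: string;
}

const matrix: TargetConfig[] = [
  { browser: 'Chrome', browserVersion: '60', osVersion: 'Android 8.0' },
  { browser: 'Chrome', browserVersion: '90', osVersion: 'Android 11' },
  { browser: 'Firefox', browserVersion: '88', osVersion: 'Android 11' },
];

async function runTest(name: string, target: TargetConfig): Promise<void> {
  // Placeholder: hand the configuration to whatever runner or device cloud is in use.
  console.log(`Running "${name}" on ${target.browser} ${target.browserVersion} / ${target.osVersion}`);
}

(async () => {
  for (const target of matrix) {
    await runTest('login flow renders correctly', target);
  }
})().catch(console.error);
```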

These tests are done to mark mobile compatibility testing as complete. However, we have two methods to conduct these tests. These are called types of mobile compatibility testing.

Types of mobile compatibility testing

When we prepare a test automation strategy or path flow for testing, we need to settle on the direction in which to proceed. For mobile compatibility testing, that direction is either forward or backward.

Backward compatibility testing

Most of what we have been hinting at in this post comes under backward compatibility testing. In this type of mobile compatibility testing, we take our latest developed application and test it on older versions of software, hardware, apps, networks, etc.

The main goal of backward compatibility testing is to serve the people who do not run the latest version of every piece of software and hardware, and that number is extremely large. For instance, if we just take the Android operating system, only 17.7% of people worldwide are on the latest build.


This means that with backward mobile compatibility testing we cover all the people on the remaining versions. The same goes for browser versions, hardware versions, and so on.
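A common way to pick the backward-compatibility threshold is to add up the market share of every version at or above a candidate minimum. The sketch below only illustrates that arithmetic; the share percentages are made-up placeholders, not real statistics.

```ts
// Sketch: estimate user coverage for a candidate minimum Android version.
// The percentages are placeholders for illustration only.
const androidShareByVersion: Record<number, number> = {
  13: 18,
  12: 20,
  11: 17,
  10: 14,
  9: 10,
};

function coverageFor(minVersion: number): number {
  return Object.entries(androidShareByVersion)
    .filter(([version]) => Number(version) >= minVersion)
    .reduce((total, [, share]) => total + share, 0);
}

console.log(`Supporting Android 11 and above covers roughly ${coverageFor(11)}% of users.`);
```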

Forward compatibility testing

When we flip the definition of backward compatibility testing, we get forward compatibility. Here, we assume that the user will not (or, for earlier versions, has not) updated our application to the latest release. This is also a common scenario across the globe. So, do we know what happens if the user updates their operating system but not our application? We don’t! Forward compatibility testing gives us the answer.

In forward compatibility testing, we test our application on newer versions of software, hardware, networks, etc. Generally things work correctly, but deprecations and modifications here and there can create small hurdles. Those are found and corrected through forward compatibility testing.

Type of devices to consider in mobile compatibility testing

One of the main concerns of any organization in the case of mobile compatibility testing is the type of device they need to have so that:

  • the budget does not rise too high.
  • mobile app (or web) quality is not compromised.

The choices for anyone ranging from a student to a freelancer to a big organization lie between these three types:

  • A simulator.
  • An emulator.
  • A real device.

A comparison between a simulator, an emulator, and a real device is a topic of great length. For now, we can jump straight to the conclusion: the accuracy, testing quality, and reliability that come from real-device testing are unmatched by the other two. However, the major concern people have about procuring real devices is the cost. If someone asks why they should buy a mobile device worth $1000 when they can use an emulator for free, there is really not much to debate.

The only middle-ground solution is to rent the devices instead of buying them: we save on procurement costs and return the devices when we are done. We can save even more if we rent these devices but never take physical possession of them, keeping them on the owner’s infrastructure and eliminating maintenance costs and everything in between. Then all we need is one system with an internet connection and a browser. This is where Testsigma comes into play.

How to perform mobile compatibility testing on real devices with Testsigma?

Testsigma is a cloud-based test automation platform that covers everything from desktops to mobile devices. The best part of using Testsigma is that no programming language is needed to write the test scripts: tests are written in plain English, so testers can focus on testing rather than on creating and maintaining scripts.

Testsigma hosts real devices on its own infrastructure, and all of them can be accessed by signing up for free on the platform.

Once signed up, log in to your account to open the dashboard:

[Screenshot: Testsigma dashboard after login]

Select the specification of the device you wish to use:

[Screenshot: selecting the device specification]

Next, write automated test cases in English that will be executed on the selected device.

[Screenshot: automated test cases written in English]

Click execute to run the tests.

[Screenshot: running the tests in Testsigma]

Testsigma also provides a mobile test recorder that converts a tester’s actions into test cases (in English, too!). These tests can be saved and re-run on various other systems. All of this can be achieved without writing a single line of code.



Challenges in mobile compatibility testing

Working with Testsigma seems like a smooth ride, but the platform only eliminates the challenges associated with a tool. For instance, one major challenge in scripted testing is knowledge of the supported programming languages, which also makes it harder for a team to settle on a single tool. Mobile compatibility testing itself, however, comes with a few challenges of its own, and skipping them would leave this post incomplete. Let’s analyze them one by one in this section.

Extreme fragmentation

The digital world is extremely fragmented today. From digital watches to TVs to mobile devices, technology improves every day and is incorporated into these devices regularly. As a result, the pace of device releases has overtaken the pace at which an individual purchases them. Currently, we can expect an annual upgrade of each device, plus many more new devices introduced within that same year.

Such a highly fragmented world poses one major challenge for testers performing mobile compatibility testing: how do we test them all? What looks like an impossible feat is actually doable, and in a shorter time than expected. Cloud-based platforms like Testsigma make almost every type of device available to testers; since they cater to a wide variety of organizations, testers, and projects, whatever you are trying to find will probably be available. Choosing the manual path, however, may demand far more time than anticipated.

Network testing accuracy

Network testing requires us to test the mobile application and mobile web on various network bandwidths. That part is easy: switch the network to 2G, 3G, Wi-Fi, 4G, and so on, and analyze the metrics. The difficult part is not when we already know the network bandwidth, but when it changes continually.

For instance, when a user is traveling, the network is expected to switch between faster connections, slower connections, and sometimes no reception at all. In such cases, it is better to optimize the website to load at least a lightweight element so that the user has something to look at.
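Scripted throttling makes these conditions reproducible. The sketch below assumes Puppeteer and its DevTools-based network emulation as one tooling choice; real-device clouds typically expose their own equivalents.

```ts
// Sketch: load the page under an emulated "Slow 3G" profile and time it.
// Assumes Puppeteer is installed; the target URL is a placeholder.
import puppeteer, { PredefinedNetworkConditions } from 'puppeteer';

async function main(): Promise<void> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Apply the throttling profile before navigating to the page under test.
  await page.emulateNetworkConditions(PredefinedNetworkConditions['Slow 3G']);

  const start = Date.now();
  await page.goto('https://example.com', { waitUntil: 'domcontentloaded' });
  console.log(`Loaded under Slow 3G in ${Date.now() - start} ms`);

  await browser.close();
}

main().catch(console.error);
```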

Recently, browsers have also started to inform users when there is no connection at all. However, that only helps when network reception is entirely absent. Testing all of these scenarios is a challenge in mobile compatibility testing.

Hardware drainage and other metrics

The third most challenging thing in mobile compatibility testing is measuring our application’s drain on the hardware. When we run the application on a mobile device, it consumes the device’s resources according to its use, and how intensively it uses them may impact the hardware’s health and, ultimately, its life.

For instance, the battery is the piece of hardware users are most concerned about. Recently, Google Chrome introduced an energy saver mode, as the browser had often been criticized for generating heat and draining the battery.


Since it is the most visible parameter, users generally worry far more about their battery life than, say, the life of their RAM. To make sure users do not uninstall the application purely because of hardware issues, mobile compatibility testing needs to keep track of each piece of hardware on each device. This is a challenge, considering that not only software versions but hardware too gets upgraded frequently (sometimes twice a year).
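A rough, browser-side approximation of battery impact can be sampled with the Battery Status API, which is Chrome-only and only indicative; dedicated device-lab metrics are far more reliable. The run() callback below stands in for a hypothetical scenario under test.

```ts
// Sketch: sample battery level before and after a scenario to estimate drain.
type BatteryManager = { level: number; charging: boolean };
type BatteryNavigator = Navigator & { getBattery?: () => Promise<BatteryManager> };

async function batteryLevel(): Promise<number | null> {
  const nav = navigator as BatteryNavigator;
  if (!nav.getBattery) return null; // API unavailable on this browser
  const battery = await nav.getBattery();
  return battery.charging ? null : battery.level; // charging would skew the reading
}

async function measureDrain(run: () => Promise<void>): Promise<void> {
  const before = await batteryLevel();
  await run(); // hypothetical scenario, e.g. a long scrolling or video session
  const after = await batteryLevel();

  if (before !== null && after !== null) {
    console.log(`Battery drained roughly ${((before - after) * 100).toFixed(1)}% during the run.`);
  }
}
```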


Overcoming these challenges is a little tricky, but once done, it gives you a smooth ride through the testing phase.

Summary

Whether a mobile application is compatible with the user’s device is something that can make or break a business. If users cannot use an application, they will not only uninstall it but may also spread negative reviews on the internet, resulting in a huge loss of business. To avoid such scenarios, we perform mobile compatibility testing, which ensures that a user on a target device will (ideally) not face any problems or bugs.

This post explores the concept of mobile compatibility testing along with the tests it includes. These tests can be executed in either a backward or a forward flow, giving rise to the two techniques used in this phase. Testers who work on mobile compatibility testing also face a few challenges, many of which can be eliminated by a cloud-based solution such as Testsigma that provides real devices on its own infrastructure. With that, we can conclude this guide and hope it proves a fruitful reference for your future projects. Thank you for giving this post your valuable time.

Frequently Asked Questions

What tool is used for compatibility testing?

Since mobile compatibility testing demands a large array of mobile devices along with automation capability, any tool that covers both requirements may be sufficient. As a recommendation, cloud-based online tools such as Testsigma tend to work in the best interest of mobile compatibility testing.

What is Android compatibility testing?

Android compatibility testing is testing an application on mobile devices running the Android operating system. It is a subset of mobile compatibility testing and includes all the strategies discussed in this post.

What is the process of compatibility testing?

Mobile compatibility testing is the process of testing the compatibility of your mobile app with different mobile devices. It includes testing hardware, software, network, and other applications in turn. Done manually, the process can be time-consuming and very costly, so looking for a tool to automate it is a good alternative.

