What You Do Not Know About Software Quality

We all want the software in the devices we use to be top quality. However, have you considered what is meant by Software Quality?

It is very difficult to articulate a set of criteria that summarises a comprehensive quality metric. As such, it is necessary to break the term down and look at various aspects, such as reliability, correctness, completeness, consistency, usability and performance.

Bear in mind that one quality attribute may matter more for one class of device than for another; software quality is a multidimensional quantity which can be measured in many ways. In this introductory article, I will explore what determines software quality.

Quality Attributes

There are several indicators we can look at to determine the quality of software. We can divide these into one of two classes: static and dynamic. Static properties refer to the code itself and the documentation associated with it, while dynamic quality attributes describe the application’s behaviour while in service.

Structured, maintainable, testable code and accurate, complete documentation are static quality attributes.

You may have come across complaints like “Your product is fantastic, I like the features it provides, but its user manual doesn’t help!” In this case, the user manual brings down the overall quality of the product.

If you are a software engineer working on corrective maintenance of an application, you will most likely need to understand parts of the code before changing it.

This is where metrics concerning things such as code documentation, code comprehensibility and software structure come into play. An under-documented application will be difficult to understand and therefore difficult to change; likewise, unstructured code will be harder to maintain and more difficult to test.
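
To make these static attributes concrete, here is a minimal, hypothetical C function (the name and purpose are invented for this article). It is commented, has a single responsibility and no hidden state, so it is easy to understand, to change and to test in isolation:

```c
/*
 * clamp_temperature() - limit a raw sensor reading to a safe range.
 *
 * reading_c: temperature in degrees Celsius, as read from the sensor
 * min_c:     lowest value considered valid
 * max_c:     highest value considered valid
 *
 * Returns the reading unchanged when it lies within [min_c, max_c],
 * otherwise the nearest bound. A pure function with no globals or
 * side effects, so it can be unit-tested without any scaffolding.
 */
int clamp_temperature(int reading_c, int min_c, int max_c)
{
    if (reading_c < min_c)
        return min_c;
    if (reading_c > max_c)
        return max_c;
    return reading_c;
}
```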

Dynamic Testing

Attributes of dynamic quality include the reliability of the program, correctness, completeness, usability, consistency and performance.

Reliability refers to the likelihood of failure-free operation. Correctness refers to an application operating as expected, and should always be tied to defined requirements.

For a tester, the operation should be correct according to the specifications; a customer expects the operation to match the user manual.

Completeness refers to the availability of all the features specified in the user manual or the requirements. An incomplete program is one that does not execute all the necessary functions in full. Of course, with each new version of an application you can expect additional functionality, but a version is not incomplete merely because its successor offers a few extra features. We define completeness in the context of a set of features that may themselves be a subset of a larger set to be implemented in some future version of the application.

Consistency means adherence to a common set of conventions and to common sense. For example, all buttons in the user interface should follow a common convention on colour coding. An example of inconsistency would be a database application that displays a person’s date of birth in different formats in different places, regardless of the preferences of the user.
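
A minimal sketch of how such an inconsistency is usually avoided: route every date through one formatting function that respects the stored user preference. The enum, names and formats below are invented for illustration:

```c
#include <stdio.h>

/* Hypothetical user preference for date display. In a consistent
 * application, every screen formats dates through the one routine
 * below rather than choosing its own format. */
enum date_style { DATE_ISO, DATE_DMY, DATE_MDY };

/* Format a date of birth into buf according to the user's preference. */
const char *format_dob(char *buf, size_t len, int year, int month, int day,
                       enum date_style style)
{
    switch (style) {
    case DATE_DMY:
        snprintf(buf, len, "%02d/%02d/%04d", day, month, year);
        break;
    case DATE_MDY:
        snprintf(buf, len, "%02d/%02d/%04d", month, day, year);
        break;
    case DATE_ISO:
    default:
        snprintf(buf, len, "%04d-%02d-%02d", year, month, day);
        break;
    }
    return buf;
}
```

Routing all views through one such function makes the inconsistency described above impossible by construction.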

Reliability

People want software that works perfectly every time it’s used. That, however, rarely happens. Most software in use today contains faults which cause it to fail on some combination of inputs; the idea that software can be 100% fault-free is false.

Because we know most software applications are flawed, the aim is to determine how regularly a piece of software will malfunction. This is known as ‘reliability’ and can be defined in one of two ways:

Software reliability is the probability of failure-free operation of software over a given time interval and under given conditions.

Or

Software reliability is the probability of failure-free operation of software in its intended environment.

These two definitions have both benefits and disadvantages. The first requires knowledge of its users’ expectations, which may be difficult or impossible to determine accurately. Nevertheless, if we can calculate an operational profile for a given class of users, then for that class we can arrive at a reasonable estimate of reliability. The second definition is appealing in that only a single number is needed to characterise the reliability of a software application across all its users. Such figures are, however, hard to calculate.
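
As a worked illustration of the first definition (the constant-failure-rate model and the numbers are assumptions made for this sketch, not something the definitions prescribe): with a constant failure intensity $\lambda$, the probability of failure-free operation over an interval of length $t$ is

$$R(t) = e^{-\lambda t}$$

so with $\lambda = 0.01$ failures per hour, $R(100) = e^{-1} \approx 0.37$: roughly a 37% chance of running for 100 hours without failure.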

Non-functional Testing

Usability testing means having the product tested by its potential users.

The software organisation invites selected potential users to test the product and report back. Customers in turn check for ease of use, expected functionality, consistency, safety and security.

Customers act as an important source of feedback, which may not have been considered by developers or testers within the organisation. Usability testing is also known as user-centric testing.

Performance refers to the amount of time it takes the application to carry out a requested function. Performance is a non-functional requirement, and it is defined in terms such as “This task must be completed in one second”.
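
A minimal sketch of how such a requirement might be checked in a test. perform_task() is a hypothetical stand-in for the operation being timed, and the timing uses the POSIX monotonic clock:

```c
#include <assert.h>
#include <time.h>

/* Placeholder for the operation under test; invented for this sketch. */
static void perform_task(void)
{
    /* ... the work whose duration is being checked ... */
}

int main(void)
{
    struct timespec start, end;

    /* Time the call against the requirement "must complete in one second". */
    clock_gettime(CLOCK_MONOTONIC, &start);
    perform_task();
    clock_gettime(CLOCK_MONOTONIC, &end);

    double elapsed = (double)(end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) / 1e9;

    assert(elapsed <= 1.0);   /* the test fails if the deadline is missed */
    return 0;
}
```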

Requirements and Test Criteria

Products, especially software, are designed to meet requirements. Requirements describe the functions you expect a product to perform. Once the product is ready, it is the requirements which determine the test criteria, though of course the requirements may have changed during product development from what was originally defined.

The product’s expected behaviour is determined by the tester’s interpretation of the specifications, regardless of whether they have been kept up to date. A program is considered tested if it behaves correctly on all inputs. However, even for the most trivial piece of software, the collection of all potential inputs is typically too large to test each one; exhaustive testing is therefore seldom used in practice.

It is usually left to the tester to determine what makes up all possible inputs. The first step is to analyse the requirements. If the requirements are complete and unambiguous, the collection of all possible inputs can be defined.

A software component is considered functionally correct if it responds to each input as planned. Identifying the collection of invalid inputs and checking the software against those inputs are important components of testing. Where the behaviour on invalid inputs is not defined by the specifications, the programmer should build in functionality to handle or reject them.
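
A small, hypothetical example of that practice in C (the function and its valid range are invented for illustration): the routine rejects out-of-range values explicitly, and the tests exercise invalid inputs just as deliberately as valid ones:

```c
#include <assert.h>

/* Set a volume level. The specification only defines behaviour for
 * 0..100, so out-of-range values are explicitly rejected rather than
 * left to chance. */
int set_volume_percent(int percent, int *volume_out)
{
    if (percent < 0 || percent > 100)
        return -1;              /* invalid input: rejected */
    *volume_out = percent;
    return 0;                   /* valid input: accepted */
}

int main(void)
{
    int volume = 0;

    /* Valid inputs behave as planned... */
    assert(set_volume_percent(0, &volume) == 0 && volume == 0);
    assert(set_volume_percent(100, &volume) == 0 && volume == 100);

    /* ...and invalid inputs are checked just as carefully. */
    assert(set_volume_percent(-1, &volume) == -1);
    assert(set_volume_percent(101, &volume) == -1);

    return 0;
}
```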

There are many aspects to juggle when building out a suite of tests to help improve your software quality. To improve the situation, automation and tools can assist with planning, constructing, building, running and reporting on tests against your code.

Tools to help you test

Cantata is the safety-certified unit and integration testing tool from QA Systems, enabling developers to verify standard-compliant or business-critical code on host native and embedded target platforms. It is a complete tool for constructing, running and reporting on your software tests at unit and integration level.

Cantata offers a comprehensive software testing tool which supports test construction, linking to requirements and sophisticated reporting. Cantata is available from QA Systems and its international network of authorised resellers. Further information on the Cantata product can be found at the QA Systems website.

There you can request a demonstration, contact our software quality experts, and request a free trial of Cantata & Cantata Team Reporting. If you want to be covered for your software testing, we look forward to hearing from you.

More to consider

Testing against requirements is not a task with an end point in your development, but rather a constant effort towards a reliable system. The easier tests are to construct, execute, monitor and report on, the more useful they are to development teams.

Use of coverage can guide test case creation, help optimise a set of tests and provide an empirical measure of testing sufficiency. Without measuring how much of the code we have tested, any verification activity risks shipping untested code.
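
A small, hypothetical C example of how coverage guides test case creation: a suite that only exercises typical values never reaches the saturation branches below, and a statement or branch coverage report makes that gap visible, prompting new test cases:

```c
#include <limits.h>

/* Addition that saturates instead of overflowing. A test suite that
 * only adds small positive numbers executes the final return but
 * never the two saturation branches; coverage reveals exactly that. */
int saturating_add(int a, int b)
{
    if (a > 0 && b > INT_MAX - a)
        return INT_MAX;         /* overflow branch: easy to leave untested */
    if (a < 0 && b < INT_MIN - a)
        return INT_MIN;         /* underflow branch: likewise */
    return a + b;
}
```

On a host build this can be measured with, for example, GCC’s --coverage option and gcov, which report which of these lines were never executed.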

While there are some hurdles to overcome when using coverage software in an embedded or continuous integration environment, suitable tools and a testing framework can solve the problems in almost all instances.

If you found this introductory article useful, QA Systems has a detailed white paper, available to download free of charge, which looks at the aspects of efficient reporting of test results.

Download the full 26-page white paper as a PDF.

You are also welcome to sign up to the QA Systems newsletter, and you will receive notifications of other useful software development content straight to your inbox.
