Why manual system tests?
While test automation is desirable in every software project, not all tests can be automated well (e.g. testing software that interacts with hardware). Furthermore, manual system tests often unveil unexpected faults simply because human testers notice odd behavior that no automated check was ever written to detect.
But doesn’t testing happen anyway?
Actually, almost every developer “naturally” does some kind of manual testing. In practice, however, the data needed for testing is managed poorly. In many free and commercial software projects today, test data and records of executed test runs are kept in word processors or spreadsheets. Even worse, many of these manual tests have specific requirements regarding hardware or the software's environment, but these are neither specified precisely nor recorded correctly in the test runs.
Due to these issues, it is hard for users to find out which features of a product were tested in which environment, on which hardware and with which result. This is especially awkward for users of hardware-related free software projects (e.g. projects that provide free firmware for smartphones or routers/firewalls). Users often waste precious time digging through forums or mailing lists, trying to find out which functions actually work on which devices. What they find is frequently outdated, and in many cases it is unclear whether a device that was once supported has ever actually been tested with the current version of the software.
How does SystemTestPortal help here?
SystemTestPortal is a web application that allows users to create, run and analyze manual system tests. Usually, many stakeholders are involved in testing, and SystemTestPortal is designed to be useful for most of them:
- test designers can devise good test cases, specify them and group them in a sensible order into so-called test sequences
- testers can execute the test cases and judge their results (pass, fail, partial fail, undecided, …) - and have everything recorded automatically
- test managers can plan and schedule tests
- developers can see how certain failures can be reproduced easily (issue trackers can link to protocols)
- end-users can see which tests pass/fail in which configurations, which helps them to decide which device to buy or which version of the software to install
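To make the recording idea above concrete, here is a minimal sketch of what such a test protocol could look like as a data model. This is a hypothetical illustration, not SystemTestPortal's actual schema: all type and field names (`TestCase`, `TestSequence`, `ProtocolEntry`, and so on) are invented for this example. It is written in Go simply because that is the language SystemTestPortal itself is implemented in.

```go
package main

import "fmt"

// Result is the verdict a tester records for one executed test case.
type Result int

const (
	Pass Result = iota
	Fail
	PartialFail
	Undecided
)

// String maps each verdict to the label shown in the text above.
func (r Result) String() string {
	return [...]string{"pass", "fail", "partial fail", "undecided"}[r]
}

// TestCase is a single manual test with ordered steps for the tester to follow.
type TestCase struct {
	Name  string
	Steps []string
}

// TestSequence groups test cases in a sensible order.
type TestSequence struct {
	Name  string
	Cases []TestCase
}

// ProtocolEntry records one execution: which case ran, in which software
// environment, on which hardware, and with which result.
type ProtocolEntry struct {
	Case        string
	Environment string // e.g. firmware or OS version
	Hardware    string // e.g. device model
	Result      Result
}

// formatEntry renders a protocol entry as a one-line summary, the kind of
// record an end-user could consult before buying a device.
func formatEntry(e ProtocolEntry) string {
	return fmt.Sprintf("%s on %s (%s): %s", e.Case, e.Hardware, e.Environment, e.Result)
}

func main() {
	seq := TestSequence{
		Name: "Wi-Fi basics",
		Cases: []TestCase{
			{
				Name:  "Connect to WPA2 network",
				Steps: []string{"Open settings", "Select network", "Enter passphrase"},
			},
		},
	}
	entry := ProtocolEntry{
		Case:        seq.Cases[0].Name,
		Environment: "Firmware 1.4.2",
		Hardware:    "Router model X",
		Result:      Pass,
	}
	fmt.Println(formatEntry(entry))
}
```

Because environment and hardware are explicit fields rather than free-form notes in a spreadsheet, records like these can be filtered and aggregated, which is exactly what lets end-users see which tests pass on which devices.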