Automated Software Testing Does What Manual Testing Cannot
Even the largest software departments cannot perform a controlled web application test with thousands of users. Automated testing can simulate tens, hundreds, or thousands of virtual users interacting with network or web software and applications.
A growing trend in software development is the use of testing frameworks such as the xUnit frameworks (for example, JUnit and NUnit), which allow unit tests to be executed to determine whether various sections of the code behave as expected under various circumstances. Test cases describe the tests that must be run on the program to verify that it behaves as expected.
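The xUnit style can be sketched with Python's unittest module, which belongs to the same family as JUnit and NUnit. The `add` function here is a made-up stand-in for the code under test:

```python
import unittest

def add(a, b):
    """A trivial function under test (hypothetical example)."""
    return a + b

class TestAdd(unittest.TestCase):
    # Each test method checks one expectation about the code under test.
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()
```

Running the file executes every `test_*` method and reports which expectations held, with no manual checking of results.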
Automated Software Testing Improves Accuracy
Even the most conscientious tester will make mistakes during monotonous manual testing. Automated tests perform the same steps precisely every time they are executed and never forget to record detailed results.
Automated Software Testing Increases Test Coverage
Automated software testing can increase the depth and scope of tests to help improve software quality. Lengthy tests that are often avoided during manual testing can be run unattended, and they can even be run on multiple computers with different configurations. Automated software testing can look inside an application and see memory contents, data tables, file contents, and internal program states to determine whether the product is behaving as expected. Automated software tests can easily execute thousands of different complex test cases during every test run, providing coverage that is impossible with manual tests. Testers freed from repetitive manual tests have more time to create new automated software tests.
The principle of automated testing is that there is a program (which could be a job stream) that runs the program being tested, feeding it the proper input and checking the output against the output that was expected. Once the test suite is written, no human intervention is needed, either to run the program or to see whether it worked; the test suite does all that, and somehow indicates (say, by a :TELL message and a results file) whether the program's output was as expected.
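This run-feed-compare loop can be sketched in Python. The test cases, and the one-line sorting program standing in for whatever the job stream would launch, are invented for illustration:

```python
import subprocess
import sys

def run_case(cmd, stdin_text, expected_stdout):
    """Run the program under test, feed it input, compare its output."""
    proc = subprocess.run(cmd, input=stdin_text, capture_output=True, text=True)
    return proc.stdout == expected_stdout

# Hypothetical test cases: each is (input fed to the program, expected output).
cases = [
    ("3\n1\n2\n", "1\n2\n3\n"),
    ("b\na\n", "a\nb\n"),
]

# A tiny line-sorting program, standing in for the real program under test.
program = [sys.executable, "-c",
           "import sys; print(''.join(sorted(sys.stdin.readlines())), end='')"]

# No human intervention: the driver runs every case and reports the verdict.
print("PASS" if all(run_case(program, i, o) for i, o in cases) else "FAIL")
```

A real harness would also append each verdict to a results file, playing the role of the :TELL message described above.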
We, for instance, have over two hundred test suites, all of which can be run overnight by executing one job stream submission command; after they run, another command can show which test suites succeeded and which failed.
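The overnight batch idea reduces to a loop that runs every suite unattended and records each outcome for later inspection. A minimal sketch, with two made-up suites:

```python
import unittest

class SuiteA(unittest.TestCase):
    def test_ok(self):
        self.assertTrue(True)

class SuiteB(unittest.TestCase):
    def test_arithmetic(self):
        self.assertEqual(1 + 1, 2)

def run_all(suites):
    """Run each suite and record which succeeded and which failed,
    analogous to one job stream submission followed by a status query."""
    outcomes = {}
    for case in suites:
        result = unittest.TestResult()
        unittest.defaultTestLoader.loadTestsFromTestCase(case).run(result)
        outcomes[case.__name__] = result.wasSuccessful()
    return outcomes

print(run_all([SuiteA, SuiteB]))
```

The returned mapping is the equivalent of the morning-after status command: one line per suite, pass or fail.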
These test suites can help in many ways:
* As discussed above, the test suites should always be run before a new version is released, no matter how trivial the modifications to the program.
* If the software is internally different for different environments (e.g. MPE/V vs. MPE/XL) but should have the same external behavior, the test suites should be run on both environments.
* As you're making serious changes to the software, you might want to run the test suites even before the release, since they can tell you what still needs to be fixed.
* If you have the discipline to -- believe it or not -- write the test suite before you've written your program, you can even use the test suite to do the initial testing of your code. After all, you'd have to test the code initially anyway; you might as well use your test suites to do that initial testing as well as all subsequent tests.
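The last point, writing the suite before the program, can be illustrated as follows. The `reverse_words` function and its tests are invented for the sketch; the point is the ordering, with the suite written first and the implementation added only to make it pass:

```python
import unittest

# Step 1: write the test suite first, describing the behavior we want.
# `reverse_words` does not yet exist when these tests are written.
class TestReverseWords(unittest.TestCase):
    def test_reverses_word_order(self):
        self.assertEqual(reverse_words("hello world"), "world hello")

    def test_single_word_unchanged(self):
        self.assertEqual(reverse_words("hello"), "hello")

# Step 2: implement just enough code to make the suite pass.
def reverse_words(text):
    """Return the words of `text` in reverse order."""
    return " ".join(reversed(text.split()))

if __name__ == "__main__":
    unittest.main()
```

The same suite that drove the initial implementation then serves unchanged for all subsequent regression runs.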
