Applied Programming/Testing

This lesson introduces software testing, unit testing, and test-driven development.

Objectives and Skills
Objectives and skills for this lesson include:
 * Understand software testing, including unit testing, integration testing, system testing, and operational acceptance testing.
 * Create unit tests to meet given requirements.
 * Describe test-driven development.

Readings

 * 1)  Software testing
 * 2)  Unit testing
 * 3)  Test-driven development

Multimedia

 * 1) YouTube: Software Testing Tutorials for Beginners
 * 2) YouTube: Software Testing Types
 * 3) YouTube: What is Unit Testing?
 * 4) YouTube: Python Unit Testing - Pytest Introduction
 * 5) YouTube: Learn Pytest in 60 Minutes: Python Unit Testing Framework
 * 6) YouTube: Jest Crash Course - Learn How to Test your JavaScript Application

Examples

 * C#
 * JavaScript
 * Python3

Activities

 * 1) Research available test libraries for your selected programming language. Install a library and create a simple test to verify that the library works as expected.
 * 2) Create unit tests for the program created in the previous lesson. Test each parameter and result so that you have 100% coverage of all processing functions. Test with valid and invalid data, and verify that errors are raised when appropriate.
 * 3) If your test library supports it, add tests for input and output functions. Test with valid and invalid data. Use a test coverage tool to verify that you have 100% coverage with your unit tests.
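For Activity 1, one widely used Python test library is pytest (assuming Python as the selected language; the file and function names below are hypothetical). A minimal sketch of a first test to verify that the library works:

```python
# test_sanity.py - a minimal test to verify that pytest is installed
# and working.  Run with:  python -m pytest test_sanity.py

def add(a, b):
    """A trivial function used only to exercise the test runner."""
    return a + b

# pytest automatically discovers functions whose names start with test_
def test_add():
    assert add(2, 3) == 5

def test_add_negative():
    assert add(-1, 1) == 0
```

If pytest reports `2 passed`, the library is installed and working.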

Lesson Summary

 * Software testing is an investigation conducted to provide stakeholders with information about the quality of the software product or service under test.
 * Software testing is used in association with verification and validation.
 * Software testing may be considered a part of a software quality assurance (SQA) process.
 * Software testing can be conducted as soon as executable software (even if partially complete) exists.
 * Software testing is a combinatorial problem. For example, every Boolean decision statement requires at least two tests: one with an outcome of "true" and one with an outcome of "false". As a result, for every line of code written, programmers often need 3 to 5 lines of test code.
 * Seven software testing principles are:
 * 1) Testing shows the presence of defects.
 * 2) Exhaustive testing is impossible.
 * 3) Testing should start as early as possible in the software development life cycle.
 * 4) Defect Clustering. A small number of modules contain most of the defects detected.
 * 5) Pesticide Paradox. If the same set of tests is repeated over and over, it eventually stops discovering new defects.
 * 6) Testing is context-dependent.
 * 7) The absence of error is a fallacy, i.e., finding and fixing defects does not help if the system build is unusable and does not fulfill the user's needs and requirements.
 * Intuitively, one can view a unit as the smallest testable part of an application. In procedural programming, a unit could be an entire module, but it is more commonly an individual function or procedure. In object-oriented programming, a unit is often an entire interface, such as a class, but could be an individual method.
 * Unit testing refers to tests that verify the functionality of a specific section of code, usually at the function level.
 * The goal of unit testing is to isolate each part of the program and show that the individual parts are correct.
 * Using unit tests as a design specification has one significant advantage over other design methods: the design document can itself be used to verify the implementation.
 * Unit tests lack some of the accessibility of a diagrammatic specification such as a UML diagram, but such a diagram may be generated from the unit tests using automated tools.
 * Unit testing will not catch integration errors or broader system-level errors (such as functions performed across multiple units, or non-functional test areas such as performance).
 * Unit testing only shows the presence or absence of particular errors; it cannot prove a complete absence of errors.
 * Unit testing has the difficulty of setting up realistic and useful tests.
 * When software is developed on a different platform than the one it will eventually run on (as with embedded systems), the programmer cannot easily run a test program in the actual deployment environment, as is possible with desktop programs.
 * Extreme programming uses the creation of unit tests for test-driven development. The developer writes a unit test that exposes either a software requirement or a defect. This test will fail because either the requirement isn't implemented yet, or because it intentionally exposes a defect in the existing code. Then, the developer writes the simplest code to make the test, along with other tests, pass.
 * Extreme programming's thorough unit testing allows simpler and more confident code development and refactoring, simplified code integration, accurate documentation, and more modular designs.
 * Unit testing frameworks, which have been developed for a wide variety of languages, help simplify the process of unit testing.
 * Frameworks include open-source solutions such as the various code-driven testing frameworks known collectively as xUnit.
 * It is generally possible to perform unit testing without the support of a specific framework by writing client code that exercises the units under test and uses assertions, exception handling, or other control-flow mechanisms to signal failure.
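As a minimal Python sketch of framework-free testing (the function under test is invented for illustration), plain assertions signal failure by raising `AssertionError`, and exception handling checks error paths:

```python
# A framework-free unit test: a small driver exercises the unit under
# test and signals failure through assertions and exception handling.

def divide(a, b):
    """Hypothetical unit under test."""
    if b == 0:
        raise ValueError("division by zero")
    return a / b

def run_tests():
    # Assertion-based checks of normal behavior
    assert divide(10, 2) == 5
    assert divide(-9, 3) == -3
    # Exception-handling check of the error path
    try:
        divide(1, 0)
    except ValueError:
        pass  # expected
    else:
        raise AssertionError("divide(1, 0) should raise ValueError")
    print("all tests passed")

if __name__ == "__main__":
    run_tests()
```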


 * Test-driven development develops software by requiring specific test cases that the software must pass. This should encourage simple designs in new software, but it can also be applied to improving and debugging older legacy code.
 * Test-driven development (TDD) is used in both extreme programming and scrum.
 * In TDD, the test is written prior to the code. This shifts developer focus to meeting the test requirements before attempting to write the code.
 * A general outline for implementing TDD would include: add a new test, run the test to confirm that it fails as expected, write the code that will be tested, run the tests on the code, refactor the code if needed, and repeat.
 * Test Driven Development Cycle:
 * Add a Test - In test-driven development, each new feature begins with writing a test. To write a test, the developer must clearly understand the feature's specification and requirements.
 * Run All Tests - Run all tests and see that the new test fails. This shows that the new test does not pass without new code, i.e., the required behavior does not already exist, and it rules out the possibility that the new test is flawed and will always pass.
 * Write The Code - Write some code that causes the test to pass. The programmer must not write code that is beyond the functionality that the test checks.
 * Run Tests - If all test cases now pass, the programmer can be confident that the new code meets the test requirements, and does not break or degrade any existing features. If they do not, the new code must be adjusted until they do.
 * Refactor Code - New code can be moved from where it was convenient for passing a test to where it more logically belongs.
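The cycle above can be sketched in Python (the feature and all names are hypothetical): the test is written first and fails because the behavior does not exist yet, then the simplest passing code is added.

```python
# Step 1: add a test for a feature that does not exist yet.
# Running it at this point fails (NameError), confirming that the
# behavior is missing and the test is not trivially green.
def test_is_even():
    assert is_even(4) is True
    assert is_even(7) is False

# Step 2: write the simplest code that makes the test pass,
# and nothing beyond what the test checks.
def is_even(n):
    return n % 2 == 0

# Step 3: run all tests again; once green, refactor if needed.
test_is_even()
```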

Key Terms

 * assertion
 * A statement that a predicate (Boolean-valued function, i.e. a true-false expression) is always true at that point in code execution. It can help a programmer read the code, help a compiler translate it, or help the program detect its own defects.
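In Python, for instance, the built-in `assert` statement expresses such a predicate (a minimal sketch; the function is invented for illustration):

```python
def average(values):
    # Precondition as an assertion: this predicate must be true here,
    # or the program detects its own defect by raising AssertionError.
    assert len(values) > 0, "average() requires a non-empty list"
    return sum(values) / len(values)

result = average([2, 4, 6])
assert result == 4.0  # postcondition check on the result
```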


 * behavior-driven development
 * Combines the general techniques and principles of TDD with ideas from domain-driven design and object-oriented analysis and design to provide software development and management teams with shared tools and a shared process to collaborate on software development.


 * conformance testing
 * Testing or other activities that determine whether a process, product, or service complies with the requirements of a specification, technical standard, contract, or regulation.


 * coverage
 * A tool for measuring code coverage of Python programs. It monitors a program, noting which parts of the code have been executed, then analyzes the source to identify code that could have been executed but was not.


 * decision problem
 * A problem that can be posed as a yes-no question of the input values.


 * formal verification
 * The act of proving or disproving the correctness of intended algorithms underlying a system with respect to a certain formal specification or property, using formal methods of mathematics.


 * halting problem
 * The problem of determining, from a description of an arbitrary computer program and an input, whether the program will finish running (i.e., halt) or continue to run forever.


 * integration testing
 * The testing of a group of related modules. It aims at finding interfacing issues between the modules.


 * manual testing
 * The process of manually testing software for defects. It requires a tester to play the role of an end user whereby they use most of the application's features to ensure correct behavior.


 * mocks, fakes, and stubs
Classification between mocks, fakes, and stubs is highly inconsistent across the literature. All three stand in for a production object in a testing environment by exposing the same interface. In the book The Art of Unit Testing, a mock is described as a fake object that decides whether a test passed or failed by verifying whether an interaction with an object occurred; everything else is defined as a stub. In that book, a fake is anything that is not real and, depending on its usage, can be either a stub or a mock.
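In Python, the standard library's `unittest.mock` can play either role (a hedged sketch; `send_alert` and the notifier object are invented names): a stub merely supplies canned data, while a mock is inspected afterward to verify that an interaction occurred.

```python
from unittest.mock import Mock

# Hypothetical code under test: notifies when a value is too high.
def send_alert(notifier, value):
    if value > 100:
        notifier.notify("threshold exceeded")

# Used as a mock: the test passes or fails based on whether the
# interaction with the object occurred.
notifier = Mock()
send_alert(notifier, 150)
notifier.notify.assert_called_once_with("threshold exceeded")

# Used as a stub: it only supplies a canned return value; we make
# no claims about how it was called.
sensor = Mock()
sensor.read.return_value = 42
assert sensor.read() == 42
```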


 * performance testing
 * A testing practice performed to determine how a system performs in terms of responsiveness and stability under a workload.


 * nose
A Python unit testing framework that can run doctests, unittests, and “no boilerplate” tests. It extends the test loading and running features of unittest.


 * pytest
A code-driven unit testing framework for Python based on xUnit.
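A minimal pytest example (the function and file name are hypothetical): test functions are plain functions whose names start with `test_`, and failures are reported from bare `assert` statements, with no base class or boilerplate required.

```python
# test_slugify.py  (run with:  pytest test_slugify.py)

def slugify(text):
    """Example function under test: lowercase, spaces to hyphens."""
    return text.strip().lower().replace(" ", "-")

# pytest discovers functions named test_* and reports any failing
# bare assert with a detailed explanation of the mismatch.
def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"

def test_slugify_strips_whitespace():
    assert slugify("  Padded Title ") == "padded-title"
```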


 * refactoring
A process of restructuring existing computer code without changing its external behavior.


 * regression testing
Re-running functional and non-functional tests to ensure that previously developed and tested software still performs after a change.


 * software testing
 * An investigation conducted to provide stakeholders with information about the quality of the software product or service under test.


 * test case
 * A specification of the inputs, execution conditions, testing procedure, and expected results that define a single test to be executed to achieve a particular software testing objective, such as exercise a particular program path or verify compliance with a specific requirement.


 * Test-Driven Development (TDD)
A software development process in which the developer first writes a unit test that exposes either a software requirement or a defect, then writes code to make the test pass.


 * test harnesses or automated test framework
 * A collection of software and test data configured to test a program unit by running it under varying conditions and monitoring its behavior and outputs. It has two main parts: the test execution engine and the test script repository.


 * test suite
 * A collection of test cases that are intended to be used to test a software program to show that it has some specified set of behaviours.
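With Python's built-in `unittest` module, for example, a suite can be assembled explicitly (all names here are illustrative):

```python
import unittest

def reverse_words(s):
    """Hypothetical function under test."""
    return " ".join(reversed(s.split()))

class ReverseWordsTests(unittest.TestCase):
    def test_two_words(self):
        self.assertEqual(reverse_words("hello world"), "world hello")

    def test_single_word(self):
        self.assertEqual(reverse_words("solo"), "solo")

# A test suite: a collection of test cases intended to be run together.
suite = unittest.TestSuite()
suite.addTest(ReverseWordsTests("test_two_words"))
suite.addTest(ReverseWordsTests("test_single_word"))

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(suite)
```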


 * undecidable problem
 * A decision problem for which it is proved to be impossible to construct an algorithm that always leads to a correct yes-or-no answer.


 * unit
 * The smallest testable part of any software or program.


 * unit testing
 * A software testing method by which individual units of source code, sets of one or more computer program modules together with associated control data, usage procedures, and operating procedures, are tested to determine whether they are fit for use.


 * xUnit
The collective name for several code-driven unit testing frameworks that derive their structure and functionality from Smalltalk's SUnit.