T-76.5613 Software testing and quality assurance

The sole purpose of this resource is to prepare students for the exam of the course T-76.5613 Software testing and quality assurance, which can be taken at Helsinki University of Technology. The exam consists mostly of lecture definitions and questions, which this resource tries to provide answers to.

In the ideal situation, reading this page, instead of the overly long course book, would be more than enough to pass the course exam. So if you are a student taking this course, please contribute!

Describe Garvin's five viewpoints on product quality and explain how these viewpoints apply to software quality.

 * Transcendent approach
 * Quality cannot be measured, but one can learn to recognize it.
 * Quality is therefore difficult to define because it is recognized only through experience. Similar to beauty, for example.
 * User-based approach
 * Focus on consumer preferences.
 * Products that satisfy consumer requirements are of the highest quality.
 * Manufacturing-based approach
 * Emphasizes the supply side; mainly concerned with "conforming to requirements".
 * "99.5% are non-faulty".
 * Product-based approach
 * Quality as a measurable attribute.
 * More of the measured attribute means better quality.
 * Value-based approach
 * Quality is defined in terms of costs and prices.
 * A quality product provides performance at an affordable price.

Compare the ISO 9126 and McCall quality models for software quality.

 * The McCall quality model
 * Product revision: relates to the system's ability to undergo changes.
 * Maintainability, flexibility, testability.
 * Product transition: relates to reusing or re-purposing some or all of the system's components.
 * Portability, reusability, interoperability.
 * Product operations: relates to qualities of the system while it is operational.
 * Correctness, reliability, efficiency, integrity, usability.


 * ISO 9126 quality model
 * Functionality
 * Reliability
 * Usability
 * Efficiency
 * Maintainability
 * Portability

Describe different reasons that cause defects or lead to low quality software.

 * Software is written by people.
 * People are under pressure because of strict deadlines.
 * Reduced time to check quality.
 * Software will be incomplete.
 * Software is very complex.

Explain what the statement "software is not linear" means in the context of software defects and their consequences. Give two examples of this.

 * A small change in input or one-line error in the code may have a very large effect.
 * Example: Intel Pentium Floating Point Division Bug, several hospital systems
 * A change in code can also result in a minor inconvenience with no visible impact at all.

What is quality assurance and how is it related to software testing?

 * Create good and applicable methods and practices for achieving good enough quality level.
 * Ensure that the selected methods and practices are followed.
 * Support the delivery of good enough quality deliverables.
 * Provide visibility into the achieved level of quality.

Describe and compare different definitions of software testing that have been presented. How do these definitions differ in terms of the objectives of testing?

 * Testing is the execution of programs with the intent of finding defects.
 * Testing is the process of exercising a software component using a selected set of test cases, with the intent of revealing defects and evaluating quality.
 * Testing is a process of planning, preparation, execution and analysing, aimed at establishing the characteristics of an information system and demonstrating the difference between actual and required status.

Describe the main challenges of software testing and reasons why we cannot expect any 'silver bullet' solutions to these challenges in the near future.

 * It is impossible to test a program completely. Too many inputs and outputs.
 * Requirements are never final.
 * Testing is seen as the last phase in the development cycle. This phase is often outsourced.

Testing is based on risks. Explain different ways of prioritizing testing. How is prioritizing applied in testing and how can it be used to manage risks?

 * The higher the risk, the more testing is needed.
 * Focus the testing effort:
 * What to test first?
 * What to test the most?
 * How much to test each feature?
 * What not to test?
 * Possible ranking criteria:
 * Test where a failure is most severe, most likely or most visible.
 * Customer prioritizes requirements according to what is most critical to customer's business.
 * Test areas where there have been problems in the past or where things change the most.

Describe typical characteristics and skills of a good software tester. Why are professional testers needed? How can testers help developers achieve better quality?

 * Skills:
 * Destructive attitude and mindset.
 * Excellent communication skills.
 * Ability to manage many details.
 * Knowledge of different testing techniques and strategies.
 * Strong domain expertise.
 * Why professional testers:
 * Developers can't find their own defects.
 * Skills, tools and experience.
 * Objective viewpoint.
 * Testers can help developers by giving constructive feedback.

Describe the V-model of testing and explain how testing differs at different test levels.

 * Different levels:
 * Requirements <=> Acceptance testing
 * Functional specification <=> System testing
 * Architecture design <=> Integration testing
 * Module design <=> Unit testing
 * Coding
 * It is good to use each development specification as a basis for the testing.
 * It is easier to find faults in small units than in large ones.
 * Test small units first before putting them together to form larger ones.

Describe the purpose and main difference of Performance testing, Stress testing and Recovery testing.

 * Performance testing
 * Testing of requirements that concern memory use, response time, throughput and delays.
 * Stress testing
 * A form of testing that is used to determine the stability of a given system.
 * It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results.
 * Recovery testing
 * In software testing, recovery testing is the activity of testing how well the software is able to recover from crashes, hardware failures and other similar problems.
 * Example: While the application is running, suddenly restart the computer, and afterwards check the application's data integrity.

Describe branch coverage and condition coverage testing. What can you state about the relative strength of the two criteria? Is one of them stronger than the other?

 * Decision (branch) coverage
 * Decision coverage is 100% if every control-flow branch is executed at least once by some test case.
 * Condition coverage
 * Condition coverage is 100% if each boolean sub-expression has evaluated both to true and to false.
 * Example: (a<0 || b>0) => test with (true, false) and with (false, true).
 * Neither criterion is strictly stronger than the other: the two condition-coverage tests in the example both make the whole decision true, so the false branch is never executed. Combined condition/decision coverage subsumes both.
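As a sketch of the distinction, consider a hypothetical rule with a compound decision (the discount() function and its inputs are invented for illustration):

```python
# Hypothetical rule: youths and members get a discount.
def discount(age, is_member):
    if age < 18 or is_member:   # one decision, two sub-conditions
        return 10
    return 0

# Condition coverage: (age<18, is_member) takes the values
# (True, False) and (False, True) -- yet both tests make the whole
# decision true, so the false branch is never executed.
condition_tests = [(10, False), (30, True)]

# Branch (decision) coverage: both branches are taken, but is_member
# is never True, so condition coverage is incomplete.
branch_tests = [(10, False), (30, False)]

results = [discount(a, m) for a, m in condition_tests + branch_tests]
```

Each test set satisfies its own criterion while missing the other, which is why neither criterion subsumes the other.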

How can coverage testing tools be used in testing? What kinds of conclusions can you draw based on the results of statement coverage analysis?

 * Without a good white-box testing tool, analyzing and controlling coverage testing is hopeless.
 * A tool can tell us:
 * If we are missing some tests.
 * If we need better tests (weak spots in tests).
 * Risky areas that are hard to cover. These areas can be covered by reviews, or other methods.
 * Unreachable code, old code, debug code...
 * Tools can be used to guide testing.
 * Tools highlight non-covered areas of code.

=== Give examples of defect types that structural (coverage) testing techniques are likely to reveal and five examples of defect types that cannot necessarily be revealed with structural techniques. Explain why. ===
 * Structural testing is good for:
 * Revealing errors and mistakes made by developers.
 * Defect types that cannot be revealed with structural techniques:
 * There are missing features or missing code.
 * Timing and concurrency issues.
 * Different states of the software.
 * Same path reveals different defects in different states.
 * Variations of environment and external conditions.
 * Variations of data.
 * Does not say anything about qualities: performance, usability...

Describe the basic idea of mutation testing. What are the strengths and weaknesses of mutation testing?

 * Involves modifying the program's source code in small ways.
 * These so-called mutants are based on well-defined mutation operators that mimic typical programming mistakes (such as using the wrong operator or variable name).
 * The purpose is to help the tester develop effective tests, or to locate weaknesses in the test data used for the program or in sections of the code that are seldom or never accessed during execution.
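The idea can be sketched as follows; max_of(), the hand-applied mutation, and the two-case suite are invented for illustration:

```python
def max_of(a, b):                # unit under test
    return a if a > b else b

def run_suite(fn):
    # A tiny test suite: (inputs, expected result) pairs.
    cases = [((1, 2), 2), ((5, 3), 5)]
    return all(fn(*args) == expected for args, expected in cases)

def mutant(a, b):
    # Mutation operator applied by hand: '>' replaced by '<'.
    return a if a < b else b

original_passes = run_suite(max_of)    # the suite passes on the original
mutant_killed = not run_suite(mutant)  # and detects ("kills") the mutant
```

A suite that fails to kill a mutant has a weak spot: no test distinguishes the mutated behaviour from the original.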

Lecture 4 Questions: Black-box testing techniques
=== Compare function testing and scenario testing techniques. For what kinds of purposes is each technique good? What are the shortcomings of the two techniques? How could function testing and scenario testing be used to complement each other? ===
 * Function testing is about testing one functionality of the software at a time.
 * Reveals errors that should be addressed early.
 * No interaction between functions is tested.
 * Does not say if the software solves the user's problem.
 * Scenario testing is about testing complicated and realistic stories of real usage of the system.
 * It is a story that is motivating, credible, complex, and easy to evaluate.
 * Easy to connect testing to documented requirements.
 * Scenarios are harder to automate.
 * They complement each other in the sense that function testing focuses on individual functional requirements while scenario testing focuses on real-life user requirements.

List and briefly describe different ways of creating good scenarios for testing.

 * Read "An Introduction to Scenario Testing" for 12 best ways.
 * Good testing scenarios are:
 * Based on a story about how the program is used.
 * Motivating: Stakeholders get more involved.
 * Credible: Can happen in the real world.
 * Complex: Complex use of the program data and environment.
 * Results are easy to validate.

Describe at least five testing heuristics and explain why they are good rules of thumb and how they help testing.

 * Test at the boundaries.
 * Reveals common defects.
 * Test with realistic data and scenarios.
 * Quality of software becomes better when basic operations work.
 * Avoid redundant tests.
 * Makes the test set easier to execute, read and understand.
 * Test configurations that differ from the developer's.
 * Developers rarely test outside their own configurations, so defects hide there.
 * Run tests that are annoying to set up.
 * Developers tend to skip tests that are tedious to set up.

What is Soap Opera Testing? Why is soap opera testing not the same as performing equivalence partitioning and boundary value analysis using extreme values?

 * Soap Opera testing can be considered as scenario testing with extreme values.
 * Testing scenarios are complicated and realistic stories of real usage of the system.
 * Stories based on the most extreme examples that could happen in practice.
 * The goal in scenario testing is to focus on business needs and realistic situations.
 * The main difference is that a soap opera test case tests the whole system from a practical point of view while EP and BVA test a specific function of the system with extreme values.
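For contrast, EP and BVA target a single function with a handful of carefully chosen values. A minimal sketch, assuming an invented spec where valid ages are 18..65 inclusive:

```python
def is_valid_age(age):
    return 18 <= age <= 65

# Equivalence partitions: below range, in range, above range.
# Boundary value analysis picks values at and next to each boundary.
boundary_values = [17, 18, 19, 64, 65, 66]
results = [is_valid_age(a) for a in boundary_values]
# Expected: [False, True, True, True, True, False]
```

A soap opera test would instead drive the whole system through an extreme but plausible end-to-end story, not just one predicate.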

Describe different coverage criteria for combination testing strategies. How do these criteria differ in their ability to reveal defects and cover functionality systematically?

 * 1-wise coverage
 * Every input parameter is covered with some test case.
 * Pair-wise coverage
 * All possible value combinations for every pair of input parameters.
 * t-wise coverage
 * All possible value combinations for every set of t input parameters.
 * N-wise coverage
 * Test all combinations of all parameters.
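The difference between the criteria can be made concrete with a small checker; the three configuration parameters and the four-test suite below are invented for illustration:

```python
from itertools import combinations, product

params = {
    "os":      ["linux", "windows"],
    "browser": ["firefox", "chrome"],
    "db":      ["mysql", "sqlite"],
}

def pairwise_covered(suite):
    # True if every value pair of every two parameters appears in some test.
    for a, b in combinations(params, 2):
        needed = set(product(params[a], params[b]))
        seen = {(test[a], test[b]) for test in suite}
        if needed - seen:
            return False
    return True

# N-wise (exhaustive) coverage needs 2*2*2 = 8 tests...
full = [dict(zip(params, values)) for values in product(*params.values())]

# ...but pair-wise coverage is achieved with only 4.
small = [
    {"os": "linux",   "browser": "firefox", "db": "mysql"},
    {"os": "linux",   "browser": "chrome",  "db": "sqlite"},
    {"os": "windows", "browser": "firefox", "db": "sqlite"},
    {"os": "windows", "browser": "chrome",  "db": "mysql"},
]
```

The savings grow quickly: with many parameters, pair-wise suites stay small while exhaustive (N-wise) suites explode combinatorially.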

Describe the basic idea of decision table testing and the two different approaches to applying it.

 * Basic idea:
 * Model complicated business rules.
 * Different combinations of conditions lead to different expected outcomes (actions).
 * Sometimes it is difficult to follow which conditions correspond to which actions.
 * Easy to observe that all possible conditions are accounted for.
 * Approaches:
 * Describe only interesting rules
 * Describe all possible combinations
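As a sketch, the "all possible combinations" approach maps every condition combination to an expected action; the loan-approval conditions and actions here are invented:

```python
# Conditions: (has_income, good_credit) -> action
decision_table = {
    (True,  True):  "approve",
    (True,  False): "manual review",
    (False, True):  "manual review",
    (False, False): "reject",
}

def decide(has_income, good_credit):
    return decision_table[(has_income, good_credit)]

# Every cell of the table becomes one test case with a known outcome,
# and it is easy to see that all combinations are accounted for:
complete = len(decision_table) == 2 ** 2
```

The "only interesting rules" approach would instead list a few rows and collapse don't-care conditions.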

List and describe at least five different functions that a test plan can be used for.

 * Support quality assessment that enables wise and timely product decisions.
 * Support preparations, staffing, responsibilities, task planning and scheduling.
 * Support evaluation of the test project and test strategy.
 * Justify the test approach.
 * Benefits and limitations of the test approach.
 * For coordination.
 * For risk management.
 * Specify deliverables.
 * Record historical information for process improvement.

Describe six essential topics of test planning. What decisions have to be made concerning each of these topics?

 * Why?: Overall test objectives.
 * Quality goals.
 * What?: What will and won't be tested.
 * Prioritize. Provide reasoning.
 * Analyze the product to make reasonable decisions.
 * How?: Test strategy.
 * How testing is performed: test techniques, test levels, test phases.
 * Tools and automation.
 * Processes: how test cases are created and documented, and how defect reporting is done.
 * Who?: Resource requirements.
 * People: plan responsibilities.
 * Equipment, office space, tools and documents.
 * Outsourcing.
 * When?: Test tasks and schedule.
 * Connect testing with the overall project schedule.
 * What if?: Risks and issues.
 * Risk management of the test project, not the product.

How does estimating testing effort differ from estimating other development efforts? Why might planning relative testing schedules be a good idea?

 * Differences:
 * Testing is not an independent activity; it depends greatly on how development performs.
 * Critical faults can prevent further testing.
 * It is hard to predict how long it will take to find a defect.
 * Relative schedules are good because:
 * It is hard to know when testable items are done by development.
 * Some of the other phases might take more time than predicted.

How is test prioritization different from implementation or requirements prioritization? Why can't we skip all low-priority tests when time is running out?

 * Prioritization should be thought of as distribution of efforts rather than execution order.
 * Do not skip tests with low priority.
 * Risk of missing critical problems.
 * Prioritization might be wrong.

What defines a good test plan? Present some test plan heuristics (6 well explained heuristics will produce six points in the exam)

 * Rapid feedback
 * Will get the bugs fixed faster.
 * Whenever possible testers and developers should work physically near each other.
 * Test plans are not generic.
 * Something that works for a project might not work for another.
 * A test plan should highlight the non-routine, project-specific aspects of the test strategy and test project.
 * Important problems fast.
 * Testing should be optimized to find important problems fast, rather than attempting to find all problems with equal urgency.
 * The later in the project that a problem is found, the greater the risk that it will not be safely fixed in time to ship.
 * Review documentation.
 * Enables more communication and the reasoning behind the document is understood better.
 * Maximize diversity.
 * Test strategy should address test platform configuration, how the product will be operated and how the product will be observed.
 * No single test technique can reveal all important problems in a linear fashion.
 * Promote testability.
 * The test project should consult with development to help them build a more testable product.

Describe the difference between designing test ideas or conditions and designing test cases. How does this difference affect test documentation?

 * Test idea is a brief statement of something that should be tested.
 * Example: For a find function, "test that multiple occurrences of the search term in the document are correctly indicated".
 * Test case is a set of inputs, execution conditions and expected results.
 * Example: For a find function.
 * Input: Search term "tester" to a document containing 2 occurrences of the search term.
 * Conditions: Case insensitive search.
 * Expected results: The first occurrence is selected and both occurrences are highlighted with a green background.
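Written as an automated check, the test case above might look like the sketch below; the find() function, its signature, and the toy implementation are assumptions standing in for a real application (and the highlighting check is simplified to counting occurrences):

```python
def find(document, term, case_sensitive=False):
    # Toy stand-in for the application: return the offset of each occurrence.
    haystack = document if case_sensitive else document.lower()
    needle = term if case_sensitive else term.lower()
    hits, start = [], 0
    while (i := haystack.find(needle, start)) != -1:
        hits.append(i)
        start = i + 1
    return hits

def test_find_multiple_occurrences():
    # Input: a document containing 2 occurrences of the search term.
    doc = "A tester tests. The Tester repeats."
    # Condition: case-insensitive search.
    hits = find(doc, "tester")
    # Expected result: both occurrences are found.
    assert len(hits) == 2
```

The test idea fits in one sentence; the test case pins down concrete inputs, conditions, and expected results, which is exactly the extra documentation cost.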

=== Why is it important to plan testing in the early phases of software development project? Why could it be a good idea to delay the detailed test design and test case design to later phases, near the actual implementation? ===
 * Early test design is good because:
 * It finds faults quickly and early.
 * Faults are cheaper to fix the earlier they are found.
 * Faults are prevented rather than built in.
 * Not too early test case design is good because:
 * Test cases can then be designed in implementation order.
 * Test case design can be started from the most completed and best understood features.
 * Avoid anticipatory test design.

Give reasons why test case descriptions are needed and describe the qualities of good tests or test cases.

 * Test case descriptions are needed because:
 * They make test cases more repeatable.
 * Easier to track which features and requirements are tested.
 * Gives proof of testing when evaluating the level of confidence.
 * Qualities of good test cases:
 * Power.
 * The test will reveal the problems.
 * Validity.
 * The problems are valid.
 * Value and credibility.
 * Knowledge of the problems brings value.
 * Coverage.
 * The test covers something that is not already covered.
 * Performable.
 * The test case can be performed as it is designed.
 * Maintainable.
 * Easy to make changes.
 * Repeatable.
 * Easy and cheap to run it.
 * Cost.
 * Does not take too much time or effort.
 * Easy evaluation.
 * It is easy to say if there is a defect or not.

=== What issues affect the needed level of detail in test case descriptions? In what kinds of situations are very detailed test case descriptions needed? What reasons can be used to motivate using less detailed, high level test case descriptions? ===
 * Very detailed test cases:
 * For inexperienced testers.
 * They need more guidance.
 * For specific testing techniques:
 * All pairs of certain input conditions need more details.
 * Motivations:
 * Repeatability.
 * Traceability.
 * Tracking progress.
 * Tracking coverage.
 * Less detailed test cases:
 * For experienced testers.
 * Motivations:
 * Maintainability.
 * Less cost of documentation.
 * More creative testing.
 * More satisfaction for testers.

How can defect reports be made more useful and understandable? What kinds of aspects should you pay attention to when writing defect reports?

 * Useful reports are reports that get the bugs fixed.
 * Minimal.
 * Just the facts.
 * Singular.
 * One report per bug.
 * Obvious and general.
 * Use simple steps and show that the bug can be seen easily and is not an obscure corner case.
 * Reproducible.
 * Won't get fixed if it can't get reproduced.
 * Emphasize severity.
 * Show how severe the consequences are.
 * Be neutral when writing.

=== What is essential information in test reporting for management? How management utilizes the information that testing provides? Why a list of passed and failed test cases with defect counts is not sufficient for reporting test results? ===
 * Essential information for management:
 * Evaluation of quality of the software development project.
 * Problems and decisions that require management action.
 * Status of testing versus planned.

What are the five differences that distinguish exploratory testing from test-case based (or scripted) testing?

 * Scripted testing:
 * Tests are first designed and recorded, and then executed.
 * Execution can be done by a different person.
 * Execution can be done later.
 * Exploratory testing:
 * Tests are designed and executed at the same time.
 * Tests are often not recorded.
 * Can be guided by previous testing results.
 * Focus is on finding defects by exploration.
 * Enables simultaneous learning of the system.
 * Often little up-front planning, tracking, recording or documentation.
 * Depends on the tester's skill, knowledge and experience.

What benefits can be achieved using the exploratory testing (ET) approach? What are the most important challenges of using ET? In what kinds of situations would ET be a good approach?

 * Benefits:
 * Avoids the large effort of writing detailed test cases.
 * Testing from the user's viewpoint.
 * ET goes deeper into the tested feature.
 * Effective way of finding defects.
 * Gives good overall view of quality.
 * Enables testing of the look and feel of the system.
 * Challenges:
 * Coverage
 * Planning and selecting what to test.
 * Reliance on expertise and skills.
 * Repeatability.
 * Good for situations:
 * The features can be used in many different combinations.
 * Agile situations, where things change fast.

Describe the main idea of Session-Based Test Management (SBTM). How are the needs for test planning, tracking and reporting handled in SBTM?

 * Enables planning and tracking exploratory testing.
 * Includes a charter that answers:
 * What? Why? How? What problems?
 * And possibly: Tools? Risks? Documents?
 * Allows reviewable results.
 * Done in sessions of about 90 minutes.
 * Gets the testing done and allows flexible reporting.
 * Debriefing
 * Test planning, tracking and reporting is done with the help of charters.

Give reasons that support the hypothesis that Exploratory Testing could be more efficient than test-case-based testing in revealing defects.

 * Test-case-based testing produces more false defect reports.

Describe and compare reviews and dynamic testing (applicability, benefits, shortcomings, defect types)

 * Reviews:
 * A meeting or a process in which an artifact is presented to peers or the customer.
 * Benefits:
 * Identify defects and improve quality.
 * Can be done as soon as an artifact is ready (even for incomplete components).
 * Distribution of knowledge.
 * Increased awareness of quality issues.
 * The cost of fixing found defects decreases radically.
 * Shortcomings:
 * Can only examine static documents.
 * Defect types:
 * Quality attributes.
 * Reusability.
 * Security.

Present and describe briefly the four dimensions of inspections.

 * Process:
 * Planning.
 * Overview.
 * Defect detection.
 * Defect correction.
 * Follow-up.
 * Roles:
 * Leader.
 * Moderator.
 * Author.
 * Inspectors.
 * Reader.
 * Recorder.
 * Nobody from management.
 * Reading techniques:
 * Ad-hoc based.
 * Checklist based.
 * Abstraction based.
 * Scenario based.
 * Products
 * Requirements.
 * Design.
 * Code.
 * Test cases.

Explain the different types of reviews and compare their similarities and differences.

 * Team reviews:
 * Less formal with lots of discussion and knowledge distribution.
 * Inspection:
 * Very formal meetings that enable improvement.
 * Walkthrough:
 * The author presents the artifact to others who have not prepared.
 * Pair review & Pass-around:
 * Individual check of a product.
 * Code review with a pair.
 * Audits:
 * Evaluation by another independent company.
 * Management review:
 * Ensure project progress (iteration demos in scrum).

Describe the costs, problems, and alternatives of reviews.

 * Cost:
 * 5-15% of development effort.
 * Planning, preparation, meeting, ...
 * Problems:
 * No process understanding.
 * Wrong people.
 * No preparation.
 * Focus on problem solving rather than defect detection.
 * Alternatives:
 * Pair programming
 * Joint Application Design

Lecture 10 Article Questions: Static Code Analysis and Code Reviews
=== Describe the taxonomy for code review defects for both functional and evolvability defects and describe the type of defect actually found in code reviews. (Article: What types of defects are really discovered in code reviews ) ===
 * Evolvability defects:
 * Documentation.
 * Documentation is information in the source code that communicates the intent of the code to humans (e.g., commenting and naming of software elements, such as variables, functions, and classes).
 * Visual representation.
 * Visual representation refers to defects hindering program readability for the human eye (e.g., indentation).
 * Structure.
 * Structure indicates the source code composition eventually parsed by the compiler into a syntax tree.
 * Functional defects:
 * Resource.
 * Resource defects refer to mistakes made with data, variables, or other resource initialization, manipulation, and release.
 * Check.
 * Check defects are validation mistakes or mistakes made when detecting an invalid value.
 * Interface.
 * Interface defects are mistakes made when interacting with other parts of the software, such as an existing code library, a hardware device, a database, or an operating system.
 * Logic.
 * The group logic contains defects made with comparison operations, control flow, and computations and other types of logical mistakes.
 * Timing.
 * The timing category contains defects that are possible only in multithread applications where concurrently executing threads or processes use shared resources.
 * Support.
 * Support defects relate to support systems and libraries or their configurations.
 * Larger defects.
 * Larger defects, unlike those presented above, cannot be pinpointed to a single, small set of code lines.
 * Larger defects typically refer to situations in which functionality is missing or implemented incorrectly and such defects often require additional code or larger modifications to the existing solution.

=== What is static code analysis and what can be said about the pros and cons of static code analysis for defect detection? (Articles: Predicting Software Defect Density: A Case Study on Automated Static Code Analysis; Using static analysis to find bugs) ===
 * Static code analysis is the analysis of computer software that is performed without actually executing programs built from that software (analysis performed on executing programs is known as dynamic analysis).
 * Finding of defects that lead to security vulnerabilities, such as buffer overflows, format string vulnerabilities, SQL injection, and cross-site scripting.
 * Another common bug pattern is when software invokes a method but ignores its return value.
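The ignored-return-value pattern is easy to show in a couple of lines; sanitize_buggy() is a made-up example of the kind of code an analyzer would flag:

```python
def sanitize_buggy(s):
    # Bug: str.replace returns a new string; the result is discarded.
    s.replace("<", "&lt;")
    return s

def sanitize_fixed(s):
    return s.replace("<", "&lt;")
```

A static analyzer can flag the discarded call without running the program; dynamic testing would only notice the difference with an input that contains '<'.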

Describe the Clean Room process model. What are the benefits of the Clean room model? How has the model been criticized?

 * The philosophy behind Cleanroom software engineering is to avoid dependence on costly defect-removal processes by writing code increments right the first time and verifying their correctness before testing. Its process model incorporates the statistical quality certification of code increments as they accumulate into a system.
 * It can improve both the productivity of developers who use it and the quality of the software they produce.
 * The focus of the Cleanroom process is on defect prevention, rather than defect removal.

Describe data-driven and keyword-driven test automation. How do they differ from regular test automation techniques? What are their benefits and shortcomings?

 * Data-driven testing is a methodology used in test automation where test scripts are executed and verified based on data values stored in one or more central data sources or databases.
 * These sources can range from datapools and ODBC sources to CSV files, Excel files, and DAO or ADO objects.
 * The framework consists of several interacting test scripts together with their related data.
 * Variables are used for both input values and output verification values; navigation through the program, reading of the data sources, and logging of test status and information are all coded in the test script.
 * Thus, the logic executed in the script is also dependent on the data values.
 * Keyword-driven:
 * The advantages for automated tests are the reusability and therefore ease of maintenance of tests that have been created at a high level of abstraction.
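Both styles can be sketched briefly; the CSV data, the add() function, and the keyword set are all invented for illustration:

```python
import csv
import io

# Data-driven: one script, many data rows (here an in-memory CSV source).
def add(a, b):
    return a + b

data_source = io.StringIO("a,b,expected\n1,2,3\n10,-4,6\n0,0,0\n")

failures = []
for row in csv.DictReader(data_source):
    if add(int(row["a"]), int(row["b"])) != int(row["expected"]):
        failures.append(row)
# An empty failures list means every data row passed.

# Keyword-driven: test steps name high-level actions, and a small
# interpreter maps each keyword to an implementation function.
state = {"total": 0}
keywords = {
    "add":   lambda n: state.update(total=state["total"] + n),
    "reset": lambda _: state.update(total=0),
}
steps = [("add", 5), ("add", 3), ("reset", None), ("add", 2)]
for keyword, arg in steps:
    keywords[keyword](arg)
```

In both cases new tests are added by editing data (rows or step lists) rather than code, which is what distinguishes them from plain scripted automation.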

What problems are associated with automating testing? What kinds of false beliefs and assumptions do people make about test automation?

 * Assumptions:
 * Computers are faster, cheaper, and more reliable than humans; therefore, automate.
 * Testing means repeating the same actions over and over.
 * An automated test is faster, because it needs no human intervention.
 * We can quantify the costs and benefits of manual vs. automated testing.
 * Automation will lead to "significant labor cost savings", a belief that ignores:
 * The cost of developing the automation.
 * The cost of operating the automated tests.
 * The cost of maintaining the automation as the product changes.
 * The cost of any other new tasks necessitated by the automation.
 * Problems:
 * Each time the suite is executed someone must carefully pore over the results to tell the false negatives from real bugs.
 * Code changes might mean changes to automated test cases.

Lecture 12 Article Questions: Agile Testing
=== What kinds of challenges does the agile development approach place on software testing? Describe contradictions between the principles of agile software development and traditional testing and quality assurance. ===

=== Read the experiences of David Talby et al. presented in their article "Agile Software Testing in a Large-Scale Project". Describe how they tackled the following areas in a large-scale agile development project: Test design and execution, Working with professional testers, Activity planning and Defect management ===
 * Test design and execution
 * Everyone tests.
 * Increased test awareness.
 * Testability increased as developers knew they had to write tests to their code.
 * Product size = test size.
 * Brings a strong message to the team: only features that have full regression testing at each iteration are counted as delivered product size.
 * Untested work = no work.
 * Working with professional testers
 * Easing the professional tester's bottleneck: developers simply code less and test more.
 * Encourage interaction over isolation.
 * Traditionally tester is seen as quite independent.
 * Integrate tester into the team. Otherwise he won't find enough bugs.
 * Activity planning
 * Planning game
 * Customer assigns priorities to the stories that the system should implement in the next iteration.
 * The team breaks stories down into development tasks and estimates the effort for these tasks.
 * Integrate feature testing with coding.
 * No task is considered complete before tests are written and running.
 * Treat regression testing as global overhead that is done at the end of the iteration.
 * Allocate bug-fix time globally.
 * Planning defect resolution as an individual task results in high overestimates.
 * Defect management
 * Use a team-centered defect management approach.
 * Everybody knows each other's knowledge areas because of daily standup meetings.
 * Fix defects as soon as possible.
 * Fewer false defects due to everybody working in the same room.

Additional questions from old exams
=== Describe the relationship of equivalence partitioning (EP), boundary value analysis (BVA) and cause-and-effect graphing (CEG). What are the differences between the three techniques? Can the techniques be used together to complement each other? Why or why not? ===

=== Describe the basic idea of pair-wise testing. What kinds of testing problems is pair-wise testing good for, and why does it work? Describe also what shortcomings or problems you should pay attention to when applying pair-wise testing. ===