UNIT-6 Software Testing Fundamentals

Here are some important questions that are frequently asked in university exams for Unit 6, Software Testing Fundamentals.

Que 1. What is software testing (software testing fundamentals)? State the software testing objectives and principles.

Software testing techniques provide systematic guidance for designing tests that

  • Exercise the internal logic of software components, and
  • Exercise the input and output domains of the program to uncover errors in program function, behavior and performance.

Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design, and code generation.

A) Testing Objectives

In an excellent book on software testing, Glen Myers states a number of rules that can serve well as testing objectives:

  1. Testing is a process of executing a program with the intent of finding an error.
  2. A good test case is one that has a high probability of finding an as-yet undiscovered error.
  3. A successful test is one that uncovers an as-yet-undiscovered error. These objectives imply a dramatic change in viewpoint:
    • They move counter to the commonly held view that a successful test is one in which no errors are found.
    • Our objective is to design tests that systematically uncover different classes of errors and to do so with a minimum amount of time and effort.
    • If testing is conducted successfully according to the objectives stated previously, it will uncover errors in the software.
    • As a secondary benefit, testing demonstrates that software functions appear to be working according to specification, that behavioral and performance requirements appear to have been met. In addition, data collected as testing is conducted provide a good indication of software reliability and some indication of software quality as a whole.
    • But testing cannot show the absence of errors and defects; it can show only that software errors and defects are present.

B) Testing Principles

Before applying methods to design effective test cases, a software engineer must understand the basic principles that guide software testing. Davis suggests a set of testing principles:

  • All Tests should be Traceable to Customer Requirements: As we have seen, the objective of software testing is to uncover errors. It follows that the most severe defects (from the customer’s point of view) are those that cause the program to fail to meet its requirements.
  • Tests should be Planned long before Testing Begins: Test planning can begin as soon as the requirements model is complete. Detailed definition of test cases can begin as soon as the design model has been solidified. Therefore, all tests can be planned and designed before any code has been generated.
  • The Pareto Principle Applies to Software Testing: Stated simply, the Pareto principle implies that 80 percent of all errors uncovered during testing will likely be traceable to 20 percent of all program components. The problem, of course, is to isolate these suspect components and to thoroughly test them.
  • Testing should begin in the Small & Progress toward Testing in the Large: The first tests planned and executed generally focus on individual components. As testing progresses, focus shifts in an attempt to find errors in integrated clusters of components and ultimately in the entire system.
  • Exhaustive Testing is not Possible: The number of path permutations for even a moderately sized program is exceptionally large. For this reason, it is impossible to execute every combination of paths during testing.
  • To be most Effective, Testing should be Conducted by an Independent Third Party: By “most effective” we mean testing that has the highest probability of finding errors (the primary objective of testing). The software engineer who created the system is not the best person to conduct all tests for the software.

Que 2. Explain White-Box Technique

White-box testing, sometimes called glass-box testing, is a test case design method that uses the control structure of the procedural design to derive test cases.

  1. Using white-box testing methods, the software engineer can derive test cases that
    • Guarantee that all independent paths within a module have been exercised at least once,
    • Exercise all logical decisions on their true and false sides,
    • Execute all loops at their boundaries and within their operational bounds, and
    • Exercise internal data structures to ensure their validity.
  2. White-box testing is crucial for the following reasons:
    • Logic errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed. Errors tend to creep into our work when we design and implement functions, conditions, or controls that are out of the mainstream.
    • We often believe that a logical path is not likely to be executed when, in fact, it may be executed on a regular basis. The logical flow of a program is sometimes counter-intuitive, meaning that our unconscious assumptions about flow of control and data may lead us to make design errors that are uncovered only once path testing commences.
    • Typographical errors are random. When a program is translated into programming language source code, it is likely that some typing errors will occur. Many will be uncovered by syntax and type checking mechanisms, but others may go undetected until testing begins.
  3. Each of these reasons provides an argument for conducting white-box tests. Black-box testing, no matter how thorough, may miss the kinds of errors noted here. White-box testing is far more likely to uncover them.
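
As a small illustration, consider white-box tests for a hypothetical function with a single logical decision (the function and values below are assumed, not taken from the text). The tests exercise the decision on both its true and false sides and at the boundary between them:

    #include <assert.h>

    /* Hypothetical function under test: returns the larger of a and b. */
    static int max_of(int a, int b) {
        return (a > b) ? a : b;   /* one logical decision */
    }

    int main(void) {
        assert(max_of(5, 3) == 5);  /* true side of (a > b)      */
        assert(max_of(3, 5) == 5);  /* false side of (a > b)     */
        assert(max_of(4, 4) == 4);  /* boundary case: a equals b */
        return 0;
    }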

Que 3. Explain basis path testing. Give an example.

Basis path testing is one of several white-box testing techniques. It enables the test case designer to derive a logical complexity measure of a procedural design and to use this measure as a guide for defining a basis set of execution paths. Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least once during testing.

  1. The procedure for deriving the flow graph and determining a set of basis paths can be mechanized with the help of a graph matrix.
  2. To develop a software tool that assists in basis path testing, a data structure, called a graph matrix, can be quite useful.
  3. A graph matrix is a square matrix whose size (i.e., number of rows and columns) is equal to the number of nodes on the flow graph. Each row and column corresponds to an identified node, and matrix entries correspond to connections (an edge) between nodes.
  4. A simple example of a flow graph and its corresponding graph matrix is shown in Figure 1. Referring to the figure, each node on the flow graph is identified by a number, while each edge is identified by a letter. A letter entry is made in the matrix to correspond to a connection between two nodes.
    For example, node 3 is connected to node 4 by edge b. To this point, the graph matrix is nothing more than a tabular representation of a flow graph. However, by adding a link weight to each matrix entry, the graph matrix can become a powerful tool for evaluating program control structure during testing.
    The link weight provides additional information about control flow. In its simplest form, the link weight is 1 (a connection exists) or 0 (a connection does not exist). But link weights can be assigned other, more interesting properties:
    • The probability that a link (edge) will be executed.
    • The processing time expended during traversal of a link.
    • The memory required during traversal of a link.
    • The resources required during traversal of a link.
  5. To illustrate, we use the simplest weighting to indicate connections (0 or 1). The graph matrix in Figure 1 is redrawn as shown in Figure 2. Each letter has been replaced with a 1, indicating that a connection exists (zeros have been excluded for clarity). Represented in this form, the graph matrix is called a connection matrix.
  6. Referring to Figure 2, each row with two or more entries represents a predicate node. Therefore, performing the arithmetic shown to the right of the connection matrix provides a method for determining cyclomatic complexity (see the sketch below).
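
The following sketch shows how a tool might perform this arithmetic. The 6-node connection matrix is an assumed example (not the one in the figure): for each row, the number of connections minus one is accumulated, and adding one to the total yields the cyclomatic complexity:

    #include <stdio.h>

    #define N 6  /* number of flow-graph nodes in this assumed example */

    int main(void) {
        /* conn[i][j] = 1 means an edge runs from node i+1 to node j+1 */
        int conn[N][N] = {
            {0, 1, 0, 0, 0, 0},  /* 1 -> 2                     */
            {0, 0, 1, 0, 1, 0},  /* 2 -> 3, 2 -> 5 (predicate) */
            {0, 0, 0, 1, 0, 0},  /* 3 -> 4                     */
            {0, 0, 0, 0, 0, 1},  /* 4 -> 6                     */
            {0, 0, 0, 0, 0, 1},  /* 5 -> 6                     */
            {0, 0, 0, 0, 0, 0},  /* 6 is the exit node         */
        };

        int sum = 0;
        for (int i = 0; i < N; i++) {
            int out = 0;              /* entries in this row */
            for (int j = 0; j < N; j++)
                out += conn[i][j];
            if (out > 1)              /* two or more entries: a predicate node */
                sum += out - 1;       /* connections minus 1, per row          */
        }
        printf("Cyclomatic complexity V(G) = %d\n", sum + 1);  /* prints 2 */
        return 0;
    }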

Que 4. Explain any one method of control structure testing?

Although basis path testing is simple and highly effective, it is not sufficient in itself. Control structure testing techniques broaden testing coverage and improve the quality of white-box testing.

Some of the methods of control structure testing are:

A) Condition Testing
B) Data-flow Testing
C) Loop Testing

A) Condition Testing

  1. Condition testing is a test case design method that exercises the logical conditions contained in a program module.
  2. A simple condition is a Boolean variable or a relational expression, possibly preceded with one NOT (¬) operator. A relational expression takes the form
    E1 <relational-operator> E2
    where E1 and E2 are arithmetic expressions and <relational-operator> is one of <, <=, =, != (nonequality), >, or >=.
  3. A compound condition is composed of two or more simple conditions, Boolean operators, and parentheses. We assume that the Boolean operators allowed in a compound condition include OR (|), AND (&), and NOT (¬).
  4. A condition without relational expressions is referred to as a Boolean expression. Therefore, the possible types of elements in a condition include a Boolean operator, a Boolean variable, a pair of Boolean parentheses (surrounding a simple or compound condition), a relational operator, or an arithmetic expression.
  5. If a condition is incorrect, then at least one component of the condition is incorrect. Therefore, the types of errors in a condition include the following:
    • Boolean operation error (incorrect /missing/extra Boolean operations)
    • Boolean variable error
    • Boolean parenthesis error
    • Relational operator error
    • Arithmetic expression error
  6. The condition testing method focuses on testing each condition in the program. Condition testing strategies generally have two advantages. First, measurement of the test coverage of a condition is simple. Second, the test coverage of conditions in a program provides guidance for the generation of additional tests for the program.
  7. The purpose of condition testing is to detect not only errors in the conditions of a program but also other errors in the program.
  8. A number of condition testing strategies have been proposed.
    • Branch testing is probably the simplest condition testing strategy. For a compound condition C, the true and false branches of C and every simple condition in C need to be executed at least once.
    • Domain testing requires three or four tests to be derived for a relational expression of the form
      E1 <relational-operator> E2
      Three tests are required to make the value of E1 greater than, equal to, and less than that of E2 (a sketch follows this list). For a Boolean expression with n variables, all 2^n possible tests are required (n > 0); this strategy can detect Boolean operator, variable, and parenthesis errors, but it is practical only if n is small.
    • BRO (branch and relational operator) testing is a technique that guarantees the detection of branch and relational operator errors in a condition, provided that all Boolean variables and relational operators in the condition occur only once and have no common variables.
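
Here is the domain-testing sketch promised above, for a condition of the form E1 < E2 (the function and test values are assumed for illustration). Three tests make E1 less than, equal to, and greater than E2; the equality case is the one that exposes a < mistyped as <=:

    #include <assert.h>

    /* Hypothetical condition under test: (a < b). */
    static int cond(int a, int b) {
        return a < b;
    }

    int main(void) {
        assert(cond(1, 2) == 1);  /* E1 less than E2                      */
        assert(cond(2, 2) == 0);  /* E1 equal to E2: fails if < became <= */
        assert(cond(3, 2) == 0);  /* E1 greater than E2                   */
        return 0;
    }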

B) Data Flow Testing

The data flow testing method selects test paths of a program according to the locations of definitions and uses of variables in the program.
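
For example, a definition-use (DU) pair links the statement where a variable is assigned a value to a statement where that value is used. In the assumed sketch below, x has two definitions, and data flow testing selects paths so that each definition reaches the use:

    #include <assert.h>

    static int scale(int a) {
        int x = 0;        /* definition of x (def 1)   */
        if (a > 0)
            x = a * 2;    /* redefinition of x (def 2) */
        return x + 1;     /* use of x                  */
    }

    int main(void) {
        assert(scale(-1) == 1);  /* path covering the (def 1, use) pair */
        assert(scale(3) == 7);   /* path covering the (def 2, use) pair */
        return 0;
    }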

C) Loop Testing

  • Loops are the cornerstone for the vast majority of all algorithms implemented in software.
  • Loop testing is a white-box testing technique that focuses exclusively on the validity of loop constructs.
  • Four different classes of loops can be defined: simple loops, concatenated loops, nested loops, and unstructured loops. This is shown in the figure.
  1. Simple loops
    The following set of tests can be applied to simple loops, where n is the maximum number of allowable passes through the loop (a sketch for simple loops follows this list):
    a) Skip the loop entirely.
    b) Only one pass through the loop.
    c) Two passes through the loop.
    d) m passes through the loop, where m < n.
    e) n - 1, n, and n + 1 passes through the loop.
  2. Nested Loops
    If we were to extend the test approach for simple loops to nested loops, the number of possible tests would grow geometrically as the level of nesting increases. This would result in an impractical number of tests. Beizer suggests an approach that will help to reduce the number of tests:
    a) Start at the innermost loop. Set all other loops to minimum values.
    b) Conduct simple loop tests for the innermost loop while holding the outer loops at their minimum iteration parameter (e.g., loop counter) values. Add other tests for out-of-range or excluded values.
    c) Work outward, conducting tests for the next loop, but keeping all other outer loops at minimum values and other nested loops to “typical” values.
    d) Continue until all loops have been tested.
  3. Concatenated Loops
    Concatenated loops can be tested using the approach defined for simple loops, if each of the loops is independent of the other. However, if two loops are concatenated & the loop counter for loop 1 is used as the initial value for loop 2, then the loops are not independent. When the loops are not independent, the approach applied to nested loops is recommended.
  4. Unstructured Loops
    Whenever possible, this class of loops should be redesigned to reflect the use of the structured programming constructs.
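
Here is the simple-loop sketch promised above, assuming a small function whose loop may execute at most n passes. The n + 1 case checks that the loop guard prevents an overrun:

    #include <assert.h>

    #define N 5  /* n: maximum number of allowable passes */

    /* Sums the first k elements of a[]; the guard caps passes at n. */
    static int sum_first(const int a[], int n, int k) {
        int s = 0;
        for (int i = 0; i < k && i < n; i++)
            s += a[i];
        return s;
    }

    int main(void) {
        int a[N] = {1, 2, 3, 4, 5};
        assert(sum_first(a, N, 0) == 0);       /* skip the loop entirely  */
        assert(sum_first(a, N, 1) == 1);       /* one pass                */
        assert(sum_first(a, N, 2) == 3);       /* two passes              */
        assert(sum_first(a, N, 3) == 6);       /* m passes, m < n         */
        assert(sum_first(a, N, N - 1) == 10);  /* n - 1 passes            */
        assert(sum_first(a, N, N) == 15);      /* n passes                */
        assert(sum_first(a, N, N + 1) == 15);  /* n + 1 requested, capped */
        return 0;
    }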

Que 5. Explain Cyclomatic testing.

Cyclomatic complexity is a software metric that gives a quantitative measure of the logical complexity of a program.

The cyclomatic complexity defines the number of independent paths in the basis set of the program and provides an upper bound for the number of tests that must be conducted to ensure that every statement has been executed at least once.

There are three methods of computing cyclomatic complexity.

Method 1: The total number of regions in the flow graph equals the cyclomatic complexity.

Method 2: The cyclomatic complexity V(G) for a flow graph G can be defined as
V(G) = E - N + 2
where E is the total number of edges in the flow graph and N is the total number of nodes in the flow graph.

Method 3: The cyclomatic complexity V(G) for a flow graph G can be defined as
V(G) = P + 1
where P is the total number of predicate nodes contained in the flow graph G.

Let us understand computation of cyclomatic complexity with the help of an example,

Consider the following code fragment, with lines numbered:

  1. {
  2. if (a < b)
  3. F1 ();
  4. else {
  5. if (a < c)
  6. F2 ();
  7. else
  8. F3 ();
  9. }
  10. }

To compute cyclomatic complexity we will follow these steps,

Step 1: Design the flow graph for the given code fragment.

Step 2: Compute the regions, predicate nodes (i.e., decision nodes), edges, and total nodes in the flow graph.

  • There are 3 regions, denoted R1, R2, and R3.
  • Nodes 1 and 3 are predicate nodes, because the branch to be followed is decided at these points.
    Total edges = 7
    Total nodes = 6

Step 3: Apply the formulas to compute the cyclomatic complexity.

  • Cyclomatic complexity = total number of regions = 3
  • Cyclomatic complexity = E - N + 2 = 7 - 6 + 2 = 3
  • Cyclomatic complexity = P + 1 = 2 + 1 = 3

where P is the number of predicate nodes = 2 (nodes 1 and 3 are predicate nodes, because at these nodes the decision of which path to follow is taken).
Thus the cyclomatic complexity of the given code fragment is 3.
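
To see why a complexity of 3 is useful, here is a minimal test sketch (an assumed illustration: the fragment is wrapped in a function, and F1, F2, and F3 are represented by the return values 1, 2, and 3). Three test cases, one per basis path, execute every statement at least once:

    #include <assert.h>

    static int branch(int a, int b, int c) {
        if (a < b)
            return 1;       /* F1(): basis path 1 */
        else {
            if (a < c)
                return 2;   /* F2(): basis path 2 */
            else
                return 3;   /* F3(): basis path 3 */
        }
    }

    int main(void) {
        assert(branch(1, 2, 0) == 1);  /* a < b          -> F1 */
        assert(branch(2, 1, 3) == 2);  /* a >= b, a < c  -> F2 */
        assert(branch(2, 1, 0) == 3);  /* a >= b, a >= c -> F3 */
        return 0;
    }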


Que 6. Explain Black-Box Testing.

  1. Black-box testing, also called behavioral testing, focuses on the functional requirements of the software.
  2. That is, black-box testing enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program.
  3. Black-box testing is not an alternative to white-box techniques. Rather, it is a complementary approach that is likely to uncover a different class of errors than white-box methods.
  4. Black-box testing attempts to find errors in the following categories:
    (1) incorrect or missing functions,
    (2) interface errors,
    (3) errors in data structures or external data base access,
    (4) behavior or performance errors,
    (5) initialization & termination errors.
  5. Unlike white-box testing, which is performed early in the testing process, black-box testing tends to be applied during later stages of testing. Because black-box testing purposely disregards control structure, attention is focused on the information domain.
  6. By applying black-box techniques, a set of test cases is derived that satisfies the following criteria:
    • Test cases that reduce, by a count that is greater than one, the number of additional test cases that must be designed to achieve reasonable testing and
    • Test cases that tell us something about the presence or absence of classes of errors, rather than an error associated only with the specific test at hand.
  7. Black-box testing can be achieved by the following methods:
    1) Graph Based Testing
    2) Equivalence Partitioning
    3) Boundary Value Analysis

1) Graph Based Testing

The first step in black-box testing is to understand the objects that are modeled in software and the relationships that connect these objects. Once this has been accomplished, the next step is to define a series of tests that verify “all objects have the expected relationship to one another.”

Stated in another way, software testing begins by creating a graph of important objects and their relationships and then devising a series of tests that will cover the graph so that each object and relationship is exercised and errors are uncovered.

To accomplish these steps, the software engineer begins by creating a graph: a collection of nodes that represent objects; links that represent the relationships between objects; node weights that describe the properties of a node (e.g., a specific data value or state behavior); and link weights that describe some characteristic of a link.

The symbolic representation of a graph is shown in Figure 15.4. Nodes are represented as circles connected by links that take a number of different forms. A directed link (represented by an arrow) indicates that a relationship moves in only one direction.

A bidirectional link, also called a symmetric link, implies that the relationship applies in both directions. Parallel links are used when a number of different relationships are established between graph nodes.

The graph represents the relationships between data objects and program objects, enabling us to derive test cases that search for errors associated with these relationships.
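
A minimal sketch of how such a graph might be represented in code (the object names and the "generates" relationship are assumed for illustration, not taken from a real tool):

    #include <stdio.h>

    struct node { const char *object; int weight; };     /* node plus a property */
    struct link { int from, to; const char *relation; }; /* directed link        */

    int main(void) {
        struct node nodes[] = { {"new file menu select", 0}, {"document window", 0} };
        struct link links[] = { {0, 1, "generates"} };   /* directed: 0 -> 1 */

        /* A graph-based test exercises each link and checks the relationship. */
        printf("%s --%s--> %s\n",
               nodes[links[0].from].object, links[0].relation,
               nodes[links[0].to].object);
        return 0;
    }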

2) Equivalence Partitioning

Equivalence partitioning is a black-box testing method that divides the input domain of a program into classes of data from which test cases can be derived.

An ideal test case single-handedly uncovers a class of errors (e.g., incorrect processing of all character data) that might otherwise require many cases to be executed before the general error is observed.

Equivalence partitioning strives to define a test case that uncovers classes of errors, thereby reducing the total number of test cases that must be developed.
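
For example, suppose (as an assumed illustration) an input field accepts ages from 18 to 60. Equivalence partitioning yields three classes, below the range, within it, and above it, and one representative value per class stands in for the whole class:

    #include <assert.h>

    /* Hypothetical input-validation function for an 18..60 age field. */
    static int is_valid_age(int age) {
        return age >= 18 && age <= 60;
    }

    int main(void) {
        assert(is_valid_age(10) == 0);  /* invalid class: age below 18 */
        assert(is_valid_age(35) == 1);  /* valid class: 18..60         */
        assert(is_valid_age(70) == 0);  /* invalid class: age above 60 */
        return 0;
    }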

3) Boundary Value Analysis

For reasons that are not completely clear, a greater number of errors tends to occur at the boundaries of the input domain rather than in the center.

It is for this reason that boundary value analysis (BVA) has been developed as a testing technique.

Boundary value analysis leads to a selection of test cases that exercise bounding values.

BVA extends equivalence partitioning by focusing on data at the edges of an equivalence class.
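
Continuing the assumed 18-to-60 age example from the equivalence partitioning sketch, BVA places tests exactly at and just beside each boundary, where an error such as >= mistyped as > would surface:

    #include <assert.h>

    static int is_valid_age(int age) {
        return age >= 18 && age <= 60;
    }

    int main(void) {
        assert(is_valid_age(17) == 0);  /* just below the lower bound */
        assert(is_valid_age(18) == 1);  /* exactly the lower bound    */
        assert(is_valid_age(19) == 1);  /* just above the lower bound */
        assert(is_valid_age(59) == 1);  /* just below the upper bound */
        assert(is_valid_age(60) == 1);  /* exactly the upper bound    */
        assert(is_valid_age(61) == 0);  /* just above the upper bound */
        return 0;
    }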

Que 7. Explain verification and validation in detail.

  1. Software testing is one element of a broader topic that is often referred to as verification and validation (V&V).
  2. Verification refers to the set of activities that ensure that software correctly implements a specific function.
  3. Validation refers to a different set of activities that ensure that the software that has been built is traceable to customer requirements.
  4. Boehm states this another way:
    Verification: “Are we building the product right?”
    Validation: “Are we building the right product?”
  5. The definition of V&V encompasses many of the activities that we have referred to as software quality assurance (SQA).
  6. Verification and validation encompasses a wide array of SQA activities that include formal technical reviews, quality and configuration audits, performance monitoring, simulation, feasibility study, documentation review, database review, algorithm analysis, development testing, qualification testing, and installation testing.
  7. Although testing plays an extremely important role in V&V, many other activities are also necessary.
  8. Testing does provide the last bastion from which quality can be assessed and, more pragmatically, errors can be uncovered. But testing should not be viewed as a safety net. As they say, “You can’t test in quality. If it’s not there before you begin testing, it won’t be there when you’re finished testing.”
  9. Quality is incorporated into software throughout the process of software engineering. Proper application of methods and tools, effective formal technical reviews, and solid management and measurement all lead to quality that is confirmed during testing.
  10. Miller relates software testing to quality assurance by stating that “the underlying motivation of program testing is to affirm software quality with methods that can be economically and effectively applied to both large-scale and small-scale systems.”

Que 8. Discuss the software testing strategy, for conventional software architectures, draw suitable diagram.

  1. The software engineering process may be viewed as the spiral illustrated in Figure.
  2. Initially, system engineering defines the role of software and leads to software requirements analysis, where the information domain, function, behavior, performance, constraints, and validation criteria for software are established.
  3. Moving inward along the spiral, we come to design and finally to coding. To develop computer software, we spiral inward along streamlines that decrease the level of abstraction on each turn.
  4. A strategy for software testing may also be viewed in the context of the spiral.
    A) Unit Testing: Unit testing begins at the vortex of the spiral and concentrates on each unit (i.e., component) of the software as implemented in source code.
    B) Integration Testing: Testing progresses by moving outward along the spiral to integration testing, where the focus is on design and the construction of the software architecture.
    C) Validation Testing: Taking another turn outward on the spiral, we encounter validation testing, where requirements established as part of requirements analysis are validated against the software that has been constructed.
    D) System Testing: Finally, we arrive at system testing, where the software and other system elements are tested as a whole. To test computer software, we spiral out along streamlines that broaden the scope of testing with each turn.

Testing within the context of software engineering is actually a series of four steps that are implemented sequentially. The steps are shown in Figure.

Considering the process from a procedural point of view, the following are the steps required for software testing:

  1. Initially, tests focus on each component individually, ensuring that it functions properly as a unit; hence the name unit testing.
  2. Unit testing makes heavy use of white-box testing techniques, exercising specific paths in a module’s control structure to ensure complete coverage and maximum error detection.
  3. Next, components must be assembled or integrated to form the complete software package.
  4. Integration testing addresses the issues associated with the dual problems of verification and program construction.
  5. Black-box test case design techniques are the most prevalent during integration, although a limited amount of white-box testing may be used to ensure coverage of major control paths.
  6. After the software has been integrated (constructed), a set of high-order tests is conducted.
  7. Validation criteria (established during requirements analysis) must be tested.
  8. Validation testing provides final assurance that software meets all functional, behavioral, and performance requirements. Black-box testing techniques are used exclusively during validation.
  9. The last high-order testing step falls outside the boundary of software engineering and into the broader context of computer system engineering. Software, once validated, must be combined with other system elements (e.g., hardware, people, & databases).
  10. System testing verifies that all elements mesh properly and that overall system function/performance is achieved.

Que 9. Explain strategies issues of software testing.

Even the best strategy will fail if a series of overriding issues are not addressed. The following issues must be addressed if a successful software testing strategy is to be implemented:

  1. Specify Product Requirements in a Quantifiable Manner Long before Testing Commences: Although the overriding objective of testing is to find errors, a good testing strategy also assesses other quality characteristics such as portability, maintainability, and usability. These should be specified in a way that is measurable so that testing results are unambiguous.
  2. State Testing Objectives Explicitly: The specific objectives of testing should be stated in measurable terms. For example, test effectiveness, test coverage, mean time to failure, the cost to find and fix defects, remaining defect density or frequency of occurrence, and test work-hours per regression test all should be stated within the test plan.
  3. Understand the users of the Software and Develop a Profile for each user Category: Use-cases that describe the interaction scenario for each class of user can reduce overall testing effort by focusing testing on actual use of the product.
  4. Develop a Testing Plan that Emphasizes “Rapid Cycle Testing”: The feedback generated from these rapid cycle tests can be used to control quality levels and the corresponding test strategies.
  5. Build “Robust” Software that is designed to Test Itself: Software should be designed in a manner that uses antibugging techniques. That is, software should be capable of diagnosing certain classes of errors. In addition, the design should accommodate automated testing and regression testing.
  6. Use Effective Formal Technical Reviews as a Filter prior to Testing: Formal technical reviews can be as effective as testing in uncovering errors. For this reason, reviews can reduce the amount of testing effort that is required to produce high-quality software.
  7. Conduct Formal Technical Reviews to assess the Test Strategy and Test Cases Themselves: Formal technical reviews can uncover inconsistencies, omissions, and outright errors in the testing approach. This saves time and also improves product quality.
  8. Develop a Continuous Improvement Approach for the Testing Process: The test strategy should be measured. The metrics collected during testing should be used as part of a statistical process control approach for software testing.

Que 10. Explain unit testing in detail.

  1. Unit testing focuses verification effort on the smallest unit of software design: the software component or module.
  2. Using the component-level design description as a guide, important control paths are tested to uncover errors within the boundary of the module.
  3. The relative complexity of tests and uncovered errors is limited by the constrained scope established for unit testing.
  4. The unit test is white-box oriented, and the step can be conducted in parallel for multiple components.
  5. The tests that occur as part of unit tests are illustrated schematically in Figure.
  6. The module interface is tested to ensure that information properly flows into and out of the program unit under test. Tests of data flow across a module interface are required before any other test is initiated. If data do not enter and exit properly, all other tests are moot.
  7. The local data structure is examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm’s execution. In addition, local data structures should be exercised & the local impact on global data should be ascertained (if possible) during unit testing.
  8. Boundary conditions are tested to ensure that the module operates properly at boundaries established to limit or restrict processing.
  9. All independent paths (basis paths) through the control structure are exercised to ensure that all statements in a module have been executed at least once. And finally, all error handling paths are tested.
  10. Selective testing of execution paths is an essential task during the unit test. Test cases should be designed to uncover errors due to erroneous computations, incorrect comparisons, or improper control flow.
  11. Among the more common errors in computation are
    (1) misunderstood or incorrect precedence,
    (2) mixed mode operations,
    (3) incorrect initialization,
    (4) precision inaccuracy,
    (5) incorrect symbolic representation of an expression.
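
As a small illustration, here is a hypothetical unit test (all names assumed) that exercises the module interface, a boundary condition, and an error-handling path:

    #include <assert.h>

    /* Module under test: returns a / b; sets *ok to 0 on division by zero. */
    static int safe_divide(int a, int b, int *ok) {
        if (b == 0) { *ok = 0; return 0; }  /* error-handling path */
        *ok = 1;
        return a / b;
    }

    int main(void) {
        int ok;
        assert(safe_divide(10, 2, &ok) == 5 && ok == 1);  /* interface: normal flow */
        assert(safe_divide(0, 7, &ok) == 0 && ok == 1);   /* boundary: a equals 0   */
        assert(safe_divide(10, 0, &ok) == 0 && ok == 0);  /* error-handling path    */
        return 0;
    }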

Que 11. Explain top down and bottom up integration process.

Integration Testing

Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing.

The objective is to take unit-tested components and build a program structure that has been dictated by design. It is of two types:

1) Top-Down Integration
2) Bottom Up Integration

1) Top-Down Integration

  • Top-down integration testing is an incremental approach to construction of program structure.
  • Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (main program).
  • Modules subordinate (and ultimately subordinate) to the main control module are incorporated into the structure in either a depth-first or breadth-first manner.
  • Referring to the figure, depth-first integration would integrate all components on a major control path of the structure. Selection of a major path is somewhat arbitrary and depends on application-specific characteristics.
  • For example, selecting the left-hand path, components M1, M2, and M5 would be integrated first. Next, M8 or (if necessary for proper functioning of M2) M6 would be integrated.
  • Then, the central and right-hand control paths are built. Breadth-first integration incorporates all components directly subordinate at each level, moving across the structure horizontally.
  • From the figure, components M2, M3, and M4 (a replacement for stub S4) would be integrated first. The next control level, M5, M6, and so on, follows.
  • The integration process is performed in a series of five steps:
    • The main control module is used as a test driver and stubs are substituted for all components directly subordinate to the main control module.
    • Depending on the integration approach selected (i.e., depth or breadth first), subordinate stubs are replaced one at a time with actual components.
    • Tests are conducted as each component is integrated.
    • On completion of each set of tests, another stub is replaced with the real component.
    • Regression testing may be conducted to ensure that new errors have not been introduced.
  • The process continues from step 2 until the entire program structure is built. The top-down integration strategy verifies major control or decision points early in the test process.
  • In a well-factored program structure, decision making occurs at upper levels in the hierarchy and is therefore encountered first. If major control problems do exist, early recognition is essential. If depth-first integration is selected, a complete function of the software may be implemented and demonstrated.
  • Top-down strategy sounds relatively uncomplicated, but in practice, logistical problems can arise. The most common of these problems occurs when processing at low levels in the hierarchy is required to adequately test upper levels. Stubs replace low level modules at the beginning of top-down testing; therefore, no significant data can flow upward in the program structure. The tester is left with three choices:
    • Delay many tests until stubs are replaced with actual modules,
    • Develop stubs that perform limited functions that simulate the actual module, or
    • Integrate the software from the bottom of the hierarchy upward.
  • The first approach (delay tests until stubs are replaced by actual modules) causes us to lose some control over the correspondence between specific tests and the incorporation of specific modules. This can lead to difficulty in determining the cause of errors and tends to violate the highly constrained nature of the top-down approach.
  • The second approach is workable but can lead to significant overhead, as stubs become more and more complex.
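
A minimal sketch of the stub idea (all names assumed): the main control module is exercised while a stub returns a canned value in place of the real subordinate component:

    #include <stdio.h>

    /* Stub standing in for a subordinate component not yet integrated. */
    static int get_sensor_reading(void) {
        return 42;  /* canned value, no real processing */
    }

    /* Main control module under test. */
    static void main_control(void) {
        int r = get_sensor_reading();
        printf("reading = %d\n", r);
    }

    int main(void) {
        main_control();  /* later, the stub is replaced by the real component */
        return 0;
    }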

2) Bottom-Up Integration

  • Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules (i.e., components at the lowest levels in the program structure).
  • Because components are integrated from the bottom up, processing required for components subordinate to a given level is always available and the need for stubs is eliminated.
  • A bottom-up integration strategy may be implemented with the following steps:
    • Low-level components are combined into clusters (sometimes called builds) that perform a specific software sub-function.
    • A driver (a control program for testing) is written to coordinate test case input and output.
    • The cluster is tested.
    • Drivers are removed and clusters are combined moving upward in the program structure.
  • Integration follows the pattern illustrated in Figure 16.5. Components are combined to form clusters 1, 2, and 3. Each of the clusters is tested using a driver. Components in clusters 1 and 2 are subordinate to Ma. Drivers D1 and D2 are removed and the clusters are interfaced directly to Ma. Similarly, driver D3 for cluster 3 is removed prior to integration with module Mb. Both Ma and Mb will ultimately be integrated with component Mc, and so forth.
  • As integration moves upward, the need for separate test drivers lessens. In fact, if the top two levels of program structure are integrated top down, the number of drivers can be reduced substantially and integration of clusters is greatly simplified.
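
A minimal sketch of a driver (all names assumed): the atomic, low-level component is tested first, with the driver coordinating test-case input and output; the driver is removed once real superordinate modules exist:

    #include <assert.h>

    /* Low-level component belonging to a cluster (build). */
    static int cluster_sum(const int a[], int n) {
        int s = 0;
        for (int i = 0; i < n; i++)
            s += a[i];
        return s;
    }

    /* The driver: a control program that feeds input and checks output. */
    int main(void) {
        int data[] = {1, 2, 3};
        assert(cluster_sum(data, 3) == 6);
        return 0;
    }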

Que 12. Explain why regression testing is necessary.

  • Each time a new module is added as part of integration testing, the software changes.
  • New data flow paths are established, new I/O may occur, and new control logic is invoked. These changes may cause problems with functions that previously worked flawlessly.
  • In the context of an integration test strategy, regression testing is the re-execution of some subset of tests that have already been conducted to ensure that changes have not propagated unintended side effects.
  • In a broader context, successful tests (of any kind) result in the discovery of errors, and errors must be corrected. Whenever software is corrected, some aspect of the software configuration (the program, its documentation, or the data that support it) is changed.
  • Regression testing is the activity that helps to ensure that changes (due to testing or for other reasons) do not introduce unintended behavior or additional errors.
  • Regression testing may be conducted manually, by re-executing a subset of all test cases, or using automated capture/playback tools.
  • Capture/playback tools enable the software engineer to capture test cases and results for subsequent playback and comparison.
  • The regression test suite (the subset of tests to be executed) contains three different classes of test cases:
    1) A representative sample of tests that will exercise all software functions.
    2) Additional tests that focus on software functions that are likely to be affected by the change.
    3) Tests that focus on the software components that have been changed.
  • As integration testing proceeds, the number of regression tests can grow quite large. Therefore, the regression test suite should be designed to include only those tests that address one or more classes of errors in each of the major program functions.
  • It is impractical and inefficient to re-execute every test for every program function once a change has occurred.
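
As a rough sketch (an assumed example, not a real tool), a regression suite can be organized as a table of re-runnable tests, one entry per class of test case listed above, and re-executed after every change:

    #include <assert.h>
    #include <stdio.h>

    static void test_login(void)  { assert(1); }  /* representative sample of all functions  */
    static void test_report(void) { assert(1); }  /* functions likely affected by the change */
    static void test_export(void) { assert(1); }  /* the component that was actually changed */

    int main(void) {
        void (*suite[])(void) = { test_login, test_report, test_export };
        for (unsigned i = 0; i < sizeof suite / sizeof suite[0]; i++)
            suite[i]();  /* re-execute the regression subset */
        printf("regression suite passed\n");
        return 0;
    }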

Que 13. Explain what is Smoke testing in brief?

  • Smoke testing is an integration testing approach that is commonly used when “shrink wrapped” software products are being developed.
  • It is designed as a pacing mechanism for time-critical projects, allowing the software team to assess its project on a frequent basis.
  • In essence, the smoke testing approach encompasses the following activities:
    • Software components that have been translated into code are integrated into a “build.” A build includes all data files, libraries, reusable modules, and engineered components that are required to implement one or more product functions.
    • A series of tests is designed to expose errors that will keep the build from properly performing its function. The intent should be to uncover “show stopper” errors that have the highest likelihood of throwing the software project behind schedule.
    • The build is integrated with other builds and the entire product (in its current form) is smoke tested daily. The integration approach may be top down or bottom up.
  • The daily frequency of testing the entire product may seem surprising. However, frequent tests give both managers and practitioners a realistic assessment of integration testing progress.
  • The smoke test should exercise the entire system from end to end. It does not have to be exhaustive, but it should be capable of exposing major problems.
  • The smoke test should be thorough enough that if the build passes, you can assume that it is stable enough to be tested more thoroughly.
  • Smoke testing might be characterized as a rolling integration strategy. The software is rebuilt (with new components added) and exercised every day.
  • Smoke testing provides a number of benefits when it is applied on complex, time critical software engineering projects:
    • Integration risk is minimized. Because smoke tests are conducted daily, incompatibilities and other show-stopper errors are uncovered early, thereby reducing the likelihood of serious schedule impact when errors are uncovered.
    • The quality of the end-product is improved. Because the approach is construction (integration) oriented, smoke testing is likely to uncover both functional errors and architectural and component-level design defects. If these defects are corrected early, better product quality will result.
    • Error diagnosis and correction are simplified.
    • Progress is easier to assess. With each passing day, more of the software has been integrated and more has been demonstrated to work.