Friday, February 8, 2013

C

Glossary of Testing Terms A B C D E F G

Capture/Playback Tool: A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time.

Capture/Replay Tool: See Capture/Playback Tool.

CAST: Acronym for computer-aided software testing. Automated Software Testing in one or more phases of the software life-cycle. See also ASQ.

Cause-Effect Graph: A graphical representation of inputs or stimuli (causes) with their associated outputs (effects), which can be used to design test cases.
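
For illustration, here is a minimal sketch in which the causes and effects of a hypothetical login form (an assumption, not from the source) are written as Boolean inputs and Boolean expressions over them; enumerating the cause combinations yields candidate test cases with their expected effects:

```python
from itertools import product

# Hypothetical causes: valid_username (C1), valid_password (C2).
# Each effect is expressed as Boolean logic over the causes.
def effects(valid_username, valid_password):
    return {
        "login_granted": valid_username and valid_password,        # E1 = C1 AND C2
        "error_shown": not (valid_username and valid_password),    # E2 = NOT (C1 AND C2)
    }

# Enumerating all combinations of causes suggests test cases and their expected effects.
for c1, c2 in product([True, False], repeat=2):
    print((c1, c2), effects(c1, c2))
```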

Capability Maturity Model for Software (CMM): The CMM is a process model based on software best practices effective in large-scale, multi-person projects. The CMM has been used to assess the maturity of organizational areas as diverse as software engineering, system engineering, project management, risk management, system acquisition, information technology (IT) and personnel management, against a scale of five maturity levels, namely: Initial, Repeatable, Defined, Managed and Optimized.

Capability Maturity Model Integration (CMMI): Capability Maturity Model Integration (CMMI) is a process improvement approach that provides organizations with the essential elements of effective processes. It can be used to guide process improvement across a project, a division, or an entire organization. CMMI helps integrate traditionally separate organizational functions, set process improvement goals and priorities, provide guidance for quality processes, and provide a point of reference for appraising current processes.

Seen by many as the successor to the CMM, CMMI aims to improve the usability of maturity models by integrating many different models into one framework.

Certification: The process of confirming that a system or component complies with its specified requirements and is acceptable for operational use.

Chow's Coverage Metrics: See N-Switch Coverage.

Code Complete: A phase of development where functionality is implemented in its entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.

Code Coverage: A measure used in software testing. It describes the degree to which the source code of a program has been tested. It is a form of testing that looks at the code directly and as such comes under the heading of White Box Testing.

To measure how well a program has been tested, a number of coverage criteria are used (illustrated in the sketch after this list), the main ones being:
  • Functional Coverage - has each function in the program been tested?
  • Statement Coverage - has each line of the source code been tested?
  • Condition Coverage - has each evaluation point (i.e. a true/false decision) been tested?
  • Path Coverage - has every possible route through a given part of the code been executed?
  • Entry/exit Coverage - has every possible call and return of the function been tested?
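
As a rough sketch, the hypothetical function and tests below (assumptions, not from the source) show how the same small program needs different test sets to satisfy different criteria:

```python
# Hypothetical function under test: one decision built from two conditions.
def classify(a, b):
    result = "none"
    if a > 0 and b > 0:
        result = "both positive"
    return result

# Statement coverage: classify(1, 1) alone executes every line.
# Condition coverage: each condition must evaluate both TRUE and FALSE,
#   e.g. classify(1, 1), classify(-1, 1), classify(1, -1).
# Path coverage: both routes through the decision, e.g. classify(1, 1) and classify(-1, -1).
assert classify(1, 1) == "both positive"
assert classify(-1, 1) == "none"
assert classify(1, -1) == "none"
assert classify(-1, -1) == "none"
```
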
Code-Based Testing: The principle of structural code-based testing is to have each and every statement in the program executed at least once during the test. Based on the premise that one cannot have confidence in a section of code unless it has been exercised by tests, structural code-based testing attempts to test all reachable elements in the software within cost and time constraints. The testing process begins by identifying areas in the program not being exercised by the current set of test cases, followed by creating additional test cases to increase the coverage.

Code-Free Testing: Next-generation software testing technique from Original Software which does not require learning a complicated scripting language. Instead, a simple point-and-click interface is used to significantly simplify the process of test creation. See TestDrive-Gold.

Code Inspection: A formal testing technique where the programmer reviews source code with a group who ask questions, analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.

Code Walkthrough: A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.

Coding: The generation of source code.

Compatibility Testing: The process of testing to understand if software is compatible with other elements of a system with which it should operate, e.g. browsers, Operating Systems, or hardware.

Complete Path Testing: See Exhaustive Testing.

Component: A minimal software item for which a separate specification is available.

Component Testing: The testing of individual software components.

Component Specification: A description of a component's function in terms of its output values for specified input values under specified preconditions.

Computation Data Use: A data use not in a condition. Also called a C-use.
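
A minimal illustration (hypothetical code, not from the source) of the difference between a computation data use and a predicate data use of the same variable:

```python
def example(y):
    x = y + 1        # computation data use (c-use): y feeds a calculation
    if y > 0:        # predicate data use (p-use): y appears in a decision
        return x
    return 0

print(example(3))    # prints 4
```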

Concurrent Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. See Load Testing.
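
A minimal sketch of the idea, assuming a hypothetical Counter component and arbitrary thread counts: several simulated users exercise the same code at once, and the final state is compared with what serial execution would produce:

```python
import threading

# Hypothetical shared component under test.
class Counter:
    def __init__(self):
        self.value = 0

    def increment(self):
        current = self.value       # unsynchronized read-modify-write,
        self.value = current + 1   # the kind of code concurrent tests target

def concurrent_test(users=8, operations=1000):
    counter = Counter()
    workers = [threading.Thread(target=lambda: [counter.increment() for _ in range(operations)])
               for _ in range(users)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    # Lost updates show up as a final value below the serial expectation.
    print(f"expected={users * operations} actual={counter.value}")

concurrent_test()
```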

Condition: A Boolean expression containing no Boolean operators. For instance, A < B is a condition, but "A and B" is not.

Condition Coverage: See Branch Condition Coverage.

Condition Outcome: The evaluation of a condition to TRUE or FALSE.
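
A small illustration with a hypothetical function: the decision below contains two conditions, a < b and b < c, and each condition can take a TRUE or FALSE outcome:

```python
def in_order(a, b, c):
    # "a < b" and "b < c" are conditions; the combined test is the decision.
    if a < b and b < c:
        return True
    return False

print(in_order(1, 2, 3))   # a < b TRUE,  b < c TRUE
print(in_order(3, 2, 1))   # a < b FALSE (b < c not evaluated due to short-circuiting)
print(in_order(1, 2, 0))   # a < b TRUE,  b < c FALSE
```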

Conformance Criterion: Some method of judging whether or not the component's action on a particular specified input value conforms to the specification.

Conformance Testing: The process of testing to determine whether a system meets some specified standard. To aid in this, many Test Procedures and test setups have been developed, either by the standard's maintainers or by external organizations, specifically for testing conformance to standards.

Conformance testing is often performed by external organizations, sometimes the standards body itself, to give greater assurance of compliance. Products tested in such a manner are then advertised as being certified by that external organization as complying with the standard.

Context Driven Testing: The context-driven school of software testing is similar to Agile Testing in that it advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.

Control Flow: An abstract representation of all possible sequences of events in a program's execution.

Control Flow Graph: The diagrammatic representation of the possible alternative control flow paths through a component.
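
As a minimal sketch (the component is hypothetical), the graph can be represented as an adjacency list whose nodes are statements or branch points and whose edges are possible transfers of control:

```python
# Hypothetical component.
def absolute(x):
    if x < 0:        # node 1: decision
        x = -x       # node 2: negation
    return x         # node 3: exit

# Its control flow graph as an adjacency list.
control_flow_graph = {
    1: [2, 3],   # the decision leads to node 2 (true branch) or node 3 (false branch)
    2: [3],      # the negation leads to the return
    3: [],       # exit node
}

# The two alternative control flow paths are 1 -> 2 -> 3 and 1 -> 3.
```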

Control Flow Path: See Path.

Conversion Testing: Testing of programs or procedures used to convert data from existing systems for use in replacement systems.

Correctness: The degree to which software conforms to its Specification.

Coverage: The degree, expressed as a percentage, to which a specified coverage item has been tested.

Coverage Item: An entity or property used as a basis for testing.

Cyclomatic Complexity: A software metric (measurement). It was developed by Thomas McCabe and is used to measure the complexity of a program. It directly measures the number of linearly independent paths through a program's source code.
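
As a minimal sketch, McCabe's formula M = E - N + 2P (E edges, N nodes, P connected components) can be applied to a hypothetical control flow graph with one decision:

```python
# McCabe's formula: M = E - N + 2P.
def cyclomatic_complexity(edges, nodes, components=1):
    return edges - nodes + 2 * components

# Control flow graph for "if x < 0: x = -x; return x":
# nodes 1 (decision), 2 (negation), 3 (return); edges 1->2, 1->3, 2->3.
print(cyclomatic_complexity(edges=3, nodes=3))   # 2 linearly independent paths
```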
