Thursday, February 28, 2013

L

LCSAJ: A Linear Code Sequence And Jump, consisting of the following three items (conventionally identified by line numbers in a source code listing): the start of the linear sequence of executable statements, the end of the linear sequence, and the target line to which control flow is transferred at the end of the linear sequence.
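
As a minimal sketch, consider this hypothetical Python fragment, with line numbers added in comments purely for the example:

    def classify(n):        # line 1
        total = 0           # line 2
        if n < 0:           # line 3 - control jumps to line 6 when the condition is false
            total = -n      # line 4
            return total    # line 5 - control jumps out of the function
        total = n           # line 6
        return total        # line 7

One LCSAJ here is (start = line 1, end = line 3, jump target = line 6): lines 1-3 execute as a linear sequence and control then jumps to line 6 when the condition is false. Another is (1, 5, exit), taken when the condition is true.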

LCSAJ Coverage: The percentage of LCSAJs of a component which are exercised by a test case suite.

LCSAJ Testing: A test case design technique for a component in which test cases are designed to execute LCSAJs.

Logic-Coverage Testing: Sometimes referred to as Path Testing, logic-coverage testing attempts to expose software defects by exercising a unique combination of the program's statements known as a Path.

Load Testing: The process of creating demand on a system or device and measuring its response. Load testing generally refers to the practice of modeling the expected usage of a software program by simulating multiple users accessing the program's services concurrently. As such, this testing is most relevant for multi-user systems, often ones built using a client/server model, such as web servers. However, other types of software systems can be load tested also. For example, a word processor or graphics editor can be forced to read an extremely large document; or a financial package can be forced to generate a report based on several years' worth of data. The most accurate load testing occurs with actual, rather than theoretical, results. See also Concurrent Testing, Performance Testing, Reliability Testing, and Volume Testing.
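
As a minimal sketch (the URL and user count below are purely hypothetical), concurrent usage can be simulated with a thread pool while response times are recorded:

    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = "http://localhost:8080/"   # hypothetical service under test
    USERS = 50                       # simulated concurrent users

    def one_request(_):
        start = time.perf_counter()
        with urlopen(URL) as response:   # issue one request and read the reply
            response.read()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=USERS) as pool:
        timings = list(pool.map(one_request, range(USERS)))

    print(f"avg {sum(timings)/len(timings):.3f}s, worst {max(timings):.3f}s")

A real load test would ramp users up gradually and model realistic think times; this only shows the basic shape.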

Localization Testing: This term refers to adapting software for a specific locality. This test is based on the results of globalization testing, which verifies the functional support for that particular culture/locale. Localization testing can be executed only on the localized version of a product.

The test effort during localization testing focuses on:
  • Areas affected by localization, such as UI and content
  • Culture/locale-specific, language-specific, and region-specific areas.
In addition, localization testing should include:
  • Basic functionality tests
  • Setup and upgrade tests run in the localized environment
  • Application and hardware compatibility tests planned according to the product's target region.
Log: A chronological record of relevant details about the execution of tests.

Loop Testing: Loop testing is the testing of a resource or resources multiple times under program control.

K

KBS (Knowledge Based System): A domain specific knowledge base combined with an inference engine that processes knowledge encoded in the knowledge base to respond to a user's request for advice.

Key Performance Indicator: Quantifiable measurements against which specific performance criteria can be set.

Keyword Driven Testing: An approach to test script writing, aimed at code-based automation tools, that separates much of the programming work from the actual test steps. The result is that the test steps can be designed earlier and the code base is often easier to read and maintain.
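
A minimal sketch of the idea, using hypothetical keywords and a hypothetical application state; real frameworks usually read the steps from a spreadsheet or table rather than a Python list:

    # Keyword implementations: the programming work, kept separate from the steps.
    def open_app(state, name):
        state["app"] = name

    def enter_text(state, field, value):
        state[field] = value

    def verify_equals(state, field, expected):
        assert state.get(field) == expected, f"{field!r} != {expected!r}"

    KEYWORDS = {"open_app": open_app, "enter_text": enter_text, "verify_equals": verify_equals}

    # The test itself is just data; it could live in a spreadsheet or CSV file.
    TEST_STEPS = [
        ("open_app", "calculator"),
        ("enter_text", "display", "2+2"),
        ("verify_equals", "display", "2+2"),
    ]

    state = {}
    for keyword, *args in TEST_STEPS:
        KEYWORDS[keyword](state, *args)   # the driver dispatches each step
    print("test passed")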

Knowledge Engineering: The process of codifying an expert's knowledge in a form that can be accessed through an expert system.

Known Error: An incident or problem for which the root cause is known and for which a temporary Work-around or a permanent alternative has been identified.

J

I

ITIL (IT Infrastructure Library): A consistent and comprehensive documentation of best practice for IT Service Management, ITIL consists of a series of books giving guidance on the provision of quality IT services, and on the accommodation and environmental facilities needed to support IT.

Implementation Testing: See Installation Testing.

Incremental Testing: Partial testing of an incomplete product. The goal of incremental testing is to provide an early feedback to software developers.

Independence: Separation of responsibilities which ensures the accomplishment of objective evaluation.

Independent Test Group (ITG): A group of people whose primary responsibility is to conduct software testing for other companies.

Infeasible Path: A path which cannot be exercised by any set of possible input values.

Inference: Forming a conclusion from existing facts.

Inference Engine: Software that provides the reasoning mechanism in an expert system. In a rule-based expert system, it typically implements forward chaining and backward chaining strategies.

Infrastructure: The organizational artifacts needed to perform testing, consisting of test environments, Automated Test Tools, office environment and procedures.

Inheritance: The ability of a class to pass on characteristics and data to its descendants.

Input: A variable (whether stored within a component or outside it) that is read by the component.

Input Domain: The set of all possible inputs.

Inspection: A group review quality improvement process for written material. It consists of two aspects: product (document itself) improvement and process improvement.

Installability: The ability of a software Component or system to be installed on a defined target platform allowing it to be run as required. Installation includes both a new installation and an upgrade.

Installability Testing: Testing whether the software or system installation being tested meets predefined installation requirements.

Installation Guide: Supplies instructions on any suitable media, which guides the installer through the installation process. This may be a manual guide, step-by-step procedure, installation wizard, or any other similar process description.

Installation Testing: Confirms that the application under test recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power-out conditions.
Such testing focuses on what customers will need to do to install and set up the new software successfully and is typically done by the software testing engineer in conjunction with the configuration manager. Implementation testing is usually defined as testing which places a compiled version of code into the testing or pre-production environment, from which it may or may not progress into production. This generally takes place outside of the software development environment to limit code corruption from other future releases which may reside on the development network.

Installation Wizard: Supplies software on any suitable media, which leads the installer through the installation process. It shall normally run the installation process, provide feedback on installation outcomes and prompt for options.

Instrumentation: The insertion of additional code into the program in order to collect information about program behavior during program execution.
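
A minimal sketch of hand-inserted instrumentation (the probe names are invented for the example); coverage tools insert equivalent probes automatically:

    from collections import Counter

    hits = Counter()   # execution counts collected by the inserted probes

    def absolute(x):
        hits["absolute:entry"] += 1                 # probe: function entered
        if x < 0:
            hits["absolute:negative-branch"] += 1   # probe: branch taken
            return -x
        return x

    for value in (-3, 5, -1):
        absolute(value)

    print(dict(hits))   # e.g. {'absolute:entry': 3, 'absolute:negative-branch': 2}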

Integration: The process of combining components into larger groups or assemblies.

Integration Testing: Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.

Interface Testing: Integration testing where the interfaces between system components are tested.

Isolation Testing: Component testing of individual components in isolation from surrounding components.

H

Harness: A test environment comprising the stubs and drivers needed to conduct a test.
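
A minimal sketch, with a hypothetical checkout component, a stub standing in for its payment gateway, and a driver that feeds inputs and checks outputs:

    # Component under test: depends on a payment gateway we do not want to call for real.
    def checkout(total, gateway):
        if total <= 0:
            return "rejected"
        return "paid" if gateway.charge(total) else "declined"

    # Stub: a minimal stand-in for the real gateway.
    class GatewayStub:
        def __init__(self, succeed):
            self.succeed = succeed
        def charge(self, amount):
            return self.succeed

    # Driver: exercises the component and checks the results.
    def run_harness():
        assert checkout(10.0, GatewayStub(succeed=True)) == "paid"
        assert checkout(10.0, GatewayStub(succeed=False)) == "declined"
        assert checkout(0.0, GatewayStub(succeed=True)) == "rejected"
        print("harness: all checks passed")

    run_harness()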

Heuristics: The informal, judgmental knowledge of an application area that constitutes the "rules of good judgement" in the field. Heuristics also encompass the knowledge of how to solve problems efficiently and effectively, how to plan steps in solving a complex problem, how to improve performance, etc.

High Order Tests: High-Order testing checks that the software meets customer requirements and that the software, along with other systems elements, meets the functional, behavioral, and performance requirements. It uses Black-Box techniques and requires an outsider perspective. Therefore, organizations often use an Independent Testing Group (ITG) or the users themselves to perform high-order testing.

High-order testing includes Validation Testing, System Testing (focusing on aspects such as reliability, security, stress, usability, and performance), and Acceptance Testing (includes alpha and beta testing). The testing strategy specifies the type of high-order testing that the project requires. This depends on the aspects that are important in a particular system from the user perspective.

Tuesday, February 26, 2013

G

Genetic Algorithms: Search procedures that use the mechanics of natural selection and natural genetics. They use evolutionary techniques, based on function optimization and artificial intelligence, to develop a solution.

Glass Box Testing: A form of testing in which the tester can examine the design documents and the code as well as analyze and possibly manipulate the internal state of the entity being tested. Glass box testing involves examining the design documents and the code, as well as observing at run time the steps taken by algorithms and their internal data. See Structural Test Case Design.

Goal: The solution that the program or project is trying to reach.

Gorilla Testing: An intense round of testing, quite often redirecting all available resources to the activity. The idea here is to test as much of the application as possible in as short a period of time as possible.

Graphical User Interface (GUI): A type of display format that enables the user to choose commands, start programs, and see lists of files and other options by pointing to pictorial representations (icons) and lists of menu items on the screen.

Gray (Grey) Box Testing: A testing technique that uses a combination of Black Box Testing and White Box Testing. Gray box testing is not black box testing because the tester does know some of the internal workings of the software under test. In gray box testing, the tester applies a limited number of test cases to the internal workings of the software under test. In the remaining part of the gray box testing, one takes a black box approach in applying inputs to the software under test and observing the outputs.

F

Failure: Non-performance or deviation of the software from its expected delivery or service.

Fault: A manifestation of an error in software. Also known as a Bug.

Feasible Path: A path for which there exists a set of input values and execution conditions which causes it to be executed.

Feature Testing: A method of testing which concentrates on testing one feature at a time.

Firing a Rule: A rule fires when the "if" part (premise) is proven to be true. If the rule incorporates an "else" component, the rule also fires when the "if" part is proven to be false.

Fit For Purpose Testing: Validation carried out to demonstrate that the delivered system can be used to carry out the tasks for which it was designed and acquired.

Forward Chaining: Applying a set of previously determined facts to the rules in a Knowledge Base to see if any of them will fire.
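
A minimal sketch of the idea, with a hypothetical two-rule diagnostic knowledge base; facts are asserted and rules keep firing until nothing new can be concluded:

    facts = {"engine_cranks", "no_fuel_in_line"}   # previously determined facts

    # Each rule is (premises, conclusion); the rules here are invented for the example.
    rules = [
        ({"engine_cranks", "no_fuel_in_line"}, "fuel_pump_suspect"),
        ({"fuel_pump_suspect"}, "recommend_replacing_fuel_pump"),
    ]

    changed = True
    while changed:                      # keep applying the facts to the rules
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)   # the rule fires, adding a new fact
                changed = True

    print(facts)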

Full Release: All components of the release unit that are built, tested, distributed and implemented together. See also Delta Release.

Functional Decomposition: A technique used during planning, analysis and design; creates a functional hierarchy for the software. Functional Decomposition broadly relates to the process of resolving a functional relationship into its constituent parts in such a way that the original function can be reconstructed (i.e., recomposed) from those parts by function composition. In general, this process is undertaken either to gain insight into the identity of the constituent components (which may reflect individual physical processes of interest, for example), or to obtain a compressed representation of the global function, a task which is feasible only when the constituent processes possess a certain level of modularity (i.e. independence or non-interaction).
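
As a small illustrative sketch (the functions are invented for the example), a word-count function decomposed into constituent parts and recomposed by function composition:

    # Constituent parts of the overall function...
    def normalize(text):
        return text.strip().lower()

    def tokenize(text):
        return text.split()

    def count(tokens):
        return len(tokens)

    # ...recomposed into the original "global" function by composition.
    def word_count(text):
        return count(tokenize(normalize(text)))

    assert word_count("  Functional  Decomposition in action  ") == 4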

Functional Requirements: Define the internal workings of the software: that is, the calculations, technical details, data manipulation and processing and other specific functionality that show how the use cases are to be satisfied. They are supported by non-functional requirements, which impose constraints on the design or implementation (such as performance requirements, security, quality standards, or design constraints).

Functional Specification: A document that describes in detail the characteristics of the product with regards to its intended features.

Functional Testing: See also Black Box Testing.
  • Testing the features and operational behavior of a product to ensure they correspond to its specifications.
  • Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.

Thursday, February 21, 2013

E

Emulator: A device that duplicates (provides an emulation of) the functions of one system using a different system, so that the second system behaves like (and appears to be) the first system. This focus on exact reproduction of external behavior is in contrast to simulation, which can concern an abstract model of the system being simulated, often considering internal state.

Endurance Testing: Checks for memory leaks or other problems that may occur with prolonged execution.

End-to-End Testing: Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Entry Point: The first executable statement within a component.

Equivalence Class: A mathematical concept, an equivalence class is a subset of a given set induced by an equivalence relation on that given set. (If the given set is empty, then the equivalence relation is empty, and there are no equivalence classes; otherwise, the equivalence relation and its concomitant equivalence classes are all non-empty). Elements of an equivalence class are said to be equivalent, under the equivalence relation, to all the other elements of the same equivalence class.

Equivalence Partition: See Equivalence Class.

Equivalence Partitioning: Leverages the concept of "classes" of input conditions. A "class" of input could be "City Name", where testing one or several city names could be deemed equivalent to testing all city names. In other words, each instance of a class in a test covers a large set of other possible tests.
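
A minimal sketch with a hypothetical ticket-pricing rule: each equivalence class is represented by a single value, on the assumption that any other member of the class would behave the same way:

    # Hypothetical rule under test: price depends on age band.
    def ticket_price(age):
        if age < 0 or age > 120:
            raise ValueError("invalid age")
        if age < 18:
            return 5
        if age < 65:
            return 10
        return 7

    # One representative value per equivalence class stands in for the whole class.
    partitions = {
        "invalid_low":  (-3, ValueError),
        "child":        (9, 5),
        "adult":        (30, 10),
        "senior":       (80, 7),
        "invalid_high": (200, ValueError),
    }

    for name, (age, expected) in partitions.items():
        try:
            result = ticket_price(age)
        except ValueError:
            result = ValueError
        assert result == expected, f"class {name} failed"
    print("one test per partition passed")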

Equivalence Partition Coverage: The percentage of equivalence classes generated for the component, which have been tested.

Equivalence Partition Testing: A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.

Error: A mistake that produces an incorrect result.

Error Guessing: Error guessing involves making an itemized list of the errors expected to occur in a particular area of the system and then designing a set of test cases to check for these expected errors. Error guessing is more testing art than testing science but can be very effective given a tester familiar with the history of the system.

Error Seeding: The process of injecting a known number of "dummy" defects into the program and then checking how many of them are found by various inspections and testing. If, for example, 60% of them are found, the presumption is that 60% of the other defects have been found as well. See Debugging.
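
A small worked sketch of the estimate implied above (the numbers are invented): if the detection rate on seeded defects is 60%, the real defects found so far are assumed to be 60% of the real total.

    seeded_total = 20          # "dummy" defects injected into the program
    seeded_found = 12          # seeded defects found by testing -> 60% detection rate
    real_found = 30            # genuine defects found by the same testing

    detection_rate = seeded_found / seeded_total              # 0.6
    estimated_real_total = real_found / detection_rate        # 50.0
    estimated_remaining = estimated_real_total - real_found   # 20.0

    print(detection_rate, estimated_real_total, estimated_remaining)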

Evaluation Report: A document produced at the end of the test process summarizing all testing activities and results. It also contains an evaluation of the test process and lessons learned.

Executable Statement: A statement which, when compiled, is translated into object code, which will be executed procedurally when the program is running and may perform an action on program data.

Exercised: A program element is exercised by a test case when the input value causes the execution of that element, such as a statement, branch, or other structural element.

Exhaustive Testing: Testing which covers all combinations of input values and preconditions for an element of the software under test.

Exit Point: The last executable statement within a component.

Expected Outcome: See Predicted Outcome.

Expert System: A domain specific knowledge base combined with an inference engine that processes knowledge encoded in the knowledge base to respond to a user's request for advice.

Expertise: Specialized domain knowledge, skills, tricks, shortcuts and rules-of-thumb that provide an ability to rapidly and effectively solve problems in the problem domain.

Thursday, February 14, 2013

D

Data Case: Data relationship model simplified for data extraction and reduction purposes in order to create test data.

Data Definition: An executable statement where a variable is assigned a value.

Data Definition C-use Coverage: The percentage of data definition C-use pairs in a component that are exercised by a test case suite.

Data Definition C-use Pair: A data definition and computation data use, where the data use uses the value defined in the data definition.

Data Definition P-use Coverage: The percentage of data definition P-use pairs in a component that are exercised by a test case suite.

Data Definition P-use Pair: A data definition and predicate data use, where the data use uses the value defined in the data definition.
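
A minimal sketch (hypothetical function) showing one data definition of wage together with a P-use and two C-uses, which is how the pairs above arise:

    def pay(hours, rate):
        wage = hours * rate     # data definition of `wage`
        if wage > 1000:         # P-use: `wage` read inside a predicate (decision)
            wage = wage * 0.9   # C-use: `wage` read in a computation (and redefined)
        return wage             # C-use of whichever definition reaches this point

The definition on the first line paired with the predicate use forms a data definition P-use pair; paired with the computation use on the next line it forms a data definition C-use pair.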

Data Definition-use Coverage: The percentage of data definition-use pairs in a component that are exercised by a test case suite.

Data Definition-use Testing: A test case design technique for a component in which test cases are designed to execute data definition-use pairs.

Data Dictionary: A database that contains definitions of all data items defined during analysis.

Data Driven Testing: A framework where test input and output values are read from data files and are loaded into variables in captured or manually coded scripts. In this framework, variables are used for both input values and output values. Navigation through the program, reading of the data files, and logging of test status and information are all coded in the test script.

This is similar to Keyword-Driven Testing in that the test case is contained in the data file and not in the script; the script is just a "driver", or delivery mechanism, for the data. Unlike in table-driven testing, though, the navigation data isn't contained in the table structure. In data-driven testing, only test data is contained in the data files.
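
A minimal sketch, using an in-memory stand-in for a data file and a hypothetical attempt_login function; in a real framework the CSV would be an external file and the driver script would also handle navigation and logging:

    import csv
    import io

    # Stands in for an external data file (e.g. login_cases.csv); only data lives here.
    DATA_FILE = io.StringIO(
        "username,password,expected\n"
        "alice,correct-horse,accepted\n"
        "alice,wrong,rejected\n"
        ",anything,rejected\n"
    )

    def attempt_login(username, password):   # hypothetical function under test
        if username == "alice" and password == "correct-horse":
            return "accepted"
        return "rejected"

    # The script is only a driver; the test cases themselves live in the data.
    for row in csv.DictReader(DATA_FILE):
        actual = attempt_login(row["username"], row["password"])
        status = "PASS" if actual == row["expected"] else "FAIL"
        print(f"{status}: {row['username']!r}/{row['password']!r} -> {actual}")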

Data Flow Diagram: A modeling notation that represents a functional decomposition of a system.

Data Flow Coverage: Test coverage measure based on variable usage within the code. Examples are data definition-use coverage, data definition P-use coverage, data definition C-use coverage, etc.

Data Flow Testing: Data-flow testing looks at the lifecycle of a particular piece of data (i.e. a variable) in an application. By looking for patterns of data usage, risky areas of code can be found and more test cases can be applied.

Data Protection: Technique in which the condition of the underlying database is synchronized with the test scenario so that differences can be attributed to logical changes. This technique also automatically resets the database after tests - allowing for a constant data set if a test is re-run. See TestBench.

Data Protection Act: UK Legislation surrounding the security, use and access of an individual's information. May impact the use of live data used for testing purposes.

Data Use: An executable statement where the value of a variable is accessed.

Database Testing: The process of testing the functionality, security, and integrity of the database and the data held within.

Functionality of the database is one of the most critical aspects of an application's quality; problems with the database could lead to data loss or security breaches, and  may put a company at legal risk depending on the type of data being stored. For more information on database testing see TestBench.

Debugging: A methodical process of finding and reducing the number of bugs, or defects, in a computer program or a piece of electronic hardware thus making it behave as expected. Debugging tends to be harder when various subsystems are tightly coupled, as changes in one may cause bugs to emerge in another.

Decision: A program point at which the control flow has two or more alternative routes.

Decision Condition: A condition held within a decision.

Decision Coverage: The percentage of decision outcomes that have been exercised by a test case suite.

Decision Outcome: The result of a decision.

Defect: Nonconformance to requirements or functional/program specification.

Delta Release: A delta, or partial, release is one that includes only those areas within the release unit that have actually changed or are new since the last full or delta release. For example, if the release unit is the program, a delta release contains only those modules that have changed, or are new, since the last full release of the program or the last delta release of certain modules.

Dependency Testing: Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

Depth Testing: A test that exercises a feature of a product in full detail.

Desk checking: The testing of software by the manual simulation of its execution.

Design-Based Testing: Designing tests based on objectives derived from the architectural or detail design of the software (e.g., tests that execute specific invocation paths or probe the worst case behavior of algorithms).

Dirty Testing: Testing which demonstrates that the system under test does not work. (Also known as Negative Testing)

Documentation Testing: Testing concerned with the accuracy of documentation.

Domain: The set from which values are selected.

Domain Expert: A person who has significant knowledge in a specific domain.

Domain Testing: Domain testing is the most frequently described test technique. The basic notion is that you take the huge space of possible tests of an individual variable and subdivide it into subsets that are (in some way) equivalent. Then you test a representative from each subset.

Downtime: Total period that a service or component is not operational.

Dynamic Testing: Testing of the dynamic behavior of code. Dynamic testing involves working with the software, giving input values and checking if the output is as expected.

Dynamic Analysis: The examination of the physical response from the system to variables that are not constant and change with time.

Friday, February 8, 2013

C

Capture/Playback Tool: A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time.

Capture/Replay Tool: See Capture/Playback Tool.

CAST: Acronym for computer-aided software testing. Automated Software Testing in one or more phases of the software life-cycle. See also ASQ.

Cause-Effect Graph: A graphical representation of inputs or stimuli (causes) with their associated outputs (effects), which can be used to design test cases.

Capability Maturity Model for Software (CMM): The CMM is a process model based on software best-practices effective in large-scale, multi-person projects. The CMM has been used to assess the maturity levels of organizational areas as diverse as software engineering, system engineering, project management, risk management, system acquisition, information technology (IT) or personnel management, against a scale of five maturity levels, namely: Initial, Repeatable, Defined, Managed and Optimized.

Capability Maturity Model Integration (CMMI): Capability Maturity Model Integration (CMMI) is a process improvement approach that provides organizations with the essential elements of effective processes. It can be used to guide process improvement across a project, a division, or an entire organization. CMMI helps integrate traditionally separate organizational functions, set process improvement goals and priorities, provide guidance for quality processes, and provide a point of reference for appraising current processes.

Seen by many as the successor to the CMM, the goal of the CMMI project is to improve the usability of maturity models by integrating many different models into one framework.

Certification: The process of confirming that a system or component complies with its specified requirements and is acceptable for operational use.

Chow's Coverage Metrics: See N-Switch Coverage.

Code Complete: A phase of development where functionality is implemented in its entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.

Code Coverage: A measure used in software testing. It describes the degree to which the source code of a program has been tested. It is a form of testing that looks at the code directly and as such comes under the heading of White Box Testing.

To measure how well a program has been tested, there are a number of coverage criteria - the main ones being (two of these are contrasted in the short sketch after this list):
  • Functional Coverage - has each function in the program been tested?
  • Statement Coverage - has each line of the source code been tested?
  • Condition Coverage - has each evaluation point (i.e. a true/false decision) been tested?
  • Path Coverage - has every possible route through a given part of the code been executed?
  • Entry/exit Coverage - has every possible call and return of the function been tested?
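As referenced above, a minimal sketch (with a hypothetical function) contrasting statement coverage and condition/decision coverage:

    def apply_discount(price, is_member):
        discount = 0
        if is_member:
            discount = 10
        return price - discount

    # This single test executes every statement (100% statement coverage)...
    assert apply_discount(100, True) == 90

    # ...but only the True outcome of the decision is taken, so decision coverage
    # stays at 50% until a second case exercises the False outcome:
    assert apply_discount(100, False) == 100
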
Code-Based Testing: The principle of structural code based testing is to have each and every statement in the program executed at least once during the test. Based on the premise that one cannot have confidence in a section of code unless it has been exercised by tests, structural code based testing attempts to test all reachable elements in the software under the cost and time constraints. The testing process begins by first identifying areas in the program not being exercised by the current set of test cases, followed by creating additional test cases to increase the coverage.

Code-Free Testing: Next generation software testing technique from Original Software which does not require complicated scripting language to learn. Instead, a simple point and click interface is used to significantly simplify the process of test creation. See TestDrive-Gold.

Code Inspection: A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.

Code Walkthrough: A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.

Coding: The generation of source code.

Compatibility Testing: The process of testing to understand if software is compatible with other elements of a system with which it should operate, e.g. browsers, Operating Systems, or hardware.

Complete Path Testing: See Exhaustive Testing.

Component: A minimal software item for which a separate specification is available.

Component Testing: The testing of individual software components.

Component Specification: A description of a component's function in terms of its output values for specified input values under specified preconditions.

Computation Data Use: A data use not in a condition. Also called C-use.

Concurrent Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. See Load Testing.

Condition: A Boolean expression containing no Boolean operators. For instance, A<B is a condition but A and B is not.

Condition Coverage: See Branch Condition Coverage.

Condition Outcome: The evaluation of a condition to TRUE or FALSE.

Conformance Criterion: Some method of judging whether or not the component's action on a particular specified input value conforms to the specification.

Conformance Testing: The process of testing to determine whether a system meets some specified standard. To aid in this, many test procedures and test setups have been developed, either by the standard's maintainers or external organizations, specifically for testing conformance to standards.

Conformance testing is often performed by external organizations, sometimes the standards body itself, to give greater guarantees of compliance. Products tested in such a manner are then advertised as being certified by that external organization as complying with the standard.

Context Driven Testing: The context-driven school of software testing is similar to Agile Testing in that it advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.

Control Flow: An abstract representation of all possible sequences of events in a program's execution.

Control Flow Graph: The diagrammatic representation of the possible alternative control flow paths through a component.

Control Flow Path: See Path.

Conversion Testing: Testing of programs or procedures used to convert data from existing systems for use in replacement systems.

Correctness: The degree to which software conforms to its Specification.

Coverage: The degree, expressed as a percentage, to which a specified coverage item has been tested.

Coverage Item: An entity or property used as a basis for testing.

Cyclomatic Complexity: A software metric (measurement). It was developed by Thomas McCabe and is used to measure the complexity of a program. It directly measures the number of linearly independent paths through a program's source code.
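
As a small worked sketch (hypothetical function), using the common shortcut that for a single-entry, single-exit routine the cyclomatic complexity equals the number of decisions plus one:

    def triage(temp, has_rash):
        if temp > 39:          # decision 1
            return "urgent"
        if has_rash:           # decision 2
            return "see a doctor"
        return "rest"

    # 2 decisions + 1 = cyclomatic complexity of 3, i.e. 3 linearly independent paths:
    #   temp > 39;  temp <= 39 and has_rash;  temp <= 39 and not has_rash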

Wednesday, February 6, 2013

B

Backus-Naur Form (BNF): A metasyntax used to express context-free grammars: that is, a formal way to describe formal languages.
BNF is widely used as a notation for the grammars of computer programming languages, instruction sets and communication protocols, as well as a notation for representing parts of natural language grammars. Many textbooks for programming language theory and/or semantics document the programming language in BNF.

Basic Block: A sequence of one or more consecutive, executable statements containing no branches.

Basis Path Testing: A White Box Test case design technique that fulfills the requirements of branch testing and also tests all of the independent paths that could be used to construct any arbitrary path through the computer program.

Basis Test Set: A set of test cases derived from Basis Path Testing.

Baseline: The point at which some deliverable produced during the software engineering process is put under formal change control.

Bebugging: A popular software engineering technique used to measure test coverage. Known bugs are randomly added to a program source code and the programmer is tasked to find them. The percentage of the known bugs not found gives an indication of the real bugs that remain.

Behavior: The combination of input values and preconditions along with the required response for a function of a system. The full specification of a function would normally comprise one or more behaviors.

Benchmark Testing: Benchmark testing is a normal part of the application development life cycle. It is a team effort that involves both application developers and database administrators (DBAs), and should be performed against your application in order to determine current performance and improve it. If the application code has been written as efficiently as possible, additional performance gains might be realized from tuning the database and database manager configuration parameters. You can even tune application parameters to meet the requirements of the application better.

You run different types of benchmark tests to discover specific kinds of information:
  • A transaction per second benchmark determines the throughput capabilities of the database manager under certain limited laboratory conditions.
  • An application benchmark tests the same throughput capabilities under conditions that are closer to production conditions.
Benchmarking is helpful in understanding how the database manager responds under varying conditions. You can create scenarios that test deadlock handling, utility performance, different methods of loading data, transaction rate characteristics as more users are added, and even the effect on the application of using a new release of the product.

Benchmark Testing Methods: Benchmark tests are based on a repeatable environment so that the same test run under the same conditions will yield results that you can legitimately compare.

You might begin benchmarking by running the test application in a normal environment. As you narrow down a performance problem, you can develop specialized test cases that limit the scope of the function that you are testing. The specialized test cases need not emulate an entire application to obtain valuable information. Start with simple measurements, and increase the complexity only when necessary.

Characteristics of good benchmarks or measurements include:
  • Tests are repeatable.
  • Each iteration of a test starts in the same system state.
  • No other functions or applications are active in the system unless the scenario includes some amount of other activity going on in the system.
  • The hardware and software used for benchmarking match your production environment.
For benchmarking, you create a scenario and then run applications in this scenario, capturing key information during each run. Capturing key information after each run is of primary importance in determining the changes that might improve performance of both the application and the database.

Beta Testing: Comes after Alpha Testing. Versions of the software, known as beta versions, are released to a limited audience outside of the company. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to increase the feedback field to a maximal number of future users.

Big-Bang Testing: An inappropriate approach to integration testing in which you take the entire integrated system and test it as a unit. Can work well on small systems but is not favorable for larger systems because it may be difficult to pinpoint the exact location of the defect when a failure occurs.

Binary Portability Testing: Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.

Black Box Testing: Testing without knowledge of the internal working of the item being tested. For example, when black box testing is applied to software engineering, the tester would only know the "legal" inputs and what the expected outputs should be, but not how the program actually arrives at those outputs. It is because of this that black box testing can be considered testing with respect to the specifications; no other knowledge of the program is necessary. For this reason, the tester and the programmer can be independent of one another, avoiding programmer bias toward his own work. For this testing, test groups are often used.

Advantages of Black Box Testing
  • More effective on larger units of code than Glass Box Testing.
  • Tester needs no knowledge of implementation.
  • Tester and programmer are independent of each other.
  • Tests are done from a user's point of view.
  • Will help to expose any ambiguities or inconsistencies in the specifications.
  • Test cases can be designed as soon as the specifications are complete.
Disadvantages of Black Box Testing
  • Only a small number of possible inputs can actually be tested; to test every possible input stream would take nearly forever.
  • Without clear and concise specifications, test cases are hard to design.
  • There may be unnecessary repetition of test inputs if the tester is not informed of test cases the programmer has already tried.
  • May leave many program paths untested.
  • Cannot be directed toward specific segments of code which may be very complex (and therefore more error prone).
  • Most testing related research has been directed toward glass box testing.
Block Matching: Automated matching logic applied to data and transaction driven websites to automatically detect blocks of related data. This enables repeating elements to be treated correctly in relation to other elements in the block without the need for special coding. See TestDrive-Gold.

Bottom Up Testing: An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components.

Boundary Testing: Tests focusing on the boundary or limits of the software being tested.

Boundary Value: An input value or output value which is on the boundary between equivalence classes, or an incremental distance either side of the boundary.

Boundary Value Analysis: In boundary value analysis, test cases are generated using the extremes of the input domain, e.g. maximum, minimum, just inside/outside boundaries, typical values, and error values.
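
A minimal sketch for a hypothetical validator that accepts quantities from 1 to 100 inclusive; the test values sit on the boundaries and just outside them:

    # Hypothetical rule under test: quantities from 1 to 100 inclusive are accepted.
    def valid_quantity(q):
        return 1 <= q <= 100

    # Boundary value analysis picks the extremes and the values just outside them.
    cases = {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}

    for value, expected in cases.items():
        assert valid_quantity(value) == expected, f"boundary case {value} failed"
    print("boundary cases passed")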

Boundary Value Coverage: The percentage of boundary values which have been exercised by a test case suite.

Branch: A conditional transfer of control from any statement to any other statement in a component, or an unconditional transfer of control from any statement to any other statement in the component except the next statement, or, when a component has more than one entry point, a transfer of control to an entry point of the component.

Branch Condition Coverage: The percentage of branch condition outcomes in every decision that have been tested.

Branch Condition Combination Coverage: The percentage of combinations of all branch condition outcomes in every decision that have been tested.

Branch Condition Combination Testing: A test case design technique in which test cases are designed to execute combinations of branch condition outcomes.

Branch Condition Testing: A technique in which test cases are designed to execute branch condition outcomes.

Branch Testing: A test case design technique for a component in which test cases are designed to execute branch outcomes.

Breadth Testing: A test suite that exercises the full functionality of a product but does not test features in detail.

Bug: A fault in a program which causes the program to perform in an unintended manner. See Fault.

A

Acceptance Testing: Formal testing conducted to enable a user, customer, or other authorized entity to determine whether to accept a system or component. Normally performed to validate that the software meets a set of agreed acceptance criteria.

Accessibility Testing: Verifying that a product is accessible to people with disabilities (visually impaired, hard of hearing, etc.).

Actual Outcome: The actions that are produced when the object is tested under specific conditions.

Ad Hoc Testing: Testing carried out in an unstructured and improvised fashion. Performed without clear expected results, ad hoc testing is most often used as a complement to other types of testing. See also Monkey Testing.

Alpha Testing: Simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to Beta Testing.

Arc Testing: See Branch Testing.

Agile Testing: Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design philosophy. In agile development, testing is integrated throughout the lifecycle, testing the software throughout its development. See also Test Driven Development.

Application Binary Interface (ABI): Describes the low-level interface between an application program and the operating system, between an application and its libraries, or between component parts of the application. An ABI differs from an application programming interface (API) in that an API defines the interface between source code and libraries, so that the same source code will compile on any system supporting that API, whereas an ABI allows compiled object code to function without changes on any system using a compatible ABI.

Application Development Lifecycle: The process flow during the various phases of the application development life cycle.

The Design Phase covers everything up to the point of starting development. Once all of the requirements have been gathered, analyzed, verified, and a design has been produced, we are ready to pass on the programming requirements to the application programmers.
The programmers take the design documents (programming requirements) and then proceed with the iterative process of coding, testing, revising, and testing again; this is the Development Phase.

After the programs have been tested by the programmers, they will be part of a series of formal user and system tests. These are used to verify usability and functionality from a user point of view, as well as to verify the functions of the application within a larger framework.

The final phase in the development life cycle is to go to production and become a steady state. As a prerequisite to going to production, the development team needs to provide documentation. This usually consists of user training, which familiarizes the users with the new application, and operational procedures documentation. The operational procedures documentation enables Operations to take over responsibility for running the application on an ongoing basis.

In production, the changes and enhancements are handled by a group (possibly the same programming group) that performs the maintenance. At this point in the life cycle of the application, changes are tightly controlled and must be rigorously tested before being implemented into production.

Application Programming Interface (API): An interface provided by operating systems or libraries to support requests for services to be made of them by computer programs.

Automated Software Quality (ASQ): The use of software tools, such as automated testing tools, to improve software quality.

Automated Software Testing: The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions and other test control and test reporting functions, without manual intervention.

Automated Testing Tools: Software tools used by development teams to automate and streamline their testing and quality assurance process.

Tuesday, February 5, 2013

What is Software Testing?

Software Testing:
------------------
There are many ways in which people explain what software testing means. I found a few of them, so I am listing them here...
* Software Testing is a process used to identify the correctness, completeness, and quality of developed computer software. It includes a set of activities conducted with the intent of finding errors in software so that they can be corrected before the product is released to the end users.
* To make it simple, software testing is an activity to check whether the actual results match the expected results and to ensure that the software system is defect free.
* Software testing is more than just error detection:
Testing software is operating the software under controlled conditions, to (1) Verify that it behaves "as specified"; (2) to Detect Errors; and (3) to Validate that what has been specified is what the user actually wanted.

1. Verification: Is the checking or testing of items, including software, for conformance and consistency by evaluating the results against pre-specified requirements.
2. Error Detection: Testing should intentionally attempt to make things go wrong to determine if things happen when they shouldn't or things don't happen when they should.
3. Validation: Looks at the system correctness - i.e., is the process of checking that what has been specified is what the user actually wanted.