Sanity Testing: Brief test of major functional elements of a piece of software to determine if it's basically operational.
Scalability Testing: Performance Testing focused on ensuring the application under test gracefully handles increases in workload.
Schedule: A scheme for the execution of test procedures. The test procedures are included in the test execution schedule in the order in which they are to be executed.
Scrambling: Data obfuscation routine to de-identify sensitive data in the test data environments to meet the requirements of the Data Protection Act and other legislation. See TestBench.
Scribe: The person who records each defect mentioned, and any suggestions for improvement, on a logging form during a review meeting. The scribe has to ensure that the logging form is understandable.
Script: See Test Script.
Security: Preservation of availability, integrity and confidentiality of information:
- Availability is ensuring that authorized users have access to information and associated assets when required.
- Integrity is safeguarding the accuracy and completeness of information and processing methods.
- Confidentiality is ensuring that information is accessible only to those authorized to have access.
Security Testing: Process to determine that an IS (Information System) protects data and maintains functionality as intended.
The six basic concepts that need to be covered by security testing are: confidentiality, integrity, authentication, authorization, availability and non-repudiation.
Confidentiality
A security measure which protects against the disclosure of information to parties other than the intended recipient(s). It is often ensured by encoding the information using a defined algorithm and some secret known only to the originator and the intended recipient(s) (a process known as cryptography), but that is by no means the only way of ensuring confidentiality.
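As a sketch of the encoding idea (a toy illustration only, not a real cipher; `toy_encrypt` and the sample secret are invented for this example), the routine below derives a keystream from a shared secret and XORs it with the message, so only a holder of the secret can invert the operation:

```python
import hashlib

def toy_encrypt(plaintext: bytes, secret: bytes) -> bytes:
    """Toy confidentiality sketch: XOR the message with a keystream
    derived from the shared secret. Illustration only -- NOT secure."""
    keystream = b""
    counter = 0
    while len(keystream) < len(plaintext):
        # Stretch the secret into as many keystream bytes as needed.
        keystream += hashlib.sha256(secret + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(p ^ k for p, k in zip(plaintext, keystream))

# XOR is its own inverse, so decryption reuses the same function.
message = b"meet at noon"
secret = b"shared-secret"
ciphertext = toy_encrypt(message, secret)
recovered = toy_encrypt(ciphertext, secret)
print(recovered == message)
```

Real systems use vetted algorithms such as AES rather than ad hoc constructions like this.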
Integrity
A measure intended to allow the receiver to determine that the information which it receives has not been altered in transit or by other than the originator of the information.
Integrity schemes often use some of the same underlying technologies as confidentiality schemes, but they usually involve adding additional information to a communication, to form the basis of an algorithmic check, rather than encoding all of the communication.
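A minimal sketch of such a check, using Python's standard `hmac` module (the key, messages, and `tag` helper are invented for the example): the sender transmits the message together with a short tag computed from it and a shared key, and the receiver recomputes the tag rather than decoding the whole communication:

```python
import hmac
import hashlib

def tag(message: bytes, key: bytes) -> bytes:
    # The sender appends this tag; the receiver recomputes and compares.
    return hmac.new(key, message, hashlib.sha256).digest()

key = b"shared-key"
message = b"transfer 100 to account 42"
sent_tag = tag(message, key)

# Receiver side: recompute the tag over what actually arrived.
tampered = b"transfer 900 to account 42"
print(hmac.compare_digest(sent_tag, tag(message, key)))   # unaltered message
print(hmac.compare_digest(sent_tag, tag(tampered, key)))  # altered message
```

`hmac.compare_digest` is used instead of `==` to avoid leaking information through comparison timing.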
Authentication
A measure designed to establish the validity of a transmission, message or originator.
Allows a receiver to have confidence that information it receives originated from a specific known source.
Availability
Assuring information and communication services will be ready for use when expected.
Information must be kept available to authorized persons when they need it.
Non-repudiation
A measure intended to prevent the later denial that an action happened, or a communication that took place etc. In communication terms this often involves the interchange of authentication information combined with some form of provable time stamp.
Self-Healing Scripts: A next-generation technique pioneered by Original Software which enables an existing test to be run over an updated or changed application and to intelligently modify itself to reflect the changes in the application, all through a point-and-click interface.
Simple Subpath: A subpath of the control flow graph in which no program part is executed more than necessary.
Simulation: The representation of selected behavioral characteristics of one physical or abstract system by another system.
Simulator: A device, computer program or system used during software verification, which behaves or operates like a given system when provided with a set of controlled inputs.
Smoke Testing: A preliminary to further testing, which should reveal simple failures severe enough to reject a prospective software release. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire. In the software world, the smoke is metaphorical.
Soak Testing: Involves testing a system with a significant load extended over a significant period of time, to discover how the system behaves under sustained use.
For example, in software testing, a system may behave exactly as expected when tested for 1 hour. However, when it is tested for 3 hours, problems such as memory leaks cause the system to fail or behave randomly.
Soak tests are used primarily to check the reaction of a subject under test under a possible simulated environment for a given duration and for a given threshold. Observations made during the soak test are used to improve the characteristics of the subject under test further.
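The memory-leak scenario above can be sketched as a miniature soak test. Here `process_request` is a hypothetical unit under test with a deliberately planted leak, and a few thousand iterations stand in for hours of sustained load:

```python
import tracemalloc

_cache = []  # deliberately planted leak: grows on every call

def process_request(payload):
    # Hypothetical unit under test; the append simulates a slow memory leak.
    _cache.append(payload * 100)
    return len(payload)

tracemalloc.start()
baseline = tracemalloc.get_traced_memory()[0]

# A real soak test would apply realistic load for hours or days;
# a few thousand iterations are enough to expose the trend here.
for _ in range(5000):
    process_request("x")

current = tracemalloc.get_traced_memory()[0]
tracemalloc.stop()
growth = current - baseline
print(f"memory growth after 5000 requests: {growth} bytes")
```

A steadily rising figure across checkpoints, rather than one that plateaus, is the signature of a leak that only sustained use reveals.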
Software Requirements Specification: A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software.
Software Testing: The process used to measure the quality of developed computer software. Usually, quality is constrained to such topics as correctness, completeness and security, but can also include more technical requirements as described under the ISO standard ISO 9126, such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability. Testing is a process of technical investigation, performed on behalf of stakeholders, that is intended to reveal quality-related information about the product with respect to the context in which it is intended to operate. This includes, but is not limited to, the process of executing a program or application with the intent of finding errors. Quality is not an absolute; it is value to some person.
With that in mind, testing can never completely establish the correctness of arbitrary computer software; testing furnishes a criticism or comparison that compares the state and behaviour of the product against a specification. An important point is that software testing should be distinguished from the separate discipline of Software Quality Assurance (SQA), which encompasses all business process areas, not just testing.
Today, software has grown in complexity and size. A software product is developed according to its System Requirement Specification, and every software product has a target audience; video game software, for example, has a completely different audience from banking software. Therefore, when an organization invests large sums in making a software product, it must ensure that the product is acceptable to the end users or its target audience. This is where software testing comes into play. Software testing is not merely finding defects or bugs in the software; it is a completely dedicated discipline of evaluating the quality of the software.
There are many approaches to software testing, but effective testing of complex products is essentially a process of investigation, not merely a matter of creating and following routine procedure. One definition of testing is "the process of questioning a product in order to evaluate it", where the "questions" are operations the tester attempts to execute with the product, and the product answers with its behavior in reaction to the probing of the tester. Although most of the intellectual processes of testing are nearly identical to those of review or inspection, the word testing is also used to connote the dynamic analysis of the product, putting the product through its paces. Reviews, Walkthroughs and inspections are therefore sometimes referred to as "Static Testing", whereas actually running the program with a given set of test cases in a given development stage is often referred to as "Dynamic Testing". This terminology emphasizes the fact that formal review processes form part of the overall testing scope.
Specification: A description, in any suitable form, of requirements.
Specification Testing: An approach to testing wherein the testing is restricted to verifying the system/software meets an agreed specification.
Specified Input: An input for which the specification predicts an outcome.
State Transition: A transition between two allowable states of a system or Component.
State Transition Testing: A test case design technique in which test cases are designed to execute transitions.
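A minimal sketch, assuming a hypothetical order-processing component whose allowable states and events are given as a transition table; one test case is derived per allowable transition, plus negative cases for transitions the model forbids:

```python
# Allowable transitions of a hypothetical order-processing component:
# (current state, event) -> next state.
TRANSITIONS = {
    ("new", "pay"): "paid",
    ("paid", "ship"): "shipped",
    ("shipped", "deliver"): "delivered",
}

def apply_event(state: str, event: str) -> str:
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event!r} in state {state!r}")

# One test case per allowable transition ...
for (state, event), expected in TRANSITIONS.items():
    assert apply_event(state, event) == expected

# ... plus a negative case for a transition the model forbids.
try:
    apply_event("new", "ship")
    print("missed an illegal transition")
except ValueError:
    print("illegal transition rejected")
```

Stronger variants of the technique also exercise sequences of transitions (transition pairs, triples, and so on) rather than single transitions in isolation.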
Statement: An entity in a programming language which is typically the smallest indivisible unit of execution.
Statement Coverage: The percentage of executable statements in a component that have been exercised by a test case suite.
Statement Testing: A test case design technique for a component in which test cases are designed to execute statements. Statement Testing is a structural or white box technique, because it is conducted with reference to the code. Statement testing comes under Dynamic Analysis.
In an ideal world every statement of every component would be fully tested, but in the real world this hardly ever happens. In statement testing every possible statement is tested. Compare this to Branch Testing, where each branch is tested to check that it can be traversed, whether or not it contains a statement.
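The contrast can be seen on a component with an `if` that has no `else`; the discount function below is a hypothetical example invented for illustration:

```python
def apply_discount(price: float, is_member: bool) -> float:
    if is_member:
        price = price * 0.9  # the only statement inside the branch
    return price

# A single test case executes every statement (100% statement coverage) ...
assert apply_discount(100.0, True) == 90.0

# ... but the empty "else" path of the if was never traversed.
# Branch testing adds a case that skips the branch body entirely.
assert apply_discount(100.0, False) == 100.0
print("both the taken and the not-taken branch were exercised")
```

With only the first case, statement coverage is complete while branch coverage is not, which is exactly the gap the comparison above describes.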
Static Analysis: Analysis of a program carried out without executing the program.
Static Analyzer: A tool that carries out static analysis.
Static Code Analysis: The analysis of computer software that is performed without actually executing programs built from that software. In most cases the analysis is performed on some version of the source code and in the other cases some form of the object code. The term is usually applied to the analysis performed by an automated tool, with human analysis being called program understanding or program comprehension.
Static Testing: A form of software testing where the software isn't actually used. This is in contrast to Dynamic Testing. It is generally not detailed testing, but checks mainly for the sanity of the code, algorithm, or document. It is primarily syntax checking of the code and manual reading of the code or document to find errors. This type of testing can be used by the developer who wrote the code, in isolation. Code reviews, inspections and walkthroughs are also used.
From the Black Box Testing point of view, static testing involves review of requirements or specifications. This is done with an eye toward completeness or appropriateness for the task at hand. This is the verification portion of Verification and Validation. Bugs discovered at this stage of development are normally less expensive to fix than later in the development cycle.
Statistical Testing: A test case design technique in which a model of the statistical distribution of the input is used to construct representative test cases.
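A minimal sketch, assuming a hypothetical sensor-clamping component and modelling the operational input as a normal distribution (the component, range, and distribution parameters are invented for the example):

```python
import random

def clamp(value: float) -> float:
    # Hypothetical component under test: clip readings to the valid range.
    return max(0.0, min(100.0, value))

random.seed(7)  # reproducible test run

# Model of the operational input distribution: readings cluster
# around 50, with occasional out-of-range outliers in the tails.
cases = [random.gauss(50, 30) for _ in range(1000)]

for value in cases:
    result = clamp(value)
    assert 0.0 <= result <= 100.0
print("1000 statistically drawn cases passed")
```

Because cases are drawn from the modelled usage profile, failures found this way tend to correspond to defects users would actually hit most often.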
Storage Testing: Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage. See TestBench.
Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.
Structural Coverage: Coverage measures based on the internal structure of the component.
Structural Test Case Design: Test case selection that is based on an analysis of the internal structure of the component.
Structural Testing: See structural test case design.
Structural Basis Testing: A test case design technique in which test cases are derived from the code logic to achieve 100% branch coverage.
Structural Walkthrough: See Walkthrough.
Stub: A skeletal or special-purpose implementation of a software module, used to develop or test a component that calls or is otherwise dependent on it.
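A minimal sketch of a stub, assuming a hypothetical pricing component that normally depends on a remote exchange-rate service (all names here are invented for illustration):

```python
# Real dependency: slow, unavailable, or nondeterministic in a test run.
def fetch_exchange_rate(currency: str) -> float:
    raise NotImplementedError("calls a remote service in production")

def price_in(currency: str, amount_usd: float,
             rate_source=fetch_exchange_rate) -> float:
    # Component under test; its dependency is injectable for testing.
    return amount_usd * rate_source(currency)

# Stub: a special-purpose stand-in that returns a canned value,
# letting price_in be tested without the remote service.
def stub_rate(currency: str) -> float:
    return 0.5

print(price_in("GBP", 10.0, rate_source=stub_rate))  # 5.0
```

Unlike a mock, the stub makes no assertions about how it was called; it merely supplies the dependency's answers so the caller can be exercised.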
Subgoal: An attribute which becomes a temporary intermediate goal for the inference engine. Subgoal values need to be determined because they are used in the premise of rules that can determine higher level goals.
Subpath: A sequence of executable statements within a component.
Suitability: The capability of the software product to provide an appropriate set of functions for specified tasks and user objectives.
Suspension Criteria: The criteria used to (temporarily) stop all or a portion of the testing activities on the test items.
Symbolic Evaluation: See symbolic execution.
Symbolic Execution: A Static Analysis technique used to analyse if and when errors in the code may occur. It can be used to predict what code statements do to specified inputs and outputs. It is also important for considering path traversal. It struggles when dealing with statements which are not purely mathematical.
Symbolic Processing: Use of symbols, rather than numbers, combined with rules-of-thumb (or heuristics), in order to process information and solve problems.
Syntax Testing: A test case design technique for a component or system in which test case design is based upon the syntax of the input.
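A minimal sketch, assuming the input syntax under test is a `YYYY-MM-DD` date (the grammar and the `accepts` checker are invented for the example): one case conforms to the syntax, and each invalid case mutates exactly one syntactic rule:

```python
import re

# Assumed input syntax: YYYY-MM-DD.
DATE = re.compile(r"\d{4}-\d{2}-\d{2}")

def accepts(text: str) -> bool:
    return DATE.fullmatch(text) is not None

# Cases derived from the syntax itself: one valid form plus
# mutations that each break exactly one rule of the grammar.
assert accepts("2024-01-31")        # conforms to the grammar
assert not accepts("2024-1-31")     # field too short
assert not accepts("2024/01/31")    # wrong separator
assert not accepts("2024-01-31 ")   # trailing character
print("syntax-derived cases passed")
```

Generating the invalid cases systematically from the grammar, one violation at a time, is what distinguishes syntax testing from ad hoc bad-input testing.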
System Testing: Testing that attempts to discover defects that are properties of the entire system rather than of its individual components. System testing falls within the scope of Black Box Testing, and as such, should require no knowledge of the inner design of the code or logic.
As a rule, system testing takes, as its input, all of the "integrated" software components that have successfully passed integration testing, as well as the software system itself integrated with any applicable hardware system(s). The purpose of integration testing is to detect any inconsistencies between the software units that are integrated together (called assemblages) or between any of the assemblages and the hardware. System testing is a more limited type of testing; it seeks to detect defects both within the "inter-assemblages" and also within the system as a whole.