Tuesday, March 19, 2013

S

Safety Testing: The process of testing to determine the safety of a software product.

Sanity Testing: Brief test of major functional elements of a piece of software to determine if it's basically operational.

Scalability Testing: Performance Testing focused on ensuring the application under test gracefully handles increases in workload.

Schedule: A scheme for the execution of test procedures. The test procedures are included in the test execution schedule in the order in which they are to be executed.

Scrambling: Data obfuscation routine to de-identify sensitive data in the test data environments to meet the requirements of the Data Protection Act and other legislation. See TestBench.
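
As a rough illustration only (the field names and masking rule below are assumptions, not how TestBench works), a scrambling routine can replace sensitive values with repeatable, non-reversible tokens so that test data stays realistic and referentially consistent:

    import hashlib

    def scramble_record(record, sensitive_fields=("name", "email", "national_id")):
        """Return a copy of the record with sensitive fields de-identified."""
        masked = dict(record)
        for field in sensitive_fields:
            if field in masked and masked[field] is not None:
                # Repeatable but non-reversible token, so joins across test tables still line up.
                digest = hashlib.sha256(str(masked[field]).encode("utf-8")).hexdigest()
                masked[field] = "X" + digest[:11]
        return masked

    print(scramble_record({"name": "Jane Doe", "email": "jane@example.com", "balance": 120.50}))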

Scribe: The person who has to record each defect mentioned and any suggestions for improvement during a review meeting, on a logging form. The scribe has to ensure that the logging form is understandable.

Script: See Test Script.

Security: Preservation of availability, integrity and confidentiality of information:
  • Availability is ensuring that authorized users have access to information and associated assets when required.
  • Integrity is safeguarding the accuracy and completeness of information and processing methods.
  • Confidentiality is ensuring that information is accessible only to those authorized to have access.
Security Requirements: A specification of the required security for the system or software.

Security Testing: Process to determine that an IS (Information System) protects data and maintains functionality as intended.
The six basic concepts that need to be covered by security testing are: confidentiality, integrity, authentication, authorization, availability and non-repudiation.

Confidentiality
A security measure which protects against the disclosure of information to parties other than the intended recipient(s). Often ensured by means of encoding using a defined algorithm and some secret information known only to the originator of the information and the intended recipient(s) (a process known as cryptography) but that is by no means the only way of ensuring confidentiality.

Integrity
A measure intended to allow the receiver to determine that the information which it receives has not been altered in transit, or altered by anyone other than the originator of the information.
Integrity schemes often use some of the same underlying technologies as confidentiality schemes, but they usually involve adding additional information to a communication to form the basis of an algorithmic check rather than encoding all of the communication.
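
As a minimal sketch of that "additional information" idea (using Python's standard hmac module; the shared key and messages are purely illustrative), the sender attaches a keyed digest and the receiver recomputes it rather than encoding the whole message:

    import hmac, hashlib

    SECRET_KEY = b"shared-secret"   # assumed to be known only to originator and receiver

    def send(message: bytes):
        # Append a keyed digest of the message instead of encoding the whole communication.
        tag = hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
        return message, tag

    def verify(message: bytes, tag: str) -> bool:
        expected = hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)

    msg, tag = send(b"transfer 100 to account 42")
    print(verify(msg, tag))                              # True: unaltered
    print(verify(b"transfer 900 to account 42", tag))    # False: altered in transit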

Authentication
A measure designed to establish the validity of a transmission, message or originator.
Allows a receiver to have confidence that information it receives originated from a specific known source.

Availability
Assuring information and communication services will be ready for use when expected.
Information must be kept available to authorized persons when they need it.

Non-repudiation
A measure intended to prevent the later denial that an action happened, or a communication that took place etc. In communication terms this often involves the interchange of authentication information combined with some form of provable time stamp.

Self-Healing Scripts: A next generation technique pioneered by Original Software which enables an existing test to be run over an updated or changed application and to intelligently modernize itself to reflect the changes in the application - all through a point-and-click interface.

Simple Subpath: A subpath of the control flow graph in which no program part is executed more than necessary.

Simulation: The representation of selected behavioral characteristics of one physical or abstract system by another system.

Simulator: A device, computer program or system used during software verification, which behaves or operates like a given system when provided with a set of controlled inputs.

Smoke Testing: A preliminary to further testing, which should reveal simple failures severe enough to reject a prospective software release. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire. In the software world, the smoke is metaphorical.
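
A minimal smoke test might look like the sketch below (the App class is a hypothetical stand-in for the real application); the point is a quick, shallow check that the build is basically alive before deeper testing starts:

    import unittest

    class App:
        """Hypothetical application used only to illustrate the idea."""
        def start(self):
            self.started = True
        def health(self):
            return "OK" if getattr(self, "started", False) else "DOWN"

    class SmokeTest(unittest.TestCase):
        def test_application_starts_and_responds(self):
            app = App()
            app.start()                            # does it come up at all?
            self.assertEqual(app.health(), "OK")   # no smoke, no fire

    if __name__ == "__main__":
        unittest.main()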

Soak Testing: Involves testing a system with a significant load extended over a significant period of time, to discover how the system behaves under sustained use.
For example, in software testing, a system may behave exactly as expected when tested for 1 hour. However, when it is tested for 3 hours, problems such as memory leaks cause the system to fail or behave randomly.
Soak tests are used primarily to check the reaction of a subject under test under a possible simulated environment for a given duration and for a given threshold. Observations made during the soak test are used to improve the characteristics of the subject under test further.
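
A hedged sketch of a soak test in Python is shown below: the workload function is a stand-in (an assumption, not any particular system), and tracemalloc is used to watch for unbounded memory growth over the run:

    import time, tracemalloc

    def operation_under_test(cache):
        # Stand-in workload. A leaky version would append without ever evicting,
        # and the growth check below would eventually fail.
        cache.append("x" * 100)
        if len(cache) > 1000:
            del cache[:500]          # eviction keeps memory bounded

    def soak(duration_seconds=5, max_growth_bytes=1_000_000):
        tracemalloc.start()
        cache = []
        baseline, _ = tracemalloc.get_traced_memory()
        deadline = time.time() + duration_seconds
        while time.time() < deadline:
            operation_under_test(cache)
        current, peak = tracemalloc.get_traced_memory()
        print(f"growth={current - baseline} bytes, peak={peak} bytes")
        assert current - baseline < max_growth_bytes, "possible memory leak under sustained use"

    soak()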

Software Requirements Specification: A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software.

Software Testing: The process used to measure the quality of developed computer software. Usually, quality is constrained to such topics as correctness, completeness and security, but can also include more technical requirements as described under the ISO standard ISO 9126, such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability. Testing is a process of technical investigation, performed on behalf of stakeholders, that is intended to reveal quality-related information about the product with respect to the context in which it is intended to operate. This includes, but is not limited to, the process of executing a program or application with the intent of finding errors. Quality is not an absolute; it is value to some person.
With that in mind, testing can never completely establish the correctness of arbitrary computer software; testing furnishes a criticism or comparison that compares the state and behaviour of the product against a specification. An important point is that software testing should be distinguished from the separate discipline of Software Quality Assurance (SQA), which encompasses all business process areas, not just testing.

Today, software has grown in complexity and size. The software product developed by a developer is built according to the System Requirement Specification. Every software product has a target audience. For example, video game software has an audience completely different from banking software. Therefore, when an organization invests large sums in making a software product, it must ensure that the product is acceptable to the end users or its target audience. This is where Software Testing comes into play. Software testing is not merely finding defects or bugs in the software; it is a completely dedicated discipline of evaluating the quality of the software.

There are many approaches to software testing, but effective testing of complex products is essentially a process of investigation, not merely a matter of creating and following routine procedure. One definition of testing is "the process of questioning a product in order to evaluate it", where the "questions" are operations the tester attempts to execute with the product, and the product answers with its behavior in reaction to the probing of the tester. Although most of the intellectual processes of testing are nearly identical to those of review or inspection, the word testing is also used to connote the dynamic analysis of the product, putting the product through its paces. Sometimes one therefore refers to reviews, Walkthroughs or inspections as "Static Testing", whereas actually running the program with a given set of test cases in a given development stage is often referred to as "Dynamic Testing", to emphasize the fact that formal review processes form part of the overall testing scope.

Specification: A description, in any suitable form, of requirements.

Specification Testing: An approach to testing wherein the testing is restricted to verifying the system/software meets an agreed specification.

Specified Input: An input for which the specification predicts an outcome.

State Transition: A transition between two allowable states of a system or Component.

State Transition Testing: A test case design technique in which test cases are designed to execute transitions.
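
For example, given a small (hypothetical) order state machine, test cases are chosen so that every allowable transition is executed at least once, plus at least one transition the model forbids:

    TRANSITIONS = {
        ("new", "pay"): "paid",
        ("paid", "ship"): "shipped",
        ("new", "cancel"): "cancelled",
    }

    def apply_event(state, event):
        try:
            return TRANSITIONS[(state, event)]
        except KeyError:
            raise ValueError(f"illegal transition: {event} in state {state}")

    # One test case per allowable transition ...
    assert apply_event("new", "pay") == "paid"
    assert apply_event("paid", "ship") == "shipped"
    assert apply_event("new", "cancel") == "cancelled"

    # ... plus a negative case for a transition the model does not allow.
    try:
        apply_event("shipped", "pay")
    except ValueError:
        pass
    else:
        raise AssertionError("expected the illegal transition to be rejected")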

Statement: An entity in a programming language which is typically the smallest indivisible unit of execution.

Statement Coverage: The percentage of executable statements in a component that have been exercised by a test case suite.

Statement Testing: A test case design technique for a component in which test cases are designed to execute statements. Statement Testing is a structural or white box technique, because it is conducted with reference to the code. Statement testing comes under Dynamic Analysis.
In an ideal world every statement of every component would be fully tested. However, in the real world this hardly ever happens. In statement testing every possible statement is tested. Compare this to Branch Testing, where each branch is tested, to check that it can be traversed, whether it encounters a statement or not.
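
The contrast with Branch Testing can be seen in a tiny (illustrative) example: one test case already executes every statement, but a second is needed before every branch has been traversed:

    def discount(total_cents, is_member):
        rate_percent = 0
        if is_member:
            rate_percent = 10          # this statement only runs when the condition is true
        return total_cents - total_cents * rate_percent // 100

    # 100% statement coverage from a single case ...
    assert discount(1000, True) == 900

    # ... but branch testing also requires the False path through the if to be taken.
    assert discount(1000, False) == 1000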

Static Analysis: Analysis of a program carried out without executing the program.

Static Analyzer: A tool that carries out static analysis.

Static Code Analysis: The analysis of computer software that is performed without actually executing programs built from that software. In most cases the analysis is performed on some version of the source code and in the other cases some form of the object code. The term is usually applied to the analysis performed by an automated tool, with human analysis being called program understanding or program comprehension.

Static Testing: A form of software testing where the software isn't actually used. This is in contrast to Dynamic Testing. It is generally not detailed testing, but checks mainly for the sanity of the code, algorithm, or document. It is primarily syntax checking of the code and manually reading the code or document to find errors. This type of testing can be used by the developer who wrote the code, in isolation. Code reviews, inspections and walkthroughs are also used.
From the Black Box Testing point of view, static testing involves review of requirements or specifications. This is done with an eye toward completeness or appropriateness for the task at hand. This is the verification portion of Verification and Validation. Bugs discovered at this stage of development are normally less expensive to fix than later in the development cycle.

Statistical Testing: A test case design technique in which a model of the statistical distribution of the input is used to construct representative test cases.
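
A minimal sketch, assuming a made-up operational profile for a banking system, is to draw test cases from a weighted distribution so the test mix mirrors expected production use:

    import random

    # Hypothetical operational profile: how often each transaction type occurs in production.
    profile = {"balance_enquiry": 0.70, "deposit": 0.20, "withdrawal": 0.10}

    def generate_test_cases(n, seed=1):
        rng = random.Random(seed)
        operations = list(profile)
        weights = list(profile.values())
        return [rng.choices(operations, weights=weights, k=1)[0] for _ in range(n)]

    print(generate_test_cases(20))   # roughly a 70/20/10 mix, mirroring the modelled distribution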

Storage Testing: Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage. See TestBench.

Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.

Structural Coverage: Coverage measures based on the internal structure of the component.

Structural Test Case Design: Test case selection that is based on an analysis of the internal structure of the component.

Structural Testing: See structural test case design.

Structural Basis Testing: A test case design technique in which test cases are derived from the code logic to achieve 100% branch coverage.

Structural Walkthrough: See Walkthrough.

Stub: A skeletal or special-purpose implementation of a software module, used to develop or test a component that calls or is otherwise dependent on it.
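
For instance (a sketch with invented names), an order component that depends on a payment gateway can be tested against a stub that records calls and returns a canned answer:

    class PaymentGatewayStub:
        """Special-purpose stand-in for the real payment service."""
        def __init__(self, should_succeed=True):
            self.should_succeed = should_succeed
            self.calls = []
        def charge(self, amount):
            self.calls.append(amount)      # record the interaction for later assertions
            return self.should_succeed

    def place_order(amount, gateway):
        # Component under test: depends on a gateway, which the stub replaces here.
        return "confirmed" if gateway.charge(amount) else "declined"

    stub = PaymentGatewayStub(should_succeed=False)
    assert place_order(25, stub) == "declined"
    assert stub.calls == [25]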

Subgoal: An attribute which becomes a temporary intermediate goal for the inference engine. Subgoal values need to be determined because they are used in the premise of rules that can determine higher level goals.

Subpath: A sequence of executable statements within a component.

Suitability: The capability of the software product to provide an appropriate set of functions for specified tasks and user objectives.

Suspension Criteria: The criteria used to (temporarily) stop all or a portion of the testing activities on the test items.

Symbolic Evaluation: See symbolic execution.

Symbolic Execution: A Static Analysis technique used to analyse if and when errors in the code may occur. It can be used to predict what code statements do to specified inputs and outputs. It is also important for considering path traversal. It struggles when dealing with statements which are not purely mathematical.

Symbolic Processing: Use of symbols, rather than numbers, combined with rules-of-thumb (or heuristics), in order to process information and solve problems.

Syntax Testing: A test case design technique for a component or system in which test case design is based upon the syntax of the input.

System Testing: Testing that attempts to discover defects that are properties of the entire system rather than of its individual components. System testing falls within the scope of Black Box Testing, and as such, should require no knowledge of the inner design of the code or logic.
As a rule, system testing takes, as its input, all of the "integrated" software components that have successfully passed integration testing and also the software system itself integrated with any applicable hardware system(s). The purpose of integration testing is to detect any inconsistencies between the software units that are integrated together (called assemblages) or between any of the assemblages and the hardware. System testing is a more limited type of testing; it seeks to detect defects both within the "inter-assemblages" and also within the system as a whole.

Friday, March 15, 2013

R

ROI: Return on Investment. A performance measure used to evaluate the efficiency of an investment or to compare the efficiency of a number of different investments. To calculate ROI, the benefit (return) of an investment is divided by the cost of the investment; the result is expressed as a percentage or a ratio.

                                                    (Gain from Investment - Cost of Investment)
                                         ROI = ---------------------------------------------
                                                                    Cost of Investment
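
A small worked example (the figures are purely illustrative): a test automation tool that costs 20,000 and returns 35,000 in saved effort has an ROI of 75%.

    def roi(gain_from_investment, cost_of_investment):
        return (gain_from_investment - cost_of_investment) / cost_of_investment

    print(f"ROI = {roi(35_000, 20_000):.0%}")   # ROI = 75%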

Ramp Testing: Continuously raising an input signal until the system breaks down.

Random Testing: A Black-Box Testing approach in which software is tested by choosing an arbitrary subset of all possible input values. Random testing helps to avoid the problem of only testing what you know will work.
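
A minimal sketch, using an invented function under test and a property that should hold for any input, looks like this; seeding the generator keeps any failure reproducible:

    import random

    def saturating_add(a, b, limit=255):
        # Hypothetical function under test: addition that clips at an upper limit.
        return min(a + b, limit)

    rng = random.Random(42)                    # seeded so failures can be reproduced
    for _ in range(1_000):
        a, b = rng.randint(0, 300), rng.randint(0, 300)
        result = saturating_add(a, b)
        assert 0 <= result <= 255, (a, b, result)   # property that must hold for every random input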

Re-testing: Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.

Recoverability: The capability of the software product to re-establish a specified level of performance and recover the data directly affected in case of failure.

Recovery Testing: The activity of testing how well the software is able to recover from crashes, hardware failures and other similar problems. Recovery testing is the forced failure of the software in a variety of ways to verify that recovery is properly performed.

Examples of recovery testing:
  • While the application is running, suddenly restart the computer, and afterwards check the validity of the application's data integrity.
  • While the application is receiving data from the network, unplug the cable, plug it back in after some time, and analyze the application's ability to continue receiving data from the point at which the network connection disappeared.
  • Restart the system while the browser has a definite number of sessions open, and after rebooting check that it is able to recover all of them.

Recreation Materials: A script or set of results containing the steps required to reproduce a desired outcome.

Regression Testing: Any type of software testing which seeks to uncover regression bugs. Regression bugs occur whenever software functionality that previously worked as desired, stops working or no longer works in the same way that was previously planned. Typically regression bugs occur as an unintended consequence of program changes. Common methods of regression testing include re-running previously run tests and checking whether previously fixed faults have re-emerged.

Experience has shown that as software is developed, this kind of reemergence of faults is quite common. Sometimes it occurs because a fix gets lost through poor revision control practices (or simple human error in revision control), but often a fix for a problem will be "fragile" in that it fixes the problem in the narrow case where it was first observed but not in more general cases which may arise over the lifetime of the software. Finally, it has often been the case that when some feature is redesigned, the same mistakes will be made in the redesign that were made in the original implementation of the feature.

Therefore, in most software development situations it is considered good practice that when a bug is located and fixed, a test that exposes the bug is recorded and regularly retested after subsequent changes to the program. Although this may be done through manual testing procedures using programming techniques, it is often done using Automated Testing Tools.

Such a 'test suite' contains software tools that allow the testing environment to execute all the regression test cases automatically; some projects even set up automated systems to automatically re-run all regression tests at specified intervals and report any regressions. Common strategies are to run such a system after every successful compile (for small projects), every night, or once a week. Those strategies can be automated by an external tool, such as TestDrive-Gold from Original Software.
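
As a hedged sketch of the "re-run everything on a schedule" idea, the script below assumes pytest as the test runner and an illustrative tests/regression directory (not the tools named above); a scheduler such as a nightly cron job or a post-build hook would invoke it and flag any regressions:

    import datetime, subprocess, sys

    def run_regression_suite(suite_dir="tests/regression"):
        """Re-run every recorded regression test and report whether any have resurfaced."""
        started = datetime.datetime.now().isoformat(timespec="seconds")
        result = subprocess.run([sys.executable, "-m", "pytest", suite_dir, "-q"],
                                capture_output=True, text=True)
        status = "PASSED" if result.returncode == 0 else "REGRESSIONS DETECTED"
        print(f"[{started}] regression run: {status}")
        print(result.stdout)
        return result.returncode

    if __name__ == "__main__":
        raise SystemExit(run_regression_suite())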

Relational Operator: Conditions such as "is equal to" or "is less than" that link an attribute name with an attribute value in a rule's premise to form logical expressions that can be evaluated true or false.

Release Candidate: A pre-release version, which contains the desired functionality of the final version, but which needs to be tested for bugs (which ideally should be removed before the final version is released).

Release Note: A document identifying test items, their configuration, current status and other delivery information delivered by development to testing, and possibly other stakeholders, at the start of a test execution phase.

Reliability: The ability of the system/software to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations.

Reliability Requirements: A specification of the required reliability for the system/software.

Reliability Testing: Testing to determine whether the system/software meets the specified reliability requirements.

Requirement: A capability that must be met or possessed by the system/software (requirements may be functional or non-functional).

Requirements-based Testing: An approach to testing in which test cases are designed based on test objectives and test conditions derived from requirements. For example: tests that exercise specific functions or probe non-functional attributes such as reliability or usability.

Result: The consequence or outcome of a test.

Review: A process or meeting during which a work product, or set of work products, is presented to project personnel, managers, users or other interested parties for comment or approval.

Risk: A chance of negative consequences.

Risk Management: Systematic application of procedures and practices to the tasks of identifying, analyzing, prioritizing, and controlling risk.

Robustness: The degree to which a component or system can function correctly in the presence of invalid inputs or stressful environmental conditions.

Root Cause: An underlying factor that caused a non-conformance and possibly should be permanently eliminated through process improvement.

Rule: A statement of the form: if X then Y else Z. The "if" part is the rule premise, and the "then" part is the consequent. The "else" component of the consequent is optional. The rule fires when the if part is determined to be true or false.

Rule Base: The encoded knowledge for an expert system. In a rule-based expert system, a knowledge base typically incorporates definitions of attributes and rules along with control information.
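
A toy sketch of these two ideas (the facts and rules are invented for illustration): each rule pairs a premise with "then" and optional "else" conclusions, and the rule base is simply the collection of such rules plus the code that fires them:

    # Each rule: (premise, consequent_if_true, consequent_if_false).
    rules = [
        (lambda facts: facts["temperature"] > 30, {"fan": "on"}, {"fan": "off"}),
        (lambda facts: facts["humidity"] > 80, {"dehumidifier": "on"}, {}),
    ]

    def fire_rules(facts):
        conclusions = {}
        for premise, then_part, else_part in rules:
            conclusions.update(then_part if premise(facts) else else_part)
        return conclusions

    print(fire_rules({"temperature": 35, "humidity": 60}))   # {'fan': 'on'}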

Tuesday, March 12, 2013

Q

Quality Assurance: The activity of providing evidence needed to establish confidence among all concerned, that quality-related activities are being performed effectively. All those planned or systematic actions necessary to provide adequate confidence that a product or service will satisfy given requirements for quality.

For software development organizations, TMM (Testing Maturity Model) standards are widely used to measure Quality Assurance. These standards are divided into five levels, which a software development company can achieve by performing different quality improvement activities within the organization.

Quality Attribute: A feature or characteristic that affects an item's quality.

Quality Audit: A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.

Quality Circle: A group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of the outputs of a process and to the correction of problems or to the improvement of quality.

Quality Control: The operational techniques and the activities used to fulfill and verify requirements of quality.

Quality Management: That aspect of the overall management function that determines and implements the quality policy. Direction and control with regard to quality generally includes the establishment of the quality policy and quality objectives, quality planning, quality control, quality assurance and quality improvement.

Quality Conundrum: Resources, risk and application time-to-market are often in conflict as IS teams strive to deliver quality applications within their budgetary constraints. This is the quality conundrum.

Quality Policy: The overall intentions and direction of an organization as regards quality as formally expressed by top management.

Quality System: The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.

Query: A question. Often associated with an SQL query of values in a database.

Queuing Time: Incurred when the device, which a program wishes to use, is already busy. The program therefore has to wait in a queue to obtain service from that device.

P

Page Fault: A program interruption that occurs when a page that is marked 'not in real memory' is referred to by an active program.

Pair Programming: A software development technique that requires two programmers to participate in a combined development effort at one workstation. Each member performs the action the other is not currently doing: for example, while one types in unit tests, the other thinks about the class that will satisfy the test.

The person who is doing the typing is known as the driver while the person who is guiding is known as the navigator. It is often suggested for the two partners to switch roles at least every half-hour or after a unit test is made. It is also suggested to switch partners at least once a day.

Pair Testing: In much the same way as Pair Programming, two testers work together to find defects. Typically, they share one computer and trade control of it while testing.

Pairwise Testing: A combinatorial software testing method that, for each pair of input parameters to a system (typically, a software algorithm), tests all possible discrete combinations of those parameters. Using carefully chosen test vectors, this can be done much faster than an exhaustive search of all combinations of all parameters, by "parallelizing" the tests of parameter pairs. The number of tests is typically O(nm), where n and m are the number of possibilities for each of the two parameters with the most choices.

The reasoning behind all-pairs testing is this: the simplest Bugs in a program are generally triggered by a single input parameter. The next simplest category of bugs consists of those dependent on interactions between pairs of parameters, which can be caught with all-pairs testing. Bugs involving interactions between three or more parameters are progressively less common, whilst at the same time being progressively more expensive to find by exhaustive testing, which has as its limit the exhaustive testing of all possible inputs.

Many testing methods regard all-pairs testing of a system or subsystem as a reasonable cost-benefit compromise between often computationally infeasible higher-order combinatorial testing methods, and less exhaustive methods which fail to exercise all possible pairs of parameters. Because no testing technique can find all bugs, all-pairs testing is typically used together with other quality assurance techniques such as Unit Testing. See TestDrive-Gold.
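
A rough sketch of the idea (parameter names and values invented; real tools use more sophisticated algorithms) is to pick tests greedily until every pair of parameter values has appeared in at least one test:

    from itertools import combinations, product

    parameters = {
        "browser":  ["Chrome", "Firefox", "Safari"],
        "os":       ["Windows", "macOS", "Linux"],
        "language": ["en", "de"],
    }

    names = list(parameters)
    all_pairs = {((a, va), (b, vb))
                 for a, b in combinations(names, 2)
                 for va in parameters[a] for vb in parameters[b]}

    def pairs_of(test):
        return {((a, test[a]), (b, test[b])) for a, b in combinations(names, 2)}

    # Greedily pick candidate tests until every parameter-value pair is covered.
    candidates = [dict(zip(names, values)) for values in product(*parameters.values())]
    uncovered, suite = set(all_pairs), []
    while uncovered:
        best = max(candidates, key=lambda t: len(pairs_of(t) & uncovered))
        suite.append(best)
        uncovered -= pairs_of(best)

    print(f"{len(suite)} tests cover all {len(all_pairs)} pairs "
          f"(exhaustive testing would need {len(candidates)})")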

Partial Test Automation: The process of automating parts but not all of the software testing process. If, for example, an oracle cannot reasonably be created, or if fully automated tests would be too difficult to maintain, then a software tools engineer can instead create testing tools to help human testers perform their jobs more efficiently. Testing tools can help automate tasks such as product installation, test data creation, GUI interaction, problem detection (consider parsing or polling agents equipped with oracles), defect logging etc., without necessarily automating tests in an end-to-end fashion.

Pass: Software is deemed to have passed a test if the actual results of the test matched the expected results.

Pass/Fail Criteria: Decision rules used to determine whether an item under test has passed or failed a test.

Path: A sequence of executable statements of a component, from an entry point to an exit point.

Path Coverage: The percentage of paths in a component exercised by a test case suite.

Path Sensitizing: Choosing a set of input values to force the execution of a component to take a given path.

Path Testing: Used as either Black Box or White Box testing, the procedure itself is similar to a walk-through. First, a certain path through the program is chosen. Possible inputs and the correct results are written down. Then the program is executed by hand, and its results are compared to the predefined ones. Possible faults have to be written down at once.

Performance: The degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate.

Performance Testing: A test procedure that covers a broad range of engineering or functional evaluations where a material, product, or system is not specified by detailed material or component specifications: Rather, emphasis is on the final measurable performance characteristics. Also known as Load Testing.

Portability: The ease with which the system/software can be transferred from one hardware or software environment to another.

Portability Requirements: A specification of the required portability for the system/software.

Portability Testing: The process of testing the ease with which a software component can be moved from one environment to another. This is typically measured in terms of the maximum amount of effort permitted. Results are expressed in terms of the time required to move the software and complete data conversion and documentation updates.

Postcondition: Environmental and state conditions that must be fulfilled after the execution of a test or test procedure.

Positive Testing: Testing aimed at showing whether the software works in the way intended. See also Negative Testing.

Precondition: Environmental and state conditions which must be fulfilled before the component can be executed with a particular input value.

Predicate: A logical expression which evaluates to TRUE or FALSE, normally to direct the execution path in code.

Predication: The choice to execute or not to execute a given instruction.

Predicted Outcome: The behavior expected by the specification of an object under specified conditions.

Priority: The level of business importance assigned to an individual item or test.

Process: A course of action which turns inputs into outputs or results.

Process Cycle Test: A Black Box test design technique in which test cases are designed to execute business procedures and processes.

Progressive Testing: Testing of new features after Regression Testing of previous features.

Project: A planned undertaking for presentation of results at a specified time in the future.

Prototyping: A strategy in system development in which a scaled down system or portion of a system is constructed in a short time, then tested and improved upon over several iterations.

Pseudo-Random: A series which appears to be random but is in fact generated according to some prearranged sequence.
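
In testing, the useful property is that the "prearranged sequence" is reproducible: the same seed always yields the same apparently random data, so a failing run can be replayed exactly (a minimal sketch using Python's random module):

    import random

    def pseudo_random_sequence(seed, n=5):
        rng = random.Random(seed)          # the seed fixes the prearranged sequence
        return [rng.randint(0, 99) for _ in range(n)]

    # Looks random, but the same seed reproduces it exactly.
    assert pseudo_random_sequence(1234) == pseudo_random_sequence(1234)
    print(pseudo_random_sequence(1234))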

Tuesday, March 5, 2013

O

Object: A software structure which represents an identifiable item that has a well-defined role in a problem domain.

Object Orientated: An adjective applied to any system or language that supports the use of objects.

Objective: The purpose of the specific test being undertaken.

Operational Testing: Testing performed by the end-user on the software in its normal operating environment.

Oracle: A mechanism to produce the predicted outcomes to compare with the actual outcomes of the software under test.
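
A common form of oracle is an independent, slower reference implementation whose output the software under test must match; in the sketch below (names and the choice of sorting are illustrative), a deliberately simple selection sort predicts the outcomes for a faster sort:

    import random

    def fast_sort(items):
        # Implementation under test (here just Python's built-in sort).
        return sorted(items)

    def oracle(items):
        # Trusted-but-slow way of producing the predicted outcome.
        remaining, result = list(items), []
        while remaining:
            smallest = min(remaining)
            remaining.remove(smallest)
            result.append(smallest)
        return result

    rng = random.Random(7)
    for _ in range(100):
        data = [rng.randint(-50, 50) for _ in range(rng.randint(0, 20))]
        assert fast_sort(data) == oracle(data)   # actual outcome vs predicted outcome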

Outcome: The result or visible effect of a test.

Output: A variable (whether stored within a component or outside it) that is written to by the component.

Output Domain: The set of all possible outputs.

Output Value: An instance of an output.