Tuesday, March 12, 2013

P

Page Fault: A program interruption that occurs when an active process refers to a page that is marked 'not in real memory'.

Pair Programming: A software development technique that requires two programmers to participate in a combined development effort at one workstation. Each member performs the action the other is not currently doing: for example, while one types in unit tests, the other thinks about the class that will satisfy the test.

The person doing the typing is known as the driver, while the person guiding is known as the navigator. It is often suggested that the two partners switch roles at least every half-hour or after a unit test is written. It is also suggested to switch partners at least once a day.

Pair Testing: In much the same way as Pair Programming, two testers work together to find defects. Typically, they share one computer and trade control of it while testing.

Pairwise Testing: A combinatorial software testing method that, for each pair of input parameters to a system (typically, a software algorithm), tests all possible discrete combinations of those parameters. Using carefully chosen test vectors, this can be done much faster than an exhaustive search of all combinations of all parameters, by "parallelizing" the tests of parameter pairs. The number of tests is typically O(nm), where n and m are the number of possibilities for each of the two parameters with the most choices.

The reasoning behind all-pairs testing is this: the simplest bugs in a program are generally triggered by a single input parameter. The next simplest category of bugs consists of those dependent on interactions between pairs of parameters, which can be caught with all-pairs testing. Bugs involving interactions between three or more parameters are progressively less common, whilst at the same time being progressively more expensive to find by exhaustive testing, which has as its limit the exhaustive testing of all possible inputs.

Many testing methods regard all-pairs testing of a system or subsystem as a reasonable cost-benefit compromise between higher-order combinatorial testing methods, which are often computationally infeasible, and less thorough methods that fail to exercise all possible pairs of parameters. Because no testing technique can find all bugs, all-pairs testing is typically used together with other quality assurance techniques such as Unit Testing.
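The idea can be illustrated with a small greedy generator. This is only a sketch (real tools use smarter algorithms), and the parameter names and values below are invented for the example: it repeatedly picks the candidate test that covers the most not-yet-covered value pairs until every pair is exercised at least once.

```python
from itertools import combinations, product

def all_pairs(parameters):
    """Greedy all-pairs test-suite generator (illustrative, not optimal)."""
    names = list(parameters)
    # Every pair of parameter values that must be covered at least once.
    uncovered = set()
    for a, b in combinations(names, 2):
        for va, vb in product(parameters[a], parameters[b]):
            uncovered.add(((a, va), (b, vb)))
    suite = []
    while uncovered:
        best, best_score = None, -1
        # Pick the candidate test covering the most uncovered pairs.
        for values in product(*(parameters[n] for n in names)):
            test = dict(zip(names, values))
            score = sum(
                1 for a, b in combinations(names, 2)
                if ((a, test[a]), (b, test[b])) in uncovered
            )
            if score > best_score:
                best, best_score = test, score
        suite.append(best)
        for a, b in combinations(names, 2):
            uncovered.discard(((a, best[a]), (b, best[b])))
    return suite

# Hypothetical example: 2 x 2 x 2 = 8 exhaustive combinations,
# but a pairwise suite needs fewer tests to cover every value pair.
params = {
    "browser": ["Chrome", "Firefox"],
    "os": ["Windows", "Linux"],
    "locale": ["en", "de"],
}
suite = all_pairs(params)
```

Each test in the resulting suite covers several value pairs at once, which is why the suite is smaller than the exhaustive cross product.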

Partial Test Automation: The process of automating parts but not all of the software testing process. If, for example, an oracle cannot reasonably be created, or if fully automated tests would be too difficult to maintain, then a software tools engineer can instead create testing tools to help human testers perform their jobs more efficiently. Testing tools can help automate tasks such as product installation, test data creation, GUI interaction, problem detection (consider parsing or polling agents equipped with oracles), defect logging etc., without necessarily automating tests in an end-to-end fashion.

Pass: Software is deemed to have passed a test if the actual results of the test match the expected results.

Pass/Fail Criteria: Decision rules used to determine whether an item under test has passed or failed a test.
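A pass/fail decision rule can be as simple as an equality check, though numeric results often need a tolerance. The function below is a hypothetical sketch of such a rule, not any standard API:

```python
def evaluate(actual, expected, tolerance=0.0):
    """Pass/fail decision rule: pass when actual matches expected.

    Floating-point results are compared within a tolerance; everything
    else must match exactly. Names and behavior are illustrative only.
    """
    if isinstance(expected, float):
        return "PASS" if abs(actual - expected) <= tolerance else "FAIL"
    return "PASS" if actual == expected else "FAIL"
```

For example, `evaluate(2.0005, 2.0, tolerance=0.001)` passes, while `evaluate("on", "off")` fails.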

Path: A sequence of executable statements of a component, from an entry point to an exit point.

Path Coverage: The percentage of paths in a component exercised by a test case suite.
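To see why path coverage is stronger than branch coverage, consider a made-up function with two independent decisions: two tests can exercise every branch, but covering every path from entry to exit requires all four combinations.

```python
def classify(x, y):
    """Two independent decisions yield four distinct execution paths."""
    result = []
    if x > 0:                      # decision 1: true / false
        result.append("x-positive")
    else:
        result.append("x-nonpositive")
    if y > 0:                      # decision 2: true / false
        result.append("y-positive")
    else:
        result.append("y-nonpositive")
    return result

# Full path coverage needs all four true/false combinations:
paths = {tuple(classify(x, y)) for x, y in [(1, 1), (1, -1), (-1, 1), (-1, -1)]}
```

In general the number of paths grows multiplicatively with each independent decision, which is why 100% path coverage is rarely achievable for nontrivial components.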

Path Sensitizing: Choosing a set of input values to force the execution of a component to take a given path.

Path Testing: Used as either Black Box or White Box testing, the procedure itself is similar to a walk-through. First, a certain path through the program is chosen. Possible inputs and the correct results are written down. Then the program is executed by hand, and its results are compared to the predefined ones. Possible faults must be written down at once.

Performance: The degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate.

Performance Testing: A test procedure that covers a broad range of engineering or functional evaluations where a material, product, or system is not specified by detailed material or component specifications; rather, emphasis is on the final measurable performance characteristics. Also known as Load Testing.
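Measuring those performance characteristics often starts with simple timing. The harness below is a minimal, illustrative sketch (the function name and the returned keys are invented); real performance tests also control for warm-up, variance, and environment:

```python
import time

def measure(fn, runs=100):
    """Crude latency/throughput measurement for a callable (illustrative)."""
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    elapsed = time.perf_counter() - start
    return {
        "avg_seconds": elapsed / runs,        # average time per call
        "throughput_per_sec": runs / elapsed,  # calls completed per second
    }

stats = measure(lambda: sum(range(1000)), runs=10)
```

The measured numbers are then compared against the performance requirements (e.g. "average response under 200 ms") to decide pass or fail.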

Portability: The ease with which the system/software can be transferred from one hardware or software environment to another.

Portability Requirements: A specification of the required portability for the system/software.

Portability Testing: The process of testing the ease with which a software component can be moved from one environment to another. This is typically measured in terms of the maximum amount of effort permitted. Results are expressed in terms of the time required to move the software and complete data conversion and documentation updates.

Postcondition: Environmental and state conditions that must be fulfilled after the execution of a test or test procedure.

Positive Testing: Testing aimed at showing whether the software works in the way intended. See also Negative Testing.

Precondition: Environmental and state conditions which must be fulfilled before the component can be executed with a particular input value.

Predicate: A logical expression that evaluates to TRUE or FALSE, normally to direct the execution path in code.
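A short example of a predicate directing the execution path (the function and field names are hypothetical):

```python
def grant_access(user):
    # Predicate: a logical expression that evaluates to True or False
    # and decides which path the code takes next.
    is_authorized = user.get("role") == "admin" and user.get("active", False)
    if is_authorized:
        return "access granted"
    return "access denied"
```

Here `is_authorized` is the predicate; test design techniques such as branch coverage require exercising both its outcomes.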

Predication: The choice to execute or not to execute a given instruction.

Predicted Outcome: The behavior expected by the specification of an object under specified conditions.

Priority: The level of business importance assigned to an individual item or test.

Process: A course of action which turns inputs into outputs or results.

Process Cycle Test: A Black Box test design technique in which test cases are designed to execute business procedures and processes.

Progressive Testing: Testing of new features after Regression Testing of previous features.

Project: A planned undertaking for presentation of results at a specified time in the future.

Prototyping: A strategy in system development in which a scaled down system or portion of a system is constructed in a short time, then tested and improved upon over several iterations.

Pseudo-Random: A series which appears to be random but is in fact generated according to some prearranged sequence.
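Pseudo-randomness is easy to demonstrate with a seeded generator: two generators started from the same seed produce identical "random" sequences, which is exactly why seeded pseudo-random data is popular for reproducible tests.

```python
import random

# Two independent generators with the same seed: the output looks random
# but is generated by a fixed, repeatable sequence.
gen_a = random.Random(42)
gen_b = random.Random(42)
seq_a = [gen_a.randint(0, 99) for _ in range(5)]
seq_b = [gen_b.randint(0, 99) for _ in range(5)]
# Same seed -> identical sequences
```

Recording the seed alongside a failing test run makes the failure reproducible.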


Hi Friends,

As I am self-taught, this blog mainly acts as a reference for myself and for others who are new and learning. I would appreciate your valuable comments and suggestions, and you are most welcome to participate in posts or discussions.

Thanks
Anu