Test and Inspection
A technique that evaluates deliverables, components, or processes by executing tests and performing visual or measurement-based checks to confirm they meet specified requirements and acceptance criteria. It produces objective evidence for acceptance, rework, or change decisions.
Key Points
- Combines testing (operating and measuring performance) with inspection (examining without operation) to verify conformance.
- Planned through the quality management plan, test strategy, and defined acceptance criteria.
- Uses appropriate sampling, measurement, and traceability based on risk and cost of quality.
- Results provide objective evidence for acceptance, control, and trend analysis.
- Nonconformances lead to defect repair, retest, and possible change requests when requirements must change.
- May be destructive or non-destructive; ensure safety, calibration, and configuration control.
Quality Objective
Confirm that outputs meet specified requirements and are fit for purpose before acceptance or release. Reduce defect escapes to later phases by finding issues early and enabling timely corrective action and learning.
Method Steps
- Define acceptance criteria, tolerances, and test/inspection methods in the quality plan.
- Select sampling approach, test cases, checklists, and measurement tools aligned to risk.
- Prepare the environment, test data, and fixtures; calibrate instruments; and ensure staff competence and independence where needed.
- Execute tests and inspections per procedure; capture evidence such as readings, images, logs, or checklists.
- Compare results to criteria; classify outcomes as pass, fail, or conditional pass (see the sketch after this list).
- Record nonconformances, analyze root cause, and apply defect repair or corrective action.
- Retest or reinspect after fixes; update records and traceability links.
- Obtain formal acceptance when criteria are met; archive results for audits and lessons learned.
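The compare-and-classify step above can be made concrete with a minimal sketch. Everything here is a hypothetical illustration, assuming a numeric characteristic with a symmetric tolerance band and an optional conditional-pass margin; real criteria come from the quality plan.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    """Hypothetical acceptance criterion for one measured characteristic."""
    name: str
    nominal: float
    tolerance: float           # hard pass/fail band around nominal
    conditional_margin: float  # extra band a reviewer may accept with rationale

def classify(criterion: Criterion, reading: float) -> str:
    """Classify one reading as pass, conditional, or fail."""
    deviation = abs(reading - criterion.nominal)
    if deviation <= criterion.tolerance:
        return "pass"
    if deviation <= criterion.tolerance + criterion.conditional_margin:
        return "conditional"  # needs a documented disposition before acceptance
    return "fail"             # record a nonconformance; repair and retest

# Illustrative only: a 10.0 mm width with a +/-0.1 mm tolerance.
width = Criterion("housing_width_mm", nominal=10.0, tolerance=0.1, conditional_margin=0.05)
for reading in (10.04, 10.13, 10.30):
    print(f"{width.name} = {reading}: {classify(width, reading)}")
```

A conditional result is not silent acceptance: per the control rules below, it still requires a documented disposition, and any change to the criterion itself goes through formal change control.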
Inputs Needed
- Requirements, specifications, and defined acceptance criteria.
- Quality management plan, test strategy, and procedures or checklists.
- Designs, configurations, and approved changes affecting what is tested.
- Measurement tools, calibration records, and environmental constraints.
- Risk register and criticality rankings to guide sampling and rigor.
- Applicable standards, regulations, and compliance obligations.
- Prior test results, defect logs, and lessons learned.
Outputs Produced
- Test and inspection reports with objective evidence and results.
- Defect or nonconformance records and corrective action plans.
- Change requests when requirements or baselines must be updated.
- Updated requirements traceability and configuration records.
- Acceptance sign-offs and release approvals when criteria are met.
- Quality metrics, trends, and lessons learned for continuous improvement (see the sketch after this list).
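One common trend metric derived from these outputs is first-pass yield: the share of items that pass without rework. A minimal sketch with made-up cycle data:

```python
# Hypothetical per-cycle results: (label, items tested, items passing first time).
cycles = [("Cycle 1", 50, 41), ("Cycle 2", 50, 45), ("Cycle 3", 50, 48)]

for label, tested, first_pass in cycles:
    fpy = first_pass / tested  # first-pass yield for the cycle
    print(f"{label}: FPY = {fpy:.0%}")

# A rising FPY trend suggests corrective actions are working; a falling
# trend is an input to root-cause analysis, not a reason to relax criteria.
```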
Acceptance/Control Rules
- Use predefined pass/fail thresholds, tolerances, and sampling plans (for example, AQL) aligned to risk (see the sketch after this list).
- Ensure measurement system adequacy and calibration before testing.
- Maintain independence for critical inspections and record traceability to requirements.
- Invoke stop rules for safety-critical or repeated major defects and escalate per governance.
- Require retest after defect repair; do not change criteria without formal change control.
- Control environment, data, and configurations to ensure repeatable results.
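To illustrate the sampling-plan rule above, the sketch below applies a single-sampling acceptance decision: inspect n units from the lot and accept it only if the defect count does not exceed the plan's acceptance number c. The n and c values here are placeholders; real values come from a published plan such as the ANSI/ASQ Z1.4 tables, selected by lot size and AQL.

```python
def accept_lot(defects_found: int, acceptance_number: int) -> bool:
    """Single-sampling decision: accept the lot if defects <= c."""
    return defects_found <= acceptance_number

# Placeholder plan (not from a real table): sample n = 80 units, c = 2.
n, c = 80, 2
defects = 3  # defects observed in the sample (hypothetical)
decision = "accept" if accept_lot(defects, c) else "reject and escalate per governance"
print(f"Inspected {n} units from the lot, found {defects} defects: {decision}")
```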
Example
A team verifies a new product module. They execute functional and stress tests with defined pass thresholds, then perform a visual inspection against a checklist for labeling and workmanship. Two defects are found: one functional failure and one cosmetic issue. The team records both, fixes the functional issue, submits a change request to relax a cosmetic tolerance that has no impact on use, and retests. After passing, they obtain formal acceptance and archive the test evidence.
Pitfalls
- Lack of clear, measurable acceptance criteria leading to subjective decisions.
- Inadequate sampling or insufficient test coverage, missing critical defects.
- Uncalibrated tools or uncontrolled environments producing unreliable results.
- Tester bias or conflicts of interest compromising objectivity.
- Over-inspection causing delays without addressing root causes.
- Poor traceability between requirements, tests, and results.
PMP Example Question
During quality control, a deliverable fails a critical test. What should the project manager do next?
- A. Accept the deliverable since most tests passed to protect the schedule.
- B. Record the nonconformance, analyze root cause, implement defect repair, and plan a retest per the quality plan.
- C. Update the acceptance criteria to match the delivered result and get sponsor approval.
- D. Skip the failed test and add more inspections later in production.
Correct Answer: B — Record the nonconformance, analyze root cause, implement defect repair, and plan a retest per the quality plan.
Explanation: Follow the defined test and inspection process: document, fix, and retest. Changing criteria requires formal change control, not ad hoc decisions.