Features of Testing
- Testing is a planned activity
- Expected output has to be defined
- Peers within and outside the project group can do testing
- Test results undergo final inspection before delivery
- Test cases are written for both valid and invalid conditions
- Testability is one of the key features in the software design
- Testing is an extremely creative and intellectually challenging task
- A good test case is one that has a high probability of detecting an as-yet-undiscovered error
- A successful test case is one that detects an as-yet-undiscovered error.
Testing (Definition)
“TESTING IS THE PROCESS OF EXECUTING A PROGRAM WITH THE INTENT OF FINDING ERRORS”
Testing Strategy
- Every project should develop a testing strategy in the initial stages and include it as a part of the software management plan.
- The testing strategy includes the following
- The stage at which testing activity will start,
- Types of testing,
- Risks associated with testing,
- Critical success factors,
- Testing objectives,
- Composition of the Test Team,
- Test completion criteria,
- Test stop criteria,
- Test tools to be used.
Types of Testing
- Unit testing,
- Integration Testing,
- Functional testing,
- System testing,
- Regression testing,
- Acceptance testing, etc.
Risks associated with testing
- Some typical risks are
- Development team new to the environment and technology,
- No automated tools identified,
- The application may not work under the specified environment.
Critical Success Factors
- Delivery on time,
- Reasonable response time,
- Understanding the business context and project objectives.
Testing objectives
- To ensure user-friendliness,
- To ensure a good response time,
- To ensure reliability and usability
- To ensure that software is free from major defects.
Test Completion Criteria
- Identifies the point at which the testing activity gets completed successfully.
- Guidelines for arriving at test completion criteria
- When all test cases are executed without producing any errors
- Complete the testing activity when a predefined number (N) of errors has been found and corrected,
- Testing is complete when all branches and statements have been executed.
Test Stop Criteria
- If the number of errors exceeds the acceptable limit, the testing activity should be abandoned. This limit is set at the beginning of the project.
Testing Levels
- Testing in general can be categorized into
- Black Box Testing, and
- White Box Testing.
Black Box Testing
- In Black Box Testing, the product is treated as a black box: the tester is not aware of its internal workings and only validates that the box functions properly as a whole,
- Technically speaking, it is not based on any knowledge of internal design of the product,
- Tests are based on requirements and functionality.
White Box Testing
- Also called Glass Box, Structural, Clear Box or Open Box testing,
- In white box testing, the internal logic is given due importance,
- Tests are based on coverage of code statements, branches, and conditions,
- At the micro level, classified into statement coverage, branch coverage and condition coverage tests (see the sketch below).
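As a minimal sketch of the coverage idea in Python (the discount function is hypothetical, not from the source), two test inputs are enough to execute both branches of a single if statement:

def discount(total):
    if total > 100:          # branch A: taken when total > 100
        return total * 0.9   # 10% discount on large totals
    return total             # branch B: taken otherwise

# One input per branch yields full statement and branch coverage here:
assert discount(200) == 180.0  # exercises branch A
assert discount(50) == 50      # exercises branch B
print("both branches covered")

Condition coverage would additionally require every boolean sub-condition within a decision to evaluate to both true and false; with a single condition here, branch and condition coverage coincide.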
Black Box Testing
Advantages
- Efficient when used on larger systems,
- Tester and Developer work independently, hence test is balanced and unprejudiced,
- Tests and testers can be non-technical,
- Tests are conducted from the end-user's viewpoint,
- Identifies vagueness and contradictions in the functional aspects,
- Test cases can be designed as soon as functional specifications are complete.
Black Box Testing
Disadvantages
- Test cases are tough and challenging to design without clear functional specifications,
- It is difficult to identify tricky inputs, if the test cases are not developed from functional specifications,
- Limited testing time, hence difficult to identify all possible inputs,
- Chances of having unidentified paths during testing.
White Box Testing
- Advantages
- Test is accurate since the tester knows what individual programs are supposed to do,
- Deviation from the intended goals as regards the functioning of the program can be checked and verified accurately.
- Disadvantage
- Requires thorough knowledge of the programming code to examine the related inputs and outputs.
Types of Testing
Unit Testing
- It is the first level of Dynamic testing,
- Is the primary responsibility of the developers,
- Unit testing is considered complete when the expected test results are met, or when deviations/differences are explainable and acceptable (a minimal sketch follows below).
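A minimal sketch of a developer-written unit test, using Python's built-in unittest framework (compute_grade is a hypothetical unit under test, not from the source):

import unittest

def compute_grade(marks):
    """Hypothetical unit under test: map marks (0-100) to a result."""
    if not 0 <= marks <= 100:
        raise ValueError("marks must be between 0 and 100")
    return "PASS" if marks >= 40 else "FAIL"

class ComputeGradeTest(unittest.TestCase):
    def test_passing_marks(self):
        self.assertEqual(compute_grade(75), "PASS")

    def test_failing_marks(self):
        self.assertEqual(compute_grade(10), "FAIL")

    def test_invalid_marks_are_rejected(self):
        with self.assertRaises(ValueError):
            compute_grade(150)

if __name__ == "__main__":
    unittest.main()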
Functional Testing
- Black box testing used to test only functional requirements of an application,
- Performed by Tester(s).
Black Box Functional test techniques
- Functional Black Box test can be done by using the following techniques.
- Equivalence Class partitioning (EC), and
- Boundary Value Analysis (BVA)
Equivalence Class Partitioning
- In this technique, the input domain is divided into valid and invalid equivalence classes.
- The valid classes check for correct data inputs and the invalid classes check for incorrect data inputs.
- The system should respond correctly to inputs from both valid and invalid classes.
Eg.
- In the example of marks assigned to a student for a subject;
- The valid class is the range 0 – 100,
- The invalid classes are < 0 and > 100.
- By taking one input value from each class, the tester can cover the complete functionality for the marks, as the sketch below shows.
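A sketch of the marks example in Python (is_valid_marks is a hypothetical validation routine): one representative value is drawn from each equivalence class.

def is_valid_marks(marks):
    """Hypothetical validation routine under test."""
    return 0 <= marks <= 100

# One representative value per class; if the representative passes,
# the whole class is assumed to be handled correctly.
equivalence_classes = {
    "valid: 0-100":   (50,  True),   # any value inside the range
    "invalid: < 0":   (-5,  False),  # below the lower bound
    "invalid: > 100": (130, False),  # above the upper bound
}

for name, (value, expected) in equivalence_classes.items():
    result = is_valid_marks(value)
    print(f"{name}: input={value}, expected={expected}, got={result}")
    assert result == expected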
Boundary Value Analysis (BVA)
- In this technique, the tester is more concerned with testing at the boundaries of the classes.
- The probability of finding defects at the boundaries of the classes is higher.
Eg.
- In the example for Marks, we could test at 0.2, -0.2, 99.89 and 100.03.
- In the case of component testing, we have to partition the input and output values of the component into a number of ordered sets within identifiable boundaries.
- The values have to be selected in such a way that if the system works correctly for these values, it will work correctly for the other values in the related range (see the sketch below).
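A sketch of boundary value analysis for the same marks example, reusing the sample values above (is_valid_marks is again a hypothetical routine):

def is_valid_marks(marks):
    """Hypothetical validation routine under test."""
    return 0 <= marks <= 100

# (input, expected validity) pairs on and around each boundary:
boundary_cases = [
    (-0.2,   False),  # just below the lower boundary
    (0,      True),   # on the lower boundary
    (0.2,    True),   # just above the lower boundary
    (99.89,  True),   # just below the upper boundary
    (100,    True),   # on the upper boundary
    (100.03, False),  # just above the upper boundary
]

for value, expected in boundary_cases:
    assert is_valid_marks(value) == expected, f"boundary case {value} failed"
print("all boundary cases passed")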
Integration Testing
- Upon successful completion of Unit testing, Integration testing is done,
- To ensure that distinct components of the application still work in accordance with the client's requirements,
- Test sets will be developed with the view to test the interfaces between the components.
Incremental Integration Testing
- Tests various aspects of the application's functionality independently, so that parts can be verified before all parts of the application are completed,
- Test stubs and Test drivers are needed to test individual components.
- Test Stub: working code may call a simulated piece of code, called a test stub, instead of the actual routine.
- Test Driver: simulated code used to call the routine under test (both are sketched below).
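A minimal Python sketch of both ideas (all names are hypothetical): process_order is the unit under test, payment_stub stands in for a real payment routine, and driver plays the role of the test driver.

def payment_stub(amount):
    """Test stub: simulated code called instead of the real payment routine."""
    return {"status": "approved", "amount": amount}

def process_order(amount, charge):
    """Unit under test; depends on a payment routine passed in as 'charge'."""
    receipt = charge(amount)
    return receipt["status"] == "approved"

def driver():
    """Test driver: simulated code whose only job is to call the unit under test."""
    assert process_order(100, charge=payment_stub) is True
    print("process_order verified against the stub")

if __name__ == "__main__":
    driver()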
System Testing
- Testing done after Integration Testing,
- The complete system is configured and tested in a controlled environment to validate its accuracy and completeness in performing the desired functions,
- Testing team is responsible for System test,
- Prior to System test, the results of Unit and Integration test will be reviewed by SQA to ensure that all problem areas are resolved.
End-to-End Testing
- Similar to System Testing, but done at the “macro” level,
- The basic functionality is checked for in this testing (e.g. interacting with the database, network communication, interaction with other applications, etc.).
Regression Testing
- Testing done to check the impact of resolving defects found in a previous version of the application,
- It is done to ensure that the software remains intact after the changes are made.
Sanity Testing
- Testing performed whenever cursory testing is sufficient to prove the application is functioning according to specifications,
- Includes a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.
Performance Testing
- Part of System testing, but treated as a different test,
- Will verify the load, volume and response times as defined by requirements.
Load testing
- Testing the application under heavy loads,
- An example of Load testing is testing a web site under a range of loads (connected users) to determine at what point the system's response time deteriorates or the system fails (see the sketch below).
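A minimal load-test sketch using only the Python standard library (the URL is a placeholder, and real load tests normally use dedicated tools): it issues increasing numbers of concurrent requests and reports the average response time.

import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://example.com/"  # hypothetical site under test

def fetch(_):
    start = time.time()
    urlopen(URL, timeout=10).read()
    return time.time() - start

# Increase the number of concurrent "connected users" and watch the
# average response time for the point where it deteriorates.
for users in (1, 5, 10, 25):
    with ThreadPoolExecutor(max_workers=users) as pool:
        times = list(pool.map(fetch, range(users)))
    print(f"{users:>3} users: average response {sum(times) / len(times):.2f}s")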
Installation Testing
- Testing Full, Partial or Upgrade install/ uninstall process,
- Is conducted with the objective of demonstrating production readiness,
- The test is conducted after the application has been migrated to the client’s site,
- When necessary, a sanity test will be performed after the Installation testing.
Security / Penetration Testing
- Testing how well the system protects against unauthorized access (internal or external),
- Requires sophisticated testing techniques.
Recovery/ Error Testing
- Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
Compatibility testing
- Testing how well a system performs in a particular hardware/software/operating system/network environment.
Comparison Testing
- Testing done to compare software strengths and weaknesses to other competing software products.
Acceptance Testing
- Gives the client the opportunity to verify the system functionality and usability prior to the system being moved to production,
- Responsibility of the Client with support from the Project Team,
- The Client and the Project Team will together prepare the acceptance criteria.
Parallel/ Audit Testing
- Testing where the user reconciles the output of the new system against that of the existing system, to verify that the new system performs as the old one did.
Alpha Testing
- Testing of an application when nearing completion,
- Done by end-users or others, and not by programmers or testers,
- Done in a controlled environment at the development site.
Beta Testing
- Testing done to check for bugs and problems before the final release,
- Done by the end-users or others, and not by programmers or testers,
- Done at the client site.
Testing Team
- A testing team should comprise the following
Test/ QA Team Lead
- Coordinates the testing activity,
- Communicates testing status to management,
- Manages the team.
Testers
- Develop test scripts, test cases and test data,
- Execute test scripts,
- Evaluate results for different types of testing (system, integration and regression).
Test Build Manager/ System Administrator/ Database Manager
- Delivers current version of the software to the testing team for testing,
- Performs installation of application software and applies patches if required,
- Performs set-up, maintenance, back-up of test environment.
Technical Analyst/ Test Configuration Manager
- Performs testing assessment and validates system/ functional test requirements,
- Maintains the test environment, test scripts, software and test data.
Testing Methodology
Step 1 – Create Test Strategy
Inputs required
- Test Environment (Hardware and Software components), Test tool data,
- Description of roles and responsibilities of resources, and schedule constraints,
- Standards used,
- Functional and Technical requirements for the application,
- Description about System limitations.
Outputs
- Approved and signed off test strategy document, test plan and test cases,
- Testing issues requiring resolution (with co-operation from project management).
Step 2 – Create Test plan/ design
Inputs
- Approved Test Strategy document,
- Automated testware, test tools (if required),
- Documents for understanding software complexity (Software Design, Code)
Output
- Design problems, Code issues (feedback to developers)
- Approved test scenarios and scripts with test data.
Step 3 – Execute Tests
Inputs
- Approved test documents (Test plans, cases, procedures),
- Automated testware (if required),
- Change request (if any),
- Test data,
- Availability of Test and Project teams,
- General and Detailed design documents (Requirements, software design),
- Unit Tested code (from Configuration/ Build Manager),
- Test readiness document,
- Update documents (if any).
Outputs
- Test fixes (changes to the code),
- Test document problems,
- Documented problems with requirements, design, and Code issues,
- Problem tracking document,
- Tested Source and Object code (baselined and versioned),
- Test Report,
- Testing deliverables (approved and signed-off)
Test Plan
- It is a document that describes the objectives, scope, approach, and focus of a software testing effort.
- The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of the software product.
- It will help people outside the Test Group to understand the “WHY” and “HOW” of product validation.
Test Plan (Outline)
- Introduction
- Purpose
- Abbreviations
- Definitions
- Scope
- References
- Testing Process
- Test Objective
- Test Strategy
- Test Methodology used
- Test Environment
- Test team and their composition
- Risks associated with the testing
- Test Scripts and Test Cases
- Software Quality Assurance Plan
- Organization Elements Involved
- Involved Organizational Elements, Tasks & Responsibilities
- QA Audits
- Managerial reviews
- Metrics required and their formats.
Test Scripts
- A series of instructions written to test a particular component, software or a part of the software.
- A test script comprises a number of test cases designed to test the work product in every possible way, to eliminate the occurrence of defects/deviations.
Preparing Test Script
- Note the steps below to prepare an effective test script.
- Decide which are your key functional areas to test,
- Devise test scripts to test each functional area from start to finish,
Practicalities in implementing Test Scripts
- Some work still remains after the agenda is set and the scripts are prepared
- The scripts must be organized into a time bound schedule,
- Have a follow-up session at the start and at the end to pick up issues from the previous sessions.
- A facilitator needs to check with the team that a test case (point) has been adequately covered before proceeding.
Testing Re-visited and Highlighted
Essentials of Testing Process
- The quality of the testing process determines the success of the test effort,
- Prevent defect migration by using early lifecycle testing techniques,
- A person must take responsibility for improving the testing process,
- Cultivate a positive team attitude of creative destruction.
Attitude of Testers
- Testers hunt errors
- The focus on showing the presence of errors is the basic attitude of a good tester. It should give personal satisfaction to the team member(s) to find errors.
- Testers are creative destructors
- It takes imagination, persistence and patience to systematically locate the weaknesses in a complex structure and demonstrate its failures.
- Testers pursue errors, not people
- Errors are in the work product and not in the person who made the mistake. The errors are in the process that goes into producing the work product.
- Testers add value to the product
- They improve the overall quality of the product.
How to do testing?
- By examining the internal structure and design
- By examining the functional user interface
- By examining the design objectives
- By examining the user requirements
- By executing the Code
Testing Maturity Model (TMM)
- TMM is a set of levels that defines a testing maturity hierarchy.
- Each level represents a stage in the evolution toward a mature testing process,
- Graduating to an upper level implies that the lower level practices continue to be in place.
- Organizations will strive toward testing maturity by focusing on the goals defined for each level.
- TMM is designed to support software-quality-related activities.
- TMM maturity criteria will have a highly positive impact on software quality, software engineering productivity, and cycle time reduction efforts,
Why TMM is required?
- Testing is a critical component of a mature software development process,
- It is one of the most challenging and costly process activities, and contributes significantly towards the overall software quality,
- In spite of its importance, it has not been addressed by existing maturity models
- The TMM addresses issues important to test managers, test specialists and software quality assurance staff.
- The issues not addressed by existing maturity models
- Concept of testing maturity is not addressed,
- No adequate inclusion of testing practices as a process improvement mechanism,
- Quality related issues such as testability, test criteria, test planning and software certifications are not satisfactorily addressed.
Who uses TMM?
- Used by Internal assessment team to identify the current testing Capability State,
- By Upper management to initiate a testing improvement program,
- Development teams to improve testing capability,
- Users and clients to define their role in the testing process.
Levels of TMM
- Level 1: Initial
- Level 2: Phase Definition
- Institutionalize basic testing,
- Initiate a test planning process,
- Develop testing and debugging goals.
- Level 3: Integration
- Control and monitor the test process,
- Integrate testing into the software lifecycle,
- Establish a technical training program,
- Establish a software test organization
- Level 4: Management and Measurement
- Software quality evaluation,
- Establish a test measurement program,
- Establish an organization-wide review program
- Level 5: Optimization, Defect Prevention and Quality Control
- Test process optimization,
- Quality control,
- Application of process data for defect prevention.
Testing Metrics
- A quantitative measure of the degree to which a system, component or process possesses a given attribute.
- Used to quantify the software, the software development resources and the development process.
Test Metrics
- User Participation (to find the involvement of the testers)
User Participation = User participation test time / Total test time
- Path Tested (extent of testing)
Path Tested = Number of paths tested / Total number of paths
- Acceptance Criteria Tested (extent of testing)
Acceptance Criteria Tested = Acceptance criteria verified / Total acceptance criteria
- Test Cost (resources consumed in testing)
Test Cost = Test cost / Total system cost
- Cost to Locate Defect (resources consumed in testing)
Cost to Locate Defect = Test cost / Number of defects located in testing
- Detected Production Defects (effectiveness of testing)
Detected Production Defects = Number of defects detected in production / Application system size
- Test Automation (effectiveness of automated testing)
Test Automation = Cost of automated test effort / Total test cost
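A small sketch computing the metrics above from illustrative figures (every number here is made up for the example, and system size is assumed to be measured in KLOC):

# All figures below are illustrative, not real project data.
paths_tested, total_paths = 180, 200
test_cost, total_system_cost = 12_000, 100_000
defects_located_in_testing = 48
production_defects, system_size_kloc = 6, 40
automation_cost = 4_000

print(f"Path tested:           {paths_tested / total_paths:.0%}")
print(f"Test cost:             {test_cost / total_system_cost:.0%} of system cost")
print(f"Cost to locate defect: {test_cost / defects_located_in_testing:.0f} per defect")
print(f"Production defects:    {production_defects / system_size_kloc:.2f} per KLOC")
print(f"Test automation:       {automation_cost / test_cost:.0%} of test cost")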
Test Reports
Functional Testing Status
- Will show percentages of the functions which have been
- Fully tested,
- Tested with Open defects,
- Not tested
Expected versus Actual Defects Detected
- Will provide an analysis between the number of defects being generated against the expected number of defects
Defects Detected versus Corrected Gap
- Will show the number of defects uncovered versus the number of defects being corrected and accepted by the development team.
Defect Distribution
- Will show the defect distribution by function or module.
Testing Action
- Shows many different aspects of the testing. Examples are a) the number of defects by severity, b) tests behind schedule, and any other reports that would present an accurate picture.
Web Testing
- Web testing is divided into 6 categories
- User interface,
- Functionality,
- Interface testing,
- Compatibility
- Load/ Stress, and
- Security
User Interface
- The web browser page should be user friendly,
- Navigation should be properly addressed and be easy,
- Instructions in the web page should be self-explanatory,
- The site should have a navigational bar (site map),
- Content should be aligned, with a proper layout; exact wording should be used; content should be designed professionally.
- Colors/Backgrounds – the right colors and backgrounds make for a neat web page,
- Images – use images that load fast and suit the available bandwidth,
- Tables – if used, should be set up properly: the table fits within the page, columns are wide enough, and rows wrap around correctly,
- Wrap-around – text should wrap around images correctly, and the text should properly relate to the image it surrounds.
Functionality
- Links – verify that each link directs you to the requested page, and that the page exists.
- Forms – information submitted through a form should work properly, and the system should be able to use this information.
- Data verification – verify user input according to the business rules, i.e. invalid values should be rejected.
- Cookies – verify that the cookies are working.
- Application-specific functional requirements – perform all functions that the business provides and check that they work (a link-checking sketch follows below).
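As a sketch of automating the link checks, the following uses only the Python standard library (the URLs are placeholders for the pages under test):

from urllib.error import HTTPError, URLError
from urllib.request import urlopen

links = [
    "http://example.com/",         # hypothetical pages under test
    "http://example.com/contact",
]

for link in links:
    try:
        status = urlopen(link, timeout=10).status
        print(f"{link}: OK (HTTP {status})")
    except HTTPError as err:       # page reached but returned an error code
        print(f"{link}: broken (HTTP {err.code})")
    except URLError as err:        # page could not be reached at all
        print(f"{link}: unreachable ({err.reason})")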
Interface testing
- Server interface – attempt transactions, view the server logs and verify that the information displayed on the browser is actually retrieved from the server.
- External interfaces – verify the external interfaces, like credit card transactions, because they deal with an external server.
- Error handling – verify that the system can handle all the errors. E.g. disconnect the user from the server and check whether the credit card information is still added and saved correctly.
Compatibility
- Verify that the application works on the machine that the users will be using,
- Try every operating system, browser, video setting, and modem speed.
- Operating systems (Win 95, 98, NT, 2K, XP), browsers (IE, Netscape, Opera, etc.), video settings (640x480, 800x600, etc.), modem connections (dial-up, DSL, etc.)
Load/ Stress
- Verify that the system can handle a large number of users simultaneously, (with large data from each user)
- Eg: Many users at same time, Large amount of data from each user, Long period of continuous use etc.
Security
- Susceptibility to hacking
- Proper setup of directories. (each directory should have an index.html or main.html so that a directory listing does not appear).
- Valid logins – verify that the system does not allow invalid logins, enforces a maximum number of failed logins before lockout, etc. (a lockout sketch follows below).
- Log files – verify that the server logs are working properly. Does the log track every transaction (unsuccessful login attempts, IP address, username)? What does the log store?
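A sketch of the failed-login lockout check (the AuthSystem class is a hypothetical stand-in for the real system under test, with an assumed threshold of three failed attempts):

class AuthSystem:
    """Hypothetical stand-in for the real system under test."""
    MAX_FAILED = 3  # assumed lockout threshold

    def __init__(self):
        self.failed = 0
        self.locked = False

    def login(self, user, password):
        if self.locked:
            return "locked"
        if password != "s3cret":   # hard-coded secret, for the sketch only
            self.failed += 1
            if self.failed >= self.MAX_FAILED:
                self.locked = True
            return "denied"
        return "ok"

auth = AuthSystem()
for attempt in range(1, 5):
    print(f"attempt {attempt}: {auth.login('alice', 'wrong')}")
# Expected output: denied, denied, denied, locked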
Unit Test checklist template
1) Project Name :
2) Release Date :
3) Peer Review Date :
4) Checklist users name/ Role:
5) Work product Author / Dept:
6) State of Product : Draft/ Intermediate/ Final:
7) Type of review : Meeting/ Coordination
8) Location of work product:
9) Supporting material and location:
10) Time duration:
11) Checklist given below