Software Testing


Types of Software Testing

 

Acceptance Testing

Acceptance testing is often the responsibility of the customer or end user of a system.

The goal is to test the system with the intent of confirming the readiness of the product and customer acceptance.

Acceptance testing, which is a form of black box testing, gives the client the opportunity to verify the system's functionality and usability prior to the system being moved to production.

The acceptance test will be the responsibility of the client; however, it will be conducted with full support from the project team.

The Test Team will work with the client to develop the acceptance criteria.

 

Ad Hoc Testing

Testing without a formal test plan, or outside of a test plan. On some projects this type of testing is carried out as an adjunct to formal testing. If carried out by a skilled tester, it can often find problems that are not caught in regular testing. Sometimes, if testing occurs very late in the development cycle, this will be the only kind of testing that can be performed. Ad hoc testing is sometimes referred to as exploratory testing.

This means impromptu testing: someone sits down and makes up test cases on the spur of the moment, as quickly as possible, as ideas occur to them. This kind of testing often finds a large number of bugs in new software in a short time.

It is usually not comprehensive, because it is difficult to think of everything in a short session, but it is often inventive, and frequently uncovers a myriad of problems. It is often exciting and rewarding because testers get satisfaction from making things fail!

 

Alpha Testing

Testing performed after the code is mostly complete, or contains most of the functionality, and prior to end users being involved. Sometimes a select group of users is involved. More often, this testing will be performed in-house or by an outside testing firm in close cooperation with the software engineering department.

 

Automated Testing

Software testing that uses a variety of tools to automate the testing process, reducing the need for a person to test manually. Automated testing still requires a skilled quality assurance professional, with knowledge of both the automation tool and the software being tested, to set up the tests.

This is testing carried out with a software testing tool that can issue keystrokes automatically, monitor the contents and position of windows, press buttons, etc. In general, an automated test tool is controlled with a text file that is either programmed directly by you, the tester, or is generated automatically by capturing your mouse movements and keyboard actions as you test something. Examples of such tools are IBM Rational Functional Tester and HP QuickTest Professional (QTP).
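
For instance, a short recording against a web login page might generate VBScript like the sketch below. This is only an illustration: the object names ("Welcome: Mercury Tours", "userName", and so on) are hypothetical and depend entirely on the application actually recorded against, and the SetSecure value stands in for the encrypted string QTP generates at record time.

    ' Hypothetical script a QTP recording session might generate for a web login
    Browser("Welcome: Mercury Tours").Page("Welcome: Mercury Tours").WebEdit("userName").Set "mercury"
    ' SetSecure takes an encrypted string produced during recording (placeholder here)
    Browser("Welcome: Mercury Tours").Page("Welcome: Mercury Tours").WebEdit("password").SetSecure "4f1e2cencryptedplaceholder"
    Browser("Welcome: Mercury Tours").Page("Welcome: Mercury Tours").Image("Sign-In").Click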

The advantage of an automated test tool is that scripts can be generated for testing a variety of conditions, and then stored in a library. Later, when the software has been changed, this library of tests can be run automatically to verify the changes.

The disadvantage of an automated test tool is that most require some programming knowledge, as modifying the generated scripts is not trivial. Furthermore, if the user interface of the software under test changes substantially, then a lot of time can be wasted re-generating scripts.

Nevertheless, automated tools can substantially increase the amount of standard testing procedures performed prior to each release of a software product; they thus ensure that at least the tested functions work.

 

Beta Testing

Testing after the product is code complete. Betas are often widely distributed, or even distributed to the public at large, in the hope that users will test the product before it is released. Developers or software vendors often want to get feedback from potential or existing customers in their market before the software product is put up for sale commercially. Alpha testing is performed at the developing organization's site, whereas beta testing is performed by people at their own locations. In both cases, alpha and beta testing are performed by potential customers.

 

Black Box Testing

Testing software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, as most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document.

When you test something, you can do it with full knowledge of what it is and how it works ("White Box Testing"), or you can consider it to be a "black box" with unknown guts. Both methods have their advantages.

In White Box Testing, you make up test cases that are related to how the data is known to be processed by the program, as defined in the Detailed Design Document, or by inspection of the program source code.

In Black Box Testing, you make up test cases based on the known requirements for input, output, and data handling as specified by the Functional Specifications or Business Requirements, but with no knowledge of the actual detailed work performed by the program.

Thus White Box Testing is generally performed during program development and system testing, whereas Black Box Testing is usually done in Integration Testing and Acceptance Testing.
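
To make the contrast concrete, here is a minimal black box sketch in VBScript. GetGrade is a hypothetical function standing in for the program under test; the test cases are chosen purely from its assumed specification -- return "Pass" for scores of 40 and above, otherwise "Fail" -- including the boundary values 39 and 40, with no reference to how the function works inside.

    ' Stand-in for the unit under test (a black box tester never sees this body)
    Function GetGrade(score)
        If score >= 40 Then
            GetGrade = "Pass"
        Else
            GetGrade = "Fail"
        End If
    End Function

    ' Black box test cases derived from the specification alone
    If GetGrade(39) = "Fail" And GetGrade(40) = "Pass" And GetGrade(100) = "Pass" Then
        MsgBox "Black box tests: PASS"
    Else
        MsgBox "Black box tests: FAIL"
    End If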

 

Compatibility Testing

Testing used to determine whether other system software components such as browsers, utilities, and competing software will conflict with the software being tested.

 

Configuration Testing

Testing to determine how well the product works with a broad range of hardware/peripheral equipment configurations as well as on different operating systems and software.

 

End-to-End Testing

Similar to system testing, the 'macro' end of the test scale involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

 

Functional Testing

Testing two or more modules together with the intent of finding defects, demonstrating that defects are not present, verifying that the modules perform their intended functions as stated in the specification, and establishing confidence that a program does what it is supposed to do.

 

Independent Verification and Validation (IV&V)

The process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and doesn't fail in an unacceptable manner. The individual or group doing this work is not part of the group or organization that developed the software. A term often applied to government work or where the government regulates the products, as in medical devices.

 

Installation Testing

Testing with the intent of determining if the product will install on a variety of platforms and how easily it installs. Testing full, partial, or upgrade install/uninstall processes. The installation test for a release will be conducted with the objective of demonstrating production readiness. This test is conducted after the application has been migrated to the client's site. It will encompass the inventory of configuration items (performed by the application's System Administrator) and evaluation of data readiness, as well as dynamic tests focused on basic system functionality. When necessary, a sanity test will be performed following the installation testing.

 

Integration Testing

Testing two or more modules or functions together with the intent of finding interface defects between the modules or functions. This testing is often completed as a part of unit or functional testing and sometimes becomes its own standalone test phase. On a larger level, integration testing can involve putting together groups of modules and functions with the goal of completing and verifying that the system meets the system requirements. (see system testing)

 

Load Testing

Testing with the intent of determining how well the product handles competition for system resources. The competition may come in the form of network traffic, CPU utilization or memory allocation. Performance testing is executed to determine how fast a system or sub-system performs under a particular workload, whereas load testing is primarily concerned with verifying that the system can continue to operate under a specific load, i.e. large quantities of data or a large number of users.
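
Dedicated tools (such as HP LoadRunner) are normally used for real load testing, but the underlying idea of a specific, repeatable load can be pictured in plain VBScript as driving the same hypothetical transaction many times over:

    ' Illustrative sketch only: each loop iteration stands for one user transaction
    Dim i
    For i = 1 To 1000
        ' ... the steps that submit one transaction would go here ...
    Next
    MsgBox "Load run complete: 1000 transactions submitted"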

 

Maintenance Testing

Once deployed, a system is often in service for years or even decades. During this time, the system and its operational environment are often corrected, changed or extended. Testing that is executed during this life cycle phase is called "maintenance testing".

 

Parallel/Audit Testing

Testing where the user reconciles or compares the output of the new system to the output of the current system, to verify that the new system performs the operations correctly.

 

Performance Testing

Testing with the intent of determining how quickly a product handles a variety of events. Automated test tools geared specifically to test and fine-tune performance are used most often for this type of testing.

This means that a system is subjected to increased transaction loads to see how it responds. If it can handle 10,000 transactions an hour, what happens if it gets 100,000 or 1,000,000?

If an ATM can access an account and display the balance in 3 seconds when there are no other ATMs accessing the mainframe, what happens to the response time when 1,000 ATMs are active simultaneously?
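
A minimal sketch of the measurement side, using core VBScript's Timer function (which returns the number of seconds elapsed since midnight); the transaction steps themselves are a hypothetical placeholder:

    Dim tStart, elapsed
    tStart = Timer
    ' ... the steps that perform one transaction (e.g. a balance enquiry) would go here ...
    elapsed = Timer - tStart
    MsgBox "Transaction took " & elapsed & " seconds"

A real performance test repeats this measurement while the load on the system is progressively increased.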

 

Pilot Testing

Testing that involves the users just before actual release, to ensure that users become familiar with the release contents and ultimately accept it. It is often considered a move-to-production activity for ERP releases, or a beta test for commercial products. It typically involves many users, is conducted over a short period of time, and is tightly controlled. (see beta testing)

 

Recovery/Error Testing

Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

 

Regression Testing

Testing with the intent of determining if bug fixes have been successful and have not created any new problems. Also, this type of testing is done to ensure that no degradation of baseline functionality has occurred.

This means that an application is re-tested to ensure it is still operating normally and that no bugs have been introduced by whatever has changed lately. This kind of testing is generally done by following formal scripts; it thus verifies that something is still working, but rarely finds new problems. All the test cases in a regression test would be executed every time a new version of the software is produced, which makes them ideal candidates for automation, as sketched below.
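
A minimal sketch of what such an automated regression library might look like in VBScript; the three test procedures are hypothetical placeholders, one per baseline function:

    ' Each regression test case is a procedure in the library
    Sub Test_Login()
        ' ... steps and checkpoints for the login function ...
    End Sub

    Sub Test_Search()
        ' ... steps and checkpoints for the search function ...
    End Sub

    Sub Test_Logout()
        ' ... steps and checkpoints for the logout function ...
    End Sub

    ' Re-run the whole library against every new build
    Test_Login
    Test_Search
    Test_Logout
    MsgBox "Regression suite finished"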

 

Sanity Testing

Sanity testing will be performed whenever cursory testing is sufficient to prove the application is functioning according to specifications. This level of testing is a subset of regression testing. It will normally include a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.

 

Security Testing

Testing of database and network software in order to keep company data and resources secure from mistaken/accidental users, hackers, and other malevolent attackers.

 

Software Testing

The process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and doesn't fail in an unacceptable manner. The organization and management of individuals or groups doing this work is not relevant. This term is often applied to commercial products such as internet applications. (contrast with independent verification and validation)

 

Stress Testing

Testing with the intent of determining how well a product performs when a load is placed on the system resources that nears and then exceeds capacity.

Stress testing is closely related to performance testing: it means subjecting a system to very high volumes ("stressing the system") to see what happens as capacity is approached and then exceeded.

 

System Integration Testing

Testing a specific hardware/software installation. This is typically performed on a COTS (commercial off-the-shelf) system or any other system comprised of disparate parts, where custom configurations and/or unique installations are the norm.

 

User Acceptance Testing (UAT)

UAT stands for User Acceptance Testing. This testing is carried out from the user's perspective, and it is usually done before the release.

 

Unit Testing

Unit Testing is the first level of dynamic testing and is first the responsibility of the developers and then of the testers. Unit testing is considered complete when the expected test results are met, or when any differences are explainable and acceptable.
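
A minimal sketch of a developer-level unit test in VBScript: Square is a hypothetical unit under test, and AssertEquals compares each actual result against the expected one, reporting any difference for the developer to explain or accept.

    ' Hypothetical unit under test
    Function Square(n)
        Square = n * n
    End Function

    ' Tiny assertion helper: reports expected vs. actual results
    Sub AssertEquals(expected, actual, label)
        If expected = actual Then
            MsgBox label & ": PASS"
        Else
            MsgBox label & ": FAIL (expected " & expected & ", got " & actual & ")"
        End If
    End Sub

    AssertEquals 9, Square(3), "Square of 3"
    AssertEquals 0, Square(0), "Square of 0"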

 

Usability Testing

Testing for 'user-friendliness'. Clearly this is subjective and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.

 

White Box Testing

Testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose.
