Friday, March 30, 2012

Testing Concepts

Hi, I have a Bachelor's degree in electronics engineering, but I became a software tester by chance, and I am very happy about it. I love this field because of the responsibility we feel when saying "the software is ready to face the customer"... no words for that. In my 5 years of software testing career I have worked with many people I consider software testing experts, and at the same time I have met hundreds of newbies, freshers and even experienced testers who struggle with the fundamentals of software testing. Today a newbie can find thousands of books and lots of websites on the fundamentals of software testing, but finding correct and accurate information in one place is very difficult. So I decided to collect that correct and accurate information and create a blog for newbies in software testing. I am starting this blog with the intention of helping newcomers to software testing, so experts' suggestions and corrections are welcome to improve it. Every week I will add some information to this blog.
First Week:-
Terms:-
1:- Bug/Defect/Fault: - A flaw in a component or system that can cause the component or system to fail to perform its required function, for example an incorrect statement or data definition. A Bug/Defect/Fault, if encountered during execution, may cause a failure of the component or system.
2:- Error/Mistake: - A human action that produces an incorrect result.
3:- Failure: - Deviation of the component or system from its expected delivery, service or result.
4:- Anomaly: - Any condition that deviates from expectation based on requirements specifications, design documents, user documents, standards etc. or from someone’s perception or experience.
5:-Quality: - The degree to which a component or system or process meets specified requirements and / or user/customer needs and expectations.
6:- Risk: - A factor that could result in future negative consequences; usually expressed as impact and likelihood.
Why testing is necessary?
Software that does not work correctly can lead to many problems, including loss of money, time or business reputation, and could even cause injury or death.
Causes of software defects:-
Human being -> can make -> Error/Mistake -> produces -> Defect/Bug/Fault -> executed in code -> causes a Failure
*Note: - Defects in systems, software or documents may result in failures, but not all defects do so
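To make this chain concrete, here is a tiny, made-up Python example; the rule and names are purely illustrative. The programmer's mistake (typing > instead of >=) introduces a defect into the code, and that defect only becomes a visible failure when the defective statement is executed with the boundary value.

```python
def is_eligible_to_vote(age):
    # Defect: the programmer's mistake was typing '>' instead of '>=',
    # so the boundary value 18 is handled incorrectly.
    return age > 18   # intended: age >= 18

# For most inputs the defect stays hidden...
print(is_eligible_to_vote(30))   # True  -> looks fine
print(is_eligible_to_vote(10))   # False -> looks fine

# ...and only causes a failure when the defective code is executed
# with the boundary value.
print(is_eligible_to_vote(18))   # False, but 18 should be eligible -> failure
```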
Defects occur because:-
1:- Human beings are fallible.
2:- Time pressure.
3:- Complex code.
4:- Complexity of infrastructure.
5:- Changed technologies.
6:- Many System interactions.
*Note: - Failures can be caused by environmental conditions as well: radiation, magnetism, electronic fields, and pollution can cause faults in firmware or influence the execution of software by changing Hardware conditions.
Role of Software testing:-
1:- Rigorous testing of systems and documentation can help to reduce the risk of problems occurring during operation and contribute to the quality of the software system, if defects found are corrected before the system is released for operational use.
2:- Software testing may also be required to meet contractual or legal requirements, or industry-specific standards.
Testing and Quality:-
With the help of testing, it is possible to measure the quality of software in terms of defects found, for both functional and non-functional software requirements and characteristics (e.g. reliability, usability, efficiency, maintainability and portability).
Testing can give confidence in the quality of the software if it finds few or no defects.
A properly designed test that passes reduces the overall level of risk in a system. When testing does find defects, the quality of the software system increases when those defects are fixed.
*Note: - This is an aspect of quality assurance.
Lessons should be learned from previous projects. By understanding the root causes of defects found in other projects, processes can be improved, which in turn should prevent those defects from reoccurring and, as a consequence, improve the quality of future systems.
How much testing is enough?
Deciding how much testing is enough should take account of the level of risk, including technical and business product and project risks, and project constraints such as time and budget.
It is important to define the scope of your testing before any testing is actually started.
At any stage within the SDLC the relevant documentation will help determine the scope of your testing.
The testing strategy or high-level test plan document is quite often used to define the exit criteria.
As well as saying what you will test, the exit criteria should include details such as how thoroughly the testing needs to have been done.
For example, it may state that you need to have tested everything and found no serious errors, although minor errors may be acceptable.
Testing is an iterative process, and as it is impossible to test everything we could just keep going round the loop indefinitely.
If the exit criteria have been stated in advance, then testing can stop when the exit criteria have been met.
In other words, we can say that the exit criteria are the criteria to be met in order to stop testing.
The exit criteria will therefore depend on the following: -
· The risk to the business process of the project.
· The time constraints within the project.
· The resource constraints within the project.
· The budget of the project.
However, in practice it is not easy to state all these requirements, as testing seldom goes to plan. It may take longer to fix errors than originally scheduled, the implementation date may be delayed, or staffing levels may be far too low to begin testing.
Whatever is planned initially, test managers will need to make decisions during the testing phase to keep the project on track. It is the test manager's responsibility to provide quantifiable information on test coverage and the problems found, so that informed decisions can be made as to whether the project is ready for implementation or not. The decision must therefore be objective.
Ultimately there may still be outstanding problems, and it may not have been possible to test lower-risk areas, but as long as the system is deemed fit for purpose then the job has been done.
Some organizations might decide that when 100% testing has been done and all critical defects fixed, then the system can be implemented.
No matter what criteria are used to decide the implementation date, it all still depends on the risk to the business process.
But testing should provide sufficient information to stakeholders to make informed decisions about the release of the software or system being tested, for the next development step or handover to customers
What is testing?
Terms:-
1:- Debugging: - The process of finding, analyzing and removing the causes of failures in software.
2:- Requirement: - A condition or capability needed by a user to solve a problem or achieve an objective that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document.
3:-Review: - An evaluation of a product or project status to find out discrepancies from planned results and to recommend improvements. Examples include management review, informal review, technical review, inspection, and walkthrough.
4:-Test case:- A set of input values, execution preconditions, expected results and execution post conditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement.
5:-Testing:- The process consisting of all life cycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.
6:-Test Objective: - A reason or purpose for designing and executing a test.
7:-Test basis: - All documents from which the requirements of a component or system can be inferred. The documentation on which the test cases are based. If a document can be amended only by way of formal amendment procedure, then the test basis is called a frozen test basis.
Background: - A common perception of testing is that it only consists of running tests, i.e. executing the software. This is part of testing, but not all of the testing activities. Test activities exist before and after test execution.
Before: - Test Planning, Designing Test cases etc.
After: - Reporting a bug, Closure of bug etc.
1:-Static testing: -Testing of a component or system at specification or implementation level without execution of that software, e.g. reviews or static code analysis.
2:-Dynamic testing: -Testing that involves the execution of the software of a component or system.
Both dynamic testing and static testing can be used as a means for achieving similar objectives, and will provide information in order to improve both the system to be tested, and the development and testing processes.
There can be different test objectives:
1:- finding defects.
2:- gaining confidence about the level of quality and providing information.
3:- preventing defects.
*Note: - The thought process of designing tests early in the life cycle can help to prevent defects from being introduced into code. Reviews of documents (e.g. requirements) also help to prevent defects appearing in the code.
Different viewpoints in testing take different objectives into account.
For Example:-
1:-In development testing (e.g. component, integration and system testing), the main objective may be to cause as many failures as possible so that defects in the software are identified and can be fixed.
2:-In acceptance testing, the main objective may be to confirm that the system works as expected, to gain confidence that it has met the requirements.
3:-In some cases the main objective of testing maybe to assess the quality of the software (with no intention of fixing defects), to give information to stakeholders of the risk of releasing the system at a given time.
4:-Maintenance testing often includes testing that no new defects have been introduced during development of the changes.
5:-During operational testing, the main objective may be to assess system characteristics such as reliability or availability.
1:-Operational environment: - Hardware and software products installed at users’ or customers’ sites where the component or system under test will be used. The software may include operating systems, database management systems, and other applications.
2:-Operational testing: -Testing conducted to evaluate a component or system in its operational environment.
Debugging and testing are different.
1:-Testing can show failures that are caused by defects.
2:-Debugging is the development activity that identifies the cause of a defect, repairs the code and checks that the defect has been fixed correctly.
3:-Re-testing or confirmation testing: -Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.
Confirmation testing by a tester ensures that the fix does indeed resolve the failure. The responsibility for each activity is very different, i.e. testers test and developers debug.
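As a small, hypothetical illustration of this split in responsibilities: the tester's test case below failed against the defective build, the developer debugged and repaired the code (the fixed version is shown), and the tester then re-runs the very same test case as confirmation testing. The function and figures are invented, and Python's built-in unittest module is assumed only to keep the sketch self-contained.

```python
import unittest

def apply_discount(price, percent):
    # Repaired by the developer during debugging; the defective build
    # returned 'price - percent' instead of subtracting the computed discount.
    return price - price * percent / 100

class TestApplyDiscount(unittest.TestCase):
    # The same test case that exposed the failure is re-run unchanged
    # to confirm that the fix really resolves it (confirmation testing).
    def test_ten_percent_discount(self):
        self.assertEqual(apply_discount(200, 10), 180)

if __name__ == "__main__":
    unittest.main()
```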
General testing principles
Terms:-
1:-Exhaustive testing: -A test approach in which the test suite comprises all combinations of input values and preconditions. Also known as complete testing.
Principle 1 – Testing shows presence of defects
Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, it is not a proof of correctness.
Principle 2 – Exhaustive testing is impossible
Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus testing efforts.
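A quick back-of-the-envelope calculation shows why. Assume a very small input form: one 32-bit number field, a country list of roughly 200 entries and one checkbox (the sizes are assumptions for illustration only):

```python
# Independent inputs multiply, so even a tiny form explodes:
int_field  = 2 ** 32      # any 32-bit integer value
country    = 200          # assumed size of the country list
newsletter = 2            # checkbox on/off

combinations = int_field * country * newsletter
print(f"{combinations:,} input combinations")              # 1,717,986,918,400

# Even at 1,000 automated tests per second this is decades of execution.
years = combinations / 1000 / (3600 * 24 * 365)
print(f"about {years:,.0f} years at 1,000 tests/second")   # about 54 years
```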
Principle 3 – Early testing
Testing activities should start as early as possible in the software or system development life cycle, and should be focused on defined objectives.
Principle 4 – Defect clustering
A small number of modules contain most of the defects discovered during pre-release testing, or are responsible for the most operational failures.
Principle 5 – Pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new defects. To overcome this “pesticide paradox”, the test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to potentially find more defects.
Principle 6 – Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.
Principle 7 – Absence-of-errors fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfill the users’ needs and expectations.
Fundamental test process
Terms:-
1:-Exit criteria: - The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used to report against and to plan when to stop testing.
2:-Incident: - Any event occurring that requires investigation.
3:-Regression testing: -Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed.
4:-Test condition: - An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute, or structural element.
5:- Coverage: - The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.
6:- Test data: - Data that exists (for example, in a database) before a test is executed, and that affects or is affected by the component or system under test.
7:- Test execution: - The process of running a test on the component or system under test, producing actual result(s).
8:- Test log: - A chronological record of relevant details about the execution of tests.
9:- Test plan: - A document describing the scope, approach, resources and schedule of intended test activities. It identifies amongst others test items, the features to be tested, the testing tasks, who will do each task, degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used, and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.
10:- Test procedure: - A sequence of actions for the execution of a test. Also known as test script or manual test script.
11:- Test policy: - A high level document describing the principles, approach and major objectives of the organization regarding testing.
12:- Test strategy: - A high-level description of the test levels to be performed and the testing within those levels for an organization or programme (one or more projects). Why do a Test Strategy? The Test Strategy is the plan for how you are going to approach testing. It is like a project charter that tells the world how you are going to approach the project. You may have it all in your head, and if you are the only person doing the work it might be OK. If however you do not have it all in your head, or if others will be involved, you need to map out the ground rules.
13:- Test suite: - A set of several test cases for a component or system under test, where the post condition of one test is often used as the precondition for the next one. Also known as test set.
14:- Test summary report: - A document summarizing testing activities and results. It also contains an evaluation of the corresponding test items against exit criteria.
15:- Testware:- Artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing.
16:- Test harness: - A test environment comprised of stubs and drivers needed to execute a test.
Background
The most visible part of testing is executing tests. But to be effective and efficient, test plans should also include time to be spent on planning the tests, designing test cases, preparing for execution and evaluating status.
The fundamental test process consists of the following main activities:
1:- Planning and control.
2:- Analysis and design.
3:- Implementation and execution.
4:- Evaluating exit criteria and reporting.
5:- Test closure activities.
*Note:-Although logically sequential, the activities in the process may overlap or take place concurrently.
Test Planning and control
Test planning:-
1:-Test planning is the activity of verifying the mission of testing.
2:-Defining the objectives of testing.
3:-Defining the specification of test activities in order to meet the objectives and mission.
Test control:-
1:-Test control is the ongoing activity of comparing actual progress against the plan.
2:-Reporting the status, including deviations from the plan.
3:-Involves taking actions necessary to meet the mission and objectives of the project.
In order to control testing, it should be monitored throughout the project.
*Note: - Test planning takes into account the feedback from monitoring and control activities.
Test analysis and design
Test analysis and design is the activity where general testing objectives are transformed into tangible test conditions and test cases.
Test analysis and design has the following major tasks:
1:- Reviewing the test basis (such as requirements, architecture, design, interfaces).
2:- Evaluating testability of the test basis and test objects.
3:- Identifying and prioritizing test conditions based on analysis of test items, the specification, behavior and structure.
4:- Designing and prioritizing test cases.
5:- Identifying necessary test data to support the test conditions and test cases.
6:- Designing the test environment set-up and identifying any required infrastructure and tools.
Test implementation and execution
Test implementation and execution is the activity where test procedures or scripts are specified by combining the test cases in a particular order and including any other information needed for test execution, the environment is set up and the tests are run.
Test implementation and execution has the following major tasks:
1:- Developing, implementing and prioritizing test cases.
2:- Developing and prioritizing test procedures, creating test data and, optionally, preparing test harnesses and writing automated test scripts.
3:- Creating test suites from the test procedures for efficient test execution.
4:- Verifying that the test environment has been set up correctly.
5:- Executing test procedures either manually or by using test execution tools, according to the planned sequence.
6:- Logging the outcome of test execution and recording the identities and versions of the software under test, test tools and testware.
7:- Comparing actual results with expected results.
8:- Reporting discrepancies as incidents and analyzing them in order to establish their cause (e.g. a defect in the code, in specified test data, in the test document, or a mistake in the way the test was executed).
9:- Repeating test activities as a result of action taken for each discrepancy. For example, re execution of a test that previously failed in order to confirm a fix (confirmation testing), execution of a corrected test and/or execution of tests in order to ensure that defects have not been introduced in unchanged areas of the software or that defect fixing did not uncover other defects (regression testing).
Evaluating exit criteria and reporting
Evaluating exit criteria is the activity where test execution is assessed against the defined objectives. This should be done for each test level.
Evaluating exit criteria has the following major tasks:
1:- Checking test logs against the exit criteria specified in test planning (a small sketch of this check follows this list).
2:- Assessing if more tests are needed or if the exit criteria specified should be changed.
3:- Writing a test summary report for stakeholders.
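Here is a minimal sketch of task 1, assuming purely hypothetical thresholds and test-log figures; in a real project these values come from the test plan agreed with the stakeholders.

```python
# Assumed exit criteria from the test plan and a summarized test log.
EXIT_CRITERIA = {
    "min_executed_pct": 100,          # all planned tests executed
    "min_pass_rate_pct": 95,          # at least 95% of executed tests pass
    "max_open_critical_defects": 0,   # no critical defects left open
}

test_log = {"planned": 120, "executed": 120, "passed": 116,
            "open_critical_defects": 0}

executed_pct = 100 * test_log["executed"] / test_log["planned"]
pass_rate    = 100 * test_log["passed"] / test_log["executed"]

met = (executed_pct >= EXIT_CRITERIA["min_executed_pct"]
       and pass_rate >= EXIT_CRITERIA["min_pass_rate_pct"]
       and test_log["open_critical_defects"] <= EXIT_CRITERIA["max_open_critical_defects"])

print(f"executed: {executed_pct:.0f}%, pass rate: {pass_rate:.1f}%, "
      f"exit criteria met: {met}")
```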
Test closure activities
Test closure activities collect data from completed test activities to consolidate experience, testware, facts and numbers. For example, when a software system is released, a test project is completed (or cancelled), a milestone has been achieved, or a maintenance release has been completed.
Test closure activities include the following major tasks:
1:- Checking which planned deliverables have been delivered, the closure of incident reports or raising of change records for any that remain open, and the documentation of the acceptance of the system.
2:- Finalizing and archiving testware, the test environment and the test infrastructure for later reuse.
3:- Handover of testware to the maintenance organization.
4:- Analyzing lessons learned for future releases and projects, and the improvement of test maturity.
The psychology of testing
Terms:-
Error guessing:- A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them.
Independence of testing: - Separation of responsibilities, which encourages the accomplishment of objective testing.
The mindset to be used while testing and reviewing is different to that used while developing software. With the right mindset developers are able to test their own code, but separation of this responsibility to a tester is typically done to help focus effort and provide additional benefits, such as an independent view by trained and professional testing resources. Independent testing may be carried out at any level of testing.
A certain degree of independence (avoiding the author bias) is often more effective at finding defects and failures.
Several levels of independence can be defined:
1:- Tests designed by the person(s) who wrote the software under test (low level of independence).
2:- Tests designed by another person(s) (e.g. from the development team).
3:- Tests designed by a person(s) from a different organizational group (e.g. an independent test team) or test specialists (e.g. usability or performance test specialists).
4:- Tests designed by a person(s) from a different organization or company (i.e. outsourcing or certification by an external body).
Identifying failures during testing may be perceived as criticism against the product and against the author. Testing is, therefore, often seen as a destructive activity, even though it is very constructive in the management of product risks. Looking for failures in a system requires curiosity, professional pessimism, a critical eye, attention to detail, good communication with development peers, and experience on which to base error guessing.
If errors, defects or failures are communicated in a constructive way, bad feelings between the testers and the analysts, designers and developers can be avoided. This applies to reviewing as well as in testing.
The tester and test leader need good interpersonal skills to communicate factual information about defects, progress and risks, in a constructive way. For the author of the software or document, defect information can help them improve their skills. Defects found and fixed during testing will save time and money later, and reduce risks.
Communication problems may occur, particularly if testers are seen only as messengers of unwanted news about defects. However, there are several ways to improve communication and relationships between testers and others:
1:- Start with collaboration rather than battles – remind everyone of the common goal of better quality systems.
2:- Communicate findings on the product in a neutral, fact-focused way without criticizing the person who created it, for example, write objective and factual incident reports and review findings.
3:- Try to understand how the other person feels and why they react as they do.
4:- Confirm that the other person has understood what you have said and vice versa.
Second Week:-
Software development models
Terms:-
Commercial off-the-shelf software (COTS): - A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format.
Bespoke software: - Software developed specifically for a set of users or customers. The opposite is off-the-shelf software.
Iterative development model: - A development life cycle where a project is broken into a usually large number of iterations. Iteration is a complete development loop resulting in a release (internal or external) of an executable product, a subset of the final product under development, which grows from iteration to iteration to become the final product.
Validation: - Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.
Verification: - Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.
V-model: - A framework to describe the software development life cycle activities from requirements specification to maintenance. The V-model illustrates how testing activities can be integrated into each phase of the software development life cycle. Also known as the sequential development model.
Software Development Life Cycle
The Software Development Life Cycle (SDLC) is the entire process of formal, logical steps taken to develop a software product. The phases of SDLC can vary somewhat but generally include the following:
• Requirements collection
• Feasibility & cost/benefits analysis
• Detailed specification of the software requirements
• Software design
• Programming
• Testing
• Implementation
• And finally, maintenance
There are several methodologies or models that can be used to guide the software development lifecycle. Some of these include:
Waterfall - Software Development Model
This is also called the Classic Life Cycle Model, the Linear Sequential Model, or the Waterfall Method. This model has the following activities.
1. System/Information Engineering and Modeling
2. Software Requirements Analysis
3. Systems Analysis and Design
4. Code Generation
5. Testing
6. Maintenance
1) System/Information Engineering and Modeling
Since software development is a large process, work begins by establishing requirements for all system elements and then allocating some subset of these requirements to software. This system view is necessary when software must interface with other elements such as hardware, people and other resources. The system is the essential context in which the software exists. In some cases, for maximum output, the system should be re-engineered and spruced up. Once the ideal system is designed according to the requirements, the development team studies the software requirements for the system.
2) Software Requirement Analysis
Software Requirement Analysis is also known as feasibility study. In this requirement analysis phase, the development team visits the customer and studies their system requirement. They examine the need for possible software automation in the given software system. After feasibility study, the development team provides a document that holds the different specific recommendations for the candidate system. It also consists of personnel assignments, costs of the system, project schedule and target dates.
The requirements analysis and information gathering process is intensified and focused specifically on software. To understand the nature of the programs to be built, the system analyst must study the information domain for the software, as well as understand the required function, behavior, performance and interfacing. The main purpose of the requirement analysis phase is to find the need and to define the problem that needs to be solved.
3) System Analysis and Design
In the System Analysis and Design phase, the whole software development process, the overall software structure and its layout are defined. In the case of client/server processing technology, the number of tiers required for the package architecture, the database design, the data structure design, etc. are all defined in this phase. After the design phase, a software development model is created. Analysis and design are very important in the whole development cycle. Any fault in the design phase can be very expensive to fix later in the software development process. In this phase, the logical system of the product is developed.
4) Code Generation
In the Code Generation phase, the design must be translated into a machine-readable form. If the design of the software product is done in a detailed manner, code generation can be achieved without much complication. Programming tools like compilers, interpreters and debuggers are used for generating the code. Different high-level programming languages like C, C++, Pascal and Java are used for coding. The right programming language is chosen according to the type of application.
5) Testing
After the code generation phase, testing of the software program begins. Different testing methods are available to detect the defects that were introduced during the previous phases. A number of testing tools and methods are already available for this purpose.
6) Maintenance
Software will definitely go through change once it is delivered to the customer. There are many reasons for change; for example, change could happen due to unexpected input values into the system. In addition, changes in the system directly affect the software's operation. The software should be developed to accommodate changes that could happen during the post-development period.
V & V (Verification & Validation) - Software Development Model (V-Model)
Verification and Validation Activities:-
Verification is to ensure that products conform to their specified requirements:
"Are we building the product right?”
Validation is to ensure that software conforms to customer requirements, that is: validation is end-to-end verification:
"Are we building the right product?”
During the development life cycle a lot of verification activities take place.
Tracing
Tracing defines a relation between products of the development process, for example between a software requirement and user requirements.
•Forward traceability means that each input to a phase is traceable (can be related) to an output of that phase.
•Backward traceability means the opposite: each output of a phase is traceable to an input of that phase.
It is necessary to trace:
• Software requirements versus user requirements,
• design components versus software requirements,
• Unit tests versus detailed design modules,
• Integration tests versus architectural components,
• System tests versus software requirements,
• Acceptance tests versus user requirements.
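As a small sketch of how such tracing can be checked, here is a made-up traceability matrix kept as a simple mapping; the requirement and test-case IDs are invented, and in practice this information usually lives in a test management tool.

```python
# Invented IDs for illustration only.
user_requirements = ["UR-01", "UR-02", "UR-03"]

trace = {                 # acceptance test case -> user requirement it covers
    "AT-001": "UR-01",
    "AT-002": "UR-01",
    "AT-003": "UR-02",
    "AT-004": "UR-99",    # backward traceability breaks: unknown requirement
}

covered = set(trace.values())
print("Requirements without an acceptance test:",
      [r for r in user_requirements if r not in covered])           # ['UR-03']
print("Tests that trace to no known requirement:",
      [t for t, r in trace.items() if r not in user_requirements])  # ['AT-004']
```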
TERMS:-
1:-Test Methodology: Test methodology is the technical approach to how the software is tested. Typically, people refer to black-box and white-box as methodologies.
Testing Life Cycle
The lifecycle ensures that all the relevant inputs are obtained, the planning is adequately carried out and the executions are as per plan.
In addition, the results are obtained, reviewed and monitored. The lifecycle also defines the interfaces to the overall quality management processes and to the project delivery phases. The testing life cycle can be broadly classified into three different life cycle models, depending upon the type of application and the test strategy used:
1. Application Testing Life Cycle
2. Automation Testing Life Cycle (Future Learning)
3. Package Testing Life Cycle (Future Learning)
Application Testing Life Cycle
Test Requirements
• Requirement Specification documents
• Functional Specification documents
• Design Specification documents (use cases, etc)
• Use case Documents
• Test Traceability Matrix for identifying Test Coverage
Test Planning
• Test Scope, Test Environment
• Different Test phase and Test Methodologies
• Manual and Automation Testing
• Defect Mgmt, Configuration Mgmt, Risk Mgmt. Etc
• Evaluation & identification – Test, Defect tracking tools
Test Environment Setup
• Test Bed installation and configuration
• Network connectivity
• All the Software/ tools Installation and configuration
• Coordination with Vendors and others
Test Design
• Test Traceability Matrix and Test coverage
• Test Scenarios Identification & Test Case preparation
• Test data and Test scripts preparation
• Test case reviews and Approval
• Baselining under Configuration Management
Test Automation
• Automation requirement identification
• Tool Evaluation and Identification.
• Designing or identifying Framework and scripting
• Script Integration, Review and Approval
• Baselining under Configuration Management
Test Execution and Defect Tracking
• Executing Test cases
• Testing Test Scripts
• Capture, review and analyze Test Results
• Raising defects and tracking them to closure
Test Reports and Acceptance
• Test summary reports
• Test Metrics and process Improvements made
• Build release
• Receiving acceptance
Advantages of automated software testing (AST) using the above life cycle:
· High Quality to market
· Low Time to market
· Reduced testing time
· Consistent test procedures
· Reduced QA costs
· Improved testing productivity
· Improved product quality
AST Requirements
• Requirement/Functional Specification documents
• Design Specification documents
• Test Traceability Matrix for identifying Test Coverage
• Functional/ Non-Functional and test data requirements
• Test phases to be automated and % of automation
AST Planning
• Automated Software Testing (AST) Scope
• Tool Evaluation and identification
• AST Methodologies and Framework
• Preparing and baselining the scripting standard and AST Plan
AST Environment Setup
• AST Test Bed installation and configuration
• Network connectivity
• All the Software/ tools Licenses, Installation and configuration
• Coordination with Vendors and others
AST Design
• Test Script and test data preparation
• Test scripts / test data review and unit testing
• Test script integration and integration testing
• Baselining under Configuration Management
AST Execution and Defect Tracking
• Executing the AST Test Suite
• Capture, review and analyze Test Results
• Defects reporting and tracking for its closure
AST Maintenance Reports and Acceptance
• AST Results and summary reports
• Test Metrics and process Improvements made
• Baselining of AST test suites/scripts/test data etc. for the maintenance phase
• Getting Acceptance
Package Testing Life Cycle
The testing life cycle followed for packaged applications like Oracle, SAP, Siebel, CRM tools, supply chain management applications, etc. is outlined in the phases below.
Project Preparation
• Identifying the business processes
• Organization of the project team
• Setting up the communication channel
• Kick start the project
• Identifying the infrastructure availability
• Reporting structure and project co-ordination
Business Blueprinting
• Requirement Study
• Identifying the business rules
• Mapping the business processes
• Identify the test conditions
• Setting up the test environment for the system
• Forms the input needed for the configurations
Realization
• Configuration & Customization
• Activating the business rules
• Development of certain flows
• Identifying certain flows not in the standard
• Forming the system configurations
• Unit Testing
Final Preparation
• Uploading the master data
• End user training
• Simulating all the flows
• Tie-up between interfaces
• Operational Readiness Testing and UAT
• Sign-off
Cut over, Go-live and Support
• Migrate to new system
• Transfer all legacy business applications
• Communicate deployment status.
• Support new system
• Transfer ownership to system owner
• Take customer acceptance after production deployment
Testing throughout the software life cycle
  1. Testing does not exist in isolation; test activities are related to software development activities.
  2. Different development life cycle models need different approaches to testing.
Third Week:-
In any life cycle model, there are several characteristics of good testing:
  1. For every development activity there is a corresponding testing activity.
  2. Each test level has test objectives specific to that level.
  3. The analysis and design of tests for a given test level should begin during the corresponding development activity.
  4. Testers should be involved in reviewing documents as soon as drafts are available in the development life cycle.
Test Level
TERMS:-
1:-Alpha testing: - Simulated or actual operational testing by potential users/customers or an independent test team at the developers’ site, but outside the development organization. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing.
2:-Beta testing/ Field testing: - Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing for off-the-shelf software in order to acquire feedback from the market.
3:- Component: - A minimal software item that can be tested in isolation.
4:-Component testing: - The testing of individual software components. (Also known as unit, module or program testing).
5:- Driver: - A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system.
6:- Functional requirement: - A requirement that specifies a function that a component or system must perform.
7:-Integration: - The process of combining components or systems into larger assemblies.
8:-Integration testing: - Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems. Also known as component integration testing or system integration testing.
9:- Non-functional requirement:- A requirement that does not relate to functionality, but to attributes such as reliability, efficiency, usability, maintainability and portability.
10:- Robustness: - The degree to which a component or system can function correctly in the presence of invalid inputs or stressful environmental conditions.
11:-Robustness testing: - Testing to determine the robustness of the software product.
12:- Stub: - A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component.
13:- System: - A collection of components organized to accomplish a specific function or set of functions.
14:-System testing: - The process of testing an integrated system to verify that it meets specified requirements.
15:- Test level: - A group of test activities that are organized and managed together. A test level is linked to the responsibilities in a project. Examples of test levels are component test, integration test, system test and acceptance test.
16:- Test driven development: - A way of developing software where the test cases are developed, and often automated, before the software is developed to run those test cases.
17:- Test environment: - An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test.
18:- Acceptance testing: - Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.
Background
For each of the test levels, the following can be identified:-
  1. Their generic objectives.
  2. The work product(s) being referenced for deriving test cases (i.e. the test basis).
  3. The test objects (i.e. what is being tested).
  4. Typical defects and failures to be found.
  5. Test harness requirements.
  6. Tool support.
  7. Specific approaches and responsibilities.
Component testing:-
  1. Component testing searches for defects in, and verifies the functioning of, software (e.g. modules, programs, objects, classes, etc.) that are separately testable.
  2. It may be done in isolation from the rest of the system, depending on the context of the development life cycle and the system.
  3. Stubs, drivers and simulators may be used.
  4. Component testing may include testing of functionality and specific non-functional characteristics, such as resource-behavior (e.g. memory leaks) or robustness testing, as well as structural testing (e.g. branch coverage). Test cases are derived from work products such as a specification of the component, the software design or the data model.
  5. Typically, component testing occurs with access to the code being tested and with the support of the development environment, such as a unit test framework or debugging tool, and, in practice, usually involves the programmer who wrote the code.
  6. Defects are typically fixed as soon as they are found, without formally recording incidents.
  7. One approach to component testing is to prepare and automate test cases before coding. This is called a test-first approach or test-driven development. This approach is highly iterative and is based on cycles of developing test cases, then building and integrating small pieces of code, and executing the component tests until they pass.
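Here is a minimal sketch of that test-first cycle, using an invented ShoppingCart component and assuming Python's built-in unittest framework: the component test is written first, and only passes once just enough of the component has been built.

```python
import unittest

# Step 1: the component test is written (and automated) before the code.
class TestShoppingCart(unittest.TestCase):
    def test_total_of_two_items(self):
        cart = ShoppingCart()
        cart.add("book", 12.50)
        cart.add("pen", 1.50)
        self.assertEqual(cart.total(), 14.00)

# Step 2: a small piece of code is built and integrated until the test passes,
# then the cycle repeats for the next behaviour.
class ShoppingCart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

if __name__ == "__main__":
    unittest.main()
```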
Integration testing:-
Integration testing tests interfaces between components, interactions with different parts of a system, such as the operating system, file system, hardware, or interfaces between systems.
There may be more than one level of integration testing and it may be carried out on test objects of varying size. For example:
1. Component integration testing tests the interactions between software components and is done after component testing.
2. System integration testing tests the interactions between different systems and may be done after system testing. In this case, the developing organization may control only one side of the interface, so changes may be destabilizing. Business processes implemented as workflows may involve a series of systems. Cross-platform issues may be significant.
The greater the scope of integration, the more difficult it becomes to isolate failures to a specific component or system, which may lead to increased risk.
  1. Systematic integration strategies may be based on the system architecture (such as top-down and bottom-up), functional tasks, transaction processing sequences, or some other aspect of the system or component.
  2. In order to reduce the risk of late defect discovery, integration should normally be incremental rather than “big bang”.
  3. Testing of specific non-functional characteristics (e.g. performance) may be included in integration testing.
  4. At each stage of integration, testers concentrate solely on the integration itself. For example, if they are integrating module A with module B they are interested in testing the communication between the modules, not the functionality of either module (a small sketch follows this list). Both functional and structural approaches may be used.
  5. Ideally, testers should understand the architecture and influence integration planning. If integration tests are planned before components or systems are built, they can be built in the order required for most efficient testing.
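A small, invented sketch of point 4: the component integration test below checks only the communication across the interface between an order-processing module and a payment gateway. The gateway is replaced by a stub because it is not integrated yet; all names and behaviour are assumptions made for illustration.

```python
import unittest

class OrderProcessor:
    """Module A: forwards the order across the interface to the gateway."""
    def __init__(self, gateway):
        self.gateway = gateway

    def checkout(self, order_id, amount_cents):
        return self.gateway.charge(order_id=order_id, amount_cents=amount_cents)

class PaymentGatewayStub:
    """Stands in for module B, which is not integrated yet; records the calls."""
    def __init__(self):
        self.calls = []

    def charge(self, order_id, amount_cents):
        self.calls.append((order_id, amount_cents))
        return "ACCEPTED"

class TestOrderPaymentIntegration(unittest.TestCase):
    def test_amount_is_passed_across_the_interface_in_cents(self):
        stub = PaymentGatewayStub()
        processor = OrderProcessor(gateway=stub)
        processor.checkout("ORD-7", 1999)
        # We test the communication between the modules,
        # not the functionality inside either of them.
        self.assertEqual(stub.calls, [("ORD-7", 1999)])

if __name__ == "__main__":
    unittest.main()
```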
System testing:-
TERMS:-
  1. Decision table: - A table showing combinations of inputs and/or stimuli (causes) with their associated outputs and/or actions (effects), which can be used to design test cases.
  2. Decision table testing: - A black box test design technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table.
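To show how a decision table drives test design, here is a made-up business rule (free shipping for members or for orders of at least 100, but never for oversized items). Each row of the table, a combination of causes and the expected effect, becomes one test case; the rule and values are assumptions for illustration only.

```python
# Invented business rule under test.
def free_shipping(member, total, oversized):
    return (member or total >= 100) and not oversized

# Decision table: each row (combination of causes -> expected effect) is one test case.
decision_table = [
    # member, total, oversized, expected
    (True,    50,  False, True),
    (False,  120,  False, True),
    (False,   50,  False, False),
    (True,   120,  True,  False),
]

for member, total, oversized, expected in decision_table:
    actual = free_shipping(member, total, oversized)
    result = "PASS" if actual == expected else "FAIL"
    print(f"member={member}, total={total}, oversized={oversized} -> {result}")
```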
    • System testing is concerned with the behavior of a whole system/product as defined by the scope of a development project or programme.
    • In system testing, the test environment should correspond to the final target or production environment as much as possible in order to minimize the risk of environment-specific failures not being found in testing.
    • System testing may include tests based on risks and/or on requirements specifications, business processes, use cases, or other high level descriptions of system behavior, interactions with the operating system, and system resources.
    • System testing should investigate both functional and non-functional requirements of the system.
    • Requirements may exist as text and/or models.
    • Testers also need to deal with incomplete or undocumented requirements.
    • System testing of functional requirements starts by using the most appropriate specification-based (black-box) techniques for the aspect of the system to be tested. For example, a decision table may be created for combinations of effects described in business rules. Structure-based techniques (white-box) may then be used to assess the thoroughness of the testing with respect to a structural element, such as menu structure or web page navigation.
    • An independent test team often carries out system testing.

Types of system testing

The following examples are different types of testing that should be considered during System testing:
    • User interface testing
    • Usability testing
    • Performance testing
    • Compatibility testing
    • Error handling testing
    • Load testing
    • Volume testing
    • Stress testing
    • User help testing
    • Security testing
    • Scalability testing
    • Capacity testing
    • Sanity testing
    • Smoke testing
    • Exploratory testing
    • Ad hoc testing
    • Regression testing
    • Reliability testing
    • Recovery testing
    • Installation testing
    • Idempotency testing
    • Maintenance testing
    • Accessibility testing
Acceptance testing:-
  1. Acceptance testing is often the responsibility of the customers or users of a system; other stakeholders may be involved as well.
  2. The goal in acceptance testing is to establish confidence in the system, parts of the system or specific non-functional characteristics of the system.
  3. Finding defects is not the main focus in acceptance testing. Acceptance testing may assess the system’s readiness for deployment and use, although it is not necessarily the final level of testing. For example, a large-scale system integration test may come after the acceptance test for a system.
Acceptance testing may occur as more than just a single test level, for example:
  1. A COTS software product may be acceptance tested when it is installed or integrated.
  2. Acceptance testing of the usability of a component may be done during component testing.
  3. Acceptance testing of a new functional enhancement may come before system testing.
Typical forms of acceptance testing include the following:
User acceptance testing:-
Typically verifies the fitness for use of the system by business users.
Operational (acceptance) testing:-
The acceptance of the system by the system administrators, including:
  1. Testing of backup/restore.
  2. Disaster recovery.
  3. User management.
  4. Maintenance tasks.
  5. Periodic checks of security vulnerabilities.
Contract and regulation acceptance testing:-
Contract acceptance testing is performed against a contract’s acceptance criteria for producing custom-developed software. Acceptance criteria should be defined when the contract is agreed.
Regulation acceptance testing is performed against any regulations that must be adhered to, such as governmental, legal or safety regulations.
Alpha and beta (or field) testing:-
Developers of market, or COTS, software often want to get feedback from potential or existing customers in their market before the software product is put up for sale commercially.
Alpha testing is performed at the developing organization’s site.
Beta testing, or field testing, is performed by people at their own locations. Both are performed by potential customers, not the developers of the product.
Test Type:-
TERMS:-
1:-Black-box testing: Testing, either functional or non-functional, without reference to the internal structure of the component or system.
2:- Code coverage:- An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage or condition coverage.
3:-Functional testing: - Testing based on an analysis of the specification of the functionality of a component or system.
4:- Interoperability: - The capability of the software product to interact with one or more specified components or systems.
5:-Interoperability testing: - The process of testing to determine the interoperability of a software product.
6:-Load testing: - A test type concerned with measuring the behavior of a component or system with increasing load, e.g. number of parallel users and/or numbers of transactions to determine what load can be handled by the component or system.
7:-Maintenance: - Modification of a software product after delivery to correct defects, to improve performance or other attributes, or to adapt the product to a modified environment.
8:-Maintenance testing:- Testing the changes to an operational system or the impact of a changed environment to an operational system.
9:-Maintainability: - The ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment.
10:-Maintainability testing: - The process of testing to determine the maintainability of a software product.
11:-Performance: - The degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate. Also known as efficiency.
12:-Performance testing: - The process of testing to determine the performance of a software product. Also known as efficiency testing.
13:-Portability: - The ease with which the software product can be transferred from one hardware or software environment to another.
14:-Portability testing: - The process of testing to determine the portability of a software product.
15:-Reliability: - The ability of the software product to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations.
16:-Reliability testing: - The process of testing to determine the reliability of a software product.
17:-Security:- Attributes of software products that bear on its ability to prevent unauthorized access, whether accidental or deliberate, to programs and data
18:-Security testing: - Testing to determine the security of the software product. See also functionality testing.
19:-Specification-based testing:-Testing, either functional or non-functional, without reference to the internal structure of the component or system.
20:- Stress testing: - Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements.
21:-White-box testing: - Testing based on an analysis of the internal structure of the component or system. Also known as structural testing.
22:-Usability:- The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions.
23:-Usability testing:- Testing to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to the users under specified conditions.
A group of test activities can be aimed at verifying the software system (or a part of a system) based on a specific reason or target for testing.
A test type is focused on a particular test objective, which could be the testing of a function to be performed by the software; a non-functional quality characteristic, such as reliability or usability, the structure or architecture of the software or system; or related to changes, i.e. confirming that defects have been fixed (confirmation testing) and looking for unintended changes (regression testing).
Testing of function (functional testing):-
  1. The functions that a system, subsystem or component are to perform may be described in work products such as a requirements specification, use cases, or a functional specification, or they may be undocumented.
  2. The functions are “what” the system does.
  3. Functional tests are based on functions and features (described in documents or understood by the testers) and their interoperability with specific systems, and may be performed at all test levels (e.g. tests for components may be based on a component specification).
  4. Specification-based techniques may be used to derive test conditions and test cases from the functionality of the software or system.
  5. Functional testing considers the external behavior of the software (black-box testing); a small sketch follows this list.
  6. A type of functional testing, security testing, investigates the functions (e.g. a firewall) relating to detection of threats, such as viruses, from malicious outsiders.
  7. Another type of functional testing, interoperability testing, evaluates the capability of the software product to interact with one or more specified components or systems.
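As a tiny sketch of point 5, the black-box tests below are derived only from an assumed specification sentence ("passwords of 8 to 20 characters are accepted") and never refer to the internal structure of the code; the rule and function are invented for illustration.

```python
import unittest

def validate_password(password):
    # Component under test; the tests below do not rely on this internal logic.
    return 8 <= len(password) <= 20

class TestPasswordRule(unittest.TestCase):
    # Derived from the specification, not from the code structure.
    def test_minimum_length_is_accepted(self):
        self.assertTrue(validate_password("a" * 8))

    def test_shorter_than_minimum_is_rejected(self):
        self.assertFalse(validate_password("a" * 7))

    def test_longer_than_maximum_is_rejected(self):
        self.assertFalse(validate_password("a" * 21))

if __name__ == "__main__":
    unittest.main()
```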
Testing of non-functional software characteristics (non-functional testing):-
  1. Non-functional testing includes, but is not limited to, performance testing, load testing, stress testing, usability testing, maintainability testing, reliability testing and portability testing.
  2. It is the testing of “how” the system works.
  3. Non-functional testing may be performed at all test levels. The term non-functional testing describes the tests required to measure characteristics of systems and software that can be quantified on a varying scale, such as response times for performance testing.
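Here is a rough sketch of such a quantified, non-functional check, with an invented operation and an assumed requirement of "95% of searches respond within 200 ms"; a real performance test would drive the deployed system with a load tool rather than a local function.

```python
import time

def search_catalogue(term):
    time.sleep(0.05)          # stand-in for the real processing time
    return [term]

timings_ms = []
for _ in range(100):
    start = time.perf_counter()
    search_catalogue("lamp")
    timings_ms.append((time.perf_counter() - start) * 1000)

timings_ms.sort()
p95 = timings_ms[94]          # 95th percentile of 100 samples
print(f"95th percentile response time: {p95:.1f} ms "
      f"-> {'PASS' if p95 <= 200 else 'FAIL'} against the 200 ms requirement")
```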
Testing of software structure/architecture (structural testing):-
  1. Structural (white-box) testing may be performed at all test levels.
  2. Structural techniques are best used after specification-based techniques, in order to help measure the thoroughness of testing through assessment of coverage of a type of structure.
  3. Coverage is the extent that a structure has been exercised by a test suite, expressed as a percentage of the items being covered (a small sketch follows this list).
  4. If coverage is not 100%, then more tests may be designed to test those items that were missed and, therefore, increase coverage.
  5. At all test levels, but especially in component testing and component integration testing, tools can be used to measure the code coverage of elements, such as statements or decisions.
  6. Structural testing may be based on the architecture of the system, such as a calling hierarchy.
  7. Structural testing approaches can also be applied at system, system integration or acceptance testing levels (e.g. to business models or menu structures).
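Here is a toy illustration of decision coverage, counted by hand; on a real project a coverage tool (for example coverage.py for Python) would report these figures automatically. The component and the single test case are invented.

```python
# A component with one decision, i.e. two branches.
def grade(score):
    if score >= 50:
        return "pass"
    return "fail"

tests = [75]                       # one test case: exercises only the True branch

branches_hit = set()
for score in tests:
    branches_hit.add("true-branch" if score >= 50 else "false-branch")

coverage_pct = 100 * len(branches_hit) / 2
print(f"decision coverage: {coverage_pct:.0f}%")
# 50% -> design an extra test with score < 50 to cover the missed branch.
```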
Testing related to changes (confirmation testing (retesting) and regression testing):-
  1. After a defect is detected and fixed, the software should be retested to confirm that the original defect has been successfully removed. This is called confirmation testing (retesting). Debugging (defect fixing) is a development activity, not a testing activity.
  2. Regression testing is the repeated testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the change(s). These defects may be either in the software being tested, or in another related or unrelated software component. It is performed when the software, or its environment, is changed.
  3. The extent of regression testing is based on the risk of not finding defects in software that was working previously.
  4. Tests should be repeatable if they are to be used for confirmation testing and to assist regression testing.
  5. Regression testing may be performed at all test levels, and applies to functional, non-functional and structural testing. Regression test suites are run many times and generally evolve slowly, so regression testing is a strong candidate for automation (a minimal sketch follows this list).
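Because regression suites are repeatable and run many times, they are well suited to automation. Below is a minimal sketch using Python's standard unittest module; the discount function and its expected values are invented for illustration and are not from any real system:

import unittest

def apply_discount(price, is_member):
    """Hypothetical function under test: members get 10% off."""
    return round(price * 0.9, 2) if is_member else price

class RegressionSuite(unittest.TestCase):
    """Repeatable checks, re-run after every change to catch regressions."""

    def test_member_gets_discount(self):
        self.assertEqual(apply_discount(100.0, True), 90.0)

    def test_non_member_pays_full_price(self):
        self.assertEqual(apply_discount(100.0, False), 100.0)

if __name__ == "__main__":
    unittest.main()

Running the same suite after each modification gives the repeatability mentioned above; any newly failing test points to a defect introduced or uncovered by the change.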
Maintenance testing:-
TERMS:-
1:-Impact analysis: - The assessment of change to the layers of development documentation, test documentation and components, in order to implement a given change to specified requirements.
2:-Maintenance: - Modification of a software product after delivery to correct defects, to improve performance or other attributes, or to adapt the product to a modified environment.
3:-Maintenance testing: - Testing the changes to an operational system or the impact of a changed environment to an operational system.
  1. Once deployed, a software system is often in service for years or decades. During this time the system and its environment are often corrected, changed or extended.
  2. Maintenance testing is done on an existing operational system, and is triggered by modifications, migration, or retirement of the software or system.
  3. Modifications include planned enhancement changes (e.g. release-based), corrective and emergency changes, and changes of environment, such as planned operating system or database upgrades, or patches to newly exposed or discovered vulnerabilities of the operating system.
  4. Maintenance testing for migration (e.g. from one platform to another) should include operational tests of the new environment, as well as of the changed software.
  5. Maintenance testing for the retirement of a system may include the testing of data migration or archiving if long data-retention periods are required.
  6. In addition to testing what has been changed, maintenance testing includes extensive regression testing to parts of the system that have not been changed. The scope of maintenance testing is related to the risk of the change, the size of the existing system and to the size of the change.
  7. Depending on the changes, maintenance testing may be done at any or all test levels and for any or all test types.
  8. Determining how the existing system may be affected by changes is called impact analysis, and is used to help decide how much regression testing to do.
  9. Maintenance testing can be difficult if specifications are out of date or missing.
Static techniques and the test process:-
TERMS:-
1:- Dynamic testing: - Testing that involves the execution of the software of a component or system.
2:- Static testing: - Testing of a component or system at specification or implementation level without execution of that software, e.g. reviews or static code analysis.
3:-Static technique: - Verification activities fall into the category of static testing. During static testing, you use a checklist to check whether the work you are doing follows the set standards of the organization. These standards can be for coding, integration and deployment. Reviews, inspections and walkthroughs are static testing methodologies.
  1. Unlike dynamic testing, which requires the execution of software, static testing techniques rely on the manual examination (reviews) and automated analysis (static analysis) of the code or other project documentation.
  2. Reviews are a way of testing software work products (including code) and can be performed well before dynamic test execution.
  3. Defects detected during reviews early in the life cycle are often much cheaper to remove than those detected while running tests (e.g. defects found in requirements).
  4. A review could be done entirely as a manual activity, but there is also tool support. The main manual activity is to examine a work product and make comments about it. Any software work product can be reviewed, including requirements specifications, design specifications, code, test plans, test specifications, test cases, test scripts, user guides or web pages.
  5. Benefits of reviews include early defect detection and correction, development productivity improvements, reduced development timescales, reduced testing cost and time, lifetime cost reductions, fewer defects and improved communication.
  6. Reviews can find omissions, for example, in requirements, which are unlikely to be found in dynamic testing.
  7. Reviews, static analysis and dynamic testing have the same objective – identifying defects. They are complementary: the different techniques can find different types of defects effectively and efficiently.
  8. Compared to dynamic testing, static techniques find causes of failures (defects) rather than the failures themselves.
  1. Typical defects that are easier to find in reviews than in dynamic testing are:
    • Deviations from standards.
    • Requirement defects.
    • Design defects.
    • Insufficient maintainability.
    • Incorrect interface specifications.
Review Process:-
TERMS:-
Entry criteria: - The set of generic and specific conditions for permitting a process to go forward with a defined task, e.g. test phase. The purpose of entry criteria is to prevent a task from starting which would entail more (wasted) effort compared to the effort needed to remove the failed entry criteria.
Formal review: - A review characterized by documented procedures and requirements, e.g. inspection.
Informal review: - A review not based on a formal (documented) procedure.
Inspection: - A type of peer review that relies on visual examination of documents to detect defects, e.g. violations of development standards and non-conformance to higher level documentation. The most formal review technique and therefore always based on a documented procedure.
Measure: - The number or category assigned to an attribute of an entity by making a measurement.
Measurement: - The process of assigning a number or category to an entity to describe an attribute of that entity.
Measurement scale: - A scale that constrains the type of data analysis that can be performed on it.
Metric: - A measurement scale and the method used for measurement.
Moderator: The leader and main person responsible for an inspection or other review process.
Peer review: A review of a software work product by colleagues of the producer of the product for the purpose of identifying defects and improvements. Examples are inspection, technical review and walkthrough.
Review: An evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements. Examples include management review, informal review, technical review, inspection, and walkthrough.
Reviewer: The person involved in the review that identifies and describes anomalies in the product or project under review. Reviewers can be chosen to represent different viewpoints and roles in the review process.
Scribe: The person who records each defect mentioned and any suggestions for process improvement during a review meeting, on a logging form. The scribe has to ensure that the logging form is readable and understandable.
Technical review: A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken.
Walkthrough: A step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content.
The different types of reviews vary from very informal (e.g. no written instructions for reviewers) to very formal (i.e. well structured and regulated). The formality of a review process is related to factors such as the maturity of the development process, any legal or regulatory requirements or the need for an audit trail.
The way a review is carried out depends on the agreed objective of the review (e.g. find defects, gain understanding, or discussion and decision by consensus).
Phases of a formal review:-
A typical formal review has the following main phases:
1. Planning: selecting the personnel, allocating roles; defining the entry and exit criteria for more formal review types (e.g. inspection); and selecting which parts of documents to look at.
2. Kick-off: distributing documents; explaining the objectives, process and documents to the participants; and checking entry criteria (for more formal review types).
3. Individual preparation: work done by each of the participants on their own before the review meeting, noting potential defects, questions and comments.
4. Review meeting: discussion or logging, with documented results or minutes (for more formal review types).
The meeting participants may simply note defects, make recommendations for handling the defects, or make decisions about the defects.
5. Rework: fixing defects found, typically done by the author.
6. Follow-up: checking that defects have been addressed, gathering metrics and checking on exit criteria (for more formal review types).
Roles and responsibilities:-
A typical formal review will include the roles below:
    1. Manager: decides on the execution of reviews, allocates time in project schedules and determines if the review objectives have been met.
    2. Moderator: the person who leads the review of the document or set of documents, including planning the review, running the meeting, and follow-up after the meeting. If necessary, the moderator may mediate between the various points of view and is often the person upon whom the success of the review rests.
    3. Author: the writer or person with chief responsibility for the document(s) to be reviewed.
    4. Reviewers: individuals with a specific technical or business background (also called checkers or inspectors) who, after the necessary preparation, identify and describe findings (e.g. defects) in the product under review. Reviewers should be chosen to represent different perspectives and roles in the review process, and should take part in any review meetings.
    5. Scribe (or recorder): documents all the issues, problems and open points that were identified during the meeting.
Looking at documents from different perspectives and using checklists can make reviews more effective and efficient, for example, a checklist based on perspectives such as user, maintainer, tester or operations, or a checklist of typical requirements problems.
Types of review:-
A single document may be the subject of more than one review. If more than one type of review is used, the order may vary. For example, an informal review may be carried out before a technical review, or an inspection may be carried out on a requirements specification before a walkthrough with customers. The main characteristics, options and purposes of common review types are:
Informal review
Key characteristics:
    1. No formal process.
    2. There may be pair programming or a technical lead reviewing designs and code.
    3. Optionally may be documented.
    4. May vary in usefulness depending on the reviewer.
    5. Main purpose: inexpensive way to get some benefit.
Walkthrough
Key characteristics:
    1. Meeting led by author.
    2. Scenarios, dry runs, peer group.
    3. Open-ended sessions.
    4. Optionally a pre-meeting preparation of reviewers, review report, list of findings and scribe (who is not the author).
    5. May vary in practice from quite informal to very formal.
    6. Main purposes: learning, gaining understanding, defect finding.
Technical review
Key characteristics:
    1. Documented, defined defect-detection process that includes peers and technical experts.
    2. May be performed as a peer review without management participation.
    3. Ideally led by trained moderator (not the author).
    4. Pre-meeting preparation.
    5. Optionally the use of checklists, review report, list of findings and management participation.
    6. May vary in practice from quite informal to very formal.
    7. Main purposes: discuss, make decisions, evaluate alternatives, find defects, solve technical problems and check conformance to specifications and standards.
Inspection
Key characteristics:
    1. Led by trained moderator (not the author).
    2. Usually peer examination.
    3. Defined roles.
    4. Includes metrics.
    5. Formal process based on rules and checklists with entry and exit criteria.
    6. Pre-meeting preparation.
    7. Inspection report, list of findings.
    8. Formal follow-up process.
    9. Optionally, process improvement and reader.
    10. Main purpose: find defects.
Walkthroughs, technical reviews and inspections can be performed within a peer group – colleagues at the same organizational level. This type of review is called a “peer review”.
Success factors for reviews
Success factors for reviews include:
    1. Each review has a clear predefined objective.
    2. The right people for the review objectives are involved.
    3. Defects found are welcomed, and expressed objectively.
    4. People issues and psychological aspects are dealt with (e.g. making it a positive experience for the author).
    5. Review techniques are applied that are suitable to the type and level of software work products and reviewers.
    6. Checklists or roles are used if appropriate to increase effectiveness of defect identification.
    7. Training is given in review techniques, especially the more formal techniques, such as inspection.
    8. Management supports a good review process (e.g. by incorporating adequate time for review activities in project schedules).
    9. There is an emphasis on learning and process improvement.
Static analysis by tools:-
TERMS:-
1:- Compiler: - A software tool that translates programs expressed in a high order language into their machine language equivalents.
2:- Complexity: - The degree to which a component or system has a design and/or internal structure that is difficult to understand, maintain and verify.
3:- Control flow: - A sequence of events (paths) in the execution through a component or system.
4:- Data flow: - An abstract representation of the sequence and possible changes of the state of data objects, where the state of an object is any of: creation, usage, or destruction.
5:- Static analysis: - Analysis of software artifacts, e.g. requirements or code, carried out without execution of these software artifacts.
Static analysis is performed without actually executing the software being examined by the tool; dynamic testing does execute the software code. Static analysis can locate defects that are hard to find in testing. As with reviews, static analysis finds defects rather than failures. Static analysis tools analyze program code (e.g. control flow and data flow), as well as generated output such as HTML and XML.
The value of static analysis is:
    1. Early detection of defects prior to test execution.
    2. Early warning about suspicious aspects of the code or design, by the calculation of metrics, such as a high complexity measure.
    3. Identification of defects not easily found by dynamic testing.
    4. Detecting dependencies and inconsistencies in software models, such as links.
    5. Improved maintainability of code and design.
    6. Prevention of defects, if lessons are learned in development.
Typical defects discovered by static analysis tools include:
1. referencing a variable with an undefined value;
2. inconsistent interface between modules and components;
3. variables that are never used;
4. unreachable (dead) code;
5. programming standards violations;
6. security vulnerabilities;
7. Syntax violations of code and software models.
Static analysis tools are typically used by developers (checking against predefined rules or programming standards) before and during component and integration testing, and by designers during software modeling. Static analysis tools may produce a large number of warning messages, which need to be well managed to allow the most effective use of the tool.
Compilers may offer some support for static analysis, including the calculation of metrics.
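To make the defect list above concrete, the short Python sketch below contains the kind of issues a typical static analysis tool or linter would report without executing the code (the function itself is invented for illustration):

def calculate_total(prices):
    tax_rate = 0.2        # a variable that is never used - typically flagged
    total = sum(prices)
    return total
    print("done")         # unreachable (dead) code after the return - typically flagged

# Note that the code still runs and produces a correct-looking result,
# which is exactly why such defects are found by static analysis rather than by dynamic testing.
if __name__ == "__main__":
    print(calculate_total([10, 20, 30]))   # prints 60; the defects stay hidden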
Fourth Week:-
Test design techniques:-
TEST DEVELOPMENT PROCESS:-
TERMS:-
1:- Test case specification: - A document, specifying a set of test cases (objective, inputs, test actions, expected results, and execution preconditions) for a test item.
2:- Test condition: - An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute, or structural element.
3:- Test design specification:- A document specifying the test conditions (coverage items) for a test item, the detailed test approach and identifying the associated high level test cases.
4:- Test procedure specification: - A document, specifying a sequence of actions for the execution of a test. Also known as test script or manual test script.
5:- Test execution schedule: - A scheme for the execution of test procedures. The test procedures are included in the test execution schedule in their context and in the order in which they are to be executed.
6:- Test script: -Commonly used to refer to a test procedure specification, especially an automated one.
7:- Traceability: - The ability to identify related items in documentation and software, such as requirements with associated tests.
8:- Horizontal traceability: - The tracing of requirements for a test level through the layers of test documentation (e.g. test plan, test design specification, test case specification and test procedure specification or test script).
9:- Vertical traceability: -The tracing of requirements through the layers of development documentation to components.
  1. Requirements traceability is the capacity to relate your requirements to one another and to aspects of other project artifacts. Its primary goal is to ensure that all of the requirements identified by your stakeholders have been met and validated.
  2. Vertical traceability identifies the origin of items (for example, customer needs) and follows these items as they evolve through your project artifacts, typically from requirements to design, the source code that implements the design, and the tests that validate the requirements.
  3. Horizontal traceability identifies relationships among similar items, such as between requirements or within your architecture. This enables your team to anticipate potential problems, such as two sub-teams implementing the same requirement in two different components.
  4. Traceability is often maintained bidirectionally: you should be able to trace from your requirements to your tests and from your tests back to your requirements (a small sketch follows this list).
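A minimal sketch of bidirectional traceability (the requirement and test case IDs are invented): keeping the mapping in both directions supports impact analysis when a requirement changes and exposes requirements that are not yet covered by any test.

# Hypothetical requirement-to-test traceability matrix.
req_to_tests = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],                  # no tests yet -> a coverage gap
}

# Derive the reverse direction (test -> requirements) from the same data.
test_to_reqs = {}
for req, tests in req_to_tests.items():
    for test in tests:
        test_to_reqs.setdefault(test, []).append(req)

print("Uncovered requirements:", [r for r, t in req_to_tests.items() if not t])
print("Tests to re-run if REQ-001 changes:", req_to_tests["REQ-001"])
print("Requirements verified by TC-03:", test_to_reqs["TC-03"])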
TEST DEVELOPMENT PROCESS:-
  1. During test analysis, the test basis documentation is analyzed in order to determine what to test, i.e. to identify the test conditions.
  2. A test condition is defined as an item or event that could be verified by one or more test cases (e.g. a function, transaction, quality characteristic or structural element).
  3. Establishing traceability from test conditions back to the specifications and requirements enables both impact analysis, when requirements change, and requirements coverage to be determined for a set of tests. During test analysis the detailed test approach is implemented to select the test design techniques to use, based on, among other considerations, the risks identified.
  4. During test design the test cases and test data are created and specified.
  5. A test case consists of a set of input values, execution preconditions, expected results and execution post-conditions, developed to cover certain test condition(s). The ‘Standard for Software Test Documentation’ (IEEE 829) describes the content of test design specifications (containing test conditions) and test case specifications.

IEEE Std 829-1998 IEEE Standard for Software Test Documentation -Description

Abstract: A set of basic software test documents is described. This standard specifies the form and content of individual test documents. It does not specify the required set of test documents.
Keywords: test case specification, test design specification, test incident report, test item transmittal report, test log, test plan, test procedure specification, test summary report

Content

  • 1. Scope
  • 2. References
  • 3. Definitions
  • 4. Test plan
    • 4.1 Purpose
    • 4.2 Outline
  • 5. Test design specification
    • 5.1 Purpose
    • 5.2 Outline
  • 6. Test case specification
    • 6.1 Purpose
    • 6.2 Outline
  • 7. Test procedure specification
    • 7.1 Purpose
    • 7.2 Outline
  • 8. Test item transmittal report
    • 8.1 Purpose
    • 8.2 Outline
  • 9. Test log
    • 9.1 Purpose
    • 9.2 Outline
  • 10. Test incident report
    • 10.1 Purpose
    • 10.2 Outline
  • 11. Test summary report
    • 11.1 Purpose
    • 11.2 Outline
  • Annex A Examples
    • A.1 Corporate payroll system test documentation
  • Annex B Implementation and usage guidelines
    • B.1 Implementation guidelines
    • B.2 Additional test-documentation guidelines
    • B.3 Usage guidelines
  • Annex C Guidelines for compliance with IEEE/EIA 12207.1-1997
    • C.1 Overview
    • C.2 Correlation
    • C.3 Document compliance—Test plan
    • C.4 Document compliance—Test procedure
    • C.5 Document compliance—Test report
  1. Expected results should be produced as part of the specification of a test case and include outputs, changes to data and states, and any other consequences of the test. If expected results have not been defined, then a plausible, but erroneous, result may be interpreted as the correct one.
  2. Expected results should ideally be defined prior to test execution.
  3. During test implementation the test cases are developed, implemented, prioritized and organized in the test procedure specification.
  4. The test procedure (or manual test script) specifies the sequence of actions for the execution of a test. If tests are run using a test execution tool, the sequence of actions is specified in a test script (which is an automated test procedure).
  5. The various test procedures and automated test scripts are subsequently formed into a test execution schedule that defines the order in which the various test procedures, and possibly automated test scripts, are executed, when they are to be carried out and by whom.
  6. The test execution schedule will take into account such factors as regression tests, prioritization, and technical and logical dependencies (a small scheduling sketch follows this list).
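As a small illustration of those scheduling factors, the sketch below orders invented test procedures by priority while respecting a simple logical dependency (the IDs, priorities and dependencies are assumptions, not from any real project):

# Hypothetical test procedures: (id, priority, depends_on); lower number = higher priority.
procedures = [
    ("TP-LOGIN",  1, None),
    ("TP-REPORT", 2, "TP-SEARCH"),   # logically depends on search being tested first
    ("TP-SEARCH", 1, "TP-LOGIN"),
    ("TP-LOGOUT", 3, "TP-LOGIN"),
]

def schedule(procs):
    """Repeatedly pick the highest-priority procedure whose dependency is already scheduled."""
    ordered, remaining = [], list(procs)
    while remaining:
        placed = [o[0] for o in ordered]
        ready = [p for p in remaining if p[2] is None or p[2] in placed]
        ready.sort(key=lambda p: p[1])
        ordered.append(ready[0])
        remaining.remove(ready[0])
    return [p[0] for p in ordered]

print(schedule(procedures))   # -> ['TP-LOGIN', 'TP-SEARCH', 'TP-REPORT', 'TP-LOGOUT']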
Categories of test design techniques:-
TERMS:-
1:-Black-box test design technique: - Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.
2:- Experienced-based test design technique: -Procedure to derive and/or select test cases based on the tester’s experience, knowledge and intuition.
3:- Specification-based test design technique:-Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.
4:- White-box test design technique: -Procedure to derive and/or select test cases based on an analysis of the internal structure of a component or system. Also known as structure-based test design technique.
It is a classic distinction to denote test techniques as black box or white box. Black-box techniques (which include specification-based and experienced-based techniques) are a way to derive and select test conditions or test cases based on an analysis of the test basis documentation and the experience of developers, testers and users, whether functional or non-functional, for a component or system without reference to its internal structure. White-box techniques (also called structural or structure-based techniques) are based on an analysis of the structure of the component or system.
Some techniques fall clearly into a single category; others have elements of more than one category.
Common features of specification-based techniques:
    1. Models, either formal or informal, are used for the specification of the problem to be solved, the software or its components.
    2. From these models test cases can be derived systematically.
Common features of structure-based techniques:
  1. Information about how the software is constructed is used to derive the test cases, for example, code and design.
  2. The extent of coverage of the software can be measured for existing test cases, and further test cases can be derived systematically to increase coverage.
Common features of experience-based techniques:
    1. The knowledge and experience of people are used to derive the test cases.
    2. Knowledge of testers, developers, users and other stakeholders about the software, its usage and its environment.
    3. Knowledge about likely defects and their distribution.
TERMS:-
1:-Boundary value:- An input value or output value which is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, for example the minimum or maximum value of a range.
2:-Boundary value analysis:- A black box test design technique in which test cases are designed based on boundary values.
3:-Decision table: - A table showing combinations of inputs and/or stimuli (causes) with their associated outputs and/or actions (effects), which can be used to design test cases.
4:-Decision table testing: - A black box test design technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) showed in a decision table.
5:-Equivalence partition: - A portion of an input or output domain for which the behavior of a component or system is assumed to be the same, based on the specification.
6:-Equivalence partitioning: - A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle test cases are designed to cover each partition at least once.
7:-State transition: - A transition between two states of a component or system.
8:-State transition testing: - A black box test design technique in which test cases are designed to execute valid and invalid state transitions.
9:- Use case: - A sequence of transactions in a dialogue between a user and the system with a tangible result.
10:-Use case testing: - A black box test design technique in which test cases are designed to execute user scenarios.
Equivalence partitioning
  1. Inputs to the software or system are divided into groups that are expected to exhibit similar behavior, so they are likely to be processed in the same way.
  2. Equivalence partitions (or classes) can be found for both valid data and invalid data, i.e. values that should be rejected.
  3. Partitions can also be identified for outputs, internal values, time-related values (e.g. before or after an event) and for interface parameters (e.g. during integration testing). Tests can be designed to cover partitions.
  4. Equivalence partitioning is applicable at all levels of testing.
  5. Equivalence partitioning as a technique can be used to achieve input and output coverage. It can be applied to human input, input via interfaces to a system, or interface parameters in integration testing.
 
 
WHAT IS EQUIVALENCE PARTITIONING?
Concepts:   Equivalence partitioning is a method for deriving test cases.  In this method, classes of input conditions called equivalence classes are identified such that each member of the class causes the same kind of processing and output to occur. 
In this method, the tester identifies various equivalence classes for partitioning.  A class is a set of input conditions that is likely to be handled the same way by the system.  If the system were to handle one case in the class erroneously, it would handle all cases in the class erroneously.
 
WHY LEARN EQUIVALENCE PARTITIONING?
Equivalence partitioning drastically cuts down the number of test cases required to test a system reasonably.  It is an attempt to get a good 'hit rate', to find the most errors with the smallest number of test cases.
 
DESIGNING TEST CASES USING EQUIVALENCE PARTITIONING
To use equivalence partitioning, you will need to perform two steps
 
1.     Identify the equivalence classes
2.     Design test cases
 
STEP 1: IDENTIFY EQUIVALENCE CLASSES
 
Take each input condition described in the specification and derive at least two equivalence classes for it.  One class represents the set of cases which satisfy the condition (the valid class) and one represents cases which do not (the invalid class).
Following are some general guidelines for identifying equivalence classes:
a) If the requirements state that a numeric value is input to the system and must be within a range of values, identify one valid equivalence class (inputs within the valid range) and two invalid equivalence classes (inputs that are too low and inputs that are too high).  For example, if an item in inventory can have a quantity of -9999 to +9999, identify the following classes:

1. The valid class (QTY is greater than or equal to -9999 and less than or equal to 9999), also written as (-9999 <= QTY <= 9999)
2. The invalid class (QTY is less than -9999), also written as (QTY < -9999)
3. The invalid class (QTY is greater than 9999), also written as (QTY > 9999)
 
b) If the requirements state that the number of items input by the system at some point must lie within a certain range, specify one valid class where the number of inputs is within the valid range, one invalid class where there are too few inputs and one invalid class where there are too many inputs.
 
For example, the specifications state that a maximum of 4 purchase orders can be registered against any one product.  The equivalence classes are:
The valid class (number of purchase orders is greater than or equal to 1 and less than or equal to 4), also written as (1 <= no. of purchase orders <= 4)
The invalid class (no. of purchase orders > 4)
The invalid class (no. of purchase orders < 1)
c) If the requirements state that a particular input item must match one of a set of values and each case will be dealt with in the same way, identify one valid class for values in the set and one invalid class representing values outside of the set.  For example, if the requirements state that a valid province code is ON, QU, or NB, then identify:
The valid class: code is one of ON, QU, NB
The invalid class: code is not one of ON, QU, NB
d) If the requirements state that a particular input item must match one of a set of values and each case will be dealt with differently, identify a valid equivalence class for each element and only one invalid class for values outside the set.  For example, if a discount code must be input as P for preferred customer, R for standard reduced rate, or N for none, and if each case is treated differently, identify:
The valid class: code = P
The valid class: code = R
The valid class: code = N
The invalid class: code is not one of P, R, N
e) If you think any elements of an equivalence class will be handled differently than the others, divide the equivalence class to create one equivalence class with only these elements and one equivalence class with none of these elements.  For example, a bank account balance may be from $0 up to $1,000,000, and balances of $1,000 or over are not subject to service charges.  Identify:
The valid class: ($0 <= balance < $1,000), i.e. the balance is between $0 and $1,000, not including $1,000
The valid class: ($1,000 <= balance <= $1,000,000), i.e. the balance is between $1,000 and $1,000,000 inclusive
The invalid class: (balance < $0)
The invalid class: (balance > $1,000,000)
 
A definition of Equivalence Partitioning from our software testing dictionary:
Equivalence Partitioning: An approach where classes of inputs are categorized for product or function validation. This usually does not include combinations of input, but rather a single representative value from each class. For example, with a given function there may be several classes of input that may be used for positive testing. If the function expects an integer and receives an integer as input, this would be considered a positive test assertion. On the other hand, if a character or any other input class other than integer is provided, this would be considered a negative test assertion or condition.
A technique in black box testing is equivalence partitioning. Equivalence partitioning is designed to minimize the number of test cases by dividing tests in such a way that the system is expected to act the same way for all tests of each equivalence partition. Test inputs would be selected from each partition.
Equivalence partitions are designed so that every possible input belongs to one and only one equivalence partition.
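Tying guideline (a) above to concrete test data, here is a minimal Python sketch that picks one representative value from each equivalence class of the quantity-on-hand example; the validation function is a hypothetical stand-in for the system under test:

def quantity_is_valid(qty):
    """Hypothetical check from the example: quantity must be within -9999..9999."""
    return -9999 <= qty <= 9999

# One representative value per equivalence class.
partitions = [
    ("valid: -9999 <= QTY <= 9999", 0,      True),
    ("invalid: QTY < -9999",        -15000, False),
    ("invalid: QTY > 9999",         20000,  False),
]

for name, value, expected in partitions:
    result = "PASS" if quantity_is_valid(value) == expected else "FAIL"
    print(f"{result}  {name:30}  input={value}")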
Boundary value analysis
  1. Behavior at the edge of each equivalence partition is more likely to be incorrect, so boundaries are an area where testing is likely to yield defects.
  2. The maximum and minimum values of a partition are its boundary values.
  3. A boundary value for a valid partition is a valid boundary value; the boundary of an invalid partition is an invalid boundary value.
  4. Tests can be designed to cover both valid and invalid boundary values. When designing test cases, a test for each boundary value is chosen.
  5. Boundary value analysis can be applied at all test levels. It is relatively easy to apply and its defect finding capability is high; detailed specifications are helpful.
  6. This technique is often considered as an extension of equivalence partitioning. It can be used on equivalence classes for user input on screen as well as, for example, on time ranges (e.g. time out, transactional speed requirements) or table ranges (e.g. table size is 256*256). Boundary values may also be used for test data selection.
WHAT IS BOUNDARY VALUE ANALYSIS IN SOFTWARE TESTING?
Concepts: Boundary value analysis is a methodology for designing test cases that concentrates software testing effort on cases near the limits of valid ranges. Boundary value analysis is a method which refines equivalence partitioning. Boundary value analysis generates test cases that highlight errors better than equivalence partitioning. The trick is to concentrate software testing efforts at the extreme ends of the equivalence classes: at the points where input values change from valid to invalid, errors are most likely to occur. As well, boundary value analysis broadens the portions of the business requirement document used to generate tests. Unlike equivalence partitioning, it takes into account the output specifications when deriving test cases.
HOW DO YOU PERFORM BOUNDARY VALUE ANALYSIS?
Once again, you'll need to perform two steps:
1. Identify the equivalence classes.
2. Design test cases.
But the details vary. Let's examine each step.
STEP 1: IDENTIFY EQUIVALENCE CLASSES
Follow the same rules you used in equivalence partitioning. However, consider the output specifications as well. For example, if the output specifications for the inventory system stated that a report on inventory should indicate a total quantity for all products no greater than 999,999, then you would add the following classes to the ones you found previously:
6. The valid class (0 <= total quantity on hand <= 999,999)
7. The invalid class (total quantity on hand < 0)
8. The invalid class (total quantity on hand > 999,999)
STEP 2: DESIGN TEST CASES
In this step, you derive test cases from the equivalence classes. The process is similar to that of equivalence partitioning but the rules for designing test cases differ. With equivalence partitioning, you may select any test case within a range and any on either side of it; with boundary value analysis, you focus your attention on cases close to the edges of the range.
The detailed rules for generating test cases follow:
RULES FOR TEST CASES
1. If the condition is a range of values, create valid test cases for each end of the range and invalid test cases just beyond each end of the range. For example, if a valid range of quantity on hand is -9,999 through 9,999, write test cases that include:
1. The valid test case quantity on hand is -9,999,
2. The valid test case quantity on hand is 9,999,
3. The invalid test case quantity on hand is -10,000 and
4. The invalid test case quantity on hand is 10,000
You may combine valid classes wherever possible, just as you did with equivalence partitioning, and, once again, you may not combine invalid classes. Do not forget to consider output conditions as well. In our inventory example the output conditions generate the following test cases:
1. The valid test case total quantity on hand is 0,
2. The valid test case total quantity on hand is 999,999
3. The invalid test case total quantity on hand is -1 and
4. The invalid test case total quantity on hand is 1,000,000
2. A similar rule applies where the condition states that the number of values must lie within a certain range: select two valid test cases, one for each boundary of the range, and two invalid test cases, one just below and one just above the acceptable range.
3. Design tests that highlight the first and last records in an input or output file.
4. Look for any other extreme input or output conditions, and generate a test for each of them.
Boundary value analysis refines these test cases and derives others by examining output specifications as well as inputs. Using either of these techniques (preferably the second) wherever possible, you will be able to test your system. But what if the system is complex? In that case there are bound to be many modules to test, and you will need to plan the order in which to test them.
Boundary value analysis (BVA) differs from equivalence partitioning in that it focuses on "corner cases", values that sit at or just outside the range defined by the specification. This means that if a function expects all values in the range of -100 to +1000, test inputs would include -101 and +1001. BVA is also often used as a technique for stress, load or volume testing. This type of validation is usually performed after positive functional validation has been completed successfully, using requirements specifications and user documentation.
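Applying rule 1 above to the same quantity-on-hand example, here is a minimal sketch of boundary value test data; the validation function is the same hypothetical check used in the equivalence partitioning sketch:

def quantity_is_valid(qty):
    """Hypothetical check: quantity must be within -9999..9999."""
    return -9999 <= qty <= 9999

# Valid values at each end of the range, and invalid values just beyond each end.
boundary_cases = [
    (-9999,  True),    # valid lower boundary
    (9999,   True),    # valid upper boundary
    (-10000, False),   # just below the range -> invalid
    (10000,  False),   # just above the range -> invalid
]

for value, expected in boundary_cases:
    assert quantity_is_valid(value) == expected, f"boundary case failed for {value}"
print("All boundary value cases passed.")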
Decision table testing
  1. Decision tables are a good way to capture system requirements that contain logical conditions, and to document internal system design.
  2. They may be used to record complex business rules that a system is to implement.
  3. The specification is analyzed, and conditions and actions of the system are identified. The input conditions and actions are most often stated in such a way that they can either be true or false (Boolean).
  4. The decision table contains the triggering conditions, often combinations of true and false for all input conditions, and the resulting actions for each combination of conditions. Each column of the table corresponds to a business rule that defines a unique combination of conditions, which result in the execution of the actions associated with that rule.
  5. The coverage standard commonly used with decision table testing is to have at least one test per column, which typically involves covering all combinations of triggering conditions.
  6. The strength of decision table testing is that it creates combinations of conditions that might not otherwise have been exercised during testing. It may be applied to all situations when the action of the software depends on several logical decisions (a small sketch follows this list).
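A minimal sketch of decision table testing, using an invented business rule (a discount is given only to repeat customers whose order is over 100). Each column of the table, written here as one tuple per rule, becomes one test:

# Hypothetical business rule implemented by the system under test.
def gets_discount(is_repeat_customer, order_over_100):
    return is_repeat_customer and order_over_100

# Decision table: one rule per tuple -> (condition values..., expected action).
decision_table = [
    # repeat_customer, order_over_100, expected_discount
    (True,  True,  True),
    (True,  False, False),
    (False, True,  False),
    (False, False, False),
]

# Coverage standard: at least one test per rule (column) of the table.
for repeat_customer, order_over_100, expected in decision_table:
    assert gets_discount(repeat_customer, order_over_100) == expected
print("All decision table combinations covered.")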
State transition testing
  1. A system may exhibit a different response depending on current conditions or previous history (its state). In this case, that aspect of the system can be shown as a state transition diagram. It allows the tester to view the software in terms of its states, transitions between states, the inputs or events that trigger state changes (transitions) and the actions which may result from those transitions.
  2. The states of the system or object under test are separate, identifiable and finite in number. A state table shows the relationship between the states and inputs, and can highlight possible transitions that are invalid. Tests can be designed to cover a typical sequence of states, to cover every state, to exercise every transition, to exercise specific sequences of transitions or to test invalid transitions.
  3. State transition testing is much used within the embedded software industry and technical automation in general. However, the technique is also suitable for modeling a business object having specific states or testing screen-dialogue flows (e.g. for internet applications or business scenarios). A small sketch follows.
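A minimal sketch of state transition testing against an invented order object (the states, events and transitions are assumptions for illustration). The test covers a typical sequence of valid transitions and then checks that an invalid transition is rejected:

# Hypothetical state model: the valid transitions of an order.
TRANSITIONS = {
    ("created", "pay"):     "paid",
    ("paid", "ship"):       "shipped",
    ("shipped", "deliver"): "delivered",
}

def next_state(state, event):
    """Return the new state, or raise an error for an invalid transition."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: {event!r} in state {state!r}")

# Exercise every valid transition once (a typical sequence of states).
state = "created"
for event in ["pay", "ship", "deliver"]:
    state = next_state(state, event)
assert state == "delivered"

# Test an invalid transition: shipping an order that has not been paid.
try:
    next_state("created", "ship")
    raise AssertionError("invalid transition was not rejected")
except ValueError:
    print("Valid sequence covered and invalid transition correctly rejected.")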

Structure-based or white-box techniques
TERMS:-
1:-Code coverage: - An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage or condition coverage.
2:- Decision coverage: -The percentage of decision outcomes that have been exercised by a test suite. 100% decision coverage implies both 100% branch coverage and 100% statement coverage.
3:- Statement: - An entity in a programming language, which is typically the smallest indivisible unit of execution.
4:-Statement coverage: -The percentage of executable statements that have been exercised by a test suite.
5:- Structure-based testing: - Testing based on an analysis of the internal structure of the component or system.
Background
Structure-based testing/white-box testing is based on an identified structure of the software or system, as seen in the following examples:
    1. Component level: the structure is that of the code itself, i.e. statements, decisions or branches.
    2. Integration level: the structure may be a call tree (a diagram in which modules call other modules).
    3. System level: the structure may be a menu structure, business process or web page structure.
In this section, two code-related structural techniques for code coverage, based on statements and decisions, are discussed. For decision testing, a control flow diagram may be used to visualize the alternatives for each decision.
Statement testing and coverage
  1. In component testing, statement coverage is the assessment of the percentage of executable statements that have been exercised by a test case suite.
  2. Statement testing derives test cases to execute specific statements, normally to increase statement coverage.
Decision testing and coverage
  1. Decision coverage, related to branch testing, is the assessment of the percentage of decision outcomes (e.g. the True and False options of an IF statement) that have been exercised by a test case suite.
  2. Decision testing derives test cases to execute specific decision outcomes, normally to increase decision coverage.
  3. Decision testing is a form of control flow testing as it generates a specific flow of control through the decision points. Decision coverage is stronger than statement coverage: 100% decision coverage guarantees 100% statement coverage, but not vice versa (see the sketch below).
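To illustrate the difference, consider the hypothetical function below. One test with a positive number executes every statement (100% statement coverage) but only the True outcome of the IF decision; a second test is needed to reach 100% decision coverage:

def describe(x):
    label = "non-positive"
    if x > 0:                 # the decision: True and False are its two outcomes
        label = "positive"
    return label

# First test: every statement runs, but only the True outcome of the decision is exercised.
assert describe(5) == "positive"        # 100% statement coverage, 50% decision coverage

# Second test: exercises the False outcome as well.
assert describe(-3) == "non-positive"   # together: 100% decision coverage
print("All statements and both decision outcomes exercised.")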
Other structure-based techniques
  1. There are stronger levels of structural coverage beyond decision coverage, for example, condition coverage and multiple condition coverage.
  2. The concept of coverage can also be applied at other test levels (e.g. at integration level) where the percentage of modules, components or classes that have been exercised by a test case suite could be expressed as module, component or class coverage.
  3. Tool support is useful for the structural testing of code.
Experience-based techniques
TERMS:-
1:-Exploratory testing:-An informal test design technique where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests.
2:- Fault attack: - A structured approach to the error guessing technique in which a list of possible errors or defects is enumerated and tests are designed to attack them.
  1. Experienced-based testing is where tests are derived from the tester’s skill and intuition and their experience with similar applications and technologies. When used to augment systematic techniques, these techniques can be useful in identifying special tests not easily captured by formal techniques, especially when applied after more formal approaches. However, this technique may yield widely varying degrees of effectiveness, depending on the testers’ experience.
  2. A commonly used experienced-based technique is error guessing. Generally testers anticipate defects based on experience. A structured approach to the error guessing technique is to enumerate a list of possible errors and to design tests that attack these errors. This systematic approach is called fault attack. These defect and failure lists can be built based on experience, available defect and failure data, and from common knowledge about why software fails (a small sketch follows this list).
  3. Exploratory testing is concurrent test design, test execution, test logging and learning, based on a test charter containing test objectives, and carried out within time-boxes. It is an approach that is most useful where there are few or inadequate specifications and severe time pressure, or in order to augment or complement other, more formal testing. It can serve as a check on the test process, to help ensure that the most serious defects are found.
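A minimal sketch of a fault attack: enumerate the kinds of input a tester would expect to break a hypothetical age-parsing function, then run one attacking test per anticipated error (the function and the error list are assumptions for illustration):

def parse_age(text):
    """Hypothetical function under attack: parse an age from user input."""
    age = int(text)
    if not 0 <= age <= 150:
        raise ValueError("age out of range")
    return age

# Anticipated errors, drawn from experience of what commonly fails.
fault_attacks = {
    "empty string":       "",
    "non-numeric input":  "abc",
    "negative age":       "-1",
    "absurdly large age": "9999",
    "whitespace padding": " 42 ",
}

for description, attack_input in fault_attacks.items():
    try:
        outcome = f"accepted -> {parse_age(attack_input)!r}"
    except (ValueError, TypeError) as exc:
        outcome = f"rejected ({exc})"
    print(f"{description:20}: {outcome}")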
Choosing test techniques:-
The choice of which test techniques to use depends on a number of factors, including the type of system, regulatory standards, customer or contractual requirements, level of risk, type of risk, test objective, documentation available, knowledge of the testers, time and budget, development life cycle, use case models and previous experience of types of defects found.
Some techniques are more applicable to certain situations and test levels; others are applicable to all test levels.
