Wednesday, April 27, 2011

Testing Types

Different types of testing are done for different occasions and for different purposes. Broadly classifying the entire testing activity, there are two main types:
1.White box testing
2.Black box testing

Black box testing:

Functional testing based on the requirements, with no knowledge of internal program structure or data. Also known as closed box testing.

Eg - All testing types done at the system testing level by testers are black box testing:

Sanity testing, Regression testing, Alpha testing, Beta testing etc.


White box testing:
 
White box testing is performed based on the knowledge of how the system is implemented. White box testing includes analyzing data flow, control flow, information flow, coding practices, and exception and error handling within the system, to test the intended and unintended software behavior. White box testing can be performed to validate whether code implementation follows intended design, to validate implemented security functionality, and to uncover exploitable vulnerabilities.
 
Also known as Open box testing, Glass box testing.
 
Eg-Unit testing
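A minimal white-box sketch (the function and its tests here are hypothetical): the tests are written with knowledge of the function's internal branches, so every path gets exercised.

```python
def classify_triangle(a, b, c):
    """Hypothetical unit under test with three internal branches."""
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# White-box style: one test per internal branch, so every path runs.
assert classify_triangle(2, 2, 2) == "equilateral"
assert classify_triangle(2, 2, 3) == "isosceles"
assert classify_triangle(2, 3, 4) == "scalene"
print("all branches exercised")
```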


Gray box testing:

Gray box testing is the combination of black box and white box testing. The intention of this testing is to find defects related to bad design or bad implementation of the system.

Eg-Integration testing

Sanity Testing:

The initial level of testing, to make sure that the basic/primary features are present and working properly before starting the detailed level of testing. Sanity testing is done when the application is deployed into testing for the very first time.
Eg:     Open the URL, log in, navigate, and check that all menus, fields and buttons are in place.
          Try one or two very basic transactions.
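A sanity pass can even be scripted as a handful of quick checks. The `app` dictionary below is only a hypothetical stand-in for the freshly deployed application:

```python
# Hypothetical stand-in for the freshly deployed application.
app = {
    "login_page": True,
    "menus": ["Home", "Accounts", "Reports"],
    "basic_transaction": lambda amount: amount > 0,
}

# Sanity checks: only the primary features, nothing detailed.
assert app["login_page"], "login page missing -- stop testing"
assert len(app["menus"]) >= 3, "expected menus not in place"
assert app["basic_transaction"](100), "basic transaction failed"
print("sanity passed -- detailed testing can start")
```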

Smoke Testing:

Smoke testing originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire and smoke. It is done especially during hardware upgrades.

Functional Testing:
It's a black box testing type done at the system testing level to verify the functionality of the application. Functional testing is performed to verify that a software application performs and functions correctly according to design specifications.

Regression Testing:

Testing that is performed after making a functional improvement or repair to the program. Its purpose is to determine whether the change has regressed other aspects of the program, i.e. whether the change has introduced a new failure. Regression testing is often accomplished through the construction, execution and analysis of product and system tests.
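A regression suite is simply the accumulated test cases, re-run in full after every change. In the sketch below, `discount()` is a hypothetical function that was just "repaired" to cap discounts at 50%, and the old test cases guard against regressions:

```python
# Hypothetical function under test, freshly repaired to cap discounts at 50%.
def discount(price, pct):
    pct = min(pct, 50)          # the new repair
    return price * (1 - pct / 100)

# Accumulated test cases: old behaviour plus a case for the new repair.
regression_suite = [
    ((100, 10), 90.0),          # old behaviour must not regress
    ((200, 0), 200.0),
    ((100, 80), 50.0),          # new capped behaviour
]

for args, expected in regression_suite:
    assert discount(*args) == expected, f"regression at {args}"
print("no regressions detected")
```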

Retesting :

Testing the application with different inputs is normally known as retesting: executing the same test case with different inputs.
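Executing the same test case with different inputs can be sketched as a simple data-driven loop; the `valid_amount()` check here is hypothetical:

```python
# Hypothetical check under test: withdrawal amounts must be 1..10000.
def valid_amount(amount):
    return 0 < amount <= 10000

# Same test case, executed with several different inputs.
for amount, expected in [(100, True), (10000, True),
                         (0, False), (-5, False), (10001, False)]:
    assert valid_amount(amount) is expected
print("same test case executed with five inputs")
```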

Security Testing:

Testing how well the system protects against unauthorised access. The primary aspects of security testing are authentication and authorisation.

Authentication: the first level of security testing, which decides whether a user is a valid user. Normally this is done by testing the login requirement, i.e. username and password.

Authorisation: the next level of security, where users are tested for their access rights. This can be tested by creating different users with different access rights, logging in as each, and verifying their access rights.
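An authorisation check can be sketched as a small permission matrix; all roles and actions below are hypothetical:

```python
# Hypothetical role-to-permission matrix.
PERMISSIONS = {"admin": {"read", "write", "delete"},
               "editor": {"read", "write"},
               "viewer": {"read"}}

def can(role, action):
    """Return True if the given role is authorised for the action."""
    return action in PERMISSIONS.get(role, set())

# Each user is "logged in" and verified against their access rights.
assert can("admin", "delete")
assert can("editor", "write") and not can("editor", "delete")
assert can("viewer", "read") and not can("viewer", "write")
assert not can("guest", "read")     # unknown role gets nothing
print("authorisation matrix verified")
```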

Installation Testing :

Installation is the user's first interaction with our product, and it is very important to make sure that the user does not have any trouble installing the software.

Usually installers ask a series of questions and based on the response of the user, installation changes. It is always a good idea to create a Tree structure of all the options available to the user and cover all unique paths of installation if possible.


The person performing installation testing should certainly know what to expect after installation is done. Tools to compare the file system, registry, DLLs etc. are very handy in making sure that the installation is proper.

Most installers support silent installation; this also needs thorough testing. The main thing to look for here is the config file that the installer uses. Any change made in the config file should have the proper effect on the installation.

If installation depends on other components like a database, server etc., test cases should be written specifically to address this.

Negative cases like insufficient memory, insufficient disk space, and aborted installation should also be covered as part of installation testing. The test engineer should make sure that proper messages are given to the user and that installation can continue after increasing memory, space etc.

The test engineer should be familiar with the installer technologies and, if possible, try to explore defects or limitations of the installer itself.

Recovery Testing:

Recovery testing is the forced failure of the software in a variety of ways to verify that recovery is properly performed. It is done to check how fast and how well the application can recover from any type of crash, hardware failure etc. The type or extent of recovery is specified in the requirement specifications. In short, it tests how well a system recovers from crashes, hardware failures, or other catastrophic problems.


Examples of recovery testing:
 

While an application is running, suddenly restart the computer, and afterwards check the validity of the application's data.
While an application is receiving data from a network, unplug the connecting cable. After some time, plug the cable back in and analyze the application's ability to continue receiving data from the point at which the network connection disappeared.
Restart the system while a browser has a definite number of sessions open. Afterwards, check that the browser is able to recover all of them.
Performance Testing:

Testing application behaviour, such as response time, under various loads is known as performance testing.

Load Testing:

Testing application behaviour under a defined load is known as load testing.

Stress Testing:

Testing application behaviour beyond the defined load is known as stress testing.
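The load/stress distinction can be sketched with a toy handler whose response time degrades with load; the handler and the loads here are hypothetical:

```python
import time

def handle(requests):
    """Hypothetical system under test: response degrades with load."""
    time.sleep(0.0001 * requests)
    return "ok"

def response_time(load):
    """Measure how long the handler takes at a given load."""
    start = time.perf_counter()
    handle(load)
    return time.perf_counter() - start

# Load testing uses the defined load; stress testing goes beyond it.
defined_load, stress_load = 100, 1000
assert response_time(defined_load) < response_time(stress_load)
print("response time grows with load, as expected")
```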
Volume Testing :

Volume testing is done to test the efficiency of the application. A huge amount of data is processed through the application (which is being tested) in order to check the extreme limits of the system.


Volume Testing, as its name implies, is testing that purposely subjects a system (both hardware and software) to a series of tests where the volume of data being processed is the subject of the test. Such systems can be transactions processing systems capturing real time sales or could be database updates and or data retrieval.

User Acceptance Testing:

  Formal tests (often performed by a customer) to determine whether or not a system has satisfied predetermined acceptance criteria. These tests are often used to enable the customer (either internal or external) to determine whether or not to accept a system.
Alpha Testing :

If the testers and real customers test the software together at the development site, it is called alpha testing.

Beta Testing :

If the testers and model customers test the software together at the customer site, it is called beta testing.

Adhoc Testing:

Testing done by the testing team/non-testing team as an end user, without any test plan or test cases, covering the very basic scenarios. The tester tries to 'break' the system by randomly trying its functionality.

Monkey Testing :

A kind of ad hoc testing done especially on game software; here the tester acts like a monkey and tries to break the software.

Localization Testing:

 

Testing the software's ability to adapt to local standards like local language, interface, voice etc.


     

Monday, April 25, 2011

Quality

What is Quality?
Quality is an abstract term, like many other abstract things in life, viz. music, beauty etc. We can't define quality, but we can say whether a product is of high quality or not.


Quality : Degree of conformance of the product features to its requirements. If the degree of conformance is high, we can say that the product is of high quality.

Quality is a function of user satisfaction. In turn, user satisfaction depends upon many factors, like a conforming product delivered within budget and on schedule.

Quality On Time :
 
Delivering the product on or before the delivery date of the product is known as Quality on Time .




Quality of Conformance :

Delivering the right product( conforming product ) is termed as Quality of conformance.



Quality Goal : Quality of conformance + Quality on Time.

Delivering the right product at the right time is the Ultimate Quality Goal .


Requirement:
 
Requirements can be of three types:




Explicit Requirements : These are user needs and expectations of the product, explicitly stated by the customer.

Implicit Requirements : These requirements are often not stated by the customer due to his technical unawareness. They are to be taken care of by the technical people in the organization.

Obligatory Requirements : Mandatory requirements - rules and regulations imposed on the product by government bodies, authorities etc.

Cost of Non-Conformance:
All costs incurred in repairing or reworking the non-conforming product.

Impact of defect on the cost in the SDLC

Quality assurance :

Set of umbrella activities that must be applied throughout the software process to assure that the product coming out is of high quality.


In Quality Assurance we check the process to ensure quality.

Assurance activities can be broadly classified into

Assuring Process Quality

Assuring Product Quality – Quality Control .

Example of software quality assurance activities
 
Internal Quality Audits / Configuration Audits / Supplier Audits  / Formal Technical Reviews etc..
Quality Control :
• Checking, inspecting and reviewing the product itself to see whether it conforms to its requirements or not.


In Quality Control we will check and test the product .

Examples of Quality Control processes are

Testing , code reviews and document reviews.

Deming Cycle:
 
According to Deming, continual improvement of an organization on a regular basis can be achieved by the P-D-C-A cycle. By P-D-C-A cycle we mean Plan-Do-Check-Act.
 
Effectiveness :


• Extent to which planned activities are realized and planned results are achieved.

• To check the effectiveness we have to evaluate based on Analysis of DATA .

Improvement of the QMS

The organization shall continually improve the  effectiveness of the quality management system through the use of the quality policy, quality objectives, audit results, analysis of data, corrective and preventive actions and management review.

Quality Management Principles :


• These principles were derived and developed by the ISO Technical Committee (ISO/TC 176). These principles are often used by senior management as a framework to guide the organization towards continual improvement in performance. The 8 principles are defined in ISO 9000:2000 QMS Fundamentals.

Principle 1 : Customer Focus


• Organizations depend on their customers and therefore should understand current and future customer needs and expectations. The organization should meet customer requirements and strive to exceed customer expectations.

 Principle 2  : Leadership

Leaders establish unity of purpose and direction for the organization. They should create an internal environment in which people can become fully involved in achieving the organization's objectives.

Principle 3: Process-oriented approach


• Process is set of pre-defined actions or activities ( set of framework activities ) to produce a product.( its a road map whose goal is ultimate production of the product) .


Principle 4: Involvement of people

• People at all levels are the essence of the organization and their full involvement enables their abilities to be useful for the organization's benefit.

Principle 5 : System approach to Management


A system is a group of related things or parts that function together as a whole. A system may consist of one or more processes.

Interrelated processes are identified , understood and managed as a system . This improves the organization's effectiveness and efficiency.

Principle 6: Continual improvement


• Continual improvement should be a permanent objective of the organization.





Sunday, April 24, 2011

Levels Of Testing

There are four levels of testing

Unit Testing
Integration Testing
System Testing
User Acceptance Testing

Unit Testing
A unit is the smallest testable piece of software


–can be compiled, linked, loaded

–e.g. functions/procedures, classes, interfaces

–normally done by programmer

–Test cases written after coding
 
Buddy Testing


Team approach to coding and testing

One programmer codes, the other tests, and vice versa

Test cases written by the tester (before coding starts). Better than the single-worker approach

Objectivity

cross‐training

Models program specification requirement
Normally in the programmer's IDE (comfort zone)


Find unit bugs

Wrong implementation of functional specs

Testing functions/procedures, e.g. the ValidatePIN() procedure
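A ValidatePIN() procedure like the one mentioned above could be unit-tested as below; the implementation shown is only a hypothetical stand-in:

```python
import unittest

def validate_pin(pin):
    """Hypothetical stand-in for the ValidatePIN() procedure."""
    return isinstance(pin, str) and pin.isdigit() and len(pin) == 4

class TestValidatePIN(unittest.TestCase):
    def test_valid_pin(self):
        self.assertTrue(validate_pin("4321"))

    def test_wrong_length(self):
        self.assertFalse(validate_pin("43215"))

    def test_non_digits(self):
        self.assertFalse(validate_pin("43a1"))

# Run the test case programmatically and check it passed.
suite = unittest.TestLoader().loadTestsFromTestCase(TestValidatePIN)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
print("unit tests passed")
```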

Integration Testing

Test for correct interaction between system units


•systems built by merging existing libraries

•modules coded by different people

•Mainly tests the interfaces among units

•Bottom up integration testing

•Use of drivers

•Top down integration testing

•Use of stubs
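The drivers and stubs above can be sketched in a few lines; every name here is hypothetical. The high-level module is real, the lower layer is replaced by a stub until it is ready, and a small driver exercises the interface between them:

```python
# Top-down integration sketch: the lower layer is not ready yet,
# so a stub stands in for it.
def interest_rate_stub(account_type):
    """Stub standing in for the unfinished rates module."""
    return 0.05                      # canned answer

def monthly_interest(balance, account_type, rate_lookup):
    """Real high-level module: integrates with the rates layer."""
    return balance * rate_lookup(account_type) / 12

# The driver below exercises the interface between the two units.
assert round(monthly_interest(1200, "savings", interest_rate_stub), 2) == 5.0
print("interface between units verified via stub")
```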
 
Who does integration testing and when is it done?


•Done by developers/testers

•Test cases written when detailed specification is ready

•Testing continues throughout the project

•Where is it done?

•done on programmer’s workbench

•Why is it done?

•Discover inconsistencies in the combination of units.
 
System Testing
 
Test of overall interaction of components


•Find disparities between implementation and specification

•Usually where most resources go to

•Involves –load, performance, reliability and security testing
 
Who performs system testing and when is it done?


–Done by the test team

–Test cases written when high level design spec is ready

•Where is it done?

–Done on a system test machine

–Usually in a simulated environment, e.g. VMware
 
User Acceptance Testing
 
•Demonstrates satisfaction of user


•Users are essential part of process

•Usually merged with System Testing

•Done by test team and customer

•Done in simulated environment/real environment

Unified Modeling Language

UML Model Components

UML Model Elements
Use Case = user Interaction with the system


Actor = Any system / user who interacts with the system.

Component = .dll, .exe, COM+, EJB etc

Package = Group of similar Classes / Use cases/ Actors

You can Import / Export any model element for Re-use

UML Views
 
4 views:


Use Case View


Logical View

Component View

Deployment View

UML Diagrams:

UML has 7 Diagrams to illustrate different aspects of the system:


Use Case Diagram

Sequence Diagram

Collaboration Diagram

Class Diagram

State Transition Diagram

Component Diagram

Deployment Diagram

Each diagram has a purpose and an intended audience.

Use Case Diagram

A Use Case Diagram shows interactions between use cases and actors.


Use Case Diagrams represent the requirements of the system from the user's perspective.

Intended Audience: Users, Project Manager, Analyst, Tester, Developers, Architect for understanding the system requirements.

Sequence Diagram For Money Withdrawal Use Case


Very important UML diagram from the point of view of requirements elaboration and preparing Test cases.


Sequence diagram shows flow of functionality (Sequence of operations) through a Use Case.


During the process of developing Sequence Diagram you can visualize / analyze normal and abnormal use of the system / use case.

Intended User:

Analyst : Check flow of process

Developers: Check the objects that need to be developed.

Testers : Check to develop Test Cases 
 
Collaboration Diagram
 
Collaboration diagrams show exactly the same interactions as the sequence diagram, but from a different perspective.


Architects use this diagram to check distribution of processing between objects.


Class Diagram:

Class diagrams show interactions between classes (Objects) in the system.
Classes contain information (attributes) and behavior (operations) that act on that information.
Intended User:
Developers : Check to develop classes. Generate skeletal code
Architects :Check the design of the system
If a class contains too many operations, the architect may split the class into multiple classes.

State Transition Diagram
Dynamic states of a bank account.




State Transition diagrams model the various states in which an object can exist.


While class diagrams show a static picture of the classes, State Transition diagrams show the more dynamic behavior of a system.

State Transition diagrams are NOT created for every class. They are created only for very complex classes.

Component Diagram

Component diagrams show a physical view of your model.


They show relationships between software components like code libraries, .dll, Client.exe, Server.exe etc.

Each of the classes in the Rose model is mapped to a source code component. Once the components have been created, they are added to the component diagram.

Intended user: Build engineer for creating builds.

Deployment Diagram:

A deployment diagram shows the physical layout of the network where the various components will reside.


The physical deployment may differ from the logical architecture of the system. Ex: The system may have a 3-tier architecture, but the deployment may be 2-tiered, i.e. the Client may be deployed on one machine while the Server component and DB are deployed on a server machine.





 

Blackbox Testing Techniques

There are two important black box testing techniques which help testers in writing test cases:

1.Boundary Value Analysis(BVA)
2.Equivalence Class Partitioning(ECP)

Boundary Value Analysis: Many systems have a tendency to fail at the boundaries, so testing the boundary values of an application is important. Whenever a requirement talks about boundaries of input fields, validating those boundaries is a challenging task. Boundary value analysis leads to a selection of test cases that exercise bounding values.

Guidelines:


If an input condition specifies a range bounded by values a and b, test cases should be designed with values a and b, and with values just above and below a and b.

Example: Integer D with input condition [-3, 10],


test values: -4, -3, -2, 9, 10, 11 and a typical value such as 0

If an input condition specifies a number of values, test cases should be developed to exercise the minimum and maximum numbers. Values just above and below the minimum and maximum are also tested.
 
Test min, min-1, max, max+1, typical values
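The guideline above can be turned into a small helper that derives BVA test values from a range; for the [-3, 10] example it yields the values just below, at, and just above each boundary plus a typical value:

```python
# Boundary value analysis: derive test values from a range [a, b].
def bva_values(a, b, typical=None):
    """Return min-1, min, min+1, max-1, max, max+1 (and a typical value)."""
    values = [a - 1, a, a + 1, b - 1, b, b + 1]
    if typical is not None:
        values.append(typical)
    return values

print(bva_values(-3, 10, typical=0))   # boundary values for [-3, 10]
```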



Equivalence Class Partitioning : Whenever a requirement talks about different types of data, we think of ECP. ECP says: identify all the different kinds/ranges of data for which the system gives the same response, as partitions. Then create test cases by picking one or two members from each partition.
 
Equivalence classes can be defined using the following guidelines:


If an input condition specifies a range, one valid and two invalid equivalence classes are defined.

If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.

If an input condition specifies a member of a set, one valid and one invalid equivalence classes are defined.

If an input condition is Boolean, one valid and one invalid classes are defined.

Examples:


area code: input condition, Boolean - the area code may or may not be present. Input condition, range - value defined between 200 and 900.

password: input condition, Boolean - a password may or may not be present. Input condition, value - a six character string.

command: input condition, set - containing commands noted before.
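The partition-and-pick idea can be sketched in a few lines. The "age" field below, with a valid range of 18-60, is a hypothetical example: one valid class, two invalid classes, one representative member from each.

```python
# Equivalence class partitioning for a hypothetical "age" input (valid 18-60).
partitions = {
    "valid (18-60)":  range(18, 61),
    "invalid low":    range(0, 18),
    "invalid high":   range(61, 130),
}

# Pick one member per partition -- ECP says any member is as good as another.
representatives = {name: next(iter(vals)) for name, vals in partitions.items()}
assert representatives == {"valid (18-60)": 18,
                           "invalid low": 0,
                           "invalid high": 61}
print(representatives)
```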

Saturday, April 23, 2011

BugLifeCycle

An important life cycle that comes towards the end of the project release; let's learn it in detail.

Terminologies

Error|Bug|Defect

Error is the terminology associated with code, used by developers in white box testing.
Bug is the terminology associated with functionality and user interface, used by testers in black box testing.
Defect is the terminology associated with the end product, used by end users in live.

Definition : Any unexpected and unwanted behaviour of the application can be called a bug, from the tester's perspective.

Reasons or sources of bugs:
Incorrect/improper/incomplete requirements.
Technology/developer limitations in achieving 100% of the requirement.

Severity
Severity: Severity talks about the impact of the bug on the application.

Severity levels: There is actually no standard way of defining the severity levels of bugs, but let's go with the following:

Critical: Instances where bugs completely block your testing activities. Example instances are a system crash, massive performance degradation, data corruption, data loss, or a security violation.


Major: Instances of major/important functionality failures. Example instances are an operational error, data integrity issues, some performance degradation, or loss of functionality (no workaround).
 
Minor: Minor functionality failures; the defect causes failure of non-critical aspects of the system, and there is a reasonably satisfactory workaround.
 
Trivial/Cosmetic: Look-and-feel bugs; there is not much concern about these unless it is a very primary requirement.
 
Priority: Priority is all about the preference in fixing the bugs.
 
Priority levels:
 
Immediate:
-Blocks further testing

- Currently very visible and/or detrimental to customers
- Possibly immediately detrimental to revenue or reputation
- Needed for time critical deadline

High:
- must fix before next release because of:


- numerous customer complaints about the issue
- critical area of the system
- will be very visible and/or detrimental when released
- does not conform to what was stated as a requirement for the release

Medium:
- should fix if time permits; not a critical area of the system

- some customers are impacted by it but there is a workaround

- very few customer complaints logged about this issue
Low:

- would like to fix but can be released as is; trivial, cosmetic


- few customers even notice it much less are impacted by it

Bug life cycle flow diagram:

Bug Status:
1.New
2.Open
3.Fixed
4.Closed
5.Reopen
6.Duplicate
7.Deferred

Bug Report:

BUG REPORT FORM



YOUR COMPANY'S NAME CONFIDENTIAL PROBLEM REPORT



PROGRAM RELEASE VERSION



REPORT TYPE (1-6)___ SEVERITY ______________ ATTACHMENTS (Y/N)____





PROBLEM SUMMARY



CAN YOU REPRODUCE THE PROBLEM?(Y/N)___  STEPS TO REPRODUCE________________



PROBLEM AND HOW TO REPORT

SUGGESTED FIX (optional)

REPORTED BY DATE__/__/__


ITEMS BELOW ARE FOR USE ONLY BY THE DEVELOPMENT TEAM


FUNCTIONAL AREA ASSIGNED TO


COMMENTS


STATUS _____ PRIORITY (1_5)___


RESOLUTION (1-9)__ RESOLUTION VERSION


RESOLVED BY DATE__/__/__



RESOLUTION TESTED BY DATE__/__/__



TREAT AS DEFERRED (Y/N)




Thursday, April 21, 2011

Testplan

There is a saying, "Failing to plan is planning to fail", so planning is very important for things to execute smoothly.

A test plan is a document which describes the complete testing strategy and approach. The important contents of the test plan are as below.

• Test plan identifier : Test plan identifier according to company standard and format


• Introduction : Project introduction

• Features to be tested : List of requirements that need to be tested

• Features not to be tested : List of requirements which are already tested or do not need to be tested.

• Approach : Test methodologies and test types planned.

• Item pass/fail criteria : Criteria for pass and fail

• Entry criteria, exit criteria, suspension criteria and resumption requirements : Criteria to enter/exit/suspend/resume any testing phase or testing type

• Test deliverables : The list of deliverables, like test cases, traceability matrix and test summary etc.

• Environmental needs : Hardware/software versions and their configurations

• Responsibilities : Roles and responsibilities for the complete project team.

• Staffing and training needs : Number of resources required, their skill sets and training programs.

• Schedule : Testing schedule for all testing activities

• Risks and contingencies : Possible risks and mitigations.

• Approvals : Approvals for all the testing deliverables.