Sunday, February 28, 2010

Software Quality (MANUAL TESTING MATERIAL)

1. Meet customer requirements in terms of functionality.

2. Meet customer expectations in terms of usability, performance and security.

3. Cost to purchase.

4. Time to release.

1 & 2 are technical.

3 & 4 are non-technical.

SQA (Software Quality Assurance): Monitoring and measuring the strength of the development process is called SQA.

Conformance to explicitly stated and agreed functional and non-functional (including performance) requirements.

A process to provide confidence that quality requirements will be fulfilled.
A set of planned and systematic activities to provide confidence that products and services will conform to specified requirements and meet user needs.
Involves PREVENTING DEFECTS.
Management by inputs.
Sets up measurement programs to evaluate processes.
Identifies weaknesses in a process and improves them.
E.g.: Life Cycle Development (LCD) / Life Cycle Testing (LCT):


Information Gathering -> Analysis -> Design -> Coding -> Testing -> Maintenance.


LCD Vs LCT (Fish Model):


Development stage (LCD)        Deliverable        Testing activity (LCT)

Information Gathering          BRS                Reviews
Analysis                       SRS                Reviews
Design                         HLD & LLD          Reviews
Coding                         Programs           White Box Testing (WBT)
System Testing                 Build              Black Box Testing (BBT)
Maintenance                    S/W changes        Test S/W changes


From Reviews to WBT is Verification, and from BBT to Test S/W changes is Validation.


BRS (Business Requirement Specification)

BRS defines the customer's requirements to be developed as software.


SRS (Software Requirements Specification)

SRS defines the functional requirements to be developed and the system requirements (H/W and S/W) to be used.

Reviews:

It is a static testing technique. In a review, responsible people estimate the completeness (nothing missing) and correctness (no mistakes) of the corresponding document.

HLD (High Level Design Document)

HLD defines the overall hierarchy of all modules/functionalities. It is also known as external design.


LLD (Low Level Design Document)

LLD defines the internal logic of corresponding module/functionality.


Prototype:

A sample model of an application without functionality is called a prototype.

E.g.: Power Point slide show


White Box Testing:

It is a coding-level testing technique. Programmers follow this technique to verify the completeness and correctness of the program structure. It is also known as Glass Box Testing / Clear Box Testing (program logic).


Black Box Testing:

It is a build-level testing technique. In this testing, test engineers validate every functionality based on the external interface (user logic).


Build : - A finally integrated set of all modules, in .exe form.


Software Testing : - The verification and validation of a S/W application is called S/W testing. The primary role of testing is not the demonstration of correct performance but the exposure of hidden defects.


Verification : - Are we building the system right?


Validation : - Are we building the right system with respect to the customer?


'V' Model : - It is an extension of the Fish model. This model defines the mapping between the development and testing processes.

'V' stands for Verification & Validation.



LCD (Development)                     LCT (Testing)


-> Information Gathering & Analysis   * Assessment of development plan
                                      * Prepare test plan
                                      * Requirements phase testing


-> Design & Coding                    * Design phase testing
                                      * Program phase testing


-> Form Build                         * Functional & System testing
                                      * User acceptance testing
                                      * Test documentation


-> Release & Maintenance              * Port testing
                                      * Test S/W changes
                                      * Test efficiency




Refined form of the 'V' Model:



BRS      <-->  User acceptance testing
S/W RS   <-->  Functional & System testing
HLD      <-->  Integration testing
LLD      <-->  Unit testing

Coding (the base of the 'V')




The real 'V' model is expensive to follow for small- and medium-scale organizations. For this reason, these organizations make some changes to the 'V' model: they maintain a separate testing team only for the functional & system testing phase, because this phase of development is a bottleneck. For the remaining stages of testing, organizations take the help of the same developers.


I. REVIEWS DURING ANALYSIS:


In general, S/W development starts with information gathering & analysis. In this phase, business analyst category people develop the BRS & S/W RS documents. To estimate the completeness and correctness of these documents, they conduct reviews.


BRS -> S/WRS


-> Are they right requirements?

-> Are they complete?

-> Are they achievable? (W.R.T. technology)

-> Are they reasonable? (W.R.T. time)

-> Are they testable?


II. REVIEWS DURING DESIGN:


After completion of analysis and its reviews, designing category people concentrate on external design & internal design development. To estimate the completeness & correctness of these documents, they conduct reviews.


-> Are they understandable?

-> Do they meet the right requirements?

-> Are they complete?

-> Are they followable?

-> Do they handle errors?


III. UNIT TESTING:


After completion of design and its reviews, programmers concentrate on coding to construct the S/W physically. In this stage, programmers test every program through a set of White Box testing techniques.


a. Execution Testing :


Program -> Basis paths coverage (every statement in the program runs correctly)

-> Loops coverage (termination of iterations)

-> Programming technique coverage (fewer memory cycles & CPU cycles during execution; a small sketch follows this list)
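
The coverage ideas above can be illustrated with a minimal, hypothetical Python sketch (Python is used here only for illustration; the function and its tests are not from the original material): a function with one branch and one loop, plus unit tests that exercise zero, one and many iterations and both basis paths.

```python
# Hypothetical execution (white box) testing sketch.
import unittest

def sum_positive(numbers):
    """Return the sum of the positive values in a list."""
    total = 0
    for n in numbers:          # loop whose termination we want to cover
        if n > 0:              # branch that creates two basis paths
            total += n
    return total

class ExecutionTests(unittest.TestCase):
    def test_empty_list_zero_iterations(self):
        self.assertEqual(sum_positive([]), 0)             # loop body never runs

    def test_single_iteration_true_branch(self):
        self.assertEqual(sum_positive([5]), 5)            # one pass, if-branch taken

    def test_single_iteration_false_branch(self):
        self.assertEqual(sum_positive([-3]), 0)           # one pass, if-branch skipped

    def test_many_iterations(self):
        self.assertEqual(sum_positive([1, -2, 3, 4]), 8)  # loop terminates after many passes

if __name__ == "__main__":
    unittest.main()
```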


b. Operation Testing :


Whether our executed program is operable on other customer-expected platforms or not?

Platform means the O/S, compilers, browsers and other system S/W.


c. Mutation Testing :


Mutation means a change in program logic. Programmers follow this technique to estimate the completeness and correctness of a program's tests.


Tests on the original program            -> Passed

Tests on the changed program (mutant)    -> all Passed   (tests are Incomplete)

Tests on the changed program (mutant)    -> 1 Failed     (tests are Complete)

If our tests are incomplete for that program, then continue testing the same program with new tests; otherwise continue with the next program's testing (see the sketch below).
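
A minimal, hypothetical Python sketch of this idea (the program, the mutant and the test data are invented for illustration): the same test set runs against the original program and a deliberately changed copy; if every test still passes on the mutant, the test set is incomplete.

```python
# Hypothetical mutation testing sketch.
# If all tests pass on the mutated version, the test set is incomplete;
# if at least one test fails, the tests "kill" the mutant and are adequate.

def add(a, b):            # original program under test
    return a + b

def add_mutant(a, b):     # mutant: a deliberate change in the logic
    return a - b

tests = [
    (2, 3, 5),
    (0, 0, 0),
    (10, 1, 11),
]

def run_tests(func):
    """Return True when every (input, expected) case passes for func."""
    return all(func(a, b) == expected for a, b, expected in tests)

assert run_tests(add), "original program should pass its own tests"

if run_tests(add_mutant):
    print("All tests passed on the mutant -> test set is incomplete, add tests")
else:
    print("Mutant killed -> tests are complete for this change")
```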


IV. INTEGRATION TESTING:


After completion of the dependent programs' development and unit testing, programmers compose them to form a system. During this composition of programs, programmers conduct integration testing to estimate the completeness and correctness of control transmission between those programs. There are 3 approaches to integration:


a. Top-Down Approach :

Conducting testing on the main module without some of its sub modules being ready is called the Top-Down approach.




Sub1<---Main --->Stub--x--Sub2


In the above diagram, the stub is a called program (it stands in for the unfinished Sub2).


b. Bottom-Up Approach :

Conducting testing on the sub modules without starting from the main module is called the Bottom-Up approach.



Main--x--Driver--->Sub1--->Sub2


In the above diagram, the driver is a calling program (it stands in for the main module).


c. Hybrid Approach :

It is a combination of both the Top-Down & Bottom-Up approaches.



Main--x--Driver--->Sub1--->Sub2--->Stub--x--Sub3


The above approach is also known as the Sandwich approach.
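
As a rough illustration of stubs and drivers, here is a hypothetical Python sketch (module and function names are invented): a stub stands in for an unfinished called sub module in the top-down approach, and a driver stands in for the unfinished calling main module in the bottom-up approach.

```python
# Hypothetical integration-testing sketch with a stub and a driver.

# --- Top-down: the main module is ready, the real sub2 is not, so a stub answers for it.
def sub1(order_id):
    return {"order_id": order_id, "status": "validated"}

def sub2_stub(order):
    # Stub: a temporary *called* program returning a canned value
    # in place of the unfinished sub module.
    return {"payment": "approved (stubbed)"}

def main_module(order_id):
    order = sub1(order_id)
    payment = sub2_stub(order)     # control passes to the stub instead of the real sub2
    return order, payment

print(main_module(101))

# --- Bottom-up: the sub modules are ready, main is not, so a driver calls them.
def sub2(order):
    return {"payment": "approved"}

def driver(order_id):
    # Driver: a temporary *calling* program that plays the role of the main module.
    order = sub1(order_id)
    return sub2(order)

print(driver(102))
```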


V. FUNCTIONAL & SYSTEM TESTING:


After receiving the build from development, a separate testing team concentrates on Functional & System testing. In this phase of testing, the testing team follows BBT techniques.

There are 4 divisions in BBT:


a. Usability Testing

b. Functional Testing

c. Performance Testing

d. Security Testing


a. Usability Testing :

In general, the system testing process starts with usability testing. During this test, the testing team concentrates on the "user friendliness" of screens. Usability testing is classified into the subtests below.

a.1. User Interface Testing :

->Ease of use (Understandability of screens)

->Look & Feel (Attractiveness of screens)

E.g.: Font, Style, Alignment, Color.

->Speed in interface (Short navigations to complete a task)

a.2. Manual Support Testing :

Whether the user manuals consist of context-sensitive help or not?



Receive build from developers
        |
User Interface testing     \
        |                    Usability Testing --> Remaining Functional & System testing
Manual Support testing     /


b. FUNCTIONAL TESTING:

It is a necessary (mandatory) testing part in BBT. During this test, the testing team concentrates on "meeting customer requirements".

This testing is classified into the subtests below.


a. Functionality Testing :

It is also known as Requirements Phase Testing. During this test, test engineers validate every functionality in terms of the coverages below.


-> Behavioral Coverage (Changes in properties of object with respect to navigation)

-> Error-handling Coverage (preventing negative navigations)

-> Input Domain Coverage (Size & type of every input object)

-> Calculations Coverage (Correctness of O/P)

-> Back-end Coverage (Impact of front-end operations on back-end tables)

-> Service levels Coverage (Order of functionalities w.r.t. customer requirements)


b. Input Domain Testing :

It is a part of functionality testing, but test engineers give special treatment to this test with the help of two mathematical notations (a small sketch follows the worked example below):

Boundary Value Analysis (BVA) and Equivalence Class Partitioning (ECP)


BVA (Size/Range)


Min ->Pass

Min-1 ->Fail

Min+1 ->Pass

Max ->Pass

Max-1 ->Pass

Max+1 ->Fail


ECP(Type)


Valid -> Pass

Invalid -> Fail


E.g.: A login process allows a userid & pwd to authorize users. The userid allows lowercase alphanumerics, 4-16 characters long. The pwd allows lowercase alphabets, 4-8 characters long.

Prepare BVA & ECP for userid and pwd.


Userid

BVA (Size/Range)

Min -> 4 chars -> Pass

Min-1 -> 3 chars -> fail

Min+1 -> 5 chars -> pass

Max -> 16 chars -> pass

Max-1 -> 15 chars -> pass

Max+1 -> 17 chars -> fail


ECP(Type)

Valid -> (a-z)(0-9) ->pass

Invalid -> (A-Z) Special Characters and Blanks ->fail


Pwd

BVA (Size/Range)

Min -> 4 chars -> Pass

Min-1 -> 3 chars -> fail

Min+1 -> 5 chars -> pass

Max -> 8 chars -> pass

Max-1 -> 7 chars -> pass

Max+1 -> 9 chars -> fail


ECP (Type)

Valid -> (a-z) ->pass

Invalid -> (A-Z)(0-9)Special Characters and Blanks ->fail
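
The userid rule above can be turned into an automated check. The following is a minimal Python sketch, assuming a hypothetical validate_userid() that implements the stated rule (lowercase alphanumeric, 4-16 characters); the BVA sizes and ECP classes follow the tables above.

```python
# Hypothetical BVA / ECP sketch for the userid example above.
import re

def validate_userid(userid):
    """Accept 4-16 lowercase alphanumeric characters (assumed rule)."""
    return re.fullmatch(r"[a-z0-9]{4,16}", userid) is not None

# Boundary Value Analysis on size: min-1, min, min+1, max-1, max, max+1
bva_cases = {3: False, 4: True, 5: True, 15: True, 16: True, 17: False}
for length, expected in bva_cases.items():
    assert validate_userid("a" * length) is expected, f"BVA failed at length {length}"

# Equivalence Class Partitioning on type: one value per class is enough
ecp_cases = {
    "user1234": True,    # valid class: lowercase letters + digits
    "USER1234": False,   # invalid class: uppercase
    "user 123": False,   # invalid class: blank
    "user#123": False,   # invalid class: special character
}
for value, expected in ecp_cases.items():
    assert validate_userid(value) is expected, f"ECP failed for {value!r}"

print("All BVA and ECP cases behaved as expected")
```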



c.Recovery Testing:

It is also known as Reliability Testing. During this test, test engineers validate whether our application can come back from an abnormal state to the normal state or not.

Abnormal state: Not able to continue.


--->Abnormal state----Back up & Recovery Procedures--->Normal.
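
A toy Python sketch of such a back-up & recovery check (the data file, backup file and recovery procedure are invented for illustration): the data file is corrupted to simulate the abnormal state, and the recovery procedure must restore the normal state from the backup.

```python
# Hypothetical recovery-testing sketch.
import json, os, tempfile

workdir = tempfile.mkdtemp()
data_file = os.path.join(workdir, "data.json")
backup_file = os.path.join(workdir, "data.backup.json")

normal_state = {"accounts": 3, "balance": 1500}

# Normal operation, with a backup taken beforehand.
with open(data_file, "w") as f:
    json.dump(normal_state, f)
with open(backup_file, "w") as f:
    json.dump(normal_state, f)

# Abnormal state: the data file is corrupted mid-write.
with open(data_file, "w") as f:
    f.write('{"accounts": 3, "bal')   # truncated JSON

def recover(data_path, backup_path):
    """Recovery procedure: if the data file is unreadable, restore the backup."""
    try:
        with open(data_path) as f:
            return json.load(f)
    except json.JSONDecodeError:
        with open(backup_path) as f:
            restored = json.load(f)
        with open(data_path, "w") as f:
            json.dump(restored, f)
        return restored

assert recover(data_file, backup_file) == normal_state
print("Application recovered from the abnormal state to the normal state")
```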


d.Compatibility Testing:

It is also known as Portability Testing. During this test, test engineers validate whether our application build runs on customer-expected platforms or not?

Platforms mean that O/S, Compilers, Browsers and other system S/W.


Forward Compatibility:

Build(VB 6.0)---->O/S(Unix, Linux)--x-->Build(VB 6.0)

Backward Compatibility:

Build (Oracle)--x-->O/S(Win 98)---->Build(Oracle)


e. Configuration Testing:

It is also known as Hardware Compatibility Testing. During this test, test engineers validate whether our application build can support different technology H/W devices or not?

E.g.: Different technology printers.

Different technology LAN topologies.

Different technology LAN's...etc.,


f. Intersystem testing:

It is also known as End-to-End Testing / Penetration Testing. During this test, test engineers validate whether our application build correctly shares the resources of other applications or not?

E.g.: E-Seva


WBA --\
EBA ----> Local DB (common resource) --> Servers
TBA --/

IBA (new component) --> New Server


g. Installation Testing:



Build+Supported S/W -->Customer Expected Configured System

->Set up program execution (To start installation)

->Easy Interface (During installation)

->Occupied disk space (After installation)


h. Parallel Testing:

It is also known as Comparative Testing. During this test, test engineers compare our application build with old versions of the same application or with competitive products in the market, to estimate competitiveness. This testing is applicable only to S/W products.


i. Sanitation Testing:

It is also known as Garbage Testing. During this test, test engineers find extra features in the application build w.r.t. the SRS.



c. PERFORMANCE TESTING:

It is an expensive testing technique in BBT. During this test, the testing team concentrates on the "speed of processing".

This performance testing is classified into the subtests below.

a. Load Testing:

The execution of our application under the customer-expected configuration and the customer-expected load to estimate performance is called Load Testing / Scalability Testing.

Load / scale means the number of concurrent users accessing our application.
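
A minimal Python sketch of the idea, assuming a stand-in transaction function instead of a real application call and an assumed customer-expected load of 50 concurrent users: a thread pool simulates the concurrent users and the response times are summarized.

```python
# Hypothetical load-testing sketch.
import time
import random
from concurrent.futures import ThreadPoolExecutor

def do_transaction(user_id):
    """Stand-in for one user's request; a real test would call the application."""
    start = time.time()
    time.sleep(random.uniform(0.05, 0.15))   # simulated server processing time
    return time.time() - start

expected_load = 50   # customer-expected number of concurrent users (assumed)

with ThreadPoolExecutor(max_workers=expected_load) as pool:
    response_times = list(pool.map(do_transaction, range(expected_load)))

print(f"concurrent users: {expected_load}")
print(f"average response: {sum(response_times) / len(response_times):.3f}s")
print(f"worst response:   {max(response_times):.3f}s")
```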

b. Stress Testing:

The execution of our application under the customer-expected configuration while varying the load up to and beyond the peak load, to estimate performance, is called Stress Testing.

c. Storage Testing:

The execution of our application under huge amounts of resources to estimate the peak limits of storage is called Storage Testing.

E.g.: MS-Access technology supports a 2 GB database at maximum.

10 MHZ--100 Key strokes per second.

d. Data volume Testing:

The execution of our application under huge amounts of resources to estimate the peak limit of data, in terms of the number of records, is called Data Volume Testing.

c & d are the same; only the terminology differs.


d. SECURITY TESTING:

It is a complex testing technique to apply. During this test, the testing team concentrates on the "privacy of operations".

This testing is classified into the sub tests below.

a. Authorization Testing:

Whether our application allows valid users and prevents invalid users or not?

This kind of observation is called Authorization Testing.

E.g.: Login with userid, pwd, credit card number validation, pin number validation, fingerprint, and digital signatures.

b. Access Control Testing:

Whether a valid user has permissions to use specific services or not?

c. Encryption / Decryption Testing:

Whether the code conversions between the client process and the server process are working correctly or not?


Client -- (Request) --> Encryption -- (Cipher text) --> Decryption --> Server

Server -- (Response) --> Encryption -- (Cipher text) --> Decryption --> Client
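
A toy Python sketch of such a round-trip check, using an invented XOR cipher purely for illustration (a real test would exercise the application's actual encryption/decryption between the client and server processes):

```python
# Hypothetical encryption/decryption testing sketch with a toy XOR cipher.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: the same call encrypts and decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"secret-key"
request = b"GET /balance?account=1001"

cipher_text = xor_cipher(request, key)          # client side: encryption
assert cipher_text != request                   # plain text must not travel as-is
assert xor_cipher(cipher_text, key) == request  # server side: decryption restores it

print("Request round-tripped through encryption and decryption correctly")
```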


Note: In small-scale organizations, test engineers cover Authorization testing & Access Control testing, while developers cover Encryption / Decryption testing.


VI. USER ACCEPTANCE TESTING:

After completion of functional & system testing, project management concentrates on user acceptance testing to collect feedback from customer-side people.

There are two approaches to conduct this test:

Alpha Test: # S/W applications

# In development site

# By real customers

Beta Test: # S/W products

# In customer site like environment

# By customer site like people


Collect feedback.


VII. TESTING DURING MAINTENANCE:


After completion of user acceptance testing and the resulting modifications, project management concentrates on release team formation with a few developers, testers and H/W engineers. This release team goes to the customer site and conducts port testing.

During port testing, the release team concentrates on the factors below at the customer site.

-> Compact Installation.

-> Overall Functionality.

-> I/P devices handling.

-> O/P devices handling.

-> Secondary storage devices handling.

-> Co-existence with other S/W to share common resources.

-> O/S error handling.

After completion of port testing, the release team provides training sessions to customer-site people.

During utilization of that S/W, customer-site people send change requests to our organization. There are two types of change requests to be solved:


Change Request
   -> Enhancement   --> Impact Analysis --> Perform S/W changes --> Test S/W changes
   -> Missed defect --> Impact Analysis --> Perform S/W changes --> Test S/W changes
                        --> Improve testing process capability


C.C.B: Change Control Board


TESTING TERMINOLOGY:


1. Monkey Testing:

A test engineer conducts a test on the application build through coverage of the main activities only; this is called Monkey Testing / Chimpanzee Testing.

2. Exploratory Testing:

A tester conducts testing on the application build through coverage of activities level by level; this is called Exploratory Testing.



3. Ad-hoc Testing:

A tester conducts a test on the application build w.r.t. predetermined ideas; this is called Ad-hoc Testing.

4. Big-Bang Testing:

An organization conducts a single stage of testing after completion of entire module development; this is called Big Bang Testing / Informal Testing (single-stage testing).

5. Incremental Testing:

An organization follows multiple stages of testing from the document level to the system level; this is called Incremental Testing.

E.g.: LCT

6. Sanity Testing:

Whether the build released by the development team is stable enough for complete testing to be applied or not?

This observation is called Sanity Testing / Tester Acceptance Testing / Build Verification Testing.

7. Smoke Testing:

An extra shake-up in Sanity Testing is called Smoke Testing. In this stage, test engineers try to find the reason why the build is not working, before testing starts.

Sanity Testing is mandatory & Smoke Testing is optional.
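
A rough sketch of a build verification (sanity) check in Python, with invented placeholder checks standing in for quick looks at the main activities; if any check fails, the build is rejected before complete testing starts.

```python
# Hypothetical sanity / build verification sketch.

def build_checks():
    """Each check is a stand-in for a quick look at one main activity."""
    yield "application launches", True
    yield "login screen loads", True
    yield "database connection opens", True

def sanity_test():
    failures = [name for name, ok in build_checks() if not ok]
    if failures:
        print("Build rejected, failed checks:", ", ".join(failures))
        return False
    print("Build accepted for complete testing")
    return True

sanity_test()
```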

8. Static vs. Dynamic Testing:

A tester conducts a test on the application build without running it; this is called Static Testing.

E.g.: Usability Testing.

A tester conducts a test through the execution of the application build; this is called Dynamic Testing.

E.g.: Functional, Performance, Security Testing.

9. Manual Vs Automation Testing :

A test engineer conducts a test on the application build without using any third-party testing tool; this is called Manual Testing.

A tester conducts a test on the application build with the help of a testing tool; this is called Test Automation.


Manual-----Build----->Test Engineer


Automation-----Build----->Testing Tools----->Test Engineer


Impact:

The impact of a test indicates test repetition with multiple test data.

E.g.: Functionality Testing

Criticality:

The criticality of a test indicates the complexity of executing that test manually.

E.g.: Load Testing

10. Re-testing:

The re-execution of a test on the same application build with multiple test data is called Re-testing (a small sketch follows the test data below).

E.g.: Multiply:

i/p1: __

i/p2: __ OK Result: ___



Expected Result=i/p1*i/p2


Test Data:

i/p 1 i/p 2


min min

max max

min max

max min

value 0

0 value

--- ---
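
A minimal Python sketch of this re-testing, with an assumed multiply() stand-in and assumed boundary values; the test data follow the table above and the expected result is i/p1 * i/p2.

```python
# Hypothetical re-testing sketch for the Multiply example:
# the same test is re-executed with multiple sets of test data.

def multiply(a, b):
    return a * b   # stand-in for the feature under test

MIN, MAX = -10_000, 10_000   # assumed input boundaries
test_data = [
    (MIN, MIN), (MAX, MAX), (MIN, MAX), (MAX, MIN),
    (123, 0), (0, 456),
]

for ip1, ip2 in test_data:
    expected = ip1 * ip2                       # Expected Result = i/p1 * i/p2
    assert multiply(ip1, ip2) == expected, f"failed for {ip1} x {ip2}"

print(f"Re-testing passed for {len(test_data)} sets of test data")
```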


11. Regression Testing:

The execution of selected tests on a modified build, to ensure that the bug fix works and that no side effects have been introduced, is called Regression Testing.


Modified build --> re-execute related passed tests / failed tests --> Passed --> continue with R.T.
                                                                  --> Failed --> Defect Reports --> Developers

Here R.T. means the remaining tests.
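
A rough Python sketch of selecting tests for the modified build (test names, modules and results are invented): the previously failed test that drove the fix is re-run together with passed tests related to the changed module, to catch side effects.

```python
# Hypothetical regression test selection sketch.

all_tests = {
    "test_login_lockout": {"module": "login",   "last_result": "failed"},
    "test_login_success": {"module": "login",   "last_result": "passed"},
    "test_report_export": {"module": "reports", "last_result": "passed"},
    "test_invoice_total": {"module": "billing", "last_result": "passed"},
}

modified_module = "login"   # the area the developers changed for the bug fix

selected = [
    name for name, info in all_tests.items()
    if info["last_result"] == "failed" or info["module"] == modified_module
]

print("Tests selected for the modified build:", selected)
# -> the bug-fix test plus related passed tests, to expose side effects
```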

12. Error, Defect, Bug:

A mistake in coding is called an Error.

A mismatch found by a test engineer during testing, due to a mistake in coding, is called a Defect / Issue.

A defect accepted to be solved is called a Bug.
