Tuesday, July 6, 2010

Despite vs In spite of

Despite

Despite means "even though," "notwithstanding," or "regardless of." It's the opposite of "because of/due to," and can be used with a noun or gerund.

She had difficulty communicating in French despite all her years of study.

We lost the game, despite the fact that we practiced all week.

Despite not having an umbrella, I walked home in the rain.


In spite of

In spite of means exactly the same thing and is used exactly the same way as "despite."

She had difficulty communicating in French in spite of all her years of study.

We lost the game, in spite of the fact that we practiced all week.

In spite of not having an umbrella, I walked home in the rain.


The Bottom Line

The English terms despite and in spite of are synonyms. Despite might be a tiny bit more formal, but the two terms are interchangeable. Just be careful not to say something like "despite of" or "in despite" - it's always either the three words in spite of, or just the single word despite.

Friday, June 25, 2010

How to find fifth highest salary

eg1 (correlated subquery: a value is the fifth highest when exactly four distinct values exceed it):

select remaining_funds from election a
where 4 = (select count(distinct remaining_funds) from election b
           where a.remaining_funds < b.remaining_funds)

eg2 (SQL Server; the derived table needs an alias, and the outer query must re-sort ascending to pick the fifth):

select top 1 salary from
  (select top 5 salary from tbl_Employee order by salary desc) as t
order by salary asc

Top 5 salaries:

select distinct remaining_funds from election a
where 4 >= (select count(distinct remaining_funds) from election b
            where a.remaining_funds < b.remaining_funds)
order by remaining_funds desc
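The logic of the correlated subquery (a value is the nth highest when exactly n-1 distinct values exceed it) can be sketched in Python to verify the idea against sample data:

```python
def nth_highest(values, n):
    """Return the nth highest value: exactly n-1 distinct values exceed it."""
    distinct = sorted(set(values), reverse=True)
    return distinct[n - 1] if len(distinct) >= n else None

# Duplicates are collapsed, just as count(distinct ...) does in the SQL above.
salaries = [90, 80, 80, 70, 60, 50, 40]
print(nth_highest(salaries, 5))  # 50
```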

Saturday, June 19, 2010

What is the difference between Get and Post methods

What is proxy server?

What is meant by trunk and tag in TortoiseSVN?

What is RSS?

What is the difference between HTTPS and HTTP?

What is SSL?

The Secure Socket Layer protocol was created by Netscape to ensure secure transactions between web servers and browsers. The protocol uses a third party, a Certificate Authority (CA), to identify one or both ends of the transaction. In short, this is how it works.

1. A browser requests a secure page (usually https://).
2. The web server sends its public key with its certificate.
3. The browser checks that the certificate was issued by a trusted party (usually a trusted root CA), that the certificate is still valid, and that the certificate is related to the site contacted.
4. The browser then uses the public key to encrypt a random symmetric encryption key and sends it to the server, along with the encrypted URL required and other encrypted HTTP data.
5. The web server decrypts the symmetric encryption key using its private key, and uses the symmetric key to decrypt the URL and HTTP data.
6. The web server sends back the requested HTML document and HTTP data, encrypted with the symmetric key.
7. The browser decrypts the HTTP data and HTML document using the symmetric key and displays the information.

Several concepts have to be understood here.

Private Key/Public Key:

Encryption using a private key/public key pair ensures that data can be encrypted by one key but decrypted only by the other key of the pair. This is sometimes hard to understand, but it works. The keys are similar in nature and can be used alternatively: what one key encrypts, the other key of the pair can decrypt. The key pair is based on prime numbers, and the length of the keys in bits determines the difficulty of decrypting the message without the key pair. The trick with a key pair is to keep one key secret (the private key) and to distribute the other key (the public key) to everybody. Anybody can send you an encrypted message that only you will be able to decrypt, since you are the only one who has the other key of the pair. Conversely, you can certify that a message comes from you, because you have encrypted it with your private key, and only the associated public key will decrypt it correctly. Beware: in this case the message is not secret; you have only signed it. Everybody has the public key, remember!

One problem remains: knowing the public key of your correspondent. Usually you will ask them to send you a non-confidential signed message that contains their public key as well as a certificate.

Message-->[Public Key]-->Encrypted Message-->[Private Key]-->Message
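As an illustration only (real keys use very large primes; the values below come from a common textbook example), a toy RSA key pair shows the encrypt-with-one-key, decrypt-with-the-other behaviour described above:

```python
# Toy RSA with tiny primes -- for illustration only, never for real security.
p, q = 61, 53
n = p * q              # modulus, shared by both keys
phi = (p - 1) * (q - 1)
e = 17                 # public exponent, coprime with phi
d = pow(e, -1, phi)    # private exponent: modular inverse of e (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)    # encrypt with the public key
plaintext = pow(ciphertext, d, n)  # decrypt with the private key
print(plaintext)  # 42

# The keys work alternatively: "sign" with the private key, verify with the public.
signature = pow(message, d, n)
print(pow(signature, e, n))  # 42
```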

The Certificate:

How do you know that you are dealing with the right person, or rather the right web site? Well, someone has gone to great lengths (if they are serious) to ensure that the web site owners are who they claim to be. This someone you have to implicitly trust: you have his/her certificate loaded in your browser (a root certificate). A certificate contains information about the owner of the certificate, such as e-mail address, owner's name, certificate usage, duration of validity, resource location or Distinguished Name (DN), which includes the Common Name (CN) (web site address or e-mail address, depending on the usage), and the certificate ID of the person who certifies (signs) this information. It also contains the public key, and finally a hash to ensure that the certificate has not been tampered with. Since you chose to trust the person who signed this certificate, you also trust this certificate. This is a certificate trust tree or certificate path. Usually your browser or application has already loaded the root certificates of well-known Certification Authorities (CAs). The CA maintains a list of all signed certificates as well as a list of revoked certificates. A certificate is insecure until it is signed, as only a signed certificate cannot be modified. A certificate can be signed using itself; this is called a self-signed certificate. All root CA certificates are self-signed.

As you may have noticed, the certificate contains the reference to the issuer, the public key of the owner of this certificate, the dates of validity of this certificate, and the signature of the certificate to ensure it hasn't been tampered with. The certificate does not contain the private key, as that should never be transmitted in any form whatsoever. The certificate has all the elements needed to send an encrypted message to the owner (using the public key) or to verify a message signed by the author of this certificate.


The Symmetric key:

Well, private key/public key encryption algorithms are great, but they are not usually practical. The scheme is asymmetric because you need the other key of the pair to decrypt; you can't use the same key to encrypt and decrypt. An algorithm using the same key to encrypt and decrypt is said to use a symmetric key. A symmetric algorithm is much faster at its job than an asymmetric algorithm, but a symmetric key is potentially highly insecure: if the enemy gets hold of the key, you have no more secrets. You must therefore transmit the key to the other party without the enemy getting its hands on it, and as you know, nothing is secure on the Internet. The solution is to encapsulate the symmetric key inside a message encrypted with an asymmetric algorithm. Since you have never transmitted your private key to anybody, a message encrypted with the public key is secure (relatively secure; nothing is certain except death and taxes). The symmetric key is also chosen randomly, so that if it is ever discovered, the next transaction will be totally different.
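The hybrid flow described above (bulk data under a random symmetric key, and only that key protected asymmetrically) can be sketched with deliberately toy primitives. The XOR keystream and tiny RSA primes here are illustrations only, not secure algorithms:

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher: the same call both encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Tiny RSA key pair (see the public/private key section) -- illustration only.
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi)

# Browser side: pick a random symmetric key, wrap it with the server's public key.
sym_key = secrets.token_bytes(8)
wrapped = [pow(b, e, n) for b in sym_key]        # one toy RSA block per byte
encrypted_data = xor_cipher(b"GET /secret", sym_key)

# Server side: unwrap the symmetric key with the private key, decrypt the data.
recovered_key = bytes(pow(c, d, n) for c in wrapped)
print(xor_cipher(encrypted_data, recovered_key))  # b'GET /secret'
```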

Difference between SOA and SOAP

What would the company need to offer for you to stay back in the company?

Best answer:
There is not much scope to learn in my current role, whereas I am looking for different domains and technologies to learn.

Difference between cookies and sessions

As far as I know, cookies are stored on the client side, whereas sessions are server variables. There are also storage limitations (for example, IE restricts the size of a cookie to no more than 4096 bytes). We can store only a string value in a cookie, whereas objects can be stored in session variables. The client has to accept cookies in order to use them; there is no need for the user's approval or confirmation to use session variables, because they are stored on the server. The other aspect is that cookies can be kept as long as we want (even for a lifetime) if the user accepts them, but with session variables we can only store something as long as the session has not timed out or the browser window has not been closed, whichever occurs first.

Coming to usage, you can use both cookies and sessions in the same page.

We should go for cookies to store something that we want to know when the user returns to the web page in the future (e.g., the "remember me on this computer" check box on login pages uses cookies to recognize the user on return). Sessions should be used to remember something for that particular browser session (like the user name, to display on every page wherever needed).

Cookies
- stored on the CLIENT machine
- amount of data that can be stored is LIMITED
- can only store STRINGS
- FASTER than a session

Session
- stored on the SERVER machine
- amount of data that can be stored is NOT LIMITED
- can store OBJECTS
- SLOWER compared to cookies

Sessions: these are basically tokens generated when a user proceeds through a login mechanism. Each time a user logs into a website, a new and unique token is generated, and it is destroyed whenever he/she logs out from that site or the power goes off. Session information is temporary and will be deleted after the user has left the website.

Cookies: these are temporary files stored on the user's hard disk. A cookie is often used to identify a user. Suppose a user enters a website and closes the page without logging off; the next time he/she opens the page, he/she is already logged in. This is because of cookies: they store the user information. In PHP, cookies are set with the setCookie() function, whose syntax is setCookie(name, value, expire, path, domain).
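In Python (as opposed to the PHP setCookie() mentioned above), the standard-library http.cookies module builds the same kind of Set-Cookie response header a server would send:

```python
from http.cookies import SimpleCookie

# Build a cookie the way a server would before sending it in an HTTP response.
cookie = SimpleCookie()
cookie["user"] = "rajesh"
cookie["user"]["path"] = "/"
cookie["user"]["max-age"] = 3600  # survives for an hour, even across browser restarts

# The Set-Cookie header line to send to the client.
header = cookie.output()
print(header)
```

The max-age attribute is what makes a cookie persistent; omitting it yields a session cookie that the browser discards when closed.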

Sunday, June 6, 2010

Nice conversation 1

what's up, i don't wanna annoy you, I just found you're profile in the search and thought you seemed cool

So, my name is Rajesh. I think we should be friends, cause you seem pretty nice, and maybe even cute! (it's so tough to tell in this digital world

Friday, June 4, 2010

What is SEI? CMM? ISO? IEEE? ANSI?

SEI = ‘Software Engineering Institute’ at Carnegie-Mellon University; initiated by the U.S. Defense Department to help improve software development processes.

· CMM = ‘Capability Maturity Model’, developed by the SEI. It’s a model of 5 levels of organizational ‘maturity’ that determine effectiveness in delivering quality software. It is geared to large organizations such as large U.S. Defense Department contractors. However, many of the QA processes involved are appropriate to any organization, and if reasonably applied can be helpful. Organizations can receive CMM ratings by undergoing assessments by qualified auditors.

Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals to successfully complete projects. Few if any processes in place; successes may not be repeatable.

Level 2 – software project tracking, requirements management, realistic planning, and configuration management processes are in place; successful practices can be repeated.

Level 3 – standard software development and maintenance processes are integrated throughout an organization; a Software Engineering Process Group is in place to oversee software processes, and training programs are used to ensure understanding and compliance.

Level 4 – metrics are used to track productivity, processes, and products. Project performance is predictable, and quality is consistently high.

Level 5 – the focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required.

· ISO = ‘International Organization for Standardization’ – The ISO 9001, 9002, and 9003 standards concern quality systems that are assessed by outside auditors, and they apply to many kinds of production and manufacturing organizations, not just software. The most comprehensive is 9001, and this is the one most often used by software development organizations. It covers documentation, design, development, production, testing, installation, servicing, and other processes. ISO 9000-3 (not the same as 9003) is a guideline for applying ISO 9001 to software development organizations. The U.S. version of the ISO 9000 series standards is exactly the same as the international version, and is called the ANSI/ASQ Q9000 series. The U.S. version can be purchased directly from the ASQ (American Society for Quality) or the ANSI organizations. To be ISO 9001 certified, a third-party auditor assesses an organization, and certification is typically good for about 3 years, after which a complete reassessment is required. Note that ISO 9000 certification does not necessarily indicate quality products – it indicates only that documented processes are followed.

· IEEE = ‘Institute of Electrical and Electronics Engineers’ – among other things, creates standards such as ‘IEEE Standard for Software Test Documentation’ (IEEE/ANSI Standard 829), ‘IEEE Standard of Software Unit Testing (IEEE/ANSI Standard 1008), ‘IEEE Standard for Software Quality Assurance Plans’ (IEEE/ANSI Standard 730), and others.

· ANSI = ‘American National Standards Institute’, the primary industrial standards body in the U.S.; publishes some software-related standards in conjunction with the IEEE and ASQ (American Society for Quality).

Friday, May 21, 2010

Batch processing Application in .net

just make a console app and use windows scheduler to run it.

Batch file

In DOS, OS/2, and Microsoft Windows, a batch file is a text file containing a series of commands intended to be executed by the command interpreter. When a batch file is run, the shell program (usually COMMAND.COM or cmd.exe) reads the file and executes its commands, normally line-by-line. Batch files are useful for running a sequence of executables automatically and are often used by system administrators to automate tedious processes.[1] Unix-like operating systems (such as Linux) have a similar type of file called a shell script.[2]

DOS batch files have the filename extension .bat. Batch files for other environments may have different extensions, e.g. .cmd or .bat in the Microsoft Windows NT-family of operating systems and OS/2, or .btm in 4DOS and 4NT related shells. The Windows 9x family of operating systems only recognize the .bat extension.

Console Applications

Console applications are command-line oriented applications that allow us to read characters from the console and write characters to the console, and they run in a console (DOS-style) window. Console applications are written in code and are supported by the System.Console namespace.

Tuesday, May 18, 2010

what is difference between Test life cycle and defect life cycle

what is Agile Methodology with respect to Software Testing? What is Sprint ? What is Scrum ? What is the Purpose of this Method?

Agile methodology w.r.t. software testing:

- Agile methodology refers to the selection criteria used to measure the effectiveness and efficiency of the testing process.

- It identifies the key individuals in the activities for compressing software testing time.

The methodology says:

1. It is far better to change the current process than to acquire/build and implement an entirely new process.

2. Focusing on time compression (i.e., reducing the time required to perform a task) has, as its by-product, testing effectiveness and process agility.

3. The quickest way to compress time in a testing process is to reduce process variability.

4. It is more important to determine that ideas are implementable than to select the best idea, which may not be doable.

5. Continuous small improvements are superior to a few major improvements.

6. Don't make any improvement until you know that the organization, and those involved, will support it (i.e., don't begin a task that you know has a high probability of failure).

The four key values of the Agile methodology are:

1. Individuals and interactions over processes and tools: teams of people build software systems, and to do that they need to work together effectively, including but not limited to programmers, testers, project managers, modelers, and your customers.

2. Working software over comprehensive documentation: documentation has its place; written properly, it is a valuable guide to how and why a system is built and how to work with the system.

3. Customer collaboration over contract negotiation: only your customers can tell you what they want. Successful developers work closely with their customers, invest the effort to discover what their customers need, and educate their customers along the way.

4. Responding to change over following a plan: as work progresses on your system, your project stakeholders' understanding of the problem domain and of what you are building changes. There must be room to change the plan as your situation changes; otherwise it quickly becomes irrelevant.

SPRINT: a Sprint is a feedback loop in which the software developed in the last iteration is demonstrated to project sponsors and end users. Each iteration, called a Sprint, lasts about four weeks; it is a block of time in which a slice of the software is completed.

SCRUM: a project and requirements management methodology that is often tailored into other agile methods. It sets out simple rules for authority and responsibility: if you are on the team, you are a "pig" who has the responsibility and the authority to get the job done; if you are not on the team, you are a "chicken" who provides information when requested but otherwise gets out of the way.

Scrum software testing can be more effective using an iterative life cycle with feedback loops. Scrum is simply a process based on iterative, incremental practices for managing software development.

The Scrum methodology can pose a challenge for software testers who are used to more traditional waterfall-inspired development processes.

what is the entry criteria for testing

Entry criteria for testing depend on which SDLC model is being followed. For example, in the waterfall model, the entry criterion is that the software has been developed completely; only then can testing begin.

Difference between Usability and GuI Testing

GUI testing is functional testing - ensuring that all interactions, navigation, links, pop-ups, content, etc all work as required. Every aspect of the interface must be tested and this can usually be done by developing tests based on your product's functional requirements (if these are documented).

GUI testing is done to ensure the GUI conforms to design specifications - e.g. colours, fonts, font sizes, placement of data labels and fields, icons, buttons, links, etc. are displayed as specified.

Usability testing is non-functional testing, which involves testing against non-functional requirements such as standards or development guidelines. These non-functional requirements place certain design constraints on the development activity, but don't actually explicitly state how the product should function.

Usability testing is done to ensure that the GUI is well designed and easy to use - e.g. are mandatory fields displayed first, is the cursor positioned at the right field on initial entry, is tabbing done in the right order, is the text easy to read against the background colour, etc.

These will typically be things like design consistency, ease of use, informative feedback, easy reversal of actions and learnability - all things that you should test by observing actual user behavior in the field, not by a tester from your company.

what are 6 Microsoft rules used for user interface testing?

Microsoft's 6 rules are:
1. Controls should be InitCap
2. OK and Cancel buttons should exist
3. A system menu should exist
4. Controls should be visible
5. Controls should be aligned
6. Controls should not overlap

What is GUI Testing

GUI Testing: graphical user interface testing checks the look and feel of the application, i.e. graphical elements such as the logo, images, and fonts.

what is Slippage Ratio?

Slippage Ratio = B - A, where

B = extra time taken to complete the work and A = actual time given
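The definition above is terse; reading B as the total time actually taken and A as the time originally given, a minimal sketch of the formula is:

```python
def slippage(time_taken, time_given):
    """Slippage = B - A: how far the work overran its allotted time."""
    return time_taken - time_given

# e.g. 10 days were given but the work took 12 days:
print(slippage(12, 10))  # 2
```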

useful links to expertise in testing

1.http://kuldeepse.wordpress.com/2008/02/04/1-declaring-variables/

What is the difference between end to end testing & system testing?

System testing covers functional as well as non-functional testing of the system, whereas end-to-end testing exercises all the interfaces and sub-systems. All end-to-end scenarios should be considered and executed before deployment of the system into the actual environment.

Testing that comes under system testing

System testing is nothing but testing the whole application after all the modules have been integrated.
In system testing there are two types of testing:
1. Functionality testing 2. Non-functionality testing
Functionality testing means testing whether the application functions according to the stated requirements or not.
Non-functionality testing has many more types,
i.e. load, stress, performance, reliability, security, usability, configuration, compatibility (forward & backward), volume, scalability, and localization and internationalization testing.

difference between client server application testing & web application testing

Client-server applications: the application is loaded on the server. On every client machine, an exe is installed to call this application.

Web-based applications: the application is loaded on the server, but no exe is installed on the client machine. We call the application through a browser.

Client server Technology:
1. Number of clients is predicted or known
2. Client and server are the entities to be tested
3. Both server and client locations are fixed and known to the user
4. Server to server interaction is prohibited
5. Low multimedia type of data transaction
6. Designed and implemented on intranet environment

Web based Technology
1. Number of clients is difficult to predict (millions of clients)
2. Client, Server and network are the entities to be tested
3. Server location is certain, client locations are not certain
4. Server to server interaction is normal
5. Rich multimedia type of data transaction
6. Designed and implemented on internet environment

Monday, May 17, 2010

What is test bed?Explain with an example

Test bed is the environment that is required to test software.

This includes requirements for hardware, software, memory, CPU speed, operating system, etc.

What is Golden bug?

Golden Bug: a bug that occurs in every instance of the application, with high severity and high priority.

Golden bugs are bugs that may affect the critical functionality of the system.

What is Exhaustive Testing?

Testing which covers all combinations of input values and preconditions for an element of the software under test.
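Even a small input space makes the combinatorial cost visible. A sketch enumerating every combination of two boolean flags and a small integer range:

```python
from itertools import product

# Exhaustive testing: enumerate every combination of input values.
flags = [False, True]
sizes = range(4)

cases = list(product(flags, flags, sizes))
print(len(cases))  # 2 * 2 * 4 = 16 combinations

# Each case would be fed to the element under test; the check here is a placeholder.
for a, b, n in cases:
    assert isinstance(a, bool) and n in sizes
```

Real inputs rarely allow this: a single 32-bit integer alone has over four billion values, which is why exhaustive testing is usually impractical beyond tiny domains.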

What is a Gap Analysis

Gap analysis is used to find the gap between what is implemented and what the customer actually expects (requires).

What is RTM??

RTM - Requirement Traceability Matrix

In testing this is a key artifact; the matrix helps us cross-check whether the test cases have covered all the requirement specifications.

What is Latent bugs?

An uncovered or unidentified bug which has existed in the system over a period of time is referred to as a latent bug. The bug may exist in the system for one or more versions of the software and may be identified only after its release.

The problems caused by latent bugs do not cause damage right away; they are just waiting to reveal themselves later.

One good example of a latent bug is the cause of the Y2K problem. At the beginning, the year was stored in only two digits, when it actually needed four. The problem lay dormant in the system for a long time, was identified later, and was then fixed. The damage did not occur all of a sudden; it was triggered only by the year 2000, which requires four digits.

It is very difficult to identify a latent bug using conventional testing techniques; it can be identified by code review or by usability testing that foresees forthcoming problems.

What is Test Driver and Test Stub?

A test driver is a program that replaces a high-level module (HLM) when performing the bottom-up approach to incremental testing.

A test stub is a program that replaces a low-level module (LLM) when performing the top-down approach to incremental testing.
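A minimal sketch of both roles, using hypothetical module names: the stub stands in for a low-level data module so the high-level report module can be tested first (top-down), while the driver stands in for the missing high-level caller and exercises the unit directly (bottom-up):

```python
def fetch_rows_stub():
    # Test stub: replaces the real low-level module with canned data,
    # letting the high-level module be tested before the LLM exists.
    return [("alice", 100), ("bob", 200)]

def build_report(fetch_rows):
    # High-level module under test.
    return {name: amount for name, amount in fetch_rows()}

def driver():
    # Test driver: replaces the missing high-level caller, invoking the
    # unit under test directly and checking its result.
    report = build_report(fetch_rows_stub)
    assert report == {"alice": 100, "bob": 200}
    return "pass"

print(driver())  # pass
```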

difference between bug reporting and bug tracking

Bug reporting means reporting bugs to the developer.

Bug tracking means following a bug's status through its life cycle in the application being worked on.

what is difference between application server & web server? plz. give with example?

Taking a big step back, a Web server serves pages for viewing in a Web browser, while an application server provides methods that client applications can call. A little more precisely, we can say that: A Web server exclusively handles HTTP requests, whereas an application server serves business logic to application programs through any number of protocols. Let’s examine each in more detail.

The Web server

A Web server handles the HTTP protocol. When the Web server receives an HTTP request, it responds with an HTTP response, such as sending back an HTML page. To process a request, a Web server may respond with a static HTML page or image, send a redirect, or delegate the dynamic response generation to some other program such as CGI scripts, JSPs (JavaServer Pages), servlets, ASPs (Active Server Pages), server-side JavaScripts, or some other server-side technology. Whatever their purpose, such server-side programs generate a response, most often in HTML, for viewing in a Web browser. Understand that a Web server’s delegation model is fairly simple. When a request comes into the Web server, the Web server simply passes the request to the program best able to handle it. The Web server doesn’t provide any functionality beyond simply providing an environment in which the server-side program can execute and pass back the generated responses. The server-side program usually provides for itself such functions as transaction processing, database connectivity, and messaging. While a Web server may not itself support transactions or database connection pooling, it may employ various strategies for fault tolerance and scalability such as load balancing, caching, and clustering—features oftentimes erroneously assigned as features reserved only for application servers.

The application server

As for the application server, according to our definition, an application server exposes business logic to client applications through various protocols, possibly including HTTP. While a Web server mainly deals with sending HTML for display in a Web browser, an application server provides access to business logic for use by client application programs. The application program can use this logic just as it would call a method on an object (or a function in the procedural world). Such application server clients can include GUIs (graphical user interface) running on a PC, a Web server, or even other application servers. The information traveling back and forth between an application server and its client is not restricted to simple display markup. Instead, the information is program logic. Since the logic takes the form of data and method calls and not static HTML, the client can employ the exposed business logic however it wants. In most cases, the server exposes this business logic through a component API, such as the EJB (Enterprise JavaBean) component model found on J2EE (Java 2 Platform, Enterprise Edition) application servers. Moreover, the application server manages its own resources. Such gate-keeping duties include security, transaction processing, resource pooling, and messaging. Like a Web server, an application server may also employ various scalability and fault-tolerance techniques

An example

As an example, consider an online store that provides real-time pricing and availability information. Most likely, the site will provide a form with which you can choose a product. When you submit your query, the site performs a lookup and returns the results embedded within an HTML page. The site may implement this functionality in numerous ways. I’ll show you one scenario that doesn’t use an application server and another that does. Seeing how these scenarios differ will help you to see the application server’s function.

Scenario 1: Web server without an application server

In the first scenario, a Web server alone provides the online store’s functionality. The Web server takes your request, then passes it to a server-side program able to handle the request. The server-side program looks up the pricing information from a database or a flat file. Once retrieved, the server-side program uses the information to formulate the HTML response, then the Web server sends it back to your Web browser. To summarize, a Web server simply processes HTTP requests by responding with HTML pages.

Scenario 2: Web server with an application server

Scenario 2 resembles Scenario 1 in that the Web server still delegates the response generation to a script. However, you can now put the business logic for the pricing lookup onto an application server. With that change, instead of the script knowing how to look up the data and formulate a response, the script can simply call the application server’s lookup service. The script can then use the service’s result when the script generates its HTML response. In this scenario, the application server serves the business logic for looking up a product’s pricing information. That functionality doesn’t say anything about display or how the client must use the information. Instead, the client and application server send data back and forth. When a client calls the application server’s lookup service, the service simply looks up the information and returns it to the client. By separating the pricing logic from the HTML response-generating code, the pricing logic becomes far more reusable between applications. A second client, such as a cash register, could also call the same service as a clerk checks out a customer. In contrast, in Scenario 1 the pricing lookup service is not reusable because the information is embedded within the HTML page. To summarize, in Scenario 2′s model, the Web server handles HTTP requests by replying with an HTML page while the application server serves application logic by processing pricing and availability requests.

So, the differences between an application server and a web server:

(1) A web server serves pages for viewing in a web browser; an application server exposes business logic to client applications through various protocols.

(2) A web server exclusively handles HTTP requests; an application server serves business logic to application programs through any number of protocols.

(3) A web server's delegation model is fairly simple: when a request comes into the web server, it simply passes the request to the program best able to handle it (a server-side program). It may not support transactions and database connection pooling.

(4) An application server is more capable of dynamic behaviour than a web server. We can also configure an application server to work as a web server. Simply put, an application server is a superset of a web server.
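Scenario 2's separation can be sketched in a few lines (all names here are hypothetical): the pricing lookup is the reusable business logic, and the web layer only formats its result as HTML. A second client can call the same logic without any HTML involved:

```python
# Stands in for the real data store behind the application server.
PRICES = {"widget": 9.99, "gadget": 24.50}

def lookup_price(product: str) -> float:
    # Business logic: reusable by any client (web page, cash register, ...).
    return PRICES[product]

def render_product_page(product: str) -> str:
    # Web-server side: presentation only, delegating the lookup.
    return f"<html><body>{product}: ${lookup_price(product):.2f}</body></html>"

print(render_product_page("widget"))  # the browser client gets HTML
print(lookup_price("widget"))         # a second client reuses the raw logic
```

In Scenario 1 the price would be computed inside render_product_page itself, embedding the logic in the HTML generation and making it unusable by the second client.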

What is mutation testing

A method for determining whether a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine whether the 'bugs' are detected. Proper implementation requires large computational resources.

Mutation testing (or Mutation analysis or Program mutation) is a method of software testing, which involves modifying programs' source code or byte code in small ways.[1] In short, any tests which pass after code has been mutated are considered defective. These so-called mutations, are based on well-defined mutation operators that either mimic typical programming errors (such as using the wrong operator or variable name) or force the creation of valuable tests (such as driving each expression to zero). The purpose is to help the tester develop effective tests or locate weaknesses in the test data used for the program or in sections of the code that are seldom or never accessed during execution.

Tests can be created to verify the correctness of the implementation of a given software system. But the creation of tests still poses the question whether the tests are correct and sufficiently cover the requirements that have originated the implementation. (This technological problem is itself an instance of a deeper philosophical problem named "Quis custodiet ipsos custodes?" ["Who will guard the guards?"].) In this context, mutation testing was pioneered in the 1970s to locate and expose weaknesses in test suites. The theory was that if a mutation was introduced without the behavior (generally output) of the program being affected, this indicated either that the code that had been mutated was never executed (redundant code) or that the testing suite was unable to locate the injected fault. In order for this to function at any scale, a large number of mutations had to be introduced into a large program, leading to the compilation and execution of an extremely large number of copies of the program. This problem of the expense of mutation testing has reduced its practical use as a method of software testing.

Mutation testing was originally proposed by Richard Lipton as a student in 1971,[2] and first developed and published by DeMillo, Lipton and Sayward. The first implementation of a mutation testing tool was by Timothy Budd as part of his PhD work (titled Mutation Analysis) in 1980 from Yale University.

Recently, with the availability of massive computing power, there has been a resurgence of mutation analysis within the computer science community, and work has been done to define methods of applying mutation testing to object oriented programming languages and non-procedural languages such as XML, SMV, and finite state machines.

In 2004 a company called Certess Inc. extended many of the principles into the hardware verification domain. Whereas mutation analysis only expects to detect a difference in the output produced, Certess extends this by verifying that a checker in the testbench will actually detect the difference. This extension means that all three stages of verification, namely: activation, propagation and detection are evaluated. They have called this functional qualification.

Fuzzing is a special area of mutation testing. In fuzzing, the messages or data exchanged inside communication interfaces (both inside and between software instances) are mutated, in order to catch failures or differences in processing the data. Codenomicon[3] (2001) and Mu Dynamics (2005) evolved fuzzing concepts to a fully stateful mutation testing platform, complete with monitors for thoroughly exercising protocol implementations.

Mutation testing overview

Mutation testing is done by selecting a set of mutation operators and then applying them to the source program one at a time for each applicable piece of the source code. The result of applying one mutation operator to the program is called a mutant. If the test suite is able to detect the change (i.e. one of the tests fails), then the mutant is said to be killed.

For example, consider the following C++ code fragment:

if (a && b)
    c = 1;
else
    c = 0;

The condition mutation operator would replace '&&' with '||' and produce the following mutant:

if (a || b)
    c = 1;
else
    c = 0;

Now, for the test to kill this mutant, the following condition should be met:

* Test input data should cause different program states for the mutant and the original program. For example, a test with a=1 and b=0 would do this.
* The value of 'c' should be propagated to the program's output and checked by the test.

Weak mutation testing (or weak mutation coverage) requires that only the first condition is satisfied. Strong mutation testing requires that both conditions are satisfied. Strong mutation is more powerful, since it ensures that the test suite can really catch the problems. Weak mutation is closely related to code coverage methods. It requires much less computing power to ensure that the test suite satisfies weak mutation testing than strong mutation testing.
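The kill condition above can be illustrated with a small sketch (Python here for brevity; it mirrors the C++ fragment, and is not a real mutation-testing tool). A test input kills the mutant when it drives the original program and the mutant to different outputs:

```python
# Condition mutation: '&&' (and) replaced by '||' (or).

def original(a, b):
    return 1 if (a and b) else 0

def mutant(a, b):          # the mutated version
    return 1 if (a or b) else 0

def killed_by(test_input):
    """A test kills the mutant if the outputs differ on its input."""
    a, b = test_input
    return original(a, b) != mutant(a, b)
```

The input (a=1, b=0) from the text kills this mutant, while (1, 1) and (0, 0) do not, because on those inputs both versions agree.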
Equivalent mutants

Many mutation operators can produce equivalent mutants. For example, consider the following code fragment:

int index = 0;
while (...)
{
    ...;
    index++;
    if (index == 10)
        break;
}

The boolean relation mutation operator replaces "==" with ">=" and produces the following mutant:

int index = 0;
while (...)
{
    ...;
    index++;
    if (index >= 10)
        break;
}

However, it is not possible to find a test case which could kill this mutant. The resulting program is equivalent to the original one. Such mutants are called equivalent mutants.
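That equivalence can be demonstrated with a quick sketch: a simplified Python rendering of the loop above (the elided condition and body are replaced with a plain counting loop, an assumption for illustration). Both versions break at the same point, so no test can distinguish them:

```python
def count_original():
    index = 0
    while True:           # stands in for the elided 'while (...)'
        index += 1
        if index == 10:   # original relation
            break
    return index

def count_mutant():
    index = 0
    while True:
        index += 1
        if index >= 10:   # mutated relation
            break
    return index
```

Both functions return 10 for every run, which is what makes the mutant equivalent rather than killable.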

Detecting equivalent mutants is one of the biggest obstacles to the practical use of mutation testing. The effort needed to check whether mutants are equivalent can be very high, even for small programs.[4]
Mutation operators

A variety of mutation operators were explored by researchers. Here are some examples of mutation operators for imperative languages:

* Statement deletion.
* Replace each boolean subexpression with true and false.
* Replace each arithmetic operation with another one, e.g. + with *, - and /.
* Replace each boolean relation with another one, e.g. > with >=, == and <=.
* Replace each variable with another variable declared in the same scope (variable types should be the same).

These mutation operators are also called traditional mutation operators. Besides these, there are mutation operators for object-oriented languages[5], for concurrent constructs[6], for complex objects like containers[7], etc. These are called class-level mutation operators. For example, the MuJava tool offers various class-level mutation operators such as Access Modifier Change, Type Cast Operator Insertion, and Type Cast Operator Deletion.
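As a sketch of how a tool might apply such an operator automatically, the following uses Python's `ast` module to implement one traditional operator ("replace + with *") as a source-to-source transformation. The sample function is made up, and this is only the operator-application step of a mutation tool, not the tool itself (`ast.unparse` requires Python 3.9+):

```python
import ast

class AddToMult(ast.NodeTransformer):
    """One traditional mutation operator: replace '+' with '*'."""
    def visit_BinOp(self, node):
        self.generic_visit(node)          # mutate nested expressions too
        if isinstance(node.op, ast.Add):
            node.op = ast.Mult()
        return node

source = "def total(a, b):\n    return a + b\n"   # made-up sample program
tree = AddToMult().visit(ast.parse(source))
mutant_source = ast.unparse(ast.fix_missing_locations(tree))
# mutant_source now returns 'a * b' where the original had 'a + b'
```

A real tool would generate one mutant per applicable site, compile each, and run the test suite against it.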

what is test data

Test data is the input data used to execute test cases. It is not the original data; the tester prepares dummy data similar to the actual data.

What will you do when you find a Defect in the product that you are Testing?

The first step is to write a defect report with the defect id, test case id, reproduction steps, defect status, identified by, assigned to, raised date, closed date, severity, and priority.
(Identify the bug and report it to the developer with proof (snapshots) showing how to reproduce it.
Check whether the bug has been fixed; if fixed, then do retesting and regression testing.)

What would you do when there is no requirement document available or have very poor doc for testing?

We do exploratory testing (one method of ad-hoc testing).
This is performed by test engineers when documentation is lacking; it is also called artistic testing.

What is Soak Testing?

Soak testing involves testing a system with a significant load extended over a significant period of time, to discover how the system behaves under sustained use.

For example, in software testing, a system may behave exactly as expected when tested for 1 hour. However, when it is tested for 3 hours, problems such as memory leaks cause the system to fail or behave randomly.

Soak tests are used primarily to check the reaction of a subject under test under a possible simulated environment for a given duration and for a given threshold. Observations made during the soak test are used to improve the characteristics of the subject under test further.

In electronics, soak testing may involve testing a system up to or above its maximum ratings for a long period of time. Some companies may soak test a product for a period of many months, while also applying external stresses such as elevated temperatures.

What is Feasibility study

A feasibility study is (or should be) the first phase of the SDLC, in which the senior stakeholders of the project sit together and study whether the organization's present infrastructure, manpower, funds, tools, and technology will allow the proposed project to be completed in the scheduled time slot.

Sunday, May 16, 2010

What is emulation?

Emulation, in a software context, is the use of an application program or device to imitate the behavior of another program or device.

Common uses of emulation include:

* Running an operating system on a hardware platform for which it was not originally engineered.
* Running arcade or console-based games upon desktop computers.
* Running legacy applications on devices other than the ones for which they were developed.
* Running application programs on different operating systems other than those for which they were originally written.

A common example of that last type of emulation is running Windows applications on Linux computers. Virtual PC is another example of an emulator that allows Macs to run Windows XP, though the addition of Boot Camp to next-generation Intel-based Macs has removed the need for that application in the Macintosh environment in the future.

What is install/uninstall testing

testing of full, partial, or upgrade install/uninstall processes.

Why we include Non Functional Test cases in our Test Plan? What is the Use of it?

A test plan contains both functional and non-functional test cases. Test cases for performance testing, load testing, security testing, stress testing, and installation testing come under non-functional test cases.

what is the sixsigma

Six Sigma at many organizations is a measure of quality that strives for near perfection. Six Sigma is a disciplined, data-driven philosophy and methodology for eliminating defects (driving towards six standard deviations between the mean and the nearest specification limit) in any process.

The statistical representation of Six Sigma describes quantitatively how a process is performing. To achieve Six Sigma, a process must not produce more than 3.4 defects per million opportunities. A Six Sigma defect is defined as anything outside of customer specifications. A Six Sigma opportunity is then the total quantity of chances for a defect.
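The 3.4 figure can be reproduced from the normal distribution. By Six Sigma convention, a long-term 1.5-sigma process shift is assumed, so "six sigma" quality corresponds to the upper tail beyond 6 - 1.5 = 4.5 standard deviations. This sketch uses only the standard library:

```python
import math

def dpmo(sigma_level, shift=1.5):
    """Defects per million opportunities at a given sigma level,
    assuming the conventional 1.5-sigma long-term shift."""
    z = sigma_level - shift
    tail = 0.5 * math.erfc(z / math.sqrt(2))  # P(X > z), standard normal
    return tail * 1_000_000

# dpmo(6) is approximately 3.4
```

A three-sigma process by the same formula yields roughly 66,800 DPMO, which shows how steep the improvement from three to six sigma is.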

What is Test Bed?

An execution environment configured for testing. It may consist of specific hardware, an OS, a network topology, the configuration of the product under test, other application or system software, etc. The test plan for a project should enumerate the test bed(s) to be used.

What is Branch Testing?

Testing in which all branches in the program source code are tested at least once.

what is yellow box testing

Yellow Box Testing is done to check the error messages

What is the difference between QC and QA?

Quality assurance is the process where the documents for the product to be tested are verified against the actual requirements of the customers. It includes inspection, auditing, code review, meetings, etc. Quality control is the process where the product is actually executed and its expected behavior is verified by comparing it with the actual behavior of the software under test. All the testing types, like black box testing and white box testing, come under quality control. Quality assurance is done before quality control.

What is the difference between version and build.

Build: a release from the development team to the testing team.
Version: shows how many iterations each build has gone through.

What are the Configuration Management tools available ?

1. Visual SourceSafe (a Microsoft product)
2. Rational ClearCase (a Rational Corp product)
3. Concurrent Versions System (CVS)

What is the difference between authorization and authentication in security testing?

AUTHORIZATION:
Determining what actions or resources a given user is permitted to access.
AUTHENTICATION:
Verifying a user's identity, e.g. by checking a specific user name and password.
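The distinction can be made concrete with a toy sketch (all names and stores here are made up for illustration): authentication answers "who are you?", authorization answers "what may you do?".

```python
USERS = {"alice": "s3cret"}              # toy credential store
ROLES = {"alice": {"read", "write"}}     # toy permission store

def authenticate(username, password):
    """Authentication: verify identity from credentials."""
    return USERS.get(username) == password

def authorize(username, action):
    """Authorization: check whether the identified user may do 'action'."""
    return action in ROLES.get(username, set())
```

Security testing exercises both layers separately: wrong credentials must fail authentication even for valid users, and authenticated users must still be blocked from actions outside their role.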

what is cookie testing

It is part of unit testing (white box testing).

Sometimes, for some applications, the developers maintain cookies which capture the logins or user IDs so they can be reused on subsequent pages. For example, while using Gmail or any mail application we can observe that the user name is displayed on every page, which is possible only through cookies.

In cookie testing we have to verify all of this behavior, and we also have to test the level of security, which is the major concern with cookies.

what is the difference between testing methods and testing techniques

Testing methods are the different types of testing, such as functional, integration, system testing, etc.

Testing techniques are what we use to write test cases: boundary value analysis, error guessing, equivalence partitioning, etc.

What is priority and severity?

Priority: how soon the bug should be fixed.

Severity: how critical the bug is.

ISTQB Exam Questions on Equivalence partitioning and Boundary Value Analysis

Here are few sample questions for practice from ISTQB exam papers on Equivalence partitioning and BVA. (Ordered: Simple to little complex)

Question 1
One of the fields on a form contains a text box which accepts numeric values in the range of 18 to 25. Identify the invalid Equivalence class.

a) 17
b) 19
c) 24
d) 21

Solution
The text box accepts numeric values in the range 18 to 25 (18 and 25 are also part of the class). So this class becomes our valid class. But the question is to identify invalid equivalence class. The classes will be as follows:
Class I: values < 18 => invalid class
Class II: 18 to 25 => valid class
Class III: values > 25 => invalid class

17 falls under an invalid class. 19, 24 and 21 fall under the valid class. So the answer is 'A'.

Question 2
In an Examination a candidate has to score minimum of 24 marks in order to clear the exam. The maximum that he can score is 40 marks. Identify the Valid Equivalence values if the student clears the exam.

a) 22,23,26
b) 21,39,40
c) 29,30,31
d) 0,15,22

Solution
The classes will be as follows:
Class I: values < 24 => invalid class
Class II: 24 to 40 => valid class
Class III: values > 40 => invalid class

We have to identify valid equivalence values. Valid equivalence values will be in the valid equivalence class, so all the values should be in Class II. So the answer is 'C'.

Question 3
One of the fields on a form contains a text box which accepts alpha numeric values. Identify the Valid Equivalence class
a) BOOK
b) Book
c) Boo01k
d) Book

Solution
Alphanumeric is a combination of alphabets and numbers, so we have to choose an option which has both. A valid equivalence class will consist of both alphabets and numbers. Option 'c' contains both. So the answer is 'C'.

Question 4
The switch is switched off once the temperature falls below 18, and it is turned on when the temperature is more than 21. Identify the equivalence values which belong to the same class.

a) 12,16,22
b) 24,27,17
c) 22,23,24
d) 14,15,19

Solution
We have to choose values from same class (it can be valid or invalid class). The classes will be as follows:

Class I: less than 18 (switch turned off)
Class II: 18 to 21
Class III: above 21 (switch turned on)

Only in Option ‘c’ all values are from one class. Hence the answer is ‘C’. (Please note that the question does not talk about valid or invalid classes. It is only about values in same class)

Question 5
A program validates a numeric field as follows: values less than 10 are rejected, values between 10 and 21 are accepted, values greater than or equal to 22 are rejected. Which of the following input values cover all of the equivalence partitions?

a. 10,11,21
b. 3,20,21
c. 3,10,22
d. 10,21,22

Solution
We have to select values which fall in all the equivalence class (valid and invalid both). The classes will be as follows:

Class I: values <= 9 => invalid class
Class II: 10 to 21 => valid class
Class III: values >= 22 => invalid class

The values in option 'c' each fall in a different equivalence class, covering all three. So the answer is 'C'.
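The reasoning can be checked mechanically. This sketch classifies each option's values into the three partitions (below 10 rejected, 10 to 21 accepted, 22 and above rejected) and reports which option covers all three:

```python
def partition(x):
    """Name the equivalence partition a value falls in."""
    if x < 10:
        return "below"
    if x <= 21:
        return "valid"
    return "above"

options = {"a": [10, 11, 21], "b": [3, 20, 21],
           "c": [3, 10, 22], "d": [10, 21, 22]}

covers_all = {name: len({partition(v) for v in values}) == 3
              for name, values in options.items()}
# Only option 'c' covers all three partitions.
```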

Question 6
A program validates a numeric field as follows: values less than 10 are rejected, values between 10 and 21 are accepted, values greater than or equal to 22 are rejected. Which of the following covers the MOST boundary values?

a. 9,10,11,22
b. 9,10,21,22
c. 10,11,21,22
d. 10,11,20,21

Solution
We have already come up with the classes as shown in question 5. The boundaries can be identified as 9, 10, 21, and 22. These four values are in option ‘b’. So answer is ‘B’

Question 7
In a system designed to work out the tax to be paid:
An employee has £4000 of salary tax free.
The next £1500 is taxed at 10%.
The next £28000 after that is taxed at 22%.
Any further amount is taxed at 40%.

To the nearest whole pound, which of these groups of numbers fall into three DIFFERENT equivalence classes?
a) £4000; £5000; £5500
b) £32001; £34000; £36500
c) £28000; £28001; £32001
d) £4000; £4200; £5600

Solution
The classes will be as follows:
Class I : 0 to £4000 => no tax
Class II : £4001 to £5500 => 10 % tax
Class III : £5501 to £33500 => 22 % tax
Class IV : £33501 and above => 40 % tax

Select the values which fall in three different equivalence classes. Option ‘d’ has values from three different equivalence classes. So answer is ‘D’.

Question 8
In a system designed to work out the tax to be paid:
An employee has £4000 of salary tax free.
The next £1500 is taxed at 10%.
The next £28000 after that is taxed at 22%.
Any further amount is taxed at 40%.

To the nearest whole pound, which of these is a valid Boundary Value Analysis test case?
a) £28000
b) £33501
c) £32001
d) £1500

Solution
The classes were already derived in question 7. We have to select a value which is a boundary value (the start/end value of a class). £33501 is a boundary value. So the answer is 'B'.

Question 9
Given the following specification, which of the following values for age are in the SAME equivalence partition?

If you are less than 18, you are too young to be insured.
Between 18 and 30 inclusive, you will receive a 20% discount.
Anyone over 30 is not eligible for a discount.
a) 17, 18, 19
b) 29, 30, 31
c) 18, 29, 30
d) 17, 29, 31

Solution
The classes will be as follows:
Class I: age < 18 => not insured
Class II: age 18 to 30 => 20 % discount
Class III: age > 30 => no discount

Here we cannot determine whether the above classes are valid or invalid, as nothing is mentioned in the question. (According to our guess we could say I and II are valid and III is invalid, but that is not required here.) We have to select values which are in the SAME equivalence partition. The values in option 'c' fall in the same partition. So the answer is 'C'.

What is Boundary value analysis and Equivalence partitioning?

Boundary value analysis and equivalence partitioning both are test case design strategies in black box testing.

Equivalence Partitioning:

In this method the input domain data is divided into different equivalence data classes. This method is typically used to reduce the total number of test cases to a finite set of testable test cases, still covering maximum requirements.

In short it is the process of taking all possible test cases and placing them into classes. One test value is picked from each class while testing.

E.g.: If you are testing for an input box accepting numbers from 1 to 1000 then there is no use in writing thousand test cases for all 1000 valid input numbers plus other test cases for invalid data.

Using equivalence partitioning method above test cases can be divided into three sets of input data called as classes. Each test case is a representative of respective class.

So in above example we can divide our test cases into three equivalence classes of some valid and invalid inputs.

Test cases for input box accepting numbers between 1 and 1000 using Equivalence Partitioning:
1) One input data class with all valid inputs. Pick a single value from range 1 to 1000 as a valid test case. If you select other values between 1 and 1000 then result is going to be same. So one test case for valid input data should be sufficient.

2) Input data class with all values below the lower limit, i.e. any value below 1, as an invalid input data test case.

3) Input data with any value greater than 1000 to represent third invalid input class.

So using equivalence partitioning you have categorized all possible test cases into three classes. Test cases with other values from any class should give you the same result.

We have selected one representative from every input class to design our test cases. Test case values are selected in such a way that largest number of attributes of equivalence class can be exercised.

Equivalence partitioning uses fewest test cases to cover maximum requirements.

Boundary value analysis:

It's widely recognized that input values at the extreme ends of the input domain cause more errors in a system; more application errors occur at the boundaries of the input domain. The boundary value analysis testing technique is used to identify errors at the boundaries rather than those that exist in the center of the input domain.

Boundary value analysis is the next step after equivalence partitioning in designing test cases: test cases are selected at the edges of the equivalence classes.

Test cases for input box accepting numbers between 1 and 1000 using Boundary value analysis:
1) Test cases with test data exactly as the input boundaries of input domain i.e. values 1 and 1000 in our case.

2) Test data with values just below the extreme edges of input domains i.e. values 0 and 999.

3) Test data with values just above the extreme edges of input domain i.e. values 2 and 1001.

Boundary value analysis is often considered part of stress and negative testing.

Note: There is no hard-and-fast rule to test only one value from each equivalence class you created for input domains. You can select multiple valid and invalid values from each equivalence class according to your needs and previous judgments.

E.g. if you divided 1 to 1000 input values in valid data equivalence class, then you can select test case values like: 1, 11, 100, 950 etc. Same case for other test cases having invalid data classes.

This should be a very basic and simple example to understand the Boundary value analysis and Equivalence partitioning concept.
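The two techniques for the 1-to-1000 input box can be summarized as small value generators (the helper names are made up, and the in-range representative is just one arbitrary pick from the valid class):

```python
def ep_values(lo, hi):
    """Equivalence partitioning: one representative per class --
    below-range (invalid), in-range (valid), above-range (invalid)."""
    return [lo - 1, (lo + hi) // 2, hi + 1]

def bva_values(lo, hi):
    """Boundary value analysis: values at and just around each boundary."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

# ep_values(1, 1000)  -> [0, 500, 1001]
# bva_values(1, 1000) -> [0, 1, 2, 999, 1000, 1001]
```

Together the two lists reproduce the test data derived by hand in the sections above.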

what is the difference between Regression testing and re-testing?

Regression testing means re-testing the application after a change or new release to make sure existing functionality has not been broken, whereas

Re-testing means testing only the specific issue that was fixed, for which the new release was given.

Briefly Explain the CMM level?

CMMI means Capability Maturity Model Integration.

CMMI has five maturity levels:

1. Initial: an ad-hoc approach.

2. Repeatable: a project-level approach, with basic processes repeated across similar projects.

3. Defined: an organization-level approach, with standard, documented processes.

4. Managed: a management-level approach, with processes quantitatively measured and controlled.

5. Optimizing: continuous process improvement.

Difference between smoke testing and sanity testing

Smoke testing is non-exhaustive software testing, ascertaining that the most crucial functions of a program work without bothering with finer details. Sanity testing is cursory testing, performed whenever a cursory check is sufficient to prove the application is functioning according to specifications. This level of testing is a subset of regression testing. It normally includes a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.

What is volume testing

Testing the application behavior with huge amounts of data

what is the difference between application server and database server?

Application Server: An application server has applications installed on it which users on the network can then run as if they were installed on the workstation they are using.

Database Server: A database server has programs installed that allow it to provide database services over a network. The data, queries, report generation, etc. are all handled on the server, while the client machines use a front end to access those services.

What’s an ‘inspection’?

An inspection is a formal meeting in an organization.
A walkthrough is an informal meeting in an organization.

which model is used by most of the companies

The verification and validation model, i.e. the V-model.

what is difference between monkey testing and Gorilla testing?

Monkey testing: testing the application here and there, i.e. a random way of testing.
Gorilla testing: more detailed testing of a particular module/functionality.

What is Database testing

In database testing we check for data integrity and redundancy, and we mainly focus on the client requirements: we write queries for the client requirements and compare the result set with the actual values.

how many types of testings can be done manually?

There are many types of manual testing, but the two main types are black box testing and white box testing.

which document is used for test execution in manual testing

The test case is the document used for test execution in manual testing.

what are the daily activities as a tester?

Generally a tester is involved in peer-to-peer test case reviews, preparing and executing test cases, posting bugs, and verifying fixes once the developers say they are done.

what is a test plan

A test plan is a road map of all testing activities to be followed by testers. It contains scope of the application, objective, both software and hardware resources required, time schedule, areas to be tested, areas not to be tested, risk factors etc.

Why should we hire you?

You should hire me because I sincerely believe I am the right candidate for this post: I possess the qualities you are looking for in a candidate, such as knowing how to present myself and how to deal with passengers, and I strongly believe in myself. This is a great opportunity for me to be part of such a great airline and fulfil my dreams. If I am shortlisted to be part of this company, I will prove myself and do this job with full dedication.

Can you work well under deadlines or pressure?How?

Working under pressure is like a challenge to me. Creating a good environment and planning ahead will lead to success; that's what I believe.

what is your memorable day

My most memorable day was the first day of my working career.

how to impress the interviewer?

First impression is the last impression.

So always greet the interviewer.

Keep smiling and be confident.

Do not ignore the interviewer's questions; answer them properly.

Don't be over-reactive or over-smart.

Lastly, greet the interviewer, shake hands, and say "It was nice meeting you."

what is your goal of life?

I want to make my parents happy.

where do you see yourself in five(5) or ten(10) years from now?

When an interviewer asks this question, he wants to know:
- where you see yourself within the company;
- whether you are a person who sets goals and plans ahead;
- whether your goals match the position or the company.

An example answer: "I will be working in this company as a team leader/manager (one or two levels up from my current position), and by that time I will be finishing the next-level degree or certification required for those levels."

what will you do if you dont get this job

I have prepared thoroughly to get this job, and I definitely expect to get it. If I don't get this job, I will prepare more next time, analyse where I made mistakes, and correct them for the next interview.

What is your future Plans?

I want to excel in my professional life and keep myself updated, which will help me achieve new goals in my career.

Why we should not hire you?

I think you have no reason not to hire me because, as far as I know, I meet all your qualifications, and I am confident I can do all the tasks given to me wholeheartedly, with full dedication and my best efforts.

Saturday, May 15, 2010

Types of Testing

Acceptance Testing: Testing conducted to enable a user/customer to determine whether to accept a software product. Normally performed to validate the software meets a set of agreed acceptance criteria.

Accessibility Testing: Verifying a product is accessible to the people having disabilities (deaf, blind, mentally disabled etc.).

Ad Hoc Testing: A testing phase where the tester tries to 'break' the system by randomly trying the system's functionality. Can include negative testing as well. See also Monkey Testing.

Agile Testing: Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm. See also Test Driven Development.

Application Binary Interface (ABI): A specification defining requirements for portability of applications in binary form across different system platforms and environments.

Application Programming Interface (API): A formalized set of software calls and routines that can be referenced by an application program in order to access supporting system or network services.

Automated Software Quality (ASQ): The use of software tools, such as automated testing tools, to improve software quality.

Automated Testing:

* Testing employing software tools which execute tests without manual intervention. Can be applied in GUI, performance, API, etc. testing.
* The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.

B

Backus-Naur Form: A metalanguage used to formally describe the syntax of a language.

Basic Block: A sequence of one or more consecutive, executable statements containing no branches.

Basis Path Testing: A white box test case design technique that uses the algorithmic flow of the program to design tests.

Basis Set: The set of tests derived using basis path testing.

Baseline: The point at which some deliverable produced during the software engineering process is put under formal change control.

Benchmark Testing: Tests that use representative sets of programs and data designed to evaluate the performance of computer hardware and software in a given configuration.

Beta Testing: Testing of a prerelease of a software product conducted by customers.

Binary Portability Testing: Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.

Black Box Testing: Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component.

Bottom Up Testing: An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.

Boundary Testing: Tests which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests.)

Boundary Value Analysis: In boundary value analysis, test cases are generated using the extremes of the input domain, e.g. maximum, minimum, just inside/outside boundaries, typical values, and error values. BVA is similar to Equivalence Partitioning but focuses on "corner cases".

Branch Testing: Testing in which all branches in the program source code are tested at least once.

Breadth Testing: A test suite that exercises the full functionality of a product but does not test features in detail.

Bug: A fault in a program which causes the program to perform in an unintended or unanticipated manner.

C

CAST: Computer Aided Software Testing.

Capture/Replay Tool: A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time. Most commonly applied to GUI test tools.

CMM: The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of these processes.

Cause Effect Graph: A graphical representation of inputs and their associated output effects, which can be used to design test cases.

Code Complete: Phase of development where functionality is implemented in entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.

Code Coverage: An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.

Code Inspection: A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.

Code Walkthrough: A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.

Coding: The generation of source code.

Compatibility Testing: Testing whether software is compatible with other elements of a system with which it should operate, e.g. browsers, Operating Systems, or hardware.

Component: A minimal software item for which a separate specification is available.

Component Testing: See Unit Testing.

Concurrency Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.

Conformance Testing: The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.

Context Driven Testing: The context-driven school of software testing is a flavor of Agile Testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.

Conversion Testing: Testing of programs or procedures used to convert data from existing systems for use in replacement systems.

Cyclomatic Complexity: A measure of the logical complexity of an algorithm, used in white-box testing.

D

Data Dictionary: A database that contains definitions of all data items defined during analysis.

Data Flow Diagram: A modeling notation that represents a functional decomposition of a system.

Data Driven Testing: Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing.
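A minimal data-driven sketch: the test logic stays fixed while the cases come from externally defined data. Here an inline CSV stands in for the file or spreadsheet, and `add()` is a hypothetical function under test:

```python
# Data-driven test: parameterize one test body with rows of external data.
import csv
import io

def add(a, b):
    # Hypothetical function under test.
    return a + b

# Inline CSV standing in for a maintained data file or spreadsheet.
test_data = io.StringIO("a,b,expected\n1,2,3\n-1,1,0\n10,5,15\n")

for row in csv.DictReader(test_data):
    result = add(int(row["a"]), int(row["b"]))
    assert result == int(row["expected"]), row
print("all data-driven cases pass")
```

Adding a new case means adding a data row, not writing new test code, which is the main appeal of this technique in automated testing.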

Debugging: The process of finding and removing the causes of software failures.

Defect: Nonconformance to requirements or functional / program specification

Dependency Testing: Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

Depth Testing: A test that exercises a feature of a product in full detail.

Dynamic Testing: Testing software through executing it. See also Static Testing.

E

Emulator: A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.

Endurance Testing: Checks for memory leaks or other problems that may occur with prolonged execution.

End-to-End testing: Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Equivalence Class: A portion of a component's input or output domains for which the component's behaviour is assumed to be the same from the component's specification.

Equivalence Partitioning: A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.
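A minimal sketch of equivalence partitioning, again using a hypothetical age field valid from 18 to 65: one representative value per class is executed instead of every possible input.

```python
# Equivalence partitioning: one representative per equivalence class.

def is_valid_age(age):
    # Hypothetical component under test; valid range is an assumption.
    return 18 <= age <= 65

partitions = {
    "below range (invalid)": (10, False),
    "within range (valid)":  (30, True),
    "above range (invalid)": (80, False),
}

for name, (value, expected) in partitions.items():
    assert is_valid_age(value) == expected, name
print("one representative per class checked")
```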

Error: A mistake in the system under test; usually but not always a coding mistake on the part of the developer.

Exhaustive Testing: Testing which covers all combinations of input values and preconditions for an element of the software under test.

F

Functional Decomposition: A technique used during planning, analysis and design; creates a functional hierarchy for the software.

Functional Specification: A document that describes in detail the characteristics of the product with regard to its intended features.

Functional Testing: See also Black Box Testing.

* Testing the features and operational behavior of a product to ensure they correspond to its specifications.
* Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.

G

Glass Box Testing: A synonym for White Box Testing.

Gorilla Testing: Testing one particular module, functionality heavily.

Gray Box Testing: A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.

H

High Order Tests: Black-box tests conducted once the software has been integrated.

I

Independent Test Group (ITG): A group of people whose primary responsibility is software testing.

Inspection: A group review quality-improvement process for written material. It consists of two aspects: product (document itself) improvement and process improvement (of both document production and inspection).

Integration Testing: Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.

Installation Testing: Confirms that the application under test installs, loads, and runs correctly on the target environments, covering scenarios such as first-time installs, upgrades, and uninstalls.

J

K

L

Load Testing: See Performance Testing.

Localization Testing: Testing that verifies software has been correctly adapted for a specific locale, e.g. language, currency, and date formats.

Loop Testing: A white box testing technique that exercises program loops.

M

Metric: A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real objective measurement of something such as number of bugs per lines of code.

Monkey Testing: Testing a system or an application on the fly, i.e. just a few tests here and there to ensure the system or application does not crash.

Mutation Testing: Testing in which bugs are purposely introduced into the application to check whether the existing tests detect them.

N

Negative Testing: Testing aimed at showing software does not work. Also known as "test to fail". See also Positive Testing.
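A minimal "test to fail" sketch: feed invalid input and assert that the software rejects it cleanly rather than misbehaving. `withdraw()` and its rules are illustrative assumptions:

```python
# Negative test: invalid input must be rejected, not silently accepted.

def withdraw(balance, amount):
    # Hypothetical function under test.
    if amount <= 0 or amount > balance:
        raise ValueError("invalid withdrawal")
    return balance - amount

# Positive check ("test to pass"): valid input works.
assert withdraw(100, 40) == 60

# Negative check ("test to fail"): overdrawing must raise an error.
try:
    withdraw(100, 500)
except ValueError:
    print("invalid input rejected as expected")
else:
    raise AssertionError("invalid withdrawal was accepted")
```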

N+1 Testing: A variation of Regression Testing. Testing conducted with multiple cycles in which errors found in test cycle N are resolved and the solution is retested in test cycle N+1. The cycles are typically repeated until the solution reaches a steady state and there are no errors. See also Regression Testing.

O

P

Path Testing: Testing in which all paths in the program source code are tested at least once.

Performance Testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".

Positive Testing: Testing aimed at showing software works. Also known as "test to pass". See also Negative Testing.

Q

Quality Assurance: All those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer.

Quality Audit: A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.

Quality Circle: A group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or to the improvement of quality.

Quality Control: The operational techniques and the activities used to fulfill and verify requirements of quality.

Quality Management: That aspect of the overall management function that determines and implements the quality policy.

Quality Policy: The overall intentions and direction of an organization as regards quality as formally expressed by top management.

Quality System: The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.

R

Race Condition: A cause of concurrency problems. Multiple accesses to a shared resource, at least one of which is a write, with no mechanism used by either to moderate simultaneous access.
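A minimal sketch of the problem and its fix: two threads perform a read-modify-write on a shared counter. The lock serializes the critical section so no update is lost; removing it makes the interleaving (and the final count) nondeterministic.

```python
# Shared-counter race: the lock is the moderation mechanism the glossary
# mentions. Without it, concurrent read-modify-write steps can interleave
# and lose updates.
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:              # remove this and updates may be lost
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert counter == 200_000       # deterministic only because of the lock
print(counter)
```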

Ramp Testing: Continuously raising an input signal until the system breaks down.

Recovery Testing: Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.

Regression Testing: Retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.

Release Candidate: A pre-release version, which contains the desired functionality of the final version, but which needs to be tested for bugs (which ideally should be removed before the final version is released).

S

Sanity Testing: Brief test of major functional elements of a piece of software to determine if it is basically operational. See also Smoke Testing.

Scalability Testing: Performance testing focused on ensuring the application under test gracefully handles increases in work load.

Security Testing: Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.

Smoke Testing: A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.

Soak Testing: Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.

Software Requirements Specification: A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software.

Software Testing: A set of activities conducted with the intent of finding errors in software.

Static Analysis: Analysis of a program carried out without executing the program.

Static Analyzer: A tool that carries out static analysis.

Static Testing: Analysis of a program carried out without executing the program.

Storage Testing: Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage.

Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.

Structural Testing: Testing based on an analysis of internal workings and structure of a piece of software. See also White Box Testing.

System Testing: Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.

T

Testability: The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.

Testing:

* The process of exercising software to verify that it satisfies specified requirements and to detect errors.
* The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std 829).
* The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.

Test Automation: See Automated Testing.

Test Bed: An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.

Test Case:

* Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as requirements testing, test steps, verification steps, prerequisites, outputs, test environment, etc.
* A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.

Test Driven Development: Testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, often roughly as many lines of test code as production code.
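The TDD rhythm in miniature: the unit test is written first and drives the implementation. The function `slugify()` and its expected behaviour are illustrative assumptions:

```python
# TDD sketch: the test exists before the code it exercises.

def test_slugify():
    # Written first; it defines the behaviour we want.
    assert slugify("Hello World") == "hello-world"
    assert slugify("  trim me  ") == "trim-me"

# Minimal implementation written afterwards, just enough to pass the test:
def slugify(text):
    return "-".join(text.split()).lower()

test_slugify()
print("tests pass")
```

In real TDD the test is run (and seen to fail) before any production code is written; the cycle is then repeat-refactor-rerun with all tests kept green.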

Test Driver: A program or test tool used to execute tests. Also known as a Test Harness.

Test Environment: The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test including stubs and test drivers.

Test First Design: Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test.

Test Harness: A program or test tool used to execute tests. Also known as a Test Driver.

Test Plan: A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. Ref IEEE Std 829.

Test Procedure: A document providing detailed instructions for the execution of one or more test cases.

Test Scenario: Definition of a set of test cases or test scripts and the sequence in which they are to be executed.

Test Script: Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool.

Test Specification: A document specifying the test approach for a software feature or combination or features and the inputs, predicted results and execution conditions for the associated tests.

Test Suite: A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization. There may be several Test Suites for a particular product for example. In most cases however a Test Suite is a high level concept, grouping together hundreds or thousands of tests related by what they are intended to test.

Test Tools: Computer programs used in the testing of a system, a component of the system, or its documentation.

Thread Testing: A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.

Top Down Testing: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.

Total Quality Management: A company commitment to develop a process that achieves high quality product and customer satisfaction.

Traceability Matrix: A document showing the relationship between Test Requirements and Test Cases.

U

Usability Testing: Testing the ease with which users can learn and use a product.

Use Case: The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities.

User Acceptance Testing: A formal product evaluation performed by a customer as a condition of purchase.

Unit Testing: Testing of individual software components.

V

Validation: The process of evaluating software at the end of the software development process to ensure compliance with software requirements. The techniques for validation are testing, inspection, and reviewing.

Verification: The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection, and reviewing.

Volume Testing: Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files), can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.

W

Walkthrough: A review of requirements, designs or code characterized by the author of the material under review guiding the progression of the review.

White Box Testing: Testing based on an analysis of internal workings and structure of a piece of software. Includes techniques such as Branch Testing and Path Testing. Also known as Structural Testing and Glass Box Testing. Contrast with Black Box Testing.

Workflow Testing: Scripted end-to-end testing which duplicates specific workflows which are expected to be utilized by the end-user.

Contents of test report

There are two documents which should be prepared at particular phases:
1. Test Results document.
2. Test Report document.

The Test Results document is prepared at the end of each type of test pass, such as a full functional test pass, regression test pass, or sanity test pass, and records test case execution against the application. Once prepared, it is sent to the TL and PM; from it, the TL can see the coverage of the test cases. The contents used in the Test Results document are:

1. Build number
2. Version name
3. Client OS
4. Feature set
5. Main feature
6. Defined test cases on each feature
7. Test cases executed (includes pass and fail)
8. QA engineer name
9. Test cases on hold (includes blocking and deferred test cases)
10. Coverage report (coverage ratings in %, e.g. % of test cases covered, % of test cases failed)

Coming to the Test Report: it is generally prepared once the product has been rolled out to the client. It is prepared by the TL and delivered to the client. It mainly describes what was done in the project, the achievements reached, and the learnings throughout the project. Another name for the Test Report is the Project Closure Report, since it summarizes all the activities that took place throughout the project. The contents covered in the Test Report are:
1. Test environment (covering the OS, application or web servers, machine names, database, etc.)
2. Test methods (the types of tests done in the project, such as functional testing, platform testing, regression testing, etc.)
3. Major areas covered.
4. Bug tracking details (including inflow and outflow of bugs in the delivered project).
5. Work schedule (when testing started and when it finished).
6. Defect analysis:
6.1 Defects logged in different types of tests (functional, regression), per area.
6.2 State of the defects at the end of the test cycle.
6.3 Root cause analysis for the bugs marked as NOT A BUG.
7. QA observations or learnings throughout the life cycle.

What is the difference between test plan, test strategy, test scenario and test case?

Test Plan:
Test Plan is a document which describes the scope, approach and schedule of testing activities or you can also say that it contains "what we will test in software", "how we will test it", "when we test it as timing of testing" and "who will perform testing". It is the responsibility of test lead to develop this document.

Usually a test plan contains following things.
(1) Scope of testing
(2) Testing approach
(3) Time frame
(4) Objectives of testing
(5) What will be the environment
(6) What will be the deliverables
(7) Risk factors
(8) Features to be tested

Test strategy:
Test strategy is also a document, and it is the responsibility of the project manager to develop it. It specifies which techniques we will use and which modules will be tested. Here, technique means the type of testing, as different testing techniques can be applied to test software based on our goals. For example, stress and load testing are applied to web-based applications. We have to decide which testing approach we will use to test a module, as the following testing approaches can be used.

(1)Black Box Testing (2) White Box Testing (3) Ad-hoc testing (4) Acceptance Testing (5) Recovery testing (6) Sanity testing (7) Smoke testing (8) Regression testing (9) End to End Testing (10) System Testing (11) Functional Testing (12) Unit Testing (13) Alpha Testing
(14) Beta Testing (15) Exploratory Testing etc.

Test Scenario : A set of test cases that ensure that the business process flows are tested from end to end. They may be independent tests or a series of tests that follow each other, each dependent on the output of the previous one.

A test scenario specifies "What to test?"

Test Case : A document which describes the input, action/event, and expected response, to determine whether a particular functionality of the application is working correctly.

A test case specifies "How to test?"

Order of STLC:

Test Strategy, Test Plan, Test Scenario, Test Cases.

When a bug posted by QA is rejected by Dev, who says it is not a bug, how do you convince Dev that it is a valid bug?

Case 1: A requirement exists for the reported bug.

If Dev says it is an invalid issue, the tester should give him the reference number of the requirement.

Case 2: No requirement exists for the reported bug.

Sometimes we encounter issues while testing which do not have requirements. In that case we still have the right to submit a bug with the issue category as "requirements". If the developer says it is an invalid issue, we move the bug to the business analyst, asking for his comments. Once we receive the comments from the business analyst, we move the bug back to the developer's court.

What is 2-Tier and 3-Tier Architecture?

The easiest way to explain this is, as you suggest, by an example. So I'll give you an example.

Let's suppose I'm going to write a piece of software that students at a school can use to find out what their current grade is in all their classes. I structure the program so that a database of grades resides on the server, and the application resides on the client (the computer the student is physically interacting with).

When the student wants to know his grades, he manipulates my program (by clicking buttons, menu options, etc). The program fires off a query to the database, and the database responds with all the student's grades. Now my application uses all this data to calculate the student's grade, and displays it for him.

This is an example of a 2-tier architecture. The two tiers are:

1. Data server: the database serves up data based on SQL queries submitted by the application.
2. Client application: the application on the client computer consumes the data and presents it in a readable format to the student.

Now, this architecture is fine, if you've got a school with 50 students. But suppose the school has 10,000 students. Now we've got a problem. Why?

Because every time a student queries the client application, the data server has to serve up large queries for the client application to manipulate. This is an enormous drain on network resources.

So what do we do? We create a 3-tier architecture by inserting another program at the server level. We call this the server application. Now the client application no longer directly queries the database; it queries the server application, which in turn queries the data server.

What is the advantage to this? Well, now when the student wants to know his final grade, the following happens:

1. The student asks the client application.
2. The client application asks the server application.
3. The server application queries the data server.
4. The data server serves up a record set with all the student's grades.
5. The server application does all the calculations to determine the grade.
6. The server application serves up the final grade to the client application.
7. The client application displays the final grade for the student.

It's a much longer process on paper, but in reality it's much faster. Why? Notice step 6. Instead of serving up an entire record set of grades, which has to be passed over a network, the server application is serving up a single number, which is a tiny amount of network traffic in comparison.

There are other advantages to the 3-tier architecture, but that at least gives you a general idea of how it works.
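The grade example above can be sketched as three tiers in one file. The names and the in-memory "database" are illustrative stand-ins, not a real system:

```python
# Three tiers of the grade example, collapsed into one script for clarity.

GRADES_DB = {"alice": [88, 92, 79]}              # tier 3: data server

def server_app_final_grade(student):             # tier 2: server application
    grades = GRADES_DB[student]                  # query the data tier
    return sum(grades) / len(grades)             # compute on the server...

def client_app(student):                         # tier 1: client application
    final = server_app_final_grade(student)      # ...so only one number
    print(f"{student}'s final grade: {final:.1f}")  # "crosses the network"

client_app("alice")
```

The key point from the text is visible in the return value of the server application: the client receives a single number instead of the whole record set of grades.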

Incidentally, this website is a 3-tier application. The client application is your web browser. The server application is the ASP code which queries the database (the third tier) for the question-and-answer you requested.

I hope that helps!

verification & validation

Verification: Are we building the product right?

Verification is a prevention activity; it can be done through reviews and walkthroughs.

Verification is a process which involves Reviews and Meetings to evaluate documents, Plans, Code, Requirements and Specifications; this can be done with Checklists, Issues lists, Walkthroughs and Inspection meetings.

------------------

Validation: Are we building the right product?

Validation is the process of finding bugs to make sure that the application works as the customer expects. Validation is a detection and correction activity.

Validation is nothing but execution of Test cases, a process to check whether the Expected and Actual results are same.

What is Traceability Matrix?

It is a document which maps requirements to test cases. By preparing a traceability matrix we can ensure that we have covered all functionality in our test cases. The following is a sample template:

Sno | Requirement ID | Test case ID
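The template can be sketched as a simple in-memory table; the requirement and test case IDs below are made up for illustration:

```python
# A traceability matrix following the Sno / Requirement ID / Test case ID
# template, with a coverage-gap check.

matrix = [
    (1, "REQ-001", ["TC-001", "TC-002"]),
    (2, "REQ-002", ["TC-003"]),
]

# A requirement with no mapped test case is a coverage gap:
gaps = [req for _, req, tcs in matrix if not tcs]
assert gaps == [], f"uncovered requirements: {gaps}"

for sno, req, tcs in matrix:
    print(sno, req, ", ".join(tcs))
```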

Sunday, February 28, 2010

Software Quality (MANUAL TESTING MATERIAL)

1. Meet customer requirements in terms of functionality.

2. Meet customer expectations in terms of usability, performance and security.

3. Cost to purchase.

4. Time to release.

1&2 are technical.

3&4 are non-technical.

SQA (Software Quality Assurance): Monitoring and measuring the strength of the development process is called SQA.

Conformance to explicitly stated and agreed functional and non functional (including performance) requirements

A process to provide confidence that quality requirements will be fulfilled.
A set of planned and systematic activities to provide confidence that products and services will conform to specified requirements and meet user needs.
Involves PREVENTING DEFECTS.
Management by inputs.
Sets up measurement programs to evaluate processes.
Identifies weaknesses in a process and improves them.
E.g.:
Life Cycle Testing:-


Information Gathering -> Analysis -> Design -> Coding -> Testing -> Maintenance.


Life Cycle Development:-


Information Gathering -> Analysis -> Design -> Coding -> Testing -> Maintenance.


LCD Vs LCT (Fish Model):

Development: Information Gathering (BRS) -> Analysis (SRS) -> Design (LLD & HLD) -> Coding (Programs) -> System Testing (Test S/W) -> Maintenance (S/W changes)

Testing: Reviews -> Reviews & Analysis -> WBT -> BBT -> Test S/W changes

From Reviews to WBT is Verification, and from BBT to Test S/W changes is Validation.


BRS (Business Requirement Specification)

BRS defines the requirements of the customer to be developed as a S/W.


SRS (Software Requirements Specification)

SRS defines functional requirements to be developed and System requirements (H/W or S/W) to be used.

Reviews:

It is a static testing technique. In a review, responsible people estimate the completeness (anything missing) and correctness (any mistakes) of the corresponding document.

HLD (High Level Design Document)

HLD defines the overall hierarchy of all modules/functionalities. It is also known as external design.


LLD (Low Level Design Document)

LLD defines the internal logic of corresponding module/functionality.


Prototype:

A sample model of an application without functionality is called a prototype.

E.g.: a PowerPoint slide show


White Box Testing:

It is a coding-level testing technique. Programmers follow it to verify the completeness and correctness of the program structure. It is also known as Glass Box Testing or Clear Box Testing (program logic).


Black Box Testing:

It is a build-level testing technique. In this testing, test engineers validate every functionality through the external interface (user logic).


Build : The fully integrated set of all modules, in executable (.exe) form.


Software Testing : The verification and validation of a S/W application is called S/W testing. The primary role of testing is not the demonstration of correct performance but the exposure of hidden defects.


Verification : Whether the system is being built right or wrong.

Validation : Whether the system is right or wrong with respect to the customer.


'V' Model : It is an extension of the Fish model. This model defines the mapping between the development and testing processes.

'V' stands for Verification & Validation.



LCD (Development) vs LCT (Testing):

-> Information Gathering & Analysis
* Assessment of development plan
* Prepare test plan
* Requirements phase testing

-> Design & Coding
* Design phase testing
* Program phase testing

-> Form Build
* Functional & system testing
* User acceptance testing
* Test documents testing

-> Release & Maintenance
* Port testing
* Test S/W changes
* Test efficiency




Refinement form of 'V' Model:

BRS <-> User acceptance testing
S/W RS <-> Functional & system testing
HLD <-> Integration testing
LLD <-> Unit testing

Coding sits at the base of the 'V', joining the two arms.

The real 'V' model is expensive to follow for small and medium scale organizations. For this reason, such organizations make some changes to the 'V' model: they maintain a separate testing team only for the functional & system testing phase, because this phase is a development bottleneck. For the remaining stages of testing, they take the help of the same developers.


I. REVIEWS DURING ANALYSIS:


In general, S/W development starts with information gathering & analysis. In this phase, business analysts develop the BRS & S/WRS documents. To estimate the completeness and correctness of these documents, they conduct reviews.


BRS -> S/WRS


-> Are they right requirements?

-> Are they complete?

-> Are they achievable? (W.R.T. technology)

-> Are they reasonable? (W.R.T. time)

-> Are they testable?


II. REVIEWS DURING DESIGN:


After completion of analysis and its review, designers concentrate on developing the external design & internal design. To estimate the completeness & correctness of these documents, they conduct reviews.


-> Are they understandable?

-> Do they meet the right requirements?

-> Are they complete?

-> Are they followable?

-> Do they handle errors?


III. UNIT TESTING:


After completion of design and its reviews, programmers concentrate on coding to construct the software physically. In this stage, programmers test every program through a set of white-box testing techniques.


a. Execution Testing :


Program -> Basis path coverage (every statement in the program runs correctly)

-> Loop coverage (termination of iterations)

-> Program technique coverage (fewer memory cycles & CPU cycles during execution)


b. Operation Testing :


Whether our executed program is operable on other customer-expected platforms or not?

Platform means the O/S, compilers, browsers and other system S/W.


c. Mutation Testing :


Mutation means a small deliberate change in logic. Programmers follow this technique to estimate the completeness and correctness of program testing.


Run the tests on the original program: all tests passed.

Make a change in the program (mutant 1) and re-run: all tests still passed -> the test set is incomplete.

Make another change (mutant 2) and re-run: one test failed -> the test set is complete (the mutant was caught).


If our tests are incomplete on that program (the mutant survives), continue testing the same program with new tests; otherwise, continue with testing the other programs.
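The mutation procedure above can be sketched in Python. The function, the threshold, and the mutated operator are hypothetical examples, not from the original notes:

```python
# A minimal sketch of mutation testing. The mutant simulates a programmer
# deliberately changing ">=" to ">" in the logic.
def is_adult(age):
    return age >= 18

def mutant_is_adult(age):
    return age > 18            # the deliberate change (mutation)

def run(fn, cases):
    # A test suite "passes" when every case gives the expected answer.
    return all(fn(age) == expected for age, expected in cases)

tests = [(17, False), (21, True)]           # no boundary case -> incomplete
better_tests = tests + [(18, True)]         # adds the boundary case

assert run(is_adult, tests) and run(is_adult, better_tests)
# The incomplete suite still passes on the mutant: testing is incomplete.
assert run(mutant_is_adult, tests)
# The better suite fails on the mutant (18 -> False): testing is complete.
assert not run(mutant_is_adult, better_tests)
```

A surviving mutant is the signal to add new tests before moving to the next program.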


IV. INTEGRATION TESTING:


After completion of dependent programs' development and unit testing, programmers compose them to form a system. During this composition, programmers conduct integration testing to estimate the completeness and correctness of control transmission between those programs. There are 3 approaches to integration:


a. Top-Down Approach :

Conducting testing on the main module before some of the sub-modules are ready is called the Top-Down approach.




Sub1<---Main --->Stub--x--Sub2


In the above diagram, the stub is a called program: a temporary stand-in for the unfinished sub-module Sub2.


b. Bottom-Up Approach :

Conducting testing on sub-modules before the main module is ready is called the Bottom-Up approach.



Main--x--Driver--->Sub1--->Sub2


In the above diagram, the driver is a calling program: a temporary stand-in for the unfinished main module.


c. Hybrid Approach :

It is a combination of both Bottom-Up & Top-Down Approach.



Main--x--Driver--->Sub1--->Sub2--->Stub--x--Sub3


Above approach is also known as Sandwich approach.
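A minimal Python sketch of the stub and driver roles, with hypothetical modules; in top-down the unfinished Sub2 is replaced by a stub, and in bottom-up a driver exercises the finished sub-modules before Main exists:

```python
def sub1(x):
    return x + 1               # finished sub-module

def sub2_stub(x):
    return 0                   # stub: canned response for the missing Sub2

def main(x, sub2=sub2_stub):   # top-down: Main tested with the stub plugged in
    return sub1(x) + sub2(x)

def driver(x):                 # bottom-up: driver calls the sub-modules directly
    return sub1(x)

assert main(5) == 6            # sub1 gives 6, the stub contributes 0
assert driver(5) == 6
```

The hybrid (sandwich) approach simply mixes both: stubs below the modules under test, drivers above them.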


V. FUNCTIONAL & SYSTEM TESTING:


After receiving a build from development, a separate testing team concentrates on functional & system testing. In this phase, the testing team follows black-box testing (BBT) techniques.

There are 4 divisions in BBT:


a. Usability Testing

b. Functional Testing

c. Performance Testing

d. Security Testing


a. Usability Testing :

In general, the system testing process starts with usability testing. During this test, the testing team concentrates on the "user friendliness" of screens. Usability testing is classified into the subtests below.

a.1. User Interface Testing :

->Ease of use (Understandability of screens)

->Look & Feel (Attractiveness of screens)

E.g.: Font, Style, Alignment, Color.

->Speed in interface (Short navigations to complete a task)

a.2. Manual Support Testing :

Whether the user manuals provide context-sensitive help or not?



Receive build from developers

|

User interface testing

|

Manual support testing

|

Remaining functional & system testing

(User interface testing and manual support testing together make up usability testing.)


b. FUNCTIONAL TESTING:

It is a necessary (mandatory) testing part of BBT. During this test, the testing team concentrates on "meeting customer requirements".

This testing classified into below subtests.


a. Functionality Testing :

It is also known as Requirements Phase Testing. During this test, test engineers validate every functionality in terms of the coverages below.


-> Behavioral Coverage (Changes in properties of object with respect to navigation)

-> Error-handling Coverage (preventing negative navigations)

-> Input Domain Coverage (Size & type of every input object)

-> Calculations Coverage (Correctness of O/P)

-> Back-end Coverage (Impact of front-end operations on back-end tables)

-> Service levels Coverage (Order of functionalities w.r.t. customer requirements)


b. Input Domain Testing :

It is a part of functionality testing, but test engineers give this test special treatment with the help of two mathematical techniques:

Boundary Value Analysis (BVA), Equivalence Class Partitions(ECP)


BVA (Size/Range)


Min ->Pass

Min-1 ->Fail

Min+1 ->Pass

Max ->Pass

Max-1 ->Pass

Max+1 ->Fail


ECP(Type)


Valid -> Pass

Invalid -> Fail


E.g.: A login process requires a userid & pwd to authorize users. Userid allows lowercase alphanumerics, 4-16 characters long. Pwd allows lowercase alphabetic characters, 4-8 characters long.

Prepare BVA & ECP for userid and pwd.


Userid

BVA (Size/Range)

Min -> 4 chars -> Pass

Min-1 -> 3 chars -> fail

Min+1 -> 5 chars -> pass

Max -> 16 chars -> pass

Max-1 -> 15 chars -> pass

Max+1 -> 17 chars -> fail


ECP(Type)

Valid -> (a-z)(0-9) ->pass

Invalid -> (A-Z) Special Characters and Blanks ->fail


Pwd

BVA (Size/Range)

Min -> 4 chars -> Pass

Min-1 -> 3 chars -> fail

Min+1 -> 5 chars -> pass

Max -> 8 chars -> pass

Max-1 -> 7 chars -> pass

Max+1 -> 9 chars -> fail


ECP (Type)

Valid -> (a-z) ->pass

Invalid -> (A-Z)(0-9)Special Characters and Blanks ->fail
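The BVA and ECP tables above can be expressed as executable checks. The validators below are a hypothetical implementation of the stated rules, used only so the boundary and class cases have something to run against:

```python
import re

# Hypothetical validators for the login rules:
# userid: lowercase alphanumerics, 4-16 chars; pwd: lowercase letters, 4-8 chars.
def valid_userid(s):
    return re.fullmatch(r"[a-z0-9]{4,16}", s) is not None

def valid_pwd(s):
    return re.fullmatch(r"[a-z]{4,8}", s) is not None

# BVA on userid size: min-1 fails; min, min+1, max-1, max pass; max+1 fails.
assert not valid_userid("a" * 3)
assert valid_userid("a" * 4) and valid_userid("a" * 5)
assert valid_userid("a" * 15) and valid_userid("a" * 16)
assert not valid_userid("a" * 17)
# ECP on userid type: valid class passes; invalid classes fail.
assert valid_userid("user99")
assert not valid_userid("USER99") and not valid_userid("user 9")
# BVA/ECP on pwd: 4-8 lowercase letters only.
assert valid_pwd("abcd") and valid_pwd("abcdefgh")
assert not valid_pwd("abc") and not valid_pwd("abcdefghi")
assert not valid_pwd("abc1")
```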



c.Recovery Testing:

It is also known as Reliability Testing. During this test, test engineers validate whether our application can recover from an abnormal state back to a normal state or not.

Abnormal state: Not able to continue.


--->Abnormal state----Back up & Recovery Procedures--->Normal.


d.Compatibility Testing:

It is also known as Portability Testing. During this test, test engineers validate whether our application build runs on customer-expected platforms or not.

Platforms mean that O/S, Compilers, Browsers and other system S/W.


Forward Compatibility:

Build(VB 6.0)---->O/S(Unix, Linux)--x-->Build(VB 6.0)

Backward Compatibility:

Build (Oracle)--x-->O/S(Win 98)---->Build(Oracle)


e. Configuration Testing:

It is also known as Hardware Compatibility testing. During this test, test engineers are validating that whether our application build can support different technology H/W devices or not?

E.g.: Different technology printers.

Different technology LAN topologies.

Different technology LAN's...etc.,


f. Intersystem testing:

It is also known as End-to-End Testing. During this test, test engineers validate whether our application build correctly shares the resources of other applications or not.

E.g.: E-Seva


WBA --> Local DB --> Server

EBA --> Local DB --> Server

TBA --> Local DB --> Server

IBA (new component) --> Local DB --> New Server

(The local DB is the common resource shared by all the sub-applications.)


g. Installation Testing:



Build+Supported S/W -->Customer Expected Configured System

->Set up program execution (To start installation)

->Easy Interface (During installation)

->Occupied disk space (After installation)


h. Parallel Testing:

It is also known as Comparative Testing. During this test, test engineers compare our application build with older versions of the same application or with competitive products in the market to estimate competitiveness. This testing is applicable only to software products.


i. Sanitation Testing:

It is also known as Garbage Testing. During this test, test engineers are finding extra features in application build w.r.t. SRS.



c. PERFORMANCE TESTING:

It is an expensive testing technique in BBT. During this test, the testing team concentrates on "speed of processing".

Performance testing is classified into the subtests below.

a. Load Testing:

The execution of our application under customer expected configuration and customer expected load to estimate performance is called Load Testing / Scalability Testing.

Load / scale means the number of concurrent users accessing our application.
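A minimal load-testing sketch in Python: each thread stands in for one concurrent user, and the sleep stands in for a request/response round trip. The names and the 0.01 s "service time" are hypothetical; real load tests use dedicated tools against the actual application.

```python
import threading
import time

def user_session(results, i):
    start = time.perf_counter()
    time.sleep(0.01)                      # simulated request/response
    results[i] = time.perf_counter() - start

def load_test(concurrent_users):
    # Launch all the simulated users at once and wait for every session.
    results = [0.0] * concurrent_users
    threads = [threading.Thread(target=user_session, args=(results, i))
               for i in range(concurrent_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return max(results)                   # worst response time under this load

assert load_test(20) >= 0.01              # no response can beat the service time
```

Comparing `load_test(n)` across growing `n` is the scalability question: does the worst response time stay acceptable at the customer-expected load?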

b. Stress Testing:

The execution of our application under customer-expected configuration while varying peak loads to estimate performance is called Stress Testing.

c. Storage Testing:

The execution of our application under huge amount of resources to estimate peak limits of storage is called Storage Testing.

E.g.: MS-Access technology supports 2 GB D/B of maximum.

10 MHZ--100 Key strokes per second.

d. Data volume Testing:

The execution of our application under huge amount of resources to estimate peak limits of data in terms of no. of records is called Data volume Testing.

c & d are same but terminology is different.


d. SECURITY TESTING:

It is a complex testing technique to apply. During this test, the testing team concentrates on "privacy of operations".

This testing is classified into the sub-tests below.

a. Authorization Testing:

Whether our application allows valid users and prevents invalid users or not?

Above like observation is called Authorization testing.

E.g.: Login with userid, pwd, credit card number validation, pin number validation, fingerprint, and digital signatures.

b. Access Control Testing:

Whether a valid user has permission to use specific services or not?
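Authorization and access control can be sketched together against a hypothetical in-memory user store (all names and credentials here are invented for illustration):

```python
USERS = {"alice": {"pwd": "s3cret", "perms": {"report", "export"}},
         "bob":   {"pwd": "hunter2", "perms": {"report"}}}

def authorize(userid, pwd):
    # Authorization: only a known user with the right password gets in.
    return userid in USERS and USERS[userid]["pwd"] == pwd

def can_access(userid, service):
    # Access control: a logged-in user may still lack specific services.
    return service in USERS.get(userid, {}).get("perms", set())

assert authorize("alice", "s3cret")                   # valid user allowed
assert not authorize("alice", "wrong")                # bad password prevented
assert not authorize("eve", "x")                      # unknown user prevented
assert can_access("bob", "report")                    # permitted service
assert not can_access("bob", "export")                # service not granted
```

The two tests answer different questions: authorization testing checks who gets in; access-control testing checks what each valid user may do once inside.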

c. Encryption / Decryption Testing:

Whether the code conversions between the client process and the server process are correct or not?


Client -- (request) --> Encryption -- (cipher text) --> Decryption --> Server

Server -- (response) --> Encryption -- (cipher text) --> Decryption --> Client
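A toy XOR cipher, only to illustrate what encryption/decryption testing checks: the cipher text differs from the plain text, and decryption recovers the request exactly. Real systems use SSL/TLS, not XOR; the request string here is invented.

```python
def encrypt(text, key=42):
    # XOR every byte with the key to produce the cipher text.
    return bytes(b ^ key for b in text.encode())

def decrypt(cipher, key=42):
    # XOR with the same key reverses the transformation.
    return bytes(b ^ key for b in cipher).decode()

request = "GET /balance?acct=1001"
cipher = encrypt(request)
assert cipher != request.encode()      # data on the wire is unreadable
assert decrypt(cipher) == request      # the server recovers the exact request
```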


Note: In small-scale organizations, test engineers cover authorization testing & access control testing. Developers cover the encryption / decryption testing.


VI. USER ACCEPTANCE TESTING:

After completion of functional & system testing, project management concentrates on user acceptance testing to collect feedback from customer-side people.

There are two approaches to conduct this test such as:

Alpha Test: # S/W applications

# In development site

# By real customers

Beta Test: # S/W products

# In customer site like environment

# By customer site like people


Collect feedback.


VII. TESTING DURING MAINTENANCE:


After completion of user acceptance testing and the resulting modifications, project management concentrates on release-team formation with a few developers, testers, and hardware engineers. This release team goes to the customer site and conducts port testing.

During this port testing release team concentrate on below factors in customer site.

-> Compact Installation.

-> Overall Functionality.

-> I/P devices handling.

-> O/P devices handling.

-> Secondary storage devices handling.

-> Co-existence with other S/W to share common resources.

-> O/S error handling.

After completion of port testing, release team provides training sessions to customer site people.

During utilization of that software, customer-site people send change requests to our organization. There are two types of change requests to be solved:


Change Request:

-> Enhancement: Impact analysis --> Perform S/W changes --> Test S/W changes

-> Missed defect: Impact analysis --> Perform S/W changes --> Test S/W changes --> Improve testing process capability


C.C.B: Change Control Board


TESTING TERMINOLOGY:


1. Monkey Testing:

A test engineer conducting a test on the application build covering only the main activities is called monkey testing / chimpanzee testing.

2. Exploratory Testing:

A tester conducts testing on an application build covering the activities level by level.



3. Ad-hoc Testing:

A tester conducting a test on the application build w.r.t. predetermined ideas is called ad-hoc testing.

4. Big-Bang Testing:

An organization conducting a single stage of testing after completion of entire-module development is called Big-Bang Testing / Informal Testing (single-stage testing).

5. Incremental Testing:

An organization following multiple stages of testing from document level to system level is called Incremental Testing.

E.g.: LCT

6. Sanity Testing:

Whether the build released by the development team is stable enough for complete testing to be applied or not?

This observation is called Sanity Testing / Tester Acceptance Testing / Build Verification Testing.

7. Smoke Testing:

An extra shake-up in sanity testing is called Smoke Testing. In this stage, test engineers try to find the reason why a build is not working before starting testing.

Sanity testing is mandatory & smoke testing is optional.

8. Static vs. Dynamic Testing:

A tester conducting a test on the application build without running it is called Static Testing.

E.g.: Usability Testing.

A tester conducting a test through the execution of the application build is called Dynamic Testing.

E.g.: Functional, Performance, Security Testing.

9. Manual Vs Automation Testing :

A test engineer conducting a test on the application build without using any third-party testing tool is called Manual Testing.

A tester conducting a test on the application build with the help of a testing tool is called Test Automation.


Manual-----Build----->Test Engineer


Automation-----Build----->Testing Tools----->Test Engineer


Impact:

A test's impact indicates test repetition with multiple test data.

E.g.: Functionality Testing

Criticality:

A test's criticality indicates the complexity of executing that test manually.

E.g.: Load Testing

10. Re-testing:

The re-execution of a test on the same application build with multiple test data is called Re-testing.

E.g.: A Multiply screen:

i/p1: __

i/p2: __    [OK]    Result: __


Expected result = i/p1 * i/p2


Test Data:

i/p 1 i/p 2


min min

max max

min max

max min

value 0

0 value

--- ---
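The multiply re-test can be sketched as one test re-executed with every row of the test data; the MIN/MAX bounds below are hypothetical, since the original exercise does not state them:

```python
def multiply(ip1, ip2):
    # Function under test, standing in for the Multiply screen.
    return ip1 * ip2

MIN, MAX = -10**6, 10**6
test_data = [(MIN, MIN), (MAX, MAX), (MIN, MAX), (MAX, MIN), (7, 0), (0, 7)]
expected = [10**12, 10**12, -10**12, -10**12, 0, 0]

# Re-testing: the same test re-executed once per set of test data.
for (ip1, ip2), exp in zip(test_data, expected):
    assert multiply(ip1, ip2) == exp
```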


11. Regression Testing:

The execution of selected tests on a modified build, to ensure that the bug fix works and that no side effects have occurred, is called Regression Testing.


Modified build --> re-execute related passed tests / failed tests (+ R.T.)

-> Passed: build is accepted

-> Failed: defect reports --> developers

Here R.T. means the remaining tests.
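A minimal regression-testing sketch: after a bug fix in a hypothetical discount module, the previously failed test plus the related passed tests are re-executed, rather than the whole suite. The module, the boundary, and the bug are invented for illustration.

```python
def discount(total):
    # Modified build: the boundary bug (">" instead of ">=") is now fixed.
    return total // 10 if total >= 100 else 0

related_tests = {
    "boundary_total": lambda: discount(100) == 10,   # previously failed
    "below_boundary": lambda: discount(99) == 0,     # related passed test
    "above_boundary": lambda: discount(150) == 15,   # related passed test
}

results = {name: test() for name, test in related_tests.items()}
assert all(results.values())   # the fix works and shows no side effects
```

If the fix had broken `below_boundary` or `above_boundary`, that would be exactly the side effect regression testing is meant to catch.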

12. Error, Defect, Bug:

A mistake in coding is called an Error.

A mismatch found by a test engineer during testing, due to a mistake in coding, is called a Defect / Issue.

A defect accepted to be solved is called a Bug.