Tuesday, August 27, 2013

What is the importance of the HWND property

A handle is a unique identification number assigned by Windows at runtime whenever a control container (e.g. a form) is loaded. Through this handle property (HWND) we can easily identify any control. If you use the Object Spy on any of the system controls displayed on the desktop, you can see the HWND property of that control. A similar tool, Spy++, ships with Microsoft Visual Studio and likewise displays the detailed properties of any control on screen.

We're so used to Windows working "out of the box" (until it crashes) that we rarely stop to think about how the OS's everyday tasks actually work.

For example, when you click on the maximize button of a certain window, how does Windows know to maximize that specific window? Similarly, when pressing Alt+Tab to switch between windows, how does Windows connect the "icon" we've stopped on to the actual window it represents?

All these tasks are quite complex, and they all rely on the ability to uniquely identify each window and control in the OS.
This is done mostly through the use of HWND - the window's handle. A simple way to think of a handle is as a unique arrow that points the OS to the exact window you're referring to.

This is what sometimes makes it a very good identification property within a specific test run: it's guaranteed to be unique and consistent throughout the window's life cycle. However, as a different handle is generated every time the window is constructed, it's not a good property to rely on between test runs.

The Microsoft Windows operating environment identifies each form in an application by assigning it a handle, or hWnd. The hWnd property is used with Windows API calls. Many Windows operating environment functions require the hWnd of the active window as an argument.
The handle returned by the hWnd property is assigned to the form at run time. Therefore, the handle might be a different value every time the form runs, but the value remains constant during the life of the form. If the same form runs multiple times, each instance of the form can have a different hWnd value.
hWnd is available in user-defined Forms and Toolbar objects and is read-only at both run time and design time.
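As a small illustration, here is a minimal sketch in Python (using ctypes, so it runs on Windows only; the Notepad window title is just an assumption for the example) of how a handle obtained at runtime is passed to a Windows API call, in this case to maximize one specific window:

import ctypes

user32 = ctypes.windll.user32  # Windows-only: functions exported by user32.dll

# Look up the window by its title. The returned HWND is assigned at runtime
# and will generally be a different value the next time the window is created.
hwnd = user32.FindWindowW(None, "Untitled - Notepad")  # assumed window title

if hwnd:
    print(f"HWND for this run: 0x{hwnd:08X}")
    SW_MAXIMIZE = 3  # nCmdShow constant from the Windows SDK
    user32.ShowWindow(hwnd, SW_MAXIMIZE)  # the handle tells the OS which window
else:
    print("window not found")

Running this twice against a freshly opened window will typically print two different handle values, which is exactly why HWND is reliable within a test run but not between runs.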

Sunday, July 7, 2013

Testing Methodologies - Part 4



Gray Box Testing
Gray Box Testing is a software testing method that combines the Black Box Testing and White Box Testing methods. In Black Box Testing, the internal structure of the item being tested is unknown to the tester; in White Box Testing, the internal structure is known. In Gray Box Testing, the internal structure is partially known. This involves having access to internal data structures and algorithms for the purpose of designing the test cases, but testing at the user, or black-box, level.

An example of Gray Box Testing would be when the code for two units/modules is studied (the White Box Testing method) to design test cases, while the actual tests are conducted through the exposed interfaces (the Black Box Testing method), as in the sketch below.
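A minimal sketch of this idea in Python (the sharded store and its key layout are hypothetical): the test data is chosen using knowledge of the internal structure, but the test itself exercises only the public interface.

class ShardedStore:
    """Hypothetical unit whose source the tester has read."""
    def __init__(self):
        self._shards = [{} for _ in range(4)]  # internal detail: 4 buckets

    def put(self, key, value):
        self._shards[hash(key) % 4][key] = value

    def get(self, key):
        return self._shards[hash(key) % 4].get(key)

store = ShardedStore()
# White-box insight: integer keys 1 and 5 hash into the same shard
# (1 % 4 == 5 % 4), so this case deliberately exercises a collision.
# Black-box execution: only the exposed put/get interface is called.
store.put(1, "a")
store.put(5, "b")
assert store.get(1) == "a" and store.get(5) == "b"
print("gray-box collision case passed")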

Testing Methodologies - Part 3



 Black-box Testing

            Black box testing is a type of testing in which the functionality of the software is tested without any reference to the internal design, code, or algorithm used in the program. The test cases are generally built around the requirements and specifications of the software application. Black box testing is sometimes also called "Opaque Testing", "Functional/Behavioral Testing" and "Closed Box Testing".

The basis of the black box testing strategy lies in selecting appropriate data according to the functionality of the system and testing it against the functional specifications, in order to check for normal and abnormal behavior of the system. Nowadays, many companies outsource their testing work to third parties to obtain accurate results. This is because the developer of the system is well aware of its internal logic and coding, which makes it unsuitable for the developer to test the application.

In order to implement the black box testing strategy, the tester needs a thorough understanding of the requirement specifications of the system and of how it should behave in response to a specific action.

 Various testing types that fall under this strategy are: functional testing, stress testing, recovery testing, volume testing, user acceptance testing (also known as UAT), sanity testing, smoke testing, load testing, usability testing, exploratory testing, ad-hoc testing, alpha testing, beta testing, etc.

These testing types can be further divided into two groups: a) testing in which the user plays the role of a tester, and b) testing in which the user is not required.

The definition mentions both functional and non-functional testing. Functional testing is concerned with what the system does: its features or functions. Non-functional testing is concerned with how well the system does it, covering aspects such as performance, usability, portability, and maintainability.

          Specification-based techniques are appropriate at all levels of testing (component testing through to acceptance testing) where a specification exists. For example, when performing system or acceptance testing, the requirements specification or functional specification may form the basis of the tests.

There are four specification-based or black-box techniques (a small sketch of the first two follows the list):

Equivalence partitioning.

Boundary value analysis.

Decision tables.

State transition testing.
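As a small sketch of the first two techniques, suppose the specification says a field accepts ages from 18 to 60 inclusive (the accepts_age function here is a hypothetical stand-in for the system under test):

def accepts_age(age):
    """Hypothetical system under test: valid range is 18..60 inclusive."""
    return 18 <= age <= 60

# Equivalence partitioning: one representative value per partition.
partition_cases = [(10, False),  # invalid partition: below the range
                   (35, True),   # valid partition: inside the range
                   (70, False)]  # invalid partition: above the range

# Boundary value analysis: values at and just around each boundary.
boundary_cases = [(17, False), (18, True), (19, True),
                  (59, True), (60, True), (61, False)]

for value, expected in partition_cases + boundary_cases:
    assert accepts_age(value) == expected, f"age {value} misclassified"
print("all partition and boundary cases passed")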

Advantages: 

   The test is unbiased because the designer and the tester are independent of each other.
The tester does not need to acquire knowledge of any specific programming languages.
The test is done from the point of view of the user, not the designer.
Test cases can be designed as soon as the specifications are complete.

Disadvantages:

 The test can be redundant if the software designer has already run a test case.
 The test cases are difficult to design.
Testing every possible input stream is unrealistic because it would take an inordinate amount of time; therefore, many program paths will go untested.

Types of Testing under Black Box Testing

Functional Testing:

In functional testing, the functions of a component or system are tested. It refers to activities that verify a specific action or function of the code. Functional tests tend to answer questions like "can the user do this?" or "does this particular feature work?". The expected behavior is typically described in a requirements specification or in a functional specification.

                         In this type of testing, the software is tested for the functional requirements i.e. what the system is supposed to do. The test cases are written to check if the application behaves/functions as expected.

Integration testing:

Testing in which software components, hardware components, or both are combined and tested to evaluate the interaction between them.

Integration testing tests the integration of, or interfaces between, components; interactions with different parts of the system, such as the operating system, file system, and hardware; and interfaces between systems.
        Integration testing is done by a specific integration tester or test team.
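A minimal sketch of the idea in Python, with two hypothetical components (a record formatter and a record writer) combined and exercised through their interfaces rather than in isolation:

import io

def format_record(name, amount):
    """Component A: formats one record as a CSV line."""
    return f"{name},{amount:.2f}\n"

def write_records(stream, records):
    """Component B: writes formatted records to any file-like stream."""
    for name, amount in records:
        stream.write(format_record(name, amount))

# The integration test checks the interaction between A and B.
buffer = io.StringIO()
write_records(buffer, [("alice", 10), ("bob", 2.5)])
assert buffer.getvalue() == "alice,10.00\nbob,2.50\n"
print("formatter + writer integration passed")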

Big Bang integration testing:

In Big Bang integration testing all components or modules are integrated simultaneously, after which everything is tested as a whole.
         Big Bang testing has the advantage that everything is finished before integration testing starts.
         The major disadvantage is that in general it is time consuming and difficult to trace the cause of failures because of this late integration.

System testing:

In system testing, the behavior of the whole system/product is tested, as defined by the scope of the development project or product.
It may include tests based on risks and/or requirement specifications, business processes, use cases, or other high-level descriptions of system behavior, interactions with the operating system, and system resources.
        System testing is most often the final test to verify that the system to be delivered meets the specification and its purpose.

System testing is carried out by specialist testers or independent testers.

System testing should investigate both the functional and the non-functional requirements of the system.

Sanity Testing:

A brief test of the major functional elements of a piece of software to determine whether it is basically operational, so that the build can be accepted for further testing. Only basic tests of the major functionalities are performed, without bothering with finer details. This is also referred to as smoke testing.

Also known as a narrow regression test, a sanity test checks the behavior of the application to determine whether it still works correctly after minor changes to the code or functionality, without any new errors having been introduced.

Functionality testing:

 Functionality testing is performed to verify that a software application performs and functions correctly according to design specifications. During functionality testing we check the core application functions, text input, menu functions and installation and setup on localized machines, etc.

Usability testing:

Usability testing checks the ease with which the user interfaces can be used. It tests whether the application or product built is user-friendly.

Usability Testing is a black box testing technique.

Usability testing also reveals whether users feel comfortable with your application or Web site according to different parameters - the flow, navigation and layout, speed and content - especially in comparison to prior or similar applications.

This testing is also called 'testing for user-friendliness'. It is done to check whether the intended users can meet their requirements using the system being tested.

Compatibility testing:

 It is a type of non-functional testing.
Compatibility testing is a type of software testing used to ensure that the system/application/website built is compatible with various other objects, such as other web browsers, hardware platforms, users (in the case of a very specific requirement, such as a user who speaks and reads only a particular language), operating systems, etc. This type of testing helps find out how well a system performs in a particular environment, including its hardware, network, operating system, and other software.

It is basically the testing of the application or product against its computing environment.
It tests whether the application or software product is compatible with the hardware, operating system, database, or other system software.

Performance testing:

Performance testing is performed to determine how fast some aspect of a system performs under a particular workload.

It can serve different purposes: for example, it can demonstrate that the system meets its performance criteria.

        It can compare two systems to find which performs better. Or it can measure what part of the system or workload causes the system to perform badly.

Some of the aspects measured include (a small timing sketch follows the list):
·         Connection time
·         Response time
·         Send time
·         Process time
·         Transaction time
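A minimal sketch of measuring response time in Python (the URL is a placeholder; a real performance test would repeat the measurement many times and report averages or percentiles):

import time
import urllib.request

URL = "https://example.com/"  # placeholder system under test

start = time.perf_counter()
with urllib.request.urlopen(URL, timeout=10) as response:
    body = response.read()
elapsed = time.perf_counter() - start

print(f"response time: {elapsed:.3f} s, payload: {len(body)} bytes")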

Load testing: 

A load test is a type of software testing conducted to understand the behavior of the application under a specific expected load.

Load testing is performed to determine a system's behavior under both normal and peak conditions.

The application is tested against heavy loads or inputs, such as testing of the entire database, to find out its maximum operational capacity as well as the constraints that degrade its performance.
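A minimal load-test sketch in Python, simulating a handful of concurrent users against a placeholder URL (dedicated load-testing tools do this at far larger scale and with richer reporting):

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com/"  # placeholder system under test
USERS = 20                    # assumed "expected load" for this example

def one_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=USERS) as pool:
    timings = list(pool.map(one_request, range(USERS)))

print(f"avg: {sum(timings) / len(timings):.3f} s, worst: {max(timings):.3f} s")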

Stress testing:

 It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results.

                            It is a form of software testing that is used to determine the stability of a given system.

It puts greater emphasis on robustness, availability, and error handling under a heavy load, rather than on what would be considered correct behavior under normal circumstances.
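A minimal sketch of ramping the load step by step until the error rate crosses a threshold (the URL, the user counts, and the 10% threshold are all assumptions for illustration):

import urllib.request
import urllib.error
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com/"  # placeholder system under test

def hit(_):
    try:
        with urllib.request.urlopen(URL, timeout=5) as response:
            response.read()
        return True
    except (urllib.error.URLError, OSError):
        return False  # near the breaking point, errors are the data we want

for users in (10, 50, 100, 200):  # ramp beyond normal operational capacity
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(hit, range(users)))
    error_rate = 1 - sum(results) / len(results)
    print(f"{users} users -> {error_rate:.0%} errors")
    if error_rate > 0.10:
        print("breaking point reached")
        break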

Acceptance Testing:

After system testing and the correction of all or most defects, the system is delivered to the user or customer for acceptance testing.

              Acceptance testing is basically done by the user or customer although other stakeholders may be involved as well.

        The goal of acceptance testing is to establish confidence in the system.

Alpha testing:

This test takes place at the developer’s site. Developers observe the users and note problems.

             Alpha testing takes place at the development center, where the system is tested by the users or customers to check if all the requirements have been met. Any type of abnormal behavior in the system is noted by the developers and rectified accordingly.

                          Alpha testing is testing of an application when development is about to complete. Minor design changes can still be made as a result of alpha testing.

Beta testing:

It is also known as field testing. It takes place at the customer's site. The system is sent to users, who install it and use it under real-world working conditions.

                                      In this type of testing, the software is distributed as a beta version to the users. They test the software at their site and record any bugs or defects that they may encounter during the process. These are then reported to the developers at regular intervals.

The system is tested using real data in the real user environment, which cannot be controlled by the developer. All problems encountered by the users are reported back to the developer at regular intervals.

User Acceptance Testing:

In this type of testing, the software is handed over to the users to determine whether it meets their requirements and expectations and works as expected.
 
It is the final testing process that occurs before a new system is accepted for operational use by the client. Its goal is to get confirmation from the client, through trial or review of the object under test, that the system meets the requirement specifications.

Monkey testing:

Monkey testing tests an application with stochastic (random) inputs, without any specific tests in mind.

The tests are not logical, and there is no intent to learn the system.

             No test cases are used.
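A minimal sketch in Python: random printable strings are thrown at a hypothetical parse_quantity function with no particular scenario in mind; anything other than a clean accept or a clean rejection would be a finding.

import random
import string

def parse_quantity(text):
    """Hypothetical function under test."""
    return int(text.strip())

for _ in range(1000):
    junk = "".join(random.choices(string.printable, k=random.randint(0, 20)))
    try:
        parse_quantity(junk)
    except ValueError:
        pass  # a clean rejection of malformed input is acceptable
    # any other exception, hang, or crash would be a defect worth reporting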

Exploratory testing:

 As its name implies, exploratory testing is about exploring, finding out about the software, what it does, what it doesn’t do, what works and what doesn’t work. The tester is constantly making decisions about what to test next and where to spend the (limited) time. This is an approach that is most useful when there are no or poor specifications and when time is severely limited.

            This is done in order to learn/explore the application, to determine how the software works, and how it will handle different test cases.

Ad hoc testing:

 This type of testing is done without any formal test plan or test case. Ad hoc testing helps in deciding the scope and duration of the other testing methods and also helps testers in learning the application prior to starting any other testing.

Tests are logical and rely on familiarity with the system's functionality.

Retesting:

After a reported bug is fixed, testing the same issue with the same steps, the same preconditions, and the same test data is called re-testing. Re-testing is performed to confirm that the reported bug is actually fixed. When re-testing is performed, it is important to ensure that the test is executed in exactly the same way as the first time: same data, same environment, and same steps.

In short, re-testing is executing a previously failed test case on the modified build to check whether it now passes.

Regression testing:

During confirmation testing the defect is verified as fixed, and that part of the application works as intended again. However, the fix may have introduced or uncovered a different defect elsewhere in the software.

The way to detect these 'unexpected side effects' of fixes is to do regression testing. The purpose of regression testing is to verify that modifications to the software or the environment have not caused unintended adverse side effects, and that the system still meets its requirements.

Regression tests are mostly automated: the same tests are carried out again and again after every fix, and it would be very tedious to do this manually. Regression tests are executed whenever the software changes, either as a result of fixes or of new or changed functionality.

In short, regression testing is performed to ensure that the changes made to the application do not affect its unchanged parts. Changes may be due to bug fixes or enhancements.
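A minimal sketch of an automated regression suite in Python (the discount function and its cases are hypothetical): the same checks are re-run on every build, and each fixed bug contributes a case that doubles as its re-test.

def apply_discount(price, percent):
    """Hypothetical function that has been fixed and enhanced over time."""
    return round(price * (1 - percent / 100), 2)

REGRESSION_CASES = [
    ((100.0, 10), 90.0),   # original behavior that must not change
    ((100.0, 0), 100.0),   # edge case added with an earlier enhancement
    ((200.0, 25), 150.0),  # case added when a past defect was fixed (re-test)
]

for args, expected in REGRESSION_CASES:
    actual = apply_discount(*args)
    assert actual == expected, f"regression: {args} -> {actual}, expected {expected}"
print("regression suite passed")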