
Open source QA tool for automated Web application testing

Q- Could you recommend a quality assurance (QA) tool for automated regression/functional testing (open source or free tools are preferred) for testing a Web application that contains a lot of JavaScript for opening pop-up windows, redirects, etc.? I'm using HTTPUnit and Selenium but these tools are not handling pop-up windows and redirects well. Thanks

A- My preferred automation solution for Web applications is a combination of *Unit and Wati* -- for instance, JUnit plus Watij. I like Watij over Selenium RC simply because it seems a little more object-oriented than Selenium, but that is a personal preference; I've used both tools successfully.

Handling pop-ups and other interactive display changes can be a challenge, regardless of the tool. You might consider a couple of approaches. First, be active in the tool's user group, seeking solutions. There are several groups available on the Internet -- just pick one or two. Be polite: post your question once, rather than blasting across multiple groups. If you post to the Selenium user's group, you will definitely encounter other testers who have faced similar challenges in the past -- they'll probably have tried-and-true solutions for you. Second, pay close attention to your implementation. If your Web app uses a lot of rich Internet application (RIA) techniques, you might get away with using divs rather than handling each window (in RIAs, developers can "pop up" windows which are, in fact, just hidden divs being exposed). Experiment with different programming solutions and see which is more reliable. There is generally more than one way to accomplish what you're trying to do. You're looking for the way which is 1) feasible, 2) most reliable and 3) most performant (in that order).
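
If you do have to drive real browser windows, most Selenium flavors can enumerate window handles and switch between them. The sketch below is only a rough illustration, written with JUnit 4 and Selenium WebDriver (a newer API than the Selenium RC mentioned above); the URL, element id and window title are placeholders, not part of any real application:

    import static org.junit.Assert.assertTrue;

    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class PopupWindowTest {

        private WebDriver driver;

        @Before
        public void openBrowser() {
            driver = new FirefoxDriver();
        }

        @Test
        public void linkOpensPopupWindow() {
            driver.get("http://example.com/app");            // placeholder URL
            String mainWindow = driver.getWindowHandle();

            driver.findElement(By.id("open-popup")).click(); // placeholder element id

            // Switch to whichever window handle is not the original one.
            for (String handle : driver.getWindowHandles()) {
                if (!handle.equals(mainWindow)) {
                    driver.switchTo().window(handle);
                }
            }
            assertTrue(driver.getTitle().contains("Details")); // placeholder title check

            driver.close();                        // close the pop-up window
            driver.switchTo().window(mainWindow);  // return to the main window
        }

        @After
        public void closeBrowser() {
            driver.quit();
        }
    }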

The third approach may be the most challenging and, in the short term, costly. However, if you're working on an ongoing, long-term project it will pay off. If the current implementation is not very testable, propose alternative implementations to the development team. For instance, if the current project creates multiple pop-up windows (rather than using Ajax to expose elements), ask that the team take time to change this, implementing a solution which you can test more reliably and in a shorter amount of time. You need to be very, very detailed in your reasoning -- you will need to include schedule and cost savings. Point to gains down the road when your automated regression and functional tests run more reliably. By implementing testability, you will be addressing what Agile teams call "technical debt." You take a short-term hit on schedule, with the outcome being a long-term improvement in effectiveness and efficiency.

Will penetration testing be replaced by preventative tools?

by michaeldkelly

I recently read the article "Penetration Testing: Dead in 2009" by Bill Brenner. In the article Mr. Brenner follows a small debate around the idea that over time penetration testing will be largely replaced by preventative checks.

The debate opens with some quotes from Brian Chess from Fortify Software. Fortify creates code analysis tools that scan for security concerns and adherence to good secure coding practices. That potential bias aside, I suspect that Mr. Chess' statement — that "Customers are clamoring more for preventative tools than tools that simply find the weaknesses that already exist [...]. They want to prevent holes from opening in the first place" — is absolutely true. I know I clamor for those tools, and I'm just a lowly test manager.

I'm a big fan of the work companies like Fortify, IBM and HP are doing in this space. If my project team can find a potential issue before we deploy the code, I'm all for it. It can save us time and helps us focus on different and potentially higher-value risks. However, I've yet to see a tool that can deal with the complexity of a deployment environment (setup, configuration, code, etc.), and while I'm a big believer in doing everything you can up front (design, review, runtime analysis, etc.), I believe there will always be a role for a skilled manual investigation of what gets deployed.

Testing (penetration or other) is about applying skill and judgment to uncover quality-related information about the product. That's not just code -- it's more than that. Your typical penetration tester today covers more than today's automated tools can cover. While there are different tools to test various components (some that focus on code, some that focus on the network, etc.), and they should absolutely be used, those tools will never be able to uncover all the potential issues with a system. What's sometimes worse is that they can lead to a false sense of security.

 

Two-minute guide to determining software testing coverage


By Michael Kelly

Deciding what to test really involves two different questions. The first is a question of scope: "Out of everything that I could possibly test, which features are the right ones to test?" There will always be more to test than you will have time to test. The second is a question of technique and coverage: "For each feature I am testing, how do I want to test that feature?" Different quality criteria will lead to covering different product elements and different testing techniques.

In this two-minute crash course, I'll provide some details on how I answer those questions and how I structure my test execution to ensure I'm testing for the right risks at the right time.

2:00: Figure out the scope of your testing
For the question about scope -- what features should we test -- I like using Scott Barber's FIBLOTS mnemonic (which he presents in his Performance Testing Software Systems class). Each letter of the mnemonic helps us think about a different aspect of risk. Here's a summary of how I apply FIBLOTS when thinking about scope:

  • Frequent: What features are most frequently used (e.g., features the user interacts with, background processes, etc.)?
  • Intensive: What features are the most intensive (searches, features operating with large sets of data, features with intensive GUI interactions)?
  • Business-critical: What features support processes that need to work (month-end processing, creation of new accounts)?
  • Legal: What features support processes that are required to work by contract?
  • Obvious: What features support processes that will earn us bad press if they don't work?
  • Technically risky: What features are supported by or interact with technically risky aspects of the system (new or old technologies, places where we've seen failures before, etc.)?
  • Stakeholder-mandated: What have we been asked/told to make sure we test?

1:33: Understand the details of each feature you're testing
Once I understand what it is I want to test, I move on to understanding what aspects of each feature I'd like to cover. For that, I pull out the Satisfice Heuristic Test Strategy Model. I use the product elements list in that document to determine what aspects of the feature I need to focus on. At a high level, I think of coverage in terms of:

  • Structure: This is everything that comprises the physical product or the specific feature I'm looking at (code, hardware, etc.).
  • Functions: Everything that the product or feature does (user interface, calculations, error handling, etc.).
  • Data: Everything that the product or feature processes (input, output, lifecycle).
  • Platform: Everything on which the product or feature depends (and that is outside your project).
  • Operations: How the product or feature will be used (common use, disfavored use, extreme use, etc.).
  • Time: Any relationship between the product and time (concurrency, race conditions, etc.).

1:03: Structure your work in a way that makes sense to you
I typically start by structuring my work in lists or spreadsheets. Then, once I know what I'm going to test, I start to think of how I'm going to test it. It's not real to me until I can visualize the testing taking place. Do I need specialized software to help (like runtime analysis tools)? Will I need to write code or coordinate some activity (like a network failure)? Even visualizing something as simple as the data that I'll need can sometimes trigger a new idea or obstacle I'll need to tackle. As I think about each test, I'll start to group my tests into charters.

Once I have my charters figured out, I'll start to tackle whatever obstacles or setup tasks need to be done to allow me to run them. Some charters won't have any, and others might require a joint effort across teams. Generally, I'm ready to start testing once two conditions are satisfied:

  1. There is software somewhere that's ready for some level of testing.
  2. I have at least one charter that's ready to be executed (setup is completed or wasn't required).

0:35: Get your hands on the software you're testing
You'll notice I don't have a lot of entry criteria for my testing. That's because I'm always interested in seeing the software as soon as possible. I don't care how buggy it might be; once I see what I'm going to be testing, my test ideas often change. So the sooner I see it, the sooner I can provide feedback to the developer and start refactoring my tests.

While this philosophy won't work for all of my testing (in general I need something that's functionally sound before I can really start performance testing), it reflects a value I hold: being an asset to the rest of the team. While I of course always want the most bug-free code I can find (well-designed, unit-tested, peer-reviewed), I'm a realist. Sometimes my feedback is more valuable to the team if I can get eyes on the product sooner rather than later.

0:21: Start with the components and build your way out from there
That said, I do have some general timing heuristics I use when thinking about when to test what. In general, I won't start doing any sort of end-to-end testing (following data through multiple parts of a system or subsystems) until I'm fairly confident each piece of the system is working to some degree (basic functionality has been confirmed, it's relatively stable and so on).

I typically don't try to do much automation or performance testing until I get at least one "stable" interface. The interface could be a Web service, a user interface, or even a method call, but I want it to have been through at least one or two rounds of preliminary testing, and I want to have some indication from the programming team that they don't plan to make major changes to the interface any time soon. I'm not looking for a promise that it won't change; things change all the time. I just want us to agree that right now we don't expect it to change.

0:05: Don't forget to regression test
Finally, I typically won't start regression testing until I've completed my first round of chartered test execution. Schedule constraints can of course override that, but I like the idea of regression testing being the last thing I do. It makes me more comfortable with the changes made as a result of my testing, and it gives me one last (often more relaxed) look at the product.

7 Tips to be More Innovative in the Age of Agile Testing to Survive an Economic Crisis

What is Agile Testing?
"Agile testing involves testing from the customer perspective as early as possible, testing early and often as code becomes available and stable enough from module/unit level testing." - A wikipedia definition.

Why Are Innovations Needed in the Age of Agile Testing?

Global recession/economic downturn effect
Current events are not current trends

When global downturns hit, there is a certain inevitability to their impact on the information technology and finance sectors. Customers become more reluctant to commission software work. Some customers are withdrawing their long-term projects, and some are using the opportunity to quote lower prices. Many projects have dragged on much longer than expected and cost more than planned. So companies have started to explore how "Agile with different flavors" can help their enterprises deliver software more reliably, quickly and iteratively. The roles and responsibilities of test managers/test architects become more important in implementing Agile projects. Innovations are increasingly being fueled by the needs of the testing community at large.

The Challenges in Agile Testing

Agile testers face a lot of challenges when they are working with an Agile development team. A tester should be able to apply root-cause analysis when finding severe bugs so that they are unlikely to recur. While Agile has different flavors, Scrum is one process for implementing Agile. Some of the challenging Scrum rules to be followed by every individual are:

  •  Obtain the number of hours committed up front
  •  Gather requirements/estimates up front
  •  Enter actual hours and estimated hours daily
  •  Produce daily builds
  •  Keep the daily Scrum meetings short
  •  Treat code inspections as paramount

So, in order to meet the above challenges, an agile tester needs to be innovative with the tools that they have. A great idea happens when what you have (tangible and intangible) meets the world's deepest hunger.

How Can Testers Be More Innovative in the Age of Agile Testing?

Here are Important Keys to Innovation:

1. Creative

A good agile tester needs to be extremely creative when trying to cope with the speed of development and release. For a tester, being creative is more important than being critical.

2. Talented

He must be highly talented and strive to keep learning and innovating with new ideas. Talented testers are never satisfied with what they have achieved and always strive to find unimaginable bugs of high value and priority.

3. Fearless

An agile tester should not be afraid to look at a developer's code and, if need be (hopefully only in extreme cases), go in and correct it.

4. Visionary

He must have a comprehensive vision, one that includes the client's expectations and the delivery of a good product.

5. Empowered

He must be empowered to work in pairs. He will be involved in pair programming, which brings shorter scripts, better designs and more bugs found.

6. Passionate

Passionate testers always have something unique to contribute, whether in their innovative ideas, the way they carry out day-to-day work or their outputs, and they tirelessly improve things around them.

7. Multiple Disciplines

An agile tester must have multiple skills -- manual, functional and performance testing -- as well as soft skills such as leadership, communication and emotional intelligence (EI), so that agile testing becomes a cakewalk.

 

What's the difference between priority and severity of bugs in Software Testing?


Source: one stop software testing

"Priority" is associated with scheduling, and "severity" is associated with standards.

"Priority" means something is afforded or deserves prior attention; a precedence established by order of importance (or urgency).

"Severity" is the state or quality of being severe; severe implies adherence to rigorous standards or high principles and often suggests harshness; severe is marked by or requires strict adherence to rigorous standards or high principles, e.g. a severe code of behavior.

The words priority and severity do come up in bug tracking. A variety of commercial problem tracking/management software tools are available. These tools, with the detailed input of software test engineers, give the team complete information so developers can understand the bug, get an idea of its 'severity', reproduce it and fix it.

The fixes of bugs are based on project 'priorities' and the 'severity' of bugs. The 'severity' of a problem is defined in accordance with the customer's risk assessment and recorded in the selected tracking tool. Buggy software can 'severely' affect schedules, which, in turn, can lead to a reassessment and renegotiation of 'priorities'.

How to write an effective bug report?

"The purpose of a bug report is to let the developers see the faults and failures of the application under test. The bug report explains the gap between the actual result and the expected result, and details how to reproduce the bug."

If the bug report is ineffective or incomplete, programmers often face many problems while fixing the bugs.

A bad bug report can lead to:

1. the bug not being reproducible by developers
2. the bug being fixed, but with incorrect functionality
3. delays in bug fixes
and many more…

Sample of bad bug report:


Bug Title: Error message


When running the application, I get an "Internal Server Error" that says "See the .log file for more details".


Steps to Recreate:

Happens when "Document.create = null". It is not happening when changed to " Document.create".


Expected results:

this error message should not appear when status is "Document.create = null"


Observed results:

See above.


So how do you write effective bug reports? Below are some bug report best practices:


1. Once the bug is found

Check the bug repository to see whether the bug already exists. If it exists, check whether its status is CLOSED or OPEN. If the status is CLOSED, then REOPEN it.
If the bug is not in the repository and it is a new bug, then you need to report it.

2. If the bug is reproducible, then report it. Otherwise, avoid reporting non-reproducible bugs (a best practice).


3. Report a new bug: "Bug description" also known as "Short description" or "Bug Summary":
It should be a small statement that briefly points to the exact problem. Writing a one-line description is an ART. The bug summary helps everyone quickly review outstanding problems. It is the most important part of the bug. It should describe only the problem, not the replication steps.
If it is not clear, managers might defer the bug by mistake, and it also reflects on the individual performance of the tester.

4. The Language of the bug:
The language should be as simple and direct as possible. Don't point a finger at any developer through your words. Remember: the nasty thing is the bug, not the programmer.

The bug report should be easily understandable by developers, fellow testers, managers, or in some cases even the customers.

5. Steps to Reproduce:

- The steps should be in a logical flow. Don't break the flow or skip any step.
- Mention the Pre-requisites clearly.
- Use attachments and screenshots of errors, and annotate the screenshots.
- The details must be spelled out, such as which buttons were pressed and in what order.
Note: Please don't write an essay. Be clear and precise. People do not like to read long paragraphs.

6. Give examples:
Provide them either with actual data or a dummy scenario. This makes it easy for developers to recreate the bug.

7. Provide the Test Case ID, requirement ID, and Specs Reference.


8. Define the proper Severity and Priority.

The impact of the defect should be thoroughly analyzed before setting the severity of the bug report. If you think that your bug should be fixed with a high priority, justify it in the bug report.

This justification should go in the Description section of the bug report.
If the bug is the result of regression from previous builds/versions, raise the alarm. The severity of such a bug may be low, but the priority should typically be high.


9. Read what you wrote. Read the report back to yourself and see if you think it's clear. If you have listed a sequence of actions which should produce the failure, try following them yourself to see if you missed a step.

10. Mention the correct environment, application link, build number, and login/password details (if any).


11. Common issues: Many times the bug is not reproducible by developers (even though the bug report is good). Don't worry; arrange a meeting or walkthrough with them and help them recreate the bug. Sometimes a bug appears one day and then does not appear the next day. In this case, the bug can be assigned back to you. You need to accept it and close the bug with appropriate comments like
"It is working fine now, but previously this problem was appearing. So, will close this bug after verifying in next build."
Of course, you need to close the bug after verifying in the next release/build/patch because it is an inconsistent bug.
Thus a good tester needs to be patient and should always build a defense mechanism, in the form of preserved test data, screenshots, etc., to justify his statements.

12. Don't assume the expected results.
Write the expectations that are mentioned in the test case, requirements documents, FDD or specification documents.

That's all. Practice makes perfect.

Code Coverage -- A White Box Testing Technique

What is code coverage? An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.

As per Wikipedia: "Code coverage is a measure used in software testing. It describes the degree to which the source code of a program has been tested. It is a form of testing that inspects the code directly and is therefore a form of white box testing."

Code coverage measurement simply determines which statements in a body of code have been executed through a test run and which have not. In general, a code coverage system collects information about the running program and then combines that with source information to generate a report on the test suite's code coverage.
Code coverage is part of a feedback loop in the development process. As tests are developed, code coverage highlights aspects of the code which may not be adequately tested and which require additional testing. This loop will continue until coverage meets some specified target.

The main ideas behind coverage:
- Systematically create a list of tasks (the testing requirements)
- Check that each task is covered during the testing

Code coverage is defined in six types as listed below:

• Segment coverage – Each segment of code between control structures is executed at least once.
• Branch Coverage or Node Testing – Each branch in the code is taken in each possible direction at least once. Branch coverage gives a measure of how many assembler branch instructions are associated with each line. In addition, a measure of the number of branches taken/not taken is given.
• Compound Condition Coverage – When there are multiple conditions, you must test not only each direction but also each possible combination of conditions, which is usually done by using a 'truth table'.
• Basis Path Testing – Each independent path through the code is taken in a pre-determined order. This point is discussed further below.

Basis path testing is a white box testing technique first proposed by Tom McCabe. The basis path method enables the tester to derive a logical complexity measure of a procedural design and to use this measure as a guide for defining a basis set of execution paths. Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least once during testing.

• Data Flow Testing (DFT) – In this approach you track specific variables through each possible calculation, thus defining the set of intermediate paths through the code, i.e., those based on each piece of code chosen to be tracked. Even though the paths are considered independent, dependencies across multiple paths are not really tested for by this approach. DFT tends to reflect dependencies, but mainly through sequences of data manipulation. This approach tends to uncover bugs like variables used but not initialized, or declared but not used, and so on.
• Path Testing – Path testing is where all possible paths through the code are defined and covered. This testing is extremely laborious and time consuming.

Path coverage
- The goal is to ensure that all paths through the program are taken.
- In practice there are too many paths to cover them all,
- so coverage is often restricted to paths within a subroutine,
- or to two consecutive branches.
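
To make the difference between branch coverage and compound condition coverage concrete, here is a small, purely hypothetical Java method with one if statement built from two conditions. Two JUnit tests take the branch in both directions; the full truth table behind the compound condition needs four:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class DiscountCoverageTest {

        // Hypothetical unit under test: one branch, built from two conditions.
        static double discount(boolean isMember, double orderTotal) {
            if (isMember && orderTotal > 100.0) {
                return orderTotal * 0.10;
            }
            return 0.0;
        }

        // Branch coverage: take the if statement in both directions.
        @Test
        public void discountBranchTaken() {
            assertEquals(15.0, discount(true, 150.0), 0.001);
        }

        @Test
        public void discountBranchNotTaken() {
            assertEquals(0.0, discount(false, 150.0), 0.001);
        }

        // Compound condition coverage: the remaining rows of the truth table.
        @Test
        public void memberWithSmallOrder() {
            assertEquals(0.0, discount(true, 50.0), 0.001);
        }

        @Test
        public void nonMemberWithSmallOrder() {
            assertEquals(0.0, discount(false, 50.0), 0.001);
        }
    }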

• Loop Testing – In addition to the above measures, there are testing strategies based on loop testing. These strategies relate to testing single loops, concatenated loops, and nested loops. Loops are fairly simple to test unless dependencies exist among loops or between a loop and the code it contains.
This white box technique focuses exclusively on the validity of loop constructs. Four different classes of loops can be defined:

1. simple loops,
2. nested loops,
3. concatenated loops, and
4. unstructured loops.

Simple Loops:
The following tests should be applied to simple loops where n is the maximum number of allowable passes through the loop:

1. skip the loop entirely,
2. only pass once through the loop,
3. m passes through the loop where m < n,
4. n - 1, n, n + 1 passes through the loop.
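
As a minimal sketch of what those cases can look like in practice, assume a trivial array-summing loop and treat n = 5 as the maximum number of passes; both the method and the limit are invented for illustration:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class SimpleLoopTest {

        // Hypothetical unit under test: the loop runs once per array element,
        // so the array length controls the number of passes.
        static int sum(int[] values) {
            int total = 0;
            for (int value : values) {
                total += value;
            }
            return total;
        }

        // Assume n = 5 passes is the maximum of interest for this example.

        @Test
        public void skipTheLoopEntirely() {
            assertEquals(0, sum(new int[] {}));
        }

        @Test
        public void onlyOnePassThroughTheLoop() {
            assertEquals(7, sum(new int[] {7}));
        }

        @Test
        public void mPassesWhereMIsLessThanN() {
            assertEquals(6, sum(new int[] {1, 2, 3}));
        }

        @Test
        public void nMinusOneNAndNPlusOnePasses() {
            assertEquals(4, sum(new int[] {1, 1, 1, 1}));        // n - 1 passes
            assertEquals(5, sum(new int[] {1, 1, 1, 1, 1}));     // n passes
            assertEquals(6, sum(new int[] {1, 1, 1, 1, 1, 1}));  // n + 1 passes
        }
    }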


Nested Loops:
The testing of nested loops cannot simply extend the technique of simple loops since this would result in a geometrically increasing number of test cases. One approach for nested loops:

1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their minimums. Add tests for out-of-range or excluded values.
3. Work outward, conducting tests for the next loop while keeping all other outer loops at minimums and other nested loops to typical values.
4. Continue until all loops have been tested.

Concatenated Loops:
Concatenated loops can be tested as simple loops if each loop is independent of the others. If they are not independent (e.g. the loop counter for one is the loop counter for the other), then the nested approach can be used.

Unstructured Loops:
This type of loop should be redesigned, not tested!

The role of a software test manager

By David W. Johnson

The role of the software test manager or test lead is to effectively lead the testing team. To fulfill this role, the lead must understand the discipline of testing and how to effectively implement a testing process while fulfilling the traditional leadership roles of a manager. What does that mean? The manager must manage and implement or maintain an effective testing process. That involves creating a test infrastructure that supports robust communication and a cost-effective testing framework.

What the test manager is responsible for:

  • Defining and implementing the role testing plays within the organization.
  • Defining the scope of testing within the context of each release/delivery.
  • Deploying and managing the appropriate testing framework to meet the testing mandate.
  • Implementing and evolving appropriate measurements and metrics.
    • To be applied against the product under test.
    • To be applied against the testing team.
  • Planning, deploying and managing the testing effort for any given engagement/release.
  • Managing and growing testing assets required for meeting the testing mandate:
    • Team members
    • Testing tools
    • Testing processes
  • Retaining skilled testing personnel.

The test manager or lead must understand how testing fits into the organizational structure. In other words, he must clearly define its role within the organization. This is often accomplished by crafting a mission statement or a defined testing mandate. Example: "To prevent, detect, record and manage defects within the context of a defined release."

Now it becomes the test lead's job to communicate and implement effective managerial and testing techniques to support this "simple" mandate. Your team's, your peers' (development lead, deployment lead and other leads) and your superiors' expectations need to be set appropriately, given the timeframe of the release and the maturity of the development team and testing team. These expectations are usually defined in terms of functional areas deemed to be in scope or out of scope. Examples of those in scope include creating a new customer profile and updating a customer profile. Examples of those out of scope may include security and backup and recovery.

The definition of scope will change as you move through the various stages of testing. The key thing is to make sure your testing team and the organization as a whole clearly understands what is being tested and what is not being tested for the current release.

The test lead/manager must employ the appropriate testing framework or test architecture to meet the organization's testing needs. Although the testing framework requirements for any given organization are difficult to define, there are several questions the test lead/manager must ask. The answers to those questions, and others, will define the short- and long-term goals of the testing framework.

What is the relationship between product maturity and testing?
Picture a chart of increasing product maturity with five arrows. The first arrow leads to the product being ready for deployment. The second arrow leads to the product being ready to be tested as an integrated or whole system. The third arrow indicates functional testing can be performed against delivered components. The fourth arrow indicates the developer can test the code as an un-integrated unit. And the fifth arrow leads to the product concept being captured and reviewed.

How can the testing organization help prevent defects?
There are really two sides to testing: verification and validation. Unfortunately the meaning of those terms has been defined differently by several governing/regulatory bodies. To put it more succinctly, there are tests that can be performed before the product is constructed or built, and there are tests that can be performed after the product has been constructed.

To prevent defects from occurring, you must test before the product is constructed. There are several methods for doing that. The most powerful and cost-effective method is reviews. Reviews can be either formal, technical reviews or peer reviews. Formal product development life cycles will provide the testing team with useful materials/deliverables for the review process. When properly implemented, any effective development paradigm should supply those deliverables. Here are examples of development models and the points at which they provide information for the review process:

  • Cascade or waterfall
    • Requirements
    • Functional specifications
  • Agile or Extreme Programming
    • High-level requirements
    • Storyboards

Testing needs to be included in this review process, and any defects found need to be recorded and managed.

How and when can the testing organization detect software defects?
The testing organization can detect software defects after the product or some operational segment of it has been delivered. The type of testing to be performed depends on the maturity of the product at the time. The classic hierarchy or sequence of testing is as follows:

  • Design review
  • Unit testing
  • Functional testing
  • System testing
  • User acceptance testing

The testing team should be involved in at least three of those phases: design review, functional testing and system testing.

Functional testing involves the design, implementation and execution of test cases against the functional specification and/or functional requirements for the product. This is where the testing team measures the functional implementation against the product intent using well-formulated test cases and notes any discrepancies as defects (faults). One example is testing to ensure the Web page allows the entry of a new forum member. In that case, you are testing to ensure the Web page functions as an interface.

System testing follows much the same course (design, implement, execute and defect), but the intent or focus is very different. While functional testing focuses on discrete functional requirements, system testing focuses on the flow through the system and the connectivity between related systems. An example of that is testing to ensure the application allows the entry, activation and recovery of a new forum member. In that case, you are testing to ensure the system supports the business. There are several types of system tests; what is required for any given release should be determined by the scope:

  • Security
  • Performance
  • Integration

What is the minimum set of measurements and metrics?
The single most important deliverable the testing team maintains is defects. Defects are arguably the only product the testing team produces that are seen and understood by the project as a whole. This is where the faults against the system are recorded and tracked. At a minimum each defect should contain the following:

  • Defect name/title
  • Defect description: What requirement is not being met?
  • Detailed instructions on how to replicate the defect.
  • Defect severity.
  • Impacted functional area.
  • Defect author.
  • Status (open, in progress, fixed, closed)

This will then provide the data for a minimal set of metrics:

  • Number of defects raised
  • Distribution of defects in terms of severity
  • Distribution of defects in terms of functional area

From this baseline the measurements and metrics a testing organization maintains are dependent on its maturity and mission statement. The Software Engineering Institute (SEI) Process Maturity Levels apply to testing as much as they do to any software engineering discipline:

  1. Initial: (Anarchy) Unpredictable and poorly controlled.
  2. Repeatable: (Folklore) Repeat previously mastered tasks.
  3. Defined: (Standards) Process characterized, fairly well understood.
  4. Managed: (Measurement) Process measured and controlled.
  5. Optimizing: (Optimization) Focus on process improvement.

How disciplined the testing organization needs to become and what measurements and metrics are required depend on a cost/benefit analysis conducted by the test lead/manager. What makes sense in terms of the stated goals and previous performance of the testing organization?

How to grow and maintain a testing organization?
Managing or leading a testing team is probably one of the most challenging positions in IT. The team is usually understaffed and lacks appropriate tooling and financing. Deadlines don't move, but the testing phase is continually being pressured by product delays. Motivation and retention of key testing personnel under these conditions is critical. How do you accomplish this seemingly impossible task? I can only go by my personal experience, both as a lead and as a team member:

  • If the timelines are impacted, modify the test plan appropriately in terms of scope.
  • Clearly communicate the situation to the testing team and project management.
  • Keep clear lines of communication with development and project management.
  • Whenever possible sell, sell, sell the importance and contributions of the testing team.
  • Ensure the testing organization has clearly defined roles for each member of the team and a well-defined career path.
  • Measure and communicate the testing team's return on investment. If the detected defect would have reached the field, what would have been the cost?
  • Explain testing expenditures in terms of investment (ROI) not cost.
  • Finally, never lose your cool.

How to evaluate a testing tool


I recently started testing an enterprise-grade web application developed in Java, which brought me round to that ever-important question: how do you evaluate testing tools?

After putting this question to many senior candidates during interviews, I found that most people have not yet given it much thought. Here are some criteria for evaluating a tool or language before you start developing automation frameworks.

1. NEVER choose a tool or language for developing your automation framework simply because it's the only thing you know or have worked with in the past. As I like to say, "Don't try to use a hammer on a screw."

2. Begin by determining what kinds of user interfaces there are to test and what kinds of software components your framework would have to interface with. These could range from GUIs such as plain HTML, Flex and Win32 apps to components like Java/C APIs, databases, and third-party services such as SSH support. Ascertain whether your target tool/language can "talk" to these interfaces comfortably.

3. Determine the amount of support available for development using these languages/tools in the domain areas required. Official and unofficial forums, the extent of search engine indexing, and an active developer community are a few indicators.

4. Determine the ease of developing frameworks using these as compared with other choices. This is where a couple of POCs (proofs of concept) can come in handy.

5. Consider how maintainable and scalable a framework developed on this will be, since testing frameworks are highly dynamic pieces of code that must keep up with changing product code bases.

6. Consider how lightweight the tool is, since you do NOT want a tool so heavy that it impacts the performance of your test runs.

7. Consider how much support it has for reporting and collaboration. You might want to generate reports in various formats, ranging from reports that are "mailable" to your stakeholders to output that interfaces with your defect management and test case management software.

How to learn white box testing


Q- If I have been a QA tester and don't have white box testing experience, where do I learn the skill?

 

A- White box testing, also known as glass box or clear box testing, is testing that takes place where the tester has working knowledge of the code. In the AST BBST Foundations course, glass box testing concerns are illustrated with the question, "Does this code do what the programmer expects?" This is in contrast to the black box concern of, "Does this product fail to do what users -- human and software -- expect?"

From a learning perspective, this can mean a number of things. Testers working at this level are often comfortable with programming, hardware, networks, databases and application servers. Depending on the software, specialized knowledge of certain programming techniques or specific technologies may be required. Examples could include custom protocols, effective use of connection pooling, and language-specific test frameworks, among others.

Here's what I normally recommend for people getting started down this path:

  • Learn the basics of computer science: You can do this through schooling or through some well selected books with a good self-study ethic. I recommend looking for entry-level courses and books on computer organization, networking, databases and file systems, data structures, assemblers/compilers/interpreters, algorithms, and discrete mathematics.
  • Get comfortable working in a language: Today, I favor Ruby. But I taught myself Pascal as a teenager, learned C/C++ in college, and used Java in my first job as a programmer. It doesn't matter what language you learn -- just get fluent. You need to be comfortable enough that you can turn out simple programs rather easily, and you need to be able to understand and follow more complicated code -- even if you couldn't write it yourself.
  • Practice writing unit tests, stubs, and harnesses: As you learn a language, you'll want to be sure you learn how to do unit testing in that language as you progress. The reason isn't so you can write better code yourself -- that's a happy side effect -- but so you get comfortable looking at and editing those types of tests. Similarly, as you start to practice testing, you're more than likely going to encounter situations where you'll want to mock out part of your test. This is a common pastime for white box testers, as they often use custom stubs and harnesses to mock out parts of the component they're testing or to add more visibility into the results of their testing (see the sketch after this list).
  • Download and play with tools: This type of testing usually involves some tooling help if you plan on getting anywhere fast. Anything from your common runtime analysis tools (memory profilers, performance monitors, code coverage tools, etc.) to simple static analysis tools can help you learn the lingo and can get you thinking about common problems in white box testing. There are several good open source sites where you can find tools in this area; I currently favor opensourcetesting.org.
  • Learn about security testing: Get to know your buddies at the Open Web Application Security Project (OWASP). You can think of security testing as a capstone project for white box testers. Not only is it a practical application of the skills and tactics that make white box testers successful, but it pulls together everything else you're learning. If you work through the OWASP materials, you'll see that you need to understand a lot of the computer science material. You'll need to be able to read code. You'll need to use tools -- lots of tools -- and it will be helpful to be able to write your own simple scripts. I find that OWASP makes all of their material accessible to the beginner or near beginner. They also have local groups -- perhaps in your area -- where you can get some peer support for your learning.
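
To illustrate the stub idea mentioned in the list above, here is a small, hypothetical JUnit example. The class names and scenario are invented; the pattern -- replacing a slow or unavailable collaborator with a hand-rolled test double -- is what matters:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class GreetingServiceTest {

        // Collaborator the unit under test depends on (in real life this
        // might wrap a database or a remote service).
        interface UserRepository {
            String findName(int userId);
        }

        // Hypothetical unit under test.
        static class GreetingService {
            private final UserRepository repository;

            GreetingService(UserRepository repository) {
                this.repository = repository;
            }

            String greet(int userId) {
                return "Hello, " + repository.findName(userId) + "!";
            }
        }

        @Test
        public void greetsUserByName() {
            // Hand-rolled stub: gives the test full control over the data
            // and removes the need for the real repository.
            UserRepository stub = new UserRepository() {
                @Override
                public String findName(int userId) {
                    return "Ada";
                }
            };

            GreetingService service = new GreetingService(stub);

            assertEquals("Hello, Ada!", service.greet(42));
        }
    }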

Testers: Time to gear up for mobile software testing


The economy has many businesses retrenching or in a holding pattern -- but mobile applications designed to be accessed via smartphones or personal digital assistants (PDAs) are poised to be one of the next big things, according to many experts. If so, what impact will that have on enterprise quality assurance (QA) and testing organizations?

"Good testing practices apply regardless of the platform," said author and testing consultant Judy McKay, but she and other testing pros say mobile apps will also pose some unique challenges.

For starters, "the mobile phone is a frontier-based mentality," said William Coleman, vice president of business development at LogiGear Corp., a software testing services company in San Mateo, Calif. "There are four or five operating systems all competing for supremacy," he said, and "very loose standards … that many phone manufacturers and app developers circumvent."

But despite the lack of standards -- and the down economy -- it appears mobile app development is forging ahead. In a study released in January, Evans Data Corp. found that 94% of corporate developers expect the development of wireless enterprise applications to either increase (47.6%) or stay the same (46.4%) this year, with the Asia-Pacific region leading the growth.

For the business user, "we're seeing enterprise apps being developed on enterprise standard OSes like WinMo [Windows Mobile] and RIM, but we don't see a rush to the others just yet. Enterprises trust Microsoft and RIM," said Coleman. According to the Evans Data study, 40% more developers plan to target Windows Mobile than Apple iPhone, and 46% more plan to target .NET (compact framework) than Google's Android platform.

Not only are there multiple operating systems to take into account, but testers will also have to deal with multiple versions of an operating system, various hardware devices and form factors, and the strength of a carrier's network connections and services.

"The amount of permutations does create a significant problem for testing and time to market," said Doron Reuveni, CEO and co-founder of uTest, a Software as a Service (SaaS) marketplace for software application testing based in Southborough, Mass. "The issues in mobile apps are quite significant compared to testing Web and desktop apps."

Andrew Reshefsky is a uTest tester who has worked on mobile email encryption programs for corporate clients. A big challenge, he said, is that users are on different networks, and "you have to figure out whether the issue is because of the network or the software."

Another issue is the device itself. "Every phone has an OS; some phones can update the OS, some can't. If the problem is with the phone you have to figure that out, because software developers don't like to fix problems that don't exist [with the software]," he said.

Testers also face the challenge of shorter development cycles. "The time to market is extremely short, so usability testing is a big thing," McKay said. According to Evans Data, 40% of wireless development projects take three to six months to complete, and 60% are completed in less than six months.

McKay added that there will be "more emphasis on performance, connectivity and upgradeability."

"When you have a captive user, they're stuck with the performance you give them, but with mobile devices they expect very fast performance, but you don't have a guaranteed level of connectivity," she said. If users are unhappy with performance, "there's a higher risk that the app will be completely rejected."

uTest's Reuveni said their customers are looking for two levels of testing. First, "they want some level of certification that this works on this variety of carriers, networks, phones, etc., with this version of the software, so there's a degree of comfort when they release it," he said.

Second, he said, is to look for flaws. "In mobile there's a lot of ad hoc exploratory testing, partly due to the apps being developed so quickly, but also the customer wants real user behavior, so they're looking for a combination of flaws and feedback from real users that understand and have seen a lot of mobile apps."

Test automation will be key

To meet the challenges of testing mobile apps, automation will be key, McKay said. "Any time you do performance load you've got to use tools," she said. LogiGear's Coleman added, "As complexity [increases], the requirements for testing get more demanding. The only way to capture that is through automation."

LogiGear specializes in test automation, and Coleman said they are working in the mobile area. "The big hole in this space is automation for mobile phones," he said.

It's a nascent area, said Manish Mathuria, founder and chief technology officer of InfoStretch Corp., an outsourcer of quality assurance, test automation, software development and mobile testing services. "One reason is the form factors of the different devices in use vary so widely; for an automation tool to cover all of them reliably is not an easy task," he said.

In terms of adopting automation, Mathuria said, QA teams have a long way to go as well: "Enterprise QA teams are sometimes barely equipped to manage automation well on the desktop side."

Mathuria said that today the great majority of mobile apps his company sees are consumer-facing, with games being the biggest area, followed by utilities, but everyone interviewed for this article agreed that the enterprise is likely to follow. "The smartphone is the new computer abstraction," Coleman said. "Ten years from now we won't be using laptops."

Are testers prepared for the mobile revolution? "Testers tend to be behind the development curve," McKay said. "Developers are out experimenting with new things, but testers may be deeply involved in the last project."

Her advice? "Keep an eye on what developers are doing and thinking, and the ops people. [Testers] can no longer pretend they don't need to know about performance of load and connectivity. There is a lot of expertise that will have to be developed, and this will be the time to get better."


Cloud computing creates software testing challenges

The "cloud" promises to create new opportunities for enterprise developers as well as for suppliers offering services and tools for this new paradigm. For testing organizations, there will be both new challenges and new tools for answering what Soasta CEO Tom Lounibos calls the one key question: Can I go live?






"Testing all the layers — from your application to the cloud service provider — is something testers will have to become efficient in," said Vik Chaudhary, vice president of product management and corporate development at Keynote Systems Inc. in San Mateo, Calif.
According to market research firm IDC, spending on IT cloud services is expected to grow nearly threefold, to $42 billion by 2012. Cloud computing will also account for 25% of IT spending growth in 2012 and nearly a third of the IT spending growth in 2013, IDC projected.
IDC makes a distinction between "cloud services" and "cloud computing." Cloud services, according to the market research firm, are "consumer and business products, services, and solutions that are delivered and consumed in real-time over the Internet." In contrast, cloud computing as defined by IDC is the infrastructure or "stack" for development and deployment that enables the "real-time delivery of products, services, and solutions over the Internet."
Chaudhary explains the shift: "Enterprises like Schwab, Travelocity, etc. have been deploying their own data centers for years. The challenge was to manage highly scalable applications and how to ensure the best experience. Legions of people were employed by these companies to monitor/test/add servers, etc."
What's happening more recently with new cloud infrastructure like Google App Engine, he said, is that organizations can run their applications on Google's infrastructure.
"That means the bar to deploy applications in the cloud is so much lower. You don't have to have data centers or ops teams; you can focus on building the application and the functionality. It's a paradigm shift in application development," he said.
It's a shift for the tester, too. For example, Chaudhary said, "If you build an application and you use the BlackBerry to access a manufacturing application hosted by a cloud company like Salesforce, Salesforce does a certain amount of testing, to ensure the server is available, etc. But when it comes to the application itself, does it run on two phones or 50 phones? Do you have a long page to load?"
In addition, the cloud hosting company may use a third-party service to speed performance. "The impact on testing is that the end-user experience is being influenced by my company, by the cloud provider, and all other parties involved," he said.
Reducing testing costs
While Lounibos said Mountain View, Calif.-based Soasta Inc. has a growing group of customers that don't own servers and do everything in the cloud, "the majority are still more traditional; they use managed service providers and are dabbling in the cloud." However, he said, cloud-based testing is a way for organizations to learn the cloud and reduce the costs of testing at the same time.
"Traditional customers see testing as a money pit. They're looking for ways to reduce costs. The [main] argument for cloud computing for the enterprise is, is it reliable enough," he said. "This is not so for testing. Testing [in the cloud] just replicates the real world; it doesn't have the issues associated with production, but it has the benefits of cost reduction."
With cloud computing, Lounibos said, testers "have access, availability, and affordability to enormous amounts of computing power, which is what's needed in testing. The idea of being able to provision 125 servers in 5 to 8 minutes and only pay for the hours you test is so compelling. You no longer have to have huge test labs for Web applications."
Soasta's CloudTest, for example, is available as an on-demand virtual test lab in the cloud or as an appliance. It supports load, performance, functional, and Web UI/Ajax testing. According to Lounibos, "We were built on top of the cloud for the cloud."
For its part, Keynote offers KITE (Keynote Internet Testing Environment) for testing and analyzing the performance of Web applications across the Internet cloud. KITE offers instant testing from the desktop as well as from a variety of geographic locations.
For Internet applications in particular, Chaudhary said performance testing needs to move to the cloud.
"When it comes to performance, you're not depending just on the application but on all the providers [involved]. And do you [the user] have DSL or a dialup line, or a mobile device? Performance testing by nature is environmental," he said.
For mobile applications, Chaudhary said both performance and functional testing should move to the cloud.
"For mobile applications, the functional testing also depends on the providers. Say you've got a screen for login. The size of the page and the screen on phone and the provider can all affect if the application works," he said.
By testing in the cloud, Chaudhary added, organizations can more easily and cost-effectively test for hundreds of devices.
With applications that run on the cloud, "you need to test network performance, server performance, database performance, software performance on the application, and how it's cached in the client," said Dennis Drogseth, a vice president at market research company Enterprise Management Associates Inc., based in Boulder, Colo. "If you have a single application that runs in one place, you can test it geographically in one place. What you have with an Amazon or Facebook, for example, is all kinds of pieces coming in from different geographies, and you can't know ahead of time where they'll be. It's definitely more complicated than running a test script on a single server-based application."
The challenge is to run tests across all the diverse components and geographies to identify problems, he said, and organizations that develop an application "typically don't have access to those types of environments. So [a company like] Keynote is giving those testers a working environment where they leverage the Internet cloud and all the vagaries and look at real networks and desktops."
New testing tools needed
Drogseth said new types of testing tools will be needed. "You can't do cloud computing with application development testing tools for a LAN or a single server. You need tools to allow you to understand the network and desktop implications and all the pieces. You need to bring the network to the developer."
"I suspect over the next five years every testing vendor will try to come to the cloud. I think we will have a new generation of testing companies," said Lounibos. "This [cloud computing] is a big market coming down road; it's just the way we'll consume services."

Software Testing Advice for Novice Testers

Novice testers have many questions about software testing and the actual work that they are going to perform. As novice testers, you should be aware of certain facts in the software testing profession. The tips below will certainly help to advance you in your software-testing career. These ‘testing truths’ are applicable to and helpful for experienced testing professionals as well. Apply each and every testing truth mentioned below in your career and you will never regret what you do.
Know Your Application
Don’t start testing without understanding the requirements. If you test without knowledge of the requirements, you will not be able to determine if a program is functioning as designed and you will not be able to tell if required functionality is missing. Clear knowledge of requirements, before starting testing, is a must for any tester.
Know Your Domain
As I have said many times, you should acquire a thorough knowledge of the domain you are working in. Knowing the domain will help you suggest good bug solutions. Your test manager will appreciate your suggestions if you have valid points to make. Don't stop at just logging the bug; provide solutions as well. Good domain knowledge will also help you design better test cases with maximum test coverage, so seek out further guidance on acquiring domain knowledge.
No Assumptions In Testing
Don’t start testing with the assumption that there will be no errors. As a tester, you should always be looking for errors.
Learn New Technologies
No doubt, old testing techniques still play a vital role in day-to-day testing, but try to introduce new testing procedures that work for you. Don’t rely on book knowledge. Be practical. Your new testing ideas may work amazingly for you.
You Can’t Guarantee a Bug Free Application
No matter how much testing you perform, you can't guarantee a 100% bug-free application. There are some constraints that may force your team to advance a product to the next level while knowing that some common or low-priority issues remain. Try to uncover as many bugs as you can, but prioritize your efforts on basic and crucial functions. Put your best effort into doing good work.
Think Like An End User
This is my top piece of advice. Don't think only like a technical person. Think like customers or end users. Also, always think beyond your end users. Test your application as an end user. Think about how an end user will use your application. Technical plus end-user thinking will help ensure that your application is user friendly and will pass acceptance tests easily. This was the first piece of advice my test manager gave me when I was a novice tester.
100% Test Coverage Is Not Possible
Don’t obsess about 100% test coverage. There are millions of inputs and test combinations that are simply impossible to cover. Use techniques like boundary value analysis and equivalence partitioning testing to limit your test cases to manageable sizes.
Build Good Relations With Developers
As a tester, you communicate with many other team members, especially developers. There are many situations where the tester and developer may not agree on certain points. It will take skill to handle such situations without harming your relationship with the developer. If you are wrong, admit it. If you are right, be diplomatic. Don't take it personally. After all, it is a profession, and you both want a good product.
Learn From Mistakes
As a novice, you will make mistakes. If you don't make mistakes, you are not testing hard enough! You will learn things as you gain experience. Use these mistakes as learning experiences. Try not to repeat the same mistakes. It hurts when the client files a bug in an application you tested. It is definitely an embarrassing situation, and it cannot always be avoided. However, don't beat yourself up. Find the root cause of the failure. Try to find out why you didn't find that bug, and avoid the same mistake in the future. If required, change some of the testing procedures you are following.
Don’t Underestimate Yourself if Some of Your bugs Are Not Fixed
Some testers assume that all the bugs they log should get fixed. That is a fair expectation up to a point, but you must be flexible according to the situation. Not all bugs will be fixed. Management can defer bugs to be fixed later because some bugs have low priority or low severity, or there is no time to fix them. Over time you will also learn which bugs can be deferred until the next release.