Friday, October 2, 2009

Uncovering myths about Globalization testing - Context-driven planning

Myth 18: There is one standard way of planning Globalization testing that is applicable to all contexts.

I have been looking at alternative approaches to how different software organizations plan for Globalization testing. It's quite an intriguing topic, because the way different software product organizations strategize Globalization testing is quite unique, and yet yields successful results within their own sphere of influence. In my quest to gain knowledge in this area, I came across an interesting presentation at the URL below.

www.localisation.ie/resources/conferences/2007/presentations/TCallanan/Risk-based-L10N.pps

I had an interesting conversation with the author of this presentation, Mr. Tim Callanan, and below is an excerpt of the conversation I had with him.
[Please go through the above PPT to appreciate the questions and responses below.]

A question regarding test case prioritization:
1. The High Risk (=1) category shows the risk as "New Product Feature", and this feature is also in the High Importance category (=1). Suppose the core team has defined 1000 test cases to test this new feature:

a. Will the Globalization (G11N) team pick up all 1000 test cases in its execution?
[Tim]
At the start of a project, we will define a number of core modules that we must test. These core modules again will be ranked by importance. So, for example, Product A would have Feature A and Feature B as critical functions, and this will be reflected in the importance value associated with each function. Then, say, if 1000 test cases were associated with this function, the G11N team will quickly review all 1000 test cases and will then sample some of those test cases from a G11N viewpoint, to ensure that the most critical of these (based on importance and risk to determine priority) will be covered.
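The sampling Tim describes can be sketched as a simple scoring exercise. This is a minimal illustration of the idea, not Tim's actual tooling; the test case IDs, the budget, and the convention that 1 means highest importance/risk are all assumptions:

```python
# Minimal sketch of risk-based test case sampling (illustrative only).
# Assumed convention, matching the presentation's scales: 1 = highest
# importance/risk, so a LOWER combined score means a HIGHER priority.

def prioritize(test_cases, budget):
    """Rank test cases by importance * risk; keep only `budget` of them."""
    ranked = sorted(test_cases, key=lambda tc: tc["importance"] * tc["risk"])
    return ranked[:budget]

# Hypothetical core suite reviewed by the G11N team.
core_suite = [
    {"id": "TC-001", "importance": 1, "risk": 1},  # new product feature
    {"id": "TC-002", "importance": 1, "risk": 3},
    {"id": "TC-003", "importance": 2, "risk": 1},
    {"id": "TC-004", "importance": 3, "risk": 3},  # low priority for G11N
]

# G11N reviews the whole suite but executes only the riskiest subset.
g11n_sample = prioritize(core_suite, budget=2)
print([tc["id"] for tc in g11n_sample])  # ['TC-001', 'TC-003']
```

In a real project the scores would come from historical bug analysis and customer impact, as Tim notes below, rather than being assigned by hand.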


b. If the G11N team decides not to use all 1000 test cases (as execution would be costly), what factors would be considered in cutting down the number of test cases here?
[Tim]
Again, this will be based upon importance and risk, using historical bug analysis, newly written areas of code, potential customer impact, etc. We reviewed all core test cases for all products and found that a lot of them did not directly affect G11N, the reason being that core testing involves a lot of performance, compatibility, hardware, etc. testing that may not affect the Globalized product.

c. What kind of test cases, other than these 1000 or a subset of the 1000 (depending on approach a. or b. above), would be created from a G11N perspective to test the particular feature?
[Tim]
The main areas we would need to look at would be tests that would affect G11N but would not be covered by core. This would involve particular 3rd-party applications that are only available in the regions; e.g. Japan uses a lot of e-mail clients that are just not sold outside Japan, and compatibility with these 3rd-party apps would be critical for L10N. Also some types of I18N testing, e.g. scanning a DBCS or extended-character directory, or compatibility with a particular piece of hardware that is only sold in the regions. Also the insertion of non-EN characters, and lots of UI issues such as character corruption, clipped strings, etc.
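The character-corruption checks Tim mentions can be approximated in code: feed non-EN input (including double-byte characters) through the product's text path and verify it survives unchanged. A minimal sketch, with the assumption that UTF-8 is the product's intended encoding and that cp1252 stands in for a legacy single-byte code path:

```python
# Sketch of an I18N corruption check: non-EN text must survive a
# round trip through the product's storage/transport encoding.
# Sample strings and encodings are assumptions for illustration.

SAMPLES = [
    "日本語テキスト",      # Japanese (double-byte characters)
    "Ünïcôdé strîng",      # extended Latin characters
    "Ελληνικά",            # Greek
]

def survives_round_trip(text, encoding):
    """True if the text encodes and decodes without loss or error."""
    try:
        return text.encode(encoding).decode(encoding) == text
    except UnicodeError:
        return False

# UTF-8 preserves every sample; a legacy single-byte code page cannot
# represent the Japanese or Greek text, which is exactly the kind of
# corruption this category of test is meant to catch early.
utf8_ok = all(survives_round_trip(s, "utf-8") for s in SAMPLES)
legacy_ok = all(survives_round_trip(s, "cp1252") for s in SAMPLES)
print(utf8_ok, legacy_ok)  # True False
```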

The approach defined in your presentation seems to be very effective. What kind of deliverables are expected of the core development team to ensure that risk-based L10N testing works?
[Tim]
The most important deliverable is that the core team adheres to what was agreed at the milestones of the Software Development Lifecycle. Including G11N criteria at each stage of the SDLC, and ensuring that the core team delivers on these, will ensure that nothing is overlooked that could potentially make any piece of software 'unlocalisable', or that would result in major L10N or I18N defects being found at a very late stage in the process, which would delay the I18N releases.

Deliverables like:
- The code base is the same for all single-byte languages.
- The code base is the same for all double-byte languages.
- etc.
[Tim]
Yes, there must be only one code base for all language versions. So here I18N considerations are critical. For this reason, I18N testing is included as a critical function of either the core or L10N teams (depending on what can be agreed between the two groups), to be executed as early as possible in the SDLC. I have seen in the past, in other companies, how the code base was split and a separate code base was used to address I18N defects found late in the project, to ensure that the En-US version could ship on time. It turned out to be a real nightmare trying to maintain two separate code bases and merge the code later in the project. This is definitely something to be avoided if at all possible.
The key, I think, to ensuring a well-localised and internationally enabled product is 'functionality parity' between software products running on an EN OS and all other localised OSs. In other words, the EN product on an En-US OS becomes your baseline, and every language version of the software running on any non-EN OS must function in exactly the same way as the English product. If we don't have this, we have big problems.

It's a well-known fact that most of the tests that a Globalization team runs are derived from English test cases. Another fact is that it is not efficient to run all test cases on all languages.
Is there a standard that you know of, or a norm that you follow, that defines:
- What percentage of English test cases should be run on the localized languages?
[Tim]
To answer your question, we usually base this upon risk and importance; this was the focus of the LRC paper that I wrote a couple of years back:
http://www.localisation.ie/resources/conferences/2007/programme.htm. It is hard to say exactly what percentage of test cases we would cover, as this would be based upon product maturity, feature set, the degree of internationalization testing already conducted, the level of pseudo-translation testing already undertaken, etc.
Based on projects that we have done in the past, L10N would involve testing roughly 30% of the cases that are conducted by the core team, but this is a very general estimate, and we would use the risk-based testing method to ensure that all necessary tests are carried out. If this comes out at 30%, then that would be our coverage, but it will depend on the risk that is assessed.
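The pseudo-translation testing Tim refers to can be illustrated with a small sketch. The idea is to replace every English string with an expanded, accented, bracketed version before real localization begins, so that hard-coded strings, clipped text, and character corruption surface early. The marker characters, bracket convention, and expansion factor below are assumptions, not any specific tool's behavior:

```python
# Minimal pseudo-translation sketch (illustrative, not a real L10N tool).
# Accented substitutions expose character-handling bugs; padding mimics
# the text expansion seen in real translations; brackets make hard-coded
# (untranslated) strings stand out visually in the UI.

ACCENT_MAP = str.maketrans("aeiouAEIOU", "àéîöüÀÉÎÖÜ")

def pseudo_translate(text, expansion=0.3):
    """Return an accented, padded, bracketed version of an EN string."""
    padded = text + "·" * max(1, int(len(text) * expansion))
    return "[" + padded.translate(ACCENT_MAP) + "]"

print(pseudo_translate("Save file"))  # [Sàvé fîlé··]
```

Running the English build with a pseudo-translated resource file in this way lets much of the I18N risk be retired before any of the 30%-style L10N passes begin.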

