Sunday, November 22, 2015

Some Valuable Learnings from the Amazon Fire Phone Failure

This is a fairly recent failure. For more reasons than one, I was keen to explore this case further and figure out why the Fire Phone failed to make an impact (a significant number of Fire Phones remained unsold). Amazon, under Jeff Bezos, has made some big, bold bets in the past, and at least from the outside, it appears to have a culture in which failing is not treated as a sin. This case is even more interesting considering that Amazon has had a reasonable run of success with the Fire tablet. Agreed, a phone is a different animal from a tablet when it comes to the nuances of development, usage and adoption, but that success (along with Amazon's past success with the Kindle and its variants) does tell us that Amazon was no novice at building hardware products. Read on to know more-

Customer focus gone wrong:

If I try to think about why an organization like Amazon would want to venture into building a phone, one reason stands out: Amazon wanting to own the starting point of the customer experience for accessing its online retail services. Apple, for instance, takes 30 percent of all revenue generated through apps, with 70 percent going to the app's publisher. It means that even though Amazon may be selling e-books through the Kindle app on iOS, it won't get its full share. Controlling the starting point of the buying experience (i.e. owning the smartphone) has other benefits too: Amazon could have built its own services on top to help sell more stuff. So venturing into phones wasn't a bad idea in the first place. Yet much of the analysis of the Fire Phone's failure points to a loss of customer focus. This finding is intriguing given that Amazon is known for its customer centricity. Below are some useful insights from a Fast Company article-

 "Bezos’s guiding principle for Amazon has always been to start with the needs and desires of the customer and work backward. But when it came to the Fire Phone, that customer apparently became Jeff Bezos. He envisioned a list of whiz-bang features,....Perhaps most compelling was Dynamic Perspective, which uses cameras to track a user’s head and adjust the display to his or her vantage point, making the on-screen image appear three-dimensional."

 And team members simply could not imagine truly useful applications for Dynamic Perspective. As far as anyone could tell, Bezos was in search of the Fire Phone’s version of Siri, a signature feature that could make the device a blockbuster. But what was the point, they wondered, beyond some fun gaming interactions and flashy 3-D lock screens. "In meetings, all Jeff talked about was, ‘3-D, 3-D, 3-D!’ He had this childlike excitement about the feature and no one could understand why," recalls a former engineering head who worked solely on Dynamic Perspective for years. "We poured surreal amounts of money into it, yet we all thought it had no value for the customer, which was the biggest irony. Whenever anyone asked why we were doing this, the answer was, ‘Because Jeff wants it.’ No one thought the feature justified the cost to the project. No one. Absolutely no one."

If this is to be believed, Jeff Bezos vigorously backed features that didn't make much logical sense to the team. The apparent lesson is that the CEO shouldn't necessarily become the customer himself or herself; the CEO should rather take the role of the best representative of the business.

Eric Schmidt, the former CEO of Google, brings an interesting perspective in his book "How Google Works". He doesn't mince words when he says "Don't listen to the HIPPO". Who is a HiPPO (the Highest Paid Person's Opinion)? Read the excerpt from his book to appreciate this point more-
From the book- "How Google Works"

Lack of an app ecosystem:

With platform-style thinking at the forefront, it is imperative that a product be able to build an ecosystem of developers (and consumers) who can extend the product's capabilities further and make it more useful for consumers. As this Time article points out,

Amazon’s app store has about 240,000 apps, compared to more than 1 million in the Google Play store.

As Apple's success has proven, it is imperative for smartphone companies to have an app ecosystem that helps their users. Conversely, as the not-so-wide adoption of the Windows Phone has shown, apps, or the lack of them, really are one of the key factors. Microsoft, since its leadership change, has started taking corrective action in this regard by announcing the iOS and Android bridge projects for Windows. These projects, if successful, would make it easy for developers to port Android and iOS apps to Windows. To me, releasing a smartphone without adequate app support is a fundamental flaw. It would have been acceptable in 2007 (when the iPhone launched), but not now, when the precedent has already been set by so many players in the market.

Micro-management- the root of all evils? Nah.

There are also some references to Bezos micromanaging the entire project, which may or may not be true. Even if it were true, I would rather look at this with a balanced mind. What I mean is, if we choose to blame Bezos' micro-management as a reason for the Fire Phone's failure, we should also be open to appreciating where his apparent micro-management worked and delivered a blockbuster product. My take on micro-management is that it is not as bad as it is often made out to be. Micro-management, like empowerment, is a management strategy, necessary in situations where there is a need to monitor things closely. It is probably bad if it is an inherent trait of a person. It isn't bad if it is used only in situations where its use is justified.

Begin with the end in mind:

Ok, this one is not a negative learning. One of the things I read about how the Amazon Fire Phone development effort started was particularly interesting. It said-

Like every product created at Amazon, the Fire Phone began on a piece of paper. Or rather, several typed, single-spaced pieces of paper that contained a mock-up of a press release for the product that the company hoped to launch some day. Bezos requires employees to write these pretend press releases before work begins on a new initiative. The point is to help them refine their ideas and distill their goals with the customer in mind.

Visualizing the impact that a product is bound to make even before its development has commenced didn't seem like an everyday idea to me. This approach has a certain freshness at a time when many product organizations start with a formal product requirements document. With this end clear, reviewed and approved, it can become a sort of oracle against which product-specific decisions are made. Another notable aspect is Bezos' act of involving his employees in writing these press releases. With all the flak Bezos has been getting lately regarding Amazon's work culture, this seems like a perfect counter-argument.

This point also shows that when a product fails, it doesn't mean that everything that happened during its inception was wrong.

What's your take on these points? Please do share in the comments.



Monday, November 16, 2015

Does the Intel Pentium Bug of the 1990s Still Hold Any Lessons for Us?

Information Technology, at its core, is a forward-oriented profession. What I mean by this assertion is that, as a general observation, the rate of change this profession deals with is unprecedented. In my career span so far, I have seen many paradigm shifts, including the rise, fall and re-rise of Microsoft, the birth and dominance of Google, and a gigantic comeback by Apple, all of which eventually impact our lives as professionals and as consumers of technology. Amid such acute dynamism, I believe it is very easy to lose the sense of history of our profession. Having a good sense of the history of one's chosen profession often helps us connect the dots and better fathom the current events we experience. History connects things through time, and I consider knowledge of our profession's history important in shaping its future. Most of today's methodologies and good practices evolved by improving on what didn't work in the past. At the very least, a sense of history gives us a sense of connection with the past that we should not lose.

I was recently reading the book "Only the Paranoid Survive", the first-person account of Andy Grove (former CEO of Intel) on how he dealt with strategic inflection points, i.e. the times in the life of a business when its fundamentals are about to change. One narration in the book talks about the Pentium chip bug, and it goes as follows (reproduced as it appears in the book)-

"Several weeks earlier, some of our employees had found a string of comments on the Internet forum where people interested in Intel products congregate. The comments were under the headings like, "Bug in the Pentium FPU." (FPU stands for floating point unit, the part of the chip that does heavy-duty math.) They were triggered by the observation of a math professor that something wasn't quite right with the mathematical capabilities of the pentium chip. The professor reported that he had encountered a division error while studying some complex math problem.
We were already familiar with this problem, having encountered it several months earlier. It was due to a minor design error on the chip, which caused a rounding error in division once every nine billion times. At first, we were very concerned about this, so we mounted a major study to try to understand what once every nine billion divisions would mean. We found the results reassuring. For instance, they meant that an average spreadsheet user would run into the problem only once every 27,000 years of spreadsheet use."
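Grove's "once every 27,000 years" figure can be sanity-checked with quick arithmetic. The divisions-per-day workload below is my own assumption, chosen only to show how such an estimate is derived; Intel's actual modeling assumptions were surely more detailed.

```python
# Back-of-the-envelope check of the "once every 27,000 years" estimate.
# DIVISIONS_PER_DAY is an assumed workload, not Intel's published figure.
ERROR_RATE = 1 / 9_000_000_000      # one rounding error per ~9 billion divisions
DIVISIONS_PER_DAY = 1_000           # assumed daily divisions by a spreadsheet user

divisions_per_year = DIVISIONS_PER_DAY * 365
years_between_errors = 1 / (ERROR_RATE * divisions_per_year)
print(f"~{years_between_errors:,.0f} years between errors")
```

With these assumed inputs the script prints roughly 24,658 years, in the same ballpark as the book's figure; a heavier assumed workload shrinks the number proportionally.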

Andy spends quite a few pages later in the book explaining why this bug was critical and how it turned his thinking around something peculiar happening in the world around him. Let me summarize that point of view in the next few points and explain its relevance to today's world-

1. The beginning of social media as a force to reckon with:
Internet Forum in 1990s
We pretty much take social media for granted these days. It generates a lot of data and opinions every passing second, which is very valuable to those who seek information from it. This is especially true for anyone seeking feedback on a newly launched product or service. Consumers, on the other hand, often provide feedback on social media without being asked. It more often turns out to be a medium for venting about imperfections and bad experiences. This is now. But in the 1990s, when the Pentium FPU bug occurred, the Internet was still in its infancy and people had just started using Internet forums to share opinions. Intel, then, was not in the business of selling computer chips directly to consumers. It sold via PC manufacturers like IBM. Intel's emergence came at the cusp of the PC industry turning from vertically oriented to horizontally oriented: earlier, one manufacturer like Digital used to manufacture and assemble all the parts of a computer (vertical orientation); later, each key component became an individual business (horizontal orientation) serving PC assemblers like IBM, Dell etc. Andy mentions in his book that with this bug, he smelled something unusual happening in the field: though he was not selling directly to consumers, he was getting feedback from them directly. He inferred that if the situation wasn't handled with a proactive stance, he could face a lot of negative backlash. Mind you, this was the 1990s, when it was hard to imagine the power of social media. Andy took corrective action quickly and even justified the huge cost of this bug, around USD 450 million (a mammoth amount now, and more so 20 years back).

This holds lessons for today's times too. Proactively dealing with feedback received on social media is the order of the day. It is easy for competitors with bad intentions to manufacture negativity. The birth of techniques such as sentiment analysis, which help proactively assess the positive and negative sentiment around events like product releases, further helps deal with negative perceptions. In my recent memory, I am reminded of the social buzz created by the Heartbleed security vulnerability in SSL, and of the negative response generated on social media when news of Facebook's hidden A/B test, in which it deliberately subjected a certain percentage of its users to negative news, leaked out publicly. Even though social media as a channel is quite useful for generating feedback, it also makes companies vulnerable to negative publicity in the event of bugs that catch public attention.
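To make the sentiment analysis mention concrete, here is a deliberately minimal, lexicon-based sketch in plain Python. The word lists are made-up toy data; real systems use trained models and far richer lexicons, but the core idea of scoring posts by positive versus negative vocabulary is the same.

```python
# Toy lexicon-based sentiment scorer; word lists are illustrative only.
POSITIVE = {"great", "love", "fast", "reliable", "excellent"}
NEGATIVE = {"bug", "crash", "slow", "broken", "refund"}

def sentiment(post: str) -> int:
    """Return +1 (positive), -1 (negative) or 0 (neutral) for one post."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return (score > 0) - (score < 0)

posts = [
    "Love the new chip, excellent performance!",
    "Found a division bug, want a refund.",
]
print([sentiment(p) for p in posts])  # → [1, -1]
```

Aggregating these labels over thousands of posts is what lets a company spot a brewing backlash early, which is precisely the proactive stance discussed above.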

2. Handling strategic inflection points needs different skills
   In the wake of the negative press and crisis-like situation that the Pentium FPU bug generated for Intel, Andy makes a very interesting observation in his book. He says-
 "A lot of people involved in handling this stuff had only joined Intel in the last ten years or so, during which time our business had grown steadily. Their experience had been that working hard, putting one foot in front of other, was what it took to get good outcome. Now, all of a sudden, instead of predictable success, nothing was predictable. Our people, while they were busting their butts, were also perturbed and even scared."

In short, the skills needed to handle peacetime in business are quite different from the ones needed during wartime. People often come to work believing their workplace to be fair, i.e. if I do "X" amount of work, I will get the equivalent of "X" credit. While there is nothing wrong with this assumption generally, such thinking (from the employee's perspective) does not take into account changing business situations. The reality of today's times is that an effort that would have produced a great outcome (for the company and personally) in one business situation may just not be enough in a very different one. This often happens through no fault of the employee, who did his best in the current situation but probably lacked the situational awareness to alter the nature of his efforts. To quickly explain this perspective, Nokia's example comes to mind. The story of Nokia's rise and subsequent decline is widely written about. During the good times (until at least 2007), the company made a fortune with its existing model (phones based on the Symbian OS). But when the time came to change to a more modern mobile OS like Android, it just failed to move swiftly. I can imagine that employees in this situation put in great effort with their key skills around Symbian, but due to the situational change, the same efforts that bore huge fruit earlier were just not enough to reap similar or greater rewards.

3. Lessons in Defect Advocacy
    To me, the most interesting part of the narration regarding the Pentium FPU bug was this- "an average spreadsheet user would run into the problem only once every 27,000 years of spreadsheet use"
This was actually a known problem before the Pentium chip was released. What might have happened is that, following the usual defect prioritization principles, it was acknowledged but given low priority, as the estimated frequency of the bug was a staggering once every 27,000 years of spreadsheet use. Now, one may question this data's accuracy, which is probably a fair question, but the larger point this case teaches is that the usual defect prioritization approaches fail to consider the macro aspects impacting the product. Let me explain this point a little-
The Pentium chip was released against the backdrop of the legendary "Intel Inside" marketing campaign. The campaign's popularity was so huge that Intel almost became a household brand. When people started seeing the effects of the error, they put the blame squarely on Intel and not the computer manufacturer. The early social media, in the form of Internet forums, gave voice to their concerns. Had the defect prioritization decision taken into account the macro environment the product would operate in, the bug would probably have been chosen to be fixed.
One of the key learnings here, still relevant today, is to have a holistic approach towards defect advocacy. A tester advocating a defect should relate the bug information to the macro environment: the business situation, the publicity the bug has gained, the users impacted and much more. For a tester to play the role of the headlights of the product, he or she should not just think about the internals of bugs but also associate them with the necessary business information and related factors.
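One hedged way to picture this holistic approach is a priority score that weighs macro context alongside intrinsic bug attributes. The factors, weights and example values below are illustrative assumptions of mine, not an industry-standard formula; the point is only that a frequency-only view buries a Pentium-FPU-style bug, while the macro terms surface it.

```python
def defect_priority(severity, frequency, brand_exposure, user_reach, media_risk):
    """Combine intrinsic bug attributes with macro factors (all in 0.0-1.0)."""
    intrinsic = 0.5 * severity + 0.5 * frequency
    macro = (brand_exposure + user_reach + media_risk) / 3
    return 0.4 * intrinsic + 0.6 * macro  # illustrative weighting

# Pentium FPU bug: tiny frequency, but huge brand exposure ("Intel Inside")
# and high media risk once Internet forums picked it up.
fpu_bug = defect_priority(severity=0.7, frequency=0.01,
                          brand_exposure=1.0, user_reach=0.9, media_risk=0.9)
# A frequent but low-stakes cosmetic bug, for contrast.
typo_bug = defect_priority(severity=0.2, frequency=0.6,
                           brand_exposure=0.1, user_reach=0.2, media_risk=0.1)
print(fpu_bug > typo_bug)  # → True: macro context outranks raw frequency
```

Any real triage process would use its own factors and weights; the sketch just shows that once macro terms are in the model at all, the ranking flips.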

What else do you learn from this case? Please do share your thoughts in the comments.


Friday, November 13, 2015

Are the Lessons from the Google Wave Debacle Still Relevant?

In this blog (or hopefully a series of them), I intend to write about the key lessons to be learned from tech products that failed in the past, and also a bit about why products succeed. The motivation to write this comes directly from my experience. I started my career working on a large ecommerce product which was (at the time) supposed to be the biggest technology application built entirely on Microsoft technology. Ok, the word "biggest" in the previous sentence is more or less anecdotal, but the key point is that this project really struggled to find its feet for various reasons. I will talk more about my own experience in coming posts, but to start off, I wanted to put a microscope on some public products that bit the dust. Failures, like successes, leave a lot to be learned from. My personal hypothesis is that most failures share similar underlying reasons. Through this blog series, I want to test this hypothesis as well. I will try to keep my narrative to around 500 words (covering the top 3-4 reasons) so that it remains within readability limits.

The first product I pick for analysis is Google Wave. The reason is simply that I got to know a bit about this product while reading the book "How Google Works" and it is quite fresh in my mind. The book describes Google Wave as the creation of a small team of engineers who used their 20 percent time to explore the question "What would email be like if it were invented today?" Google Wave was said to be a technological marvel, but it proved to be a major flop. Its user base never grew as expected (said to be close to 1 million or so) and Google eventually cancelled the project within one year of its launch, around 2010. Google Wave was described as a "collaboration and communication tool [consolidating] core online features from e-mail, instant messaging, blogging, wikis, multimedia management, and document sharing."

Some of the reasons for its failure, as I researched them, are below-

Complicated User Experience:
As this Quora article suggests,
In retrospect, the lack of success of Google Wave was attributed among other things to its complicated user interface resulting in a product that was a bit like email, a bit like an instant messenger and a bit like a wiki but ultimately couldn't do any of the things really better than the existing solutions.

One of the studies even pointed out that a celebrated tech journalist wrote a 195-page manual on how to use Wave.
The fact that someone needed to write a hefty manual explaining the product alone testifies that Wave probably missed a trick or two in designing a simpler application. It appeared that it was not only hard to use but also perceived as hard to explain.

If we fast-forward to 2015, companies are adopting unbundling as a product strategy, which means decoupling features to make products more usable and focused, e.g. Facebook unbundling Messenger from its core app. The world has come to value simpler, single-feature apps more than one app doing too many things, which is what Google Wave tried to be.

Launched with Lofty Expectations:
As this Mashable article suggests, The first lesson that Google or any web application developer can learn from Google Wave is the importance of managing expectations. Because the hype window started four months before Wave actually launched, the idea of what Wave was easily exceeded the reality. Phrases like "radically different approach to communication" and "e-mail 2.0" were bandied around, along with buzz-word laden phrases like "paradigm-shifting game-changer."

As I have experienced, when a product is launched with much fanfare, it always runs the risk of being subdued by its own expectations. This is something Intel observed when it launched the Pentium chip (more about it in later blogs): one edge-case bug resulted in a loss of close to half a billion dollars, largely because of the marketing hype preceding it.

Lack of Extensions:
As one of the reasons cited in eweek- Google Wave was open sourced and yet failed to catch on with developers. While SAP, Novell and all vowed to work with Wave, and there were a number of extensions created, the support didn't match that of other Google projects, such as Chrome, for which there are thousands of browser extensions. That's a big killer.

As I explained in one of my earlier blogs, in today's tech era successful products tend to have a platform-style architecture, where building a successful ecosystem of developers is key. The absence of incentives to attract enough developers in time impacted the speed at which extensions were created, and hence user adoption.

No Integration with Google Apps

Again a reason cited in one of the analysis- Google proudly displayed Wave as its own entity. It would have been better served attached to Google Apps similar to the way Google Buzz was tied to Gmail, with Google suggesting users try it out for certain collaboration functions in Google Docs or Sites.
Integration between products is one of the key problems faced by most big tech companies, which typically have multiple products in their portfolios. Big companies usually expand their portfolios by acquiring other companies, and acquisitions usually have a negative engineering impact when it comes to integration, because of conflicting architectures.
The book I referred to earlier, "How Google Works", described Google Wave as an ahead-of-its-time product. I politely disagree, given that now, 5 years later, the world still doesn't see a compelling reason to have a product like this. To be fair to Google Wave and its superior technology, Google did use pieces of the Wave platform in Google+ and Gmail. But hearing Eric Schmidt say that Google liked the Wave UI represents a sort of disconnect between what users felt and how management saw the product. At its core, this is something common across most product failures.

Please do share your thoughts, ideas around this blog.

Saturday, October 31, 2015

Mark Zuckerberg demystified Platform Thinking as a Teenager

I recently stumbled upon this interview that Mark Zuckerberg gave to CNBC as a mere 19-year-old entrepreneur. Before I comment further, do take around 1 minute 26 seconds to watch the clip-

At first, I couldn't help but rave about the confidence exhibited by a teenager and the maturity, clear-headedness and humility he demonstrated in this interview. He mentions that Facebook was started to solve the problem of connecting the student community at Harvard, and that in the early days it evolved to connect various other universities.
No, my purpose in this blog is not just to shower praise on Zuckerberg; what really struck me about this video was its correlation with the last blog I wrote.

Let me just capture some key points/phrases that he made-
1. He was looking to build something cool.
2. The original plan, Zuckerberg said, was to make a bunch of "side applications" that would bring users back.
3. He called out Facebook simply as an online directory that connected people in universities and helped build social networks.

Dissecting his 1 minute interview further, I couldn't help but associate it with some characteristics of platform that I mentioned in my previous blog-

1. Connection: how easily others can plug into the platform to share and transact
2. Gravity: how well the platform attracts participants, both producers and consumers
3. Flow: how well the platform fosters the exchange and co-creation of value

He talks about online directory (Connection) to help connect people.
He talks about making apps that will help pull people (Gravity) back to the site.
He talks about letting people know other people (Flow) by helping them access profiles.

No big deal if we look back at it now, but certainly a big deal considering that Zuckerberg said this more than a decade back, when platform thinking existed but, to be fair, was not as evolved as it is now.

Facebook has also evolved in these years but the fundamental elements that Zuckerberg described in 2004 still drive its success.

Monday, October 5, 2015

Evolution of Software Engineering in the Age of Platforms

What makes Apple, Google, Facebook and Amazon such successful companies of today’s times?

One may wonder if there is any commonality among these four giants, who operate almost independently in their own market spaces. Apple leads the mobile hardware/OS space, Google essentially everything related to search, Facebook is a peerless social networking site and Amazon is primarily in retail. Of course, each of these companies has overlapping areas, such as Amazon's foray into mobiles or Google's foray into social networking, but by and large, these companies have been largely distinct in the way they have carved out their niches. So, is the quest to find a thread that binds these companies a right one?

The generic answer could be that these companies have embraced superior technology, are innovative to the core, have built ecosystems, and have built experiences that get users hooked on their offerings, and so on and so forth. In reality, more than these easily guessable attributes, these companies have figured out an entirely new way of doing business: The Platform.

Phil Simon, an evangelist of the idea of platform-style business, describes a great distinction between the way businesses operated in the 1990s and the much more technologically integrated times of today. While businesses in the 1990s thrived on systems that were more closed in nature, with important organizational details shared only with a limited set of partners, the organizations of today are much more open and collaborative in the way they run mutually beneficial partnerships and supremely successful businesses.

These examples explain this transition better-
1. Google's product AdSense helps democratize Google's search technology. By inserting a snippet of code, even the smallest websites can become Google partners, making money for both themselves and Google. This reinforces the "win-win" scenario. Beyond direct monetization, it gives Google a footprint on millions of other websites whose owners also want to monetize their platforms.
2. Facebook has a feature called "Facebook Connect". It lets users log in to multiple websites with their Facebook account; e.g. it is possible to log in to The New York Times using Facebook Connect. It extends Facebook's footprint beyond its own walls and also helps its partners drive more users to their sites.
3. Amazon has a Product Advertising API that allows customers to embed product links on their own websites, making money for the partners and Amazon in the process.
4. Apple, despite being perceived as a closed ecosystem, has intelligently built itself on platform-style thinking. It allows app developers to build apps for its devices, which not only earns the developers their share of money but also enhances the capabilities of the devices.

These are just a small set of examples of how these ultra-successful companies operate.

The topic of discussion here is not limited to the evolution of the platform as a business, but extends to the technical and engineering aspects of platforms that help to successfully weave together a business.

Platform development is different from product development:
While these companies market specific products or solutions, those products are often created on a platform of foundational technologies that the company can reconfigure in endless ways to address emerging needs.
Platforms are not built the same way traditional products are. Developers on traditional product teams tend to be motivated by the prospect of creating a finished good, the way a sculptor wants to begin with a piece of stone and end with a fully realized figure.
Developers of platforms, by contrast, tend to think about the distinct capabilities of the platform first and then look for ways to mix and match them. So unlike the sculptor, these developers first produce puzzle pieces, which can later be integrated in a myriad of ways.

Shift from Product thinking to Platform thinking:
Thus, the leap from Products to Platforms involves these three distinct characteristics-

1. Connection: how easily others can plug into the platform to share and transact
2. Gravity: how well the platform attracts participants, both producers and consumers
3. Flow: how well the platform fosters the exchange and co-creation of value

Platform General/Technical Characteristics:
Having experienced the drift from product thinking to platform thinking at Citrix, and having studied other organizations, below are some of the characteristics of platforms.
1. Think of the product in the form of interfaces
2. APIs and SDKs form the basis of innovation
3. Ability to scale infinitely
4. Promotes a developer ecosystem
5. Product with infinite features
6. Extensibility- ability to plug-n-play
7. Co-creation of value
8. Drift from monolithic architectures

Beyond these characteristics, there is an in-depth focus on how platforms are designed in today's software engineering discipline. Aspects such as designing the interfaces, exposing APIs consistently, and making APIs externalizable are just a few of the pointers.
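A toy sketch can make "think of the product in the form of interfaces" and "plug-n-play" concrete: a platform core that exposes one small, stable contract and lets third-party capabilities register against it at runtime. All the names here are illustrative, not any real platform's API.

```python
from typing import Callable, Dict

class Platform:
    """A minimal platform core: a stable contract plus a capability registry."""

    def __init__(self) -> None:
        self._capabilities: Dict[str, Callable[[str], str]] = {}

    def register(self, name: str, handler: Callable[[str], str]) -> None:
        # Plug-n-play: third parties extend the platform without touching its core.
        self._capabilities[name] = handler

    def invoke(self, name: str, payload: str) -> str:
        return self._capabilities[name](payload)

platform = Platform()
platform.register("shout", lambda text: text.upper())    # hypothetical partner A
platform.register("reverse", lambda text: text[::-1])    # hypothetical partner B
print(platform.invoke("shout", "platform thinking"))     # → PLATFORM THINKING
```

The core ships only the puzzle-piece contract; the capabilities, and the value they co-create, come from the ecosystem, which is exactly the sculptor-versus-puzzle-pieces distinction above.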

This is the introduction I wanted to cover on this topic. I will explore it further in the coming days and share more insights.

Please do share your feedback and comments.


Sunday, October 4, 2015

How Infosys (Possibly) Changed its Performance Management System?

There was recent news about Infosys making changes to its performance management system. This resonated well with my recent study of similar changes made by many organizations. A quick perspective below-

Sources and Acknowledgements:
Most of the text below is adapted (directly and indirectly) from the URLs below, so all credit goes to the authors of those articles for the upcoming text.

Motivation behind Infosys's change:
- The change in the performance assessment system has primarily been pushed by chief executive Vishal Sikka.
- To bid adieu to the bell curve as a performance assessment tool for its 1.76 lakh (176,000) employees
- The bell curve is often criticised as a forced ranking system, as managers mandatorily classify employees into three categories, ranking the performance of 70 per cent as average, 20 per cent as high and 10 per cent as low.
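The forced-ranking mechanics can be sketched as a simple percentile cut over performance scores. The scores below are made up, and the 20/70/10 split follows the proportions mentioned above; the point is that the buckets are fixed regardless of how close the actual scores are.

```python
# Forced ranking under a bell curve: sort by score, then cut the population
# into fixed buckets (20% high / 70% average / 10% low), no matter how
# narrow the gaps between neighbouring scores are.
def force_rank(scores):
    ranked = sorted(scores, reverse=True)
    n = len(ranked)
    high = ranked[: round(0.2 * n)]
    low = ranked[n - round(0.1 * n):]
    average = ranked[len(high): n - len(low)]
    return high, average, low

scores = [91, 88, 86, 85, 84, 83, 82, 81, 80, 62]  # ten hypothetical employees
high, average, low = force_rank(scores)
print(len(high), len(average), len(low))  # → 2 7 1
```

Note that the employee scoring 86 lands in "average" despite being two points off the "high" cut, which is exactly the kind of outcome that, per Lobo, drove good people away.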

Shortcomings of the traditional system as observed by Infosys:
1. Said to be a cause of attrition.
2. “In performance evaluation based on bell curve, it became a race to get to the top for employees. We were losing a lot of good people who were not ranked at the top." -Richard Lobo

What kind of changes in Performance Management System were embraced by Infosys?:
- "From this quarter, we have removed the forced ranking and in the October appraisal, employees will be appraised on the open ranking. From now on, the managers will take a call and reward," said Richard Lobo, senior vice president, human resources department, at Infosys. The IT services company started conversations around getting rid of the bell curve three months ago.
- The new system, he said, will be more open and flexible with a pronounced focus on rewards for performance.

Additional comments:
- “Attrition is now close to 13 per cent. One of the big reasons for this (attrition) to come down is because we consciously got rid of the bell curve." -Richard Lobo, HR Head Infosys.
- Bidding goodbye to the bell curve, however, is creating problems for managers when justifying ratings to team members. "They will now be forced to face their team members. In the former world, it was easier for the managers to push any disagreement from the team members to the forced ranking," Lobo said.

Around what time-frame were the changes brought in:

Sunday, September 27, 2015

Three clues the recent Volkswagen saga gives about the future of Software Testing

This morning, on a usual check on WhatsApp, I found this message-

VW generates pollution equal to 10 Auto Rickshaws.... that's y the name... DAS Auto??

(If unfamiliar, do check the meaning of Auto Rickshaw to catch-up on humor here)

I always find it fascinating how people manage to keep their humor in pace with events happening around the world. Whether that should be attributed to talent or to an immense availability of time is a discussion for a different day. My focus in this post is to share some thoughts on the news that is grabbing headlines these days. On the surface, it can be read as the story of auto major Volkswagen's decline. But beyond the esteemed company's decline, this story is also about a rise of a certain kind: intriguingly, it brings forward the rise of software in our day-to-day lives. I will come back to this point in a bit; before that, let me summarize in simple terms what is currently happening with Volkswagen.

In a single line: Volkswagen installed a software program in its vehicles that helped it cheat emission tests. Elaborating further, it had been doing so since 2009 using software notoriously labelled a "defeat device". I am sure there are many more layers to it, but at the surface level the software could sense when the vehicle was under an emission test; once it sensed that, it altered the engine configuration to produce favorable emission numbers. When it detected that the vehicle was not under test, i.e. during regular driving, it turned those controls off and let emissions as high as 40 times the allowed norms go up in the air.
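Purely as an illustration (Volkswagen's actual code has not been published, and the real detection logic reportedly used several signals), the reported behaviour amounts to logic like this hypothetical sketch. The function name, signal names, and thresholds are all my own assumptions: on a dynamometer the driven wheels spin while the steering wheel stays still, a pattern regular driving almost never produces.

```python
def emission_mode(steering_angle_deg: float,
                  speed_kmh: float,
                  wheel_speed_diff_kmh: float) -> str:
    """Hypothetical defeat-device logic, for illustration only.

    Wheels turning while the steering input is zero and the driven/undriven
    wheel speeds disagree suggests a dyno test rather than a real road.
    """
    under_test = (steering_angle_deg == 0.0
                  and speed_kmh > 0.0
                  and wheel_speed_diff_kmh > 0.0)
    # "clean": full emission controls on; "performance": controls relaxed.
    return "clean" if under_test else "performance"

print(emission_mode(0.0, 60.0, 60.0))   # dyno-like conditions
print(emission_mode(12.5, 60.0, 0.0))   # road-like conditions
```

The sketch makes the testing challenge obvious: any verification performed only under the "under test" conditions will, by construction, never see the non-compliant mode.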

Over the next few months, this saga will be assessed and analyzed from various angles. While that happens, I want to take this opportunity to analyze it from the software testing point of view. Listed below are three points that occupied my mind after delving deeper into this story-

#1 Rise of Independent testing

First, my mind goes back to the curious story of Maggi noodles in India. One statistic I had heard was that this legendary noodles brand did not record a single unprofitable quarter for close to 30 years. Recently that trend broke, and the Maggi brand observed its first unprofitable quarter. The main cause was the sensation created by negative lab test results from an independent agency, which showed excess lead in Maggi noodles, rendering them almost inedible in the eyes of customers.

Let’s hold on to this example for a bit and switch to the software world. Over the last many years, we have ceded a lot of what we use on a day-to-day basis to an irreplaceable phenomenon called software. Consider, for example, how the work desk has changed over the years and been replaced by software. While this move towards “softwarization” of the world (if I can take the liberty to invent this word) has certainly had its benefits, primarily in making the human race more productive, examples such as Volkswagen's recent case suggest that it has also left us quite vulnerable.

There has been plenty of focus on protecting software from hackers with bad intentions (and those efforts are 100% justified), but do we really focus enough on protecting software from the bad intentions of its own creator?

There were some protection mechanisms in the Maggi noodles case, but apparently not so much in Volkswagen's. Unlike a physical product like noodles, which leaves a trail behind, software with its own intelligence hardly leaves any trace of malicious intent.

Intel estimates that roughly 31 billion devices will connect a population of roughly 4 billion people by 2020, which works out to about 7 devices per person just 5 years from now. In such a world, what would make you trust your device manufacturer? Among other things, these companies will arguably be able to predict even the most intimate of human functions, like how many breaths you take in a minute.

Winning trust in this situation is paramount, and it should lead to the rise of independent, credible testing that certifies the ethical functioning of software. As much as the software-producing company owns the quality of its software, it will help to have independent bodies, run by credible partners, that protect the interests of consumers as well as the environment. Taking a cue from the separation-of-powers doctrine, under which the government in power cannot simultaneously be in charge of the supreme court, an independent testing function can help prevent ethical mishaps related to software.

#2 Evolution of testing Invisible Software

I recently wrote about how the software of the future will be invisible to the user. To elaborate, companies like Google and Microsoft are embracing invisibility of software as a conscious strategy towards making the user experience more meaningful and minimalistic. As an example, since migrating to Windows 10, I have started to use Bing more than Google. This is not primarily by choice but driven by the ease Microsoft has brought to searching for anything I want without necessarily having to type.

If we look at the role of software in Volkswagen's case, or in any other automobile for that matter, it is hidden at best. As a layman driving a car, I wouldn't know the role of software except, perhaps, the part running on the dashboard, which I can see. Automobiles these days are said to run as many as 20 million lines of code, most of which we hardly see in action. On a lighter note, we could say that earlier there were 'cars' on the road, and now there is 'code' on the road. The larger point I am trying to make is that we are not far from a world where probably everything will be run by software, but its existence will be hidden and invisible. In Volkswagen's case too, the software dictated how much pollution reached the environment without the knowledge of the driver.

When we talk about testing such hidden software, we are probably talking about a paradigm shift in the way software testing is largely conducted, with the UI as the main point of verifying the tests. I know I am generalizing a bit here, and not all software is tested this way, but a vast majority of popular testing methods still work that way. Fundamentally, testing should then move to the building blocks, which are the APIs. Apart from the normal testing methods, testing the APIs, first at a singular level, then at the integration level and, if possible, at the system level, should form one way to test such a complex system.
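As a minimal sketch of what API-level verification could look like, consider a toy model of an emissions-control API. The function names and numbers here are my own illustrative assumptions (0.08 g/km is the Euro 6 NOx limit for diesel cars, used only as a reference point); the idea is that a test which exercises every operating mode of the API directly, rather than only the mode a dyno test happens to trigger, would flag a non-compliant mode no UI-level check could see.

```python
# Hypothetical ECU API under test; names and values are illustrative only.
def nox_grams_per_km(controls_active: bool) -> float:
    """Toy emissions model: with full after-treatment the car meets the
    0.08 g/km limit; with controls relaxed, output is many times higher."""
    return 0.08 if controls_active else 3.2

def audit_all_modes(limit: float = 0.08) -> list[bool]:
    """API-level check: drive every operating mode of the API directly,
    not just the one a certification test happens to trigger."""
    return [nox_grams_per_km(active) <= limit for active in (True, False)]

print(audit_all_modes())  # the False entry flags the non-compliant mode
```

A UI- or scenario-driven test would only ever observe whichever mode the software chose to present; enumerating modes at the API level is what exposes the discrepancy.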

#3 Testers taking high stand on Integrity, Ethics

I am reminded of a case where I had to correct a test engineer who had gotten into the habit of marking tests as passed without actually testing the scenario. Intentionally recording the result of a test without performing it falls squarely under violation of professional ethics. Before I seem to digress: I understand what happened at Volkswagen was not exactly the same, but somewhere it is a gigantic case of ethical violation. The specifics of the case are not yet fully known, as independent component manufacturers are involved in building an automobile and the assembling company mostly owns how the components integrate and talk to each other. I am not sure whether any tester at Volkswagen could have sensed the wrongdoing earlier, but what I do know is that it takes nerves of steel to raise one's voice when one knows about such violations. It becomes all the more difficult if someone at the top is driving them. I don't want to speculate too much here, but I do want to note that one of the most underrated skills of a tester is his or her integrity, or moral compass. Integrity doesn't get tested in happy times when things go as intended; it surely gets tested in tougher times. One must build the courage to do the right thing, however difficult it may sound.

Marc Andreessen said in 2011 that software is eating the world, and as the past few years have shown, there is no denying that statement. Cases like Volkswagen's just give a twist to those immortal lines: "Software is eating the world, albeit in the wrong way". Given the potential impact of Volkswagen's misdeeds on the environment, it can also be said that "bad software can eat the world, quite literally". Taking this case as a wake-up call, we probably need to get up and take charge before we hand too much control of our lives to software. I presented my views here from the software testing point of view, and I am sure there are other ways to restore faith too.

What do you think?