Sunday, October 4, 2015

How Infosys (possibly) changed its Performance Management System?

There was recent news about Infosys making changes to its performance management system. This resonated well with my recent study of such changes made by many organizations. Providing a quick perspective below-

Sources and Acknowledgements:
Most of the text below is adapted (directly and indirectly) from the URLs below, so all credit goes to the authors of those articles for the upcoming text.

Motivation behind Infosys's change:
- The change in the performance assessment system has primarily been pushed by chief executive Vishal Sikka.
- To bid adieu to the bell curve as a performance assessment tool for its 1.76 lakh employees
- Bell curve is often criticised as a forced ranking system, as managers have to mandatorily classify employees into three categories, ranking the performance of 70 per cent as average, 20 per cent as high and 10 per cent as low.

Shortcomings of the traditional system as observed by Infosys:
1. Said to be the cause of attrition.
2. “In performance evaluation based on bell curve, it became a race to get to the top for employees. We were losing a lot of good people who were not ranked at the top." -Richard Lobo

What kind of changes in Performance Management System were embraced by Infosys?:
- "From this quarter, we have removed the forced ranking and in the October appraisal, employees will be appraised on the open ranking. From now on, the managers will take a call and reward," said Richard Lobo, senior vice president, human resources department, at Infosys. The IT services company started conversations around getting rid of the bell curve three months ago.
- The new system, he said, will be more open and flexible with a pronounced focus on rewards for performance.

Additional comments:
- “Attrition is now close to 13 per cent. One of the big reasons for this (attrition) to come down is because we consciously got rid of the bell curve." -Richard Lobo, HR Head Infosys.
- Kissing goodbye to the bell curve, however, is creating problems for managers when justifying ratings to team members. "They will now be forced to face their team members. In the former world, it was easier for the managers to push any disagreement from the team members to the forced ranking," Lobo said.

Around what time-frame were the changes brought in:
Likely 2015-16, with the October 2015 appraisal being the first under the new open ranking approach.

Sunday, September 27, 2015

Three clues that the recent Volkswagen saga gives about the future of Software Testing

This morning, on a usual check on WhatsApp, I found this message-

VW generates pollution equal to 10 Auto Rickshaws.... that's y the name... DAS Auto??

(If unfamiliar, do check the meaning of Auto Rickshaw to catch-up on humor here)

I always find it fascinating how people manage to keep their humor in step with events happening around the world. Whether that can be attributed to talent or to an immense availability of time is a discussion for a different day. My focus with this post is to share some thoughts around the news that's grabbing headlines these days. On the surface, it can be read as the story of the auto major Volkswagen's decline. Beyond the esteemed company's decline, this story is also about a rise of a certain kind. Intriguingly, it brings forward the subject of the rise of software in our day-to-day lives. I will come back to this point in a bit, but before that I will try to summarize in simple terms what is currently happening with Volkswagen.

In a single line: Volkswagen installed a software program in its vehicles that helped it cheat the emission test. Elaborating further, it had been doing so since 2009 using software notoriously labelled a "defeat device". I am sure there are many more layers to it, but at the surface level, the software could sense that the vehicle was undergoing an emission test; once it sensed that, it altered the engine configuration so as to give favorable emission numbers. When it detected that it was not under test, i.e. during regular driving, it turned itself off and let emissions as high as 40 times the allowed norms go up into the air.
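To make the mechanism easier to picture, here is a minimal, purely hypothetical sketch (in Python) of what such "defeat" logic could look like in principle. Every signal name and threshold below is invented for illustration and has nothing to do with Volkswagen's actual implementation.

# Hypothetical illustration of "defeat device" logic. All names and thresholds
# are invented; this is NOT how Volkswagen's real software works.

def looks_like_emission_test(steering_angle_deg, front_wheel_speed_kmh, rear_wheel_speed_kmh):
    """Guess whether the car is on a test bench (dyno) rather than on the road.

    On a dyno the driven wheels spin while the steering barely moves and the
    non-driven wheels stay still - a pattern rarely seen in real traffic.
    """
    steering_is_static = abs(steering_angle_deg) < 1.0
    only_driven_axle_moves = front_wheel_speed_kmh > 10 and rear_wheel_speed_kmh < 1
    return steering_is_static and only_driven_axle_moves


def select_emission_mode(steering_angle_deg, front_wheel_speed_kmh, rear_wheel_speed_kmh):
    """Return the engine-control mode: clean while being tested, 'normal' otherwise."""
    if looks_like_emission_test(steering_angle_deg, front_wheel_speed_kmh, rear_wheel_speed_kmh):
        return "low_nox_test_mode"    # full exhaust treatment, passes the test
    return "normal_driving_mode"      # reduced treatment, emissions can far exceed limits


if __name__ == "__main__":
    print(select_emission_mode(0.2, 50, 0))    # test bench -> low_nox_test_mode
    print(select_emission_mode(12.0, 50, 50))  # regular driving -> normal_driving_mode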

Over the next few months, this saga will be assessed and analyzed from various angles. While that happens, I wanted to take this opportunity to analyze it from the software testing point of view. Listed below are three points that occupied my mind after delving deeper into this story-

#1 Rise of Independent testing

First, my mind goes back to the curious story of Maggi noodles in India. One of the statistics I had heard was that this legendary noodles brand did not lose profitability in a single quarter for a period of close to 30 years. Recently that trend broke, and the Maggi brand saw its first non-profitable quarter. The main cause was the sensation created by negative lab test results from an independent agency. The tests showed excess lead in Maggi noodles, leaving customers to consider them almost inedible.

Let's hold on to this example for a bit and switch to the software world. Over the last many years, we have ceded a lot of the things we use on a day-to-day basis to an irreplaceable phenomenon called software. Check this example of how the work desk has changed over the years and has been replaced by software. While this move towards "softwarization" (if I can take the liberty of inventing a new word) of the world has certainly had its benefits, primarily in making the human race more productive, examples such as Volkswagen's recent case suggest that it has also left us quite vulnerable.

There has been a great deal of focus on protecting software from hackers with bad intentions (and these efforts are 100% justified), but do we really have enough focus on protecting software from the bad intentions of its own creator?

There were some protection mechanisms in the case of Maggi noodles, but probably not so much in the case of Volkswagen. Unlike a physical thing like Maggi noodles, which leaves a trail behind, software with its own intelligence hardly leaves any trace of malicious intent.

Intel estimates that around 31 billion devices will be connected by 2020, serving a population of roughly 4 billion people - roughly 7 to 8 devices per person just 5 years from now. In such a world, what factors would make you trust your device manufacturer? Among other things, these companies will arguably be able to track even the most intimate of human functions, like how many breaths you take in a minute.

Winning trust in this situation is paramount, and it should lead to the rise of independent and credible testing to certify the ethical functioning of software. As much as the software-producing company owns the quality of its software, it will help to have independent bodies run by credible partners who protect the interests of consumers as well as the environment. Taking a cue from the separation of powers doctrine - which simply states that the government in command cannot simultaneously be in charge of the Supreme Court - an independent testing function can help prevent ethical mishaps related to software.

#2 Evolution of testing Invisible Software

I recently wrote about how the software of the future will be invisible to the user. To elaborate, companies like Google, Microsoft et al. are embracing the invisibility of software as a conscious strategy towards making the user experience more meaningful and minimalistic. As an example, since I migrated to Windows 10, I have started to use Bing more than Google. This is not primarily by choice but driven by the ease Microsoft has brought in to search for anything I want without necessarily having to type.

If we look at the role of software in Volkswagen's case, or in any other automobile for that matter, it is hidden at best. As a layman driving a car, I wouldn't know the role of software except, perhaps, the part running on the dashboard, which I can see. Automobiles these days, it is said, run on as much as 20 million lines of code, most of which we hardly see in action. On a lighter note, while referring to automobiles, we can easily say that earlier there were 'cars' on the road, now there is 'code' on the road. The larger point I am trying to make is that we are not far off from a world where probably everything will be run by software, but its existence will be hidden and invisible. In Volkswagen's case too, the software dictated how much pollutant reached the environment without the knowledge of the driver.

When we talk about testing such hidden software, we are probably talking about a paradigm shift in the way software testing is largely conducted, where the UI is the main point of verification. I know I am generalizing a bit here and not all software is tested this way, but a vast majority of popular testing methods still work that way. Fundamentally, testing then should move to the software's building blocks, which are the APIs. Apart from the normal testing methods, testing the APIs first at the unit level, then at the integration level and, if possible, at the system level should form one way to test such a complex system.
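To make the API-level idea a little more concrete, here is a minimal sketch of what unit-level checks against an emissions-control module might look like. The EmissionController class, its behaviour and the 0.08 g/km limit are assumptions invented for this illustration; they do not represent any real automotive API or regulatory figure.

# Minimal sketch of API-level (unit) tests for a hypothetical emissions-control
# module. The EmissionController class and the 0.08 g/km NOx limit are invented
# for illustration; they do not represent real automotive software or standards.
import unittest


class EmissionController:
    """Toy model of an engine-control API that reports NOx output per km."""

    NOX_LIMIT_G_PER_KM = 0.08  # assumed limit, for this example only

    def nox_output(self, engine_load, under_test=False):
        """Grams of NOx per km for a given engine load between 0.0 and 1.0."""
        # A compliant controller behaves the same whether or not it "knows"
        # it is being tested - exactly what the second test below verifies.
        return 0.02 + 0.05 * engine_load


class TestEmissionController(unittest.TestCase):
    def setUp(self):
        self.ecu = EmissionController()

    def test_output_stays_within_limit_across_loads(self):
        for load in (0.0, 0.5, 1.0):
            self.assertLessEqual(self.ecu.nox_output(load),
                                 EmissionController.NOX_LIMIT_G_PER_KM)

    def test_behaviour_identical_on_and_off_the_test_bench(self):
        # The "ethical" check: results must not depend on the under_test flag.
        for load in (0.1, 0.5, 0.9):
            self.assertEqual(self.ecu.nox_output(load, under_test=True),
                             self.ecu.nox_output(load, under_test=False))


if __name__ == "__main__":
    unittest.main()

Similar assertions could then be repeated at the integration level (controller plus real sensors) and at the system level (the whole vehicle, including on-road measurements).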

#3 Testers taking a high stand on Integrity and Ethics

I am reminded of a case where I had to correct a test engineer who had gotten into the habit of marking tests as passed without even testing the scenario. Intentionally recording the result of a test without actually performing it amounts to a violation of professional ethics. Before I seem to digress here, I understand that what happened at Volkswagen was not exactly the same, but it is nonetheless a gigantic case of ethical violation. The specifics of the case are not yet fully known, as there are independent component manufacturers involved in building an automobile and the assembling company mostly owns how the components integrate and talk to each other. I am not sure whether any tester at Volkswagen could have sensed the wrongdoing earlier, but what I do know is that it takes nerves of steel to raise one's voice when one knows about such violations. It becomes all the more difficult if someone at the top is driving it. I don't want to speculate too much here, but I do want to note that one of the most underrated skills of a tester is his or her integrity, or moral compass. Integrity doesn't get tested in happy times when things are going as intended; it surely gets tested in tougher times. One must build the courage to do the right thing, however difficult it may seem.

Marc Andreessen said in 2011 that software is eating the world, and as the world has seen in the past few years, there is no denying this statement. Cases like Volkswagen's just give a twist to those immortal lines: "Software is eating the world, albeit in the wrong way". Given the potential impact of Volkswagen's misdeeds on the environment, it can also be said that "Bad software can eat the world, quite literally". Taking this case as a wake-up call, we probably need to get up and take charge before we hand over too much of our lives to software. I have presented views here from the software testing point of view, and I am sure there are other ways to restore faith too.

What do you think?

Tuesday, September 15, 2015

Will Self-discipline Often Beat the Need for a Mentor?

An inspiring story of the Kenyan athlete Julius Yego was doing the rounds recently. Julius won the gold medal in the javelin at the 2015 World Championships. Winning a gold medal is always special and the culmination of a lot of hard work and dedication, but what was unique about Julius's feat was that he did so after learning how to throw on YouTube. Having a mentor or a coach is often talked about as a key ingredient of success in leadership. In fact, a lot of senior-level folks whom I know directly or have come across in my professional career so far have acknowledged the role mentors played in their success.

Some organizations even invest in formal mentorship programs. And if we talk of the sports scene, in my humble belief, the role of coaches is sometimes overrated but often justified. I say so because it is the players who toil on the field, and it is their split-second decisions and actions that result in a win or a loss. The role of a coach is often behind the scenes and is considered crucial in the skill development as well as the mental preparation of the athlete. The role of a coach also matters in a cultural context; as an example, India even has a national-level award, the Dronacharya Award, that honors the best coaches annually.

In such a context, Julius Yego's achievement can easily be thought of as extraordinary, even out of this world. To put things in perspective, he won nothing short of a gold medal, and that too at the World Championships, which can be considered the most important athletics event of the year - probably only a notch below the Olympics in importance. Consider also that he won the gold medal in the javelin, probably a first for Kenya, a country whose sporting identity is largely built on the on-field exploits of its long-distance runners. One cannot resist thinking that he is not just supremely talented at the javelin but also greatly gifted at the skill of learning. Yes, I just called learning a skill. Unless we think of something as a skill, we tend not to give it the importance it deserves. Coming to the point: does Julius's achievement make us look at a future world with less need for mentors?

In favor of this question, I can cite an example directly from my own experience. I completed the full marathon distance (42.195 km) twice last year. A lot of runners I have come across over the years were associated with some sort of running group that helped train and mentor them towards this arduous distance. Given my schedule and family responsibilities, it was never going to be possible for me to find time to prepare with a group for the marathon, which I dearly wanted to run. Given the desire, I chose the next best option. I got hold of a book from Amazon written to train first-time marathon runners, followed it to the word, pushed myself physically and mentally through around 4 months of training, and eventually covered the coveted distance not once but twice in a month. So I can safely say that I have had my Julius Yego-like moment. Of course, there is a big difference: I "just" completed the marathon, whereas Julius reached the pinnacle of sporting achievement by winning a gold medal. This example suggests that the role of a mentor may not be all that important, but I think that's just one point of view.

Rahul Dravid, a legendary cricket player from India, recently launched the "RD Athlete Mentorship" program, which aims at providing much-needed mentorship to deserving athletes. His idea of mentorship (at least at the highest level that he played) is that a mentor provides a sounding board for an athlete to talk to. He thinks that the major role of a mentor is to listen and to provide alternative perspectives on the problems a sportsperson may be facing.

Coming to the organizational world of today, which is increasingly dominated by millennials, I have witnessed a very smart workforce. One of the characteristics I have observed in millennials is the ability (or probably "willingness" is a better word here) to figure things out on their own. Much like Julius did, they take pride in finding their own solutions to distinct problems and consider it part of showcasing their value. This is a generation that thrives on feedback and wants to improve every aspect of its work.

Julius also said, "I do not have a coach, my motivation comes from within. Training without a coach is not an easy thing." He did later work with a Finnish coach, but the bulk of his learning seems to have come from self-learning through YouTube videos. This suggests that self-learning probably requires more self-discipline and internal motivation, but the notion of self-learning does have its limits, and a human touch to learning always helps fill the gap.

I personally do not think we are headed to a world where mentors won't be needed at all, but I can safely say that the role of mentors will eventually evolve. It would no longer be to delve into the basics (OK, sometimes even the basics need a friendly reminder), which can be learned from typical online sources, but more towards helping people reach a certain level of self-actualization.

What do you think?


Wednesday, September 9, 2015

Learnings from the Performance Management shifts made by Juniper, Accenture, Adobe, Microsoft, Deloitte and Infosys

Having gone through the rationale of some of the companies that embraced changes in their performance management systems in the last few posts, I think it's wise to consolidate the information in one smart view. This is what I have tried to do with the table below. As you go through the contents, I will try to summarize some of the points that I found interesting in all these cases-

1. As is evident, a change that directly impacts employees this way cannot be brought in without the commitment of top management. In almost all the organizations studied here, the change was initiated by someone at or near the top.
2. Interestingly, companies like Adobe, Deloitte and Accenture did a cost analysis of the time taken by the traditional system. I find this interesting because it seems like a one-sided metric (at least going by the way it is presented in the referenced articles), since it does not cover how much time the new process would take. The Accenture case does admit that the new process will not lead to substantial money and time savings.
3. Juniper, Adobe and Microsoft saw the traditional system as breaking trust between employee and manager. Quite interestingly, in the case of Deloitte, employees were comfortable with the traditional system while the management team wanted to change it, which is a shift from the norm.
4. Most companies studied here cited that there weren't enough mechanisms for regular feedback between employees and managers in the traditional system; a once-a-year discussion wasn't enough. Interestingly, most of these companies then introduced processes such as check-ins and conversation days so that employees and managers can talk regularly. I find this amusing because nothing stopped these companies from introducing such processes within the traditional performance management system. Moreover, the newer system won't ensure that check-in conversations are of high quality. In my humble opinion and experience, traditional performance management systems work well as long as the discussions between employee and manager are of great quality. Somehow, I saw fancy names being used for conversations in these cases, but I could rarely see a mention of conversation quality.

Consolidated view (for each organization: the motivation behind the change, the shortcomings with the old system, and what will be/was embraced):

Juniper Networks (Embraced around 2010)
Motivation behind the change:
1. Top management commitment towards the change.
2. Embrace personalization of technology.
Shortcomings with the old system:
1. Facilitated less trust between employees and management.
2. Facilitated less regular feedback mechanisms.
What will be/was embraced:
1. Driving philosophy: Juniper drew on the work of David Rock - see 'Managing with the Brain in Mind'.
2. No ratings.
3. A regular conversation day between employee and manager.
4. More focus on career growth than in the past.

Accenture (Embraced from 2015-2016)
Motivation behind the change:
1. Top management commitment towards the change.
2. Focus on establishing trust.
3. Focus on managing millennials.
Shortcomings with the old system:
1. Forced rankings of employees.
2. Time-consuming paperwork.
3. Tendency to promote narcissism (self-promotion) among employees.
4. Too costly a process (200 hours a year per manager, $35 million per year for Accenture).
5. A yearly wait is too long for sharing feedback.
What will be/was embraced:
1. Timely feedback mechanisms, a more fluid system.
2. No forced rankings.

Adobe (Embraced from 2014)
Motivation behind the change:
1. Top management commitment towards the change.
2. Realization that the existing process was too time-consuming.
Shortcomings with the old system:
1. Cause of employee-manager conflicts.
2. Not assisting business transformation.
3. A yearly wait is too long for sharing feedback.
4. Recency bias related problems (feedback usually related to recent events).
5. More focus backwards (past) than forward (future).
6. Forced rankings.
What will be/was embraced:
1. Regular check-ins (timely feedback sharing, career conversations).
2. Active participation from the employee.

Microsoft (Embraced from 2013-14)
Motivation behind the change:
1. Top management commitment towards the change.
2. Realization that stack ranking was a destructive process.
Shortcomings with the old system:
1. Unpopular stack rankings.
What will be/was embraced:
1. More emphasis on teamwork and collaboration.
2. A new process called "Connects" (timely feedback, meaningful career discussions).
3. No ratings.
4. No more stack rankings.

Deloitte (Embraced around 2015)
Motivation behind the change:
1. Realization that the process was out of sync with business objectives.
2. Findings from public survey results.
Shortcomings with the old system:
1. Employees thought the existing process was fair; management didn't.
2. The existing process was too costly (2 million hours a year of the organization's time).
3. Ratings reveal less about the ratee, more about the rater.
What will be/was embraced:
1. No cascading objectives.
2. No 360-degree feedback tools.
3. No once-a-year review.
4. Focus on key performance questions.
5. Weekly check-ins (focused meetings between managers and employees).

Infosys (Embraced from 2015-16)
Motivation behind the change:
1. Top management commitment towards the change.
2. Realization that the bell curve process is a reason for attrition.
Shortcomings with the old system:
1. Said to be the cause of attrition.
What will be/was embraced:
1. No forced rankings.
2. Managers take a call and reward.

Monday, September 7, 2015

Why didn't Google get rid of ratings like many other companies?

So far, in the past few posts, we have looked at the changes to performance management systems brought in by companies such as Juniper Networks, Accenture, Microsoft, Adobe and Deloitte. As stated, all these companies had their own share of good reasons for deciding to make the shift. I will delve into a bit more analysis around this in upcoming posts, but for now I would like to focus on Google.
I consider the case of Google important because, by virtue of its image, it is known as an organization that has evolved newer ways of doing routine things. I recently got a chance to read the book by Laszlo Bock, SVP of HR at Google, titled Work Rules!: Insights from Inside Google That Will Transform How You Live and Lead. This book is all about how Google reinvented the way its HR function is run. While I will try to review this book in detail in future posts, in gist, Google relied a lot on the concept of data-driven HR by bringing a unique focus to people analytics.
Using this approach as a foundation, Google reviewed the various aspects of how the Human Resources function was managed and brought in many changes that eventually helped the HR function make the desired impact. Like many other companies, the book states, Google also reviewed its performance management system and brought in changes wherever necessary, or sought an objective reason not to change. Unlike most of the organizations I have featured in recent blogs, Google did not let go of ratings; it retained the rating system as a part of its core performance management philosophy. Why did Google retain ratings?

To answer this question, I have included the image below from page 167 of the book by Laszlo Bock mentioned above. Please take a moment to go through its contents-

To Summarize:
1. Google sees ratings as a tool that can help managers take key people related decisions.
2. The focus is not just on having a rating system but rather a "just" or "fair" rating system.
3. Google considers calibration meetings important; in these meetings, key managers calibrate ratings to bring in the necessary consistency across the organization.
4. Ratings play an important role in calibration meetings by providing a standard language. They can also give exceptional employees visibility to the rest of the organization's managers, and hence help in taking crucial decisions such as moves to other teams.
5. The rating system probably makes less sense if the organization has fewer people. But if we are dealing with a large mass of people, the foundation of a just system is needed to create consistency and fairness. A rating process that relies on calibration actively weeds out badness and bias from the system.

Google has an interesting take here: it is not taking the approach of weeding out the symptom of the problem (i.e. ratings) but rather trying to address the root of the problem (a bad, "unjust" rating system) as they see it.

Hope you enjoyed reading this. See you soon.

Sunday, September 6, 2015

How Deloitte Revamped its Performance Management System?

Sources and Acknowledgements:
Most of the text below is adapted (directly and indirectly) from the URLs below, so all credit goes to the authors of those articles for the upcoming text.

Motivation behind Deloitte's change:
1. Like many other companies, we realize that our current process for evaluating the work of our people—and then training them, promoting them, and paying them accordingly—is increasingly out of step with our objectives.
2. In a public survey Deloitte conducted recently, more than half the executives questioned (58%) believe that their current performance management approach drives neither employee engagement nor high performance. They, and we, are in need of something nimbler, real-time, and more individualized—something squarely focused on fueling performance in the future rather than assessing it in the past.

Shortcomings of the traditional system as observed by Deloitte:
1. Employees thought that the existing process was fair; management, however, didn't. Internal feedback demonstrates that our people like the predictability of this process and the fact that because each person is assigned a counselor, he or she has a representative at the consensus meetings. The vast majority of our people believe the process is fair. We realize, however, that it's no longer the best design for Deloitte's emerging needs: Once-a-year goals are too "batched" for a real-time world, and conversations about year-end ratings are generally less valuable than conversations conducted in the moment about actual performance.

2. But the need for change didn’t crystallize until we decided to count things. Specifically, we tallied the number of hours the organization was spending on performance management—and found that completing the forms, holding the meetings, and creating the ratings consumed close to 2 million hours a year. As we studied how those hours were spent, we realized that many of them were eaten up by leaders’ discussions behind closed doors about the outcomes of the process. We wondered if we could somehow shift our investment of time from talking to ourselves about ratings to talking to our people about their performance and careers—from a focus on the past to a focus on the future.

3. The most comprehensive research on what ratings actually measure was conducted by Michael Mount, Steven Scullen, and Maynard Goff and published in the Journal of Applied Psychology in 2000. Their study—in which 4,492 managers were rated on certain performance dimensions by two bosses, two peers, and two subordinates—revealed that 62% of the variance in the ratings could be accounted for by individual raters’ peculiarities of perception. Actual performance accounted for only 21% of the variance. This led the researchers to conclude (in How People Evaluate Others in Organizations, edited by Manuel London): “Although it is implicitly assumed that the ratings measure the performance of the ratee, most of what is being measured by the ratings is the unique rating tendencies of the rater. Thus ratings reveal more about the rater than they do about the ratee.”

4. We also learned that the defining characteristic of the very best teams at Deloitte is that they are strengths oriented. Their members feel that they are called upon to do their best work every day. We wanted to spend more time helping our people use their strengths—in teams characterized by great clarity of purpose and expectations—and we wanted a quick way to collect reliable and differentiated performance data.

What kind of changes in Performance Management System were embraced by Deloitte?:
1. What we'll include in Deloitte's new system and what we won't - it will have:
   a. no cascading objectives,
   b. no once-a-year reviews, and
   c. no 360-degree-feedback tools.
2. We’ve arrived at a very different and much simpler design for managing people’s performance. Its hallmarks are speed, agility, one-size-fits-one, and constant learning, and it’s underpinned by a new way of collecting reliable performance data.
3. At the end of every project (or once every quarter for long-term projects) we will ask team leaders to respond to four future-focused statements about each team member. We’ve refined the wording of these statements through successive tests, and we know that at Deloitte they clearly highlight differences among individuals and reliably measure performance. Here are the four:
   a. Given what I know of this person’s performance, and if it were my money, I would award this person the highest possible compensation increase and bonus [measures overall performance and unique value to the organization on a five-point scale from “strongly agree” to “strongly disagree”].
   b. Given what I know of this person’s performance, I would always want him or her on my team [measures ability to work well with others on the same five-point scale].
   c. This person is at risk for low performance [identifies problems that might harm the customer or the team on a yes-or-no basis].
   d. This person is ready for promotion today [measures potential on a yes-or-no basis].
4. In effect, we are asking our team leaders what they would do with each team member rather than what they think of that individual.
5. Our design calls for every team leader to check in with each team member once a week. For us, these check-ins are not in addition to the work of a team leader; they are the work of a team leader. If you want people to talk about how to do their best work in the near future, they need to talk often.

Additional comments:
1. This is where we are today: We’ve defined three objectives at the root of performance management—to recognize, see, and fuel performance. We have three interlocking rituals to support them—the annual compensation decision, the quarterly or per-project performance snapshot, and the weekly check-in. And we’ve shifted from a batched focus on the past to a continual focus on the future, through regular evaluations and frequent check-ins.
2. Deloitte's previous performance management:
Objectives are set for each of our 65,000-plus people at the beginning of the year; after a project is finished, each person’s manager rates him or her on how well those objectives were met. The manager also comments on where the person did or didn’t excel. These evaluations are factored into a single year-end rating, arrived at in lengthy “consensus meetings” at which groups of “counselors” discuss hundreds of people in light of their peers.

Around what time-frame were the changes brought in:
Likely 2015


Saturday, September 5, 2015

How Microsoft (possibly) changed its Performance Management System?

Sources and Acknowledgements:
Most of the text below is adapted (directly and indirectly) from the URLs below, so all credit goes to the authors of those articles for the upcoming text.

Motivation behind Microsoft's change:
1. Driven by Microsoft HR chief Lisa Brummel.
2. Microsoft employees cited stack ranking as the most destructive process inside the software giant.

Shortcomings of the traditional system as observed by Microsoft:
1. For years Microsoft used a technique, stack ranking, that effectively encouraged workers to compete against each other rather than collaborate - at odds with the more collaborative Microsoft that CEO Steve Ballmer was trying to build ahead of his retirement.
2. Stack ranking is a process where each business unit's management team has to review employees' performance and rank a certain percentage of them as top performers, or as average or poorly performing. Former Microsoft employees have claimed it leads to colleagues competing with each other, especially when some employees in a group of individuals need to be given poor reviews to match the method.

What kind of changes in Performance Management System were embraced by Microsoft?:
1. More emphasis on teamwork and collaboration.  We’re getting more specific about how we think about successful performance and are focusing on three elements – not just the work you do on your own, but also how you leverage input and ideas from others, and what you contribute to others’ success – and how they add up to greater business impact.
2. More emphasis on employee growth and development. Through a process called “Connects” we are optimizing for more timely feedback and meaningful discussions to help employees learn in the moment, grow and drive great results.  These will be timed based on the rhythm of each part of our business, introducing more flexibility in how and when we discuss performance and development rather than following one timeline for the whole company.  Our business cycles have accelerated and our teams operate on different schedules, and the new approach will accommodate that.
3. No more curve. We will continue to invest in a generous rewards budget, but there will no longer be a pre-determined targeted distribution. Managers and leaders will have flexibility to allocate rewards in the manner that best reflects the performance of their teams and individuals, as long as they stay within their compensation budget.
4. No more ratings. This will let us focus on what matters – having a deeper understanding of the impact we’ve made and our opportunities to grow and improve.

Around what time-frame were the changes brought in:
Around the year 2013-14
