Authority and Hierarchy: Cultivating the Rewards while Minimizing the Risks
Bidhan Parmar
Wednesday, December 30, 2009
Hierarchy and authority are basic features of human social groups. Families, religious groups, governments, and businesses all rely on a form of social interaction in which subordinates defer to others with more power, information, and control. At their best, authority and hierarchy allow people to coordinate their efforts quickly and efficiently and to carry out tasks that no single individual could accomplish alone. For example, in consulting firms, new consultants and analysts defer to the expertise of the project manager, since she has more experience and knowledge about how to complete the project successfully. Cognition and action in hierarchical groups become specialized and unequally distributed, so that out of many lesser individuals is born a more capable organization.

The use of authority to coordinate action is not without drawbacks. By subordinating individual freedoms to group norms and autocratic controls, hierarchies can breed bureaucracy and lower worker empowerment, ultimately decreasing the group's capacity for innovation and adaptation. The most dangerous consequence of hierarchies, however, is the blurring of responsibility that can occur within them. When many actors come together to produce a joint outcome, it becomes very difficult to answer the question, “Who is responsible for what?” Consider the development of a new product or service: in companies, a new product typically results from the behavior of many different actors, and it is hard to tease apart which one person brought it about. Knowing each individual’s contribution can help sort out responsibility, but in hierarchies the thinking and the acting are separated and carried out by different people. So it is hard to say who is really responsible: those who had the idea, or those who made it happen? Sorting out responsibility matters for executives who want to know where to invest their limited funds in order to increase the company’s capabilities, and what exactly to fix when something goes wrong.

For decades, social psychologists have been interested in the effects of authority and hierarchy on human well-being. In the early 1960s, a young social psychologist, Stanley Milgram, conducted a set of experiments at Yale University that forever changed the field of social psychology and our understanding of authority and obedience. Upon arrival at the laboratory, each of the experiment’s participants was told that they were about to begin a study on the role of punishment in learning. They were to read word pairs through a microphone to a learner, who was strapped to a chair in another room. The learner would then try to remember the word pairs; when prompted with the first word of a pair, he would try to recall the second. If he got it right, they would continue to the next word pair; if he got it wrong, he was to receive an electric shock. The participants were instructed to administer the shocks in increasing voltages, from 15 up to 450 volts.

In reality, the learner was not shocked; he was a confederate of the experiment, which was not really about learning and memory at all. Milgram and his research team were interested in whether people would obey orders to continue delivering shocks in the face of the learner’s protests to stop. As the test continued, the learner/confederate would grow increasingly uncomfortable and eventually demand to be let out of the experiment, at which point the experimenter in charge (who was also an actor) would simply say, “Please continue,” as if the protesting were normal and nothing to worry about.

To Milgram’s surprise, roughly 65 percent of participants obeyed the experimenter and continued the procedure to the end of the experiment, even as the learner screamed to be let out.

For years, social psychologists have been intrigued by the results of the Milgram experiments. No demographic variables such as age, gender, education level, political affiliation, or religion have been shown to predict whether someone will obey or disobey a malevolent authority. The experiments have been replicated in many countries with similar results.

Almost 50 years later, the dynamics of obedience continue to play out in our militaries, governments, and business organizations. Today, corporations are becoming more global and complex, and increasing numbers of stakeholders are affected by the actions of any particular business. Flexibility, speed, and agility are all necessary to compete in today’s dynamic marketplace; therefore, managers need to find ways to maximize the benefits of hierarchy while minimizing the moral risks.

In order to make our organizations resilient to moral failures caused by blind obedience, executives and managers need to better understand the strengths and weaknesses of authority and how its use can change the way people think and behave.

The first step is to better understand the circumstances under which authority is most useful. Authority is most effective in stable environments where the authority’s expertise closely matches the nature of the problem or task that the subordinate faces. In dynamic environments, where things change faster than you can make sense of them, authority is less effective because the authority’s expertise may not apply to the evolving situation. In these kinds of situations, more egalitarian modes of decision making are appropriate because managers can share information and constructively debate better strategies to cope with change.

Second, when managers don’t know who is responsible for a particular outcome, it is harder to fix problems and invest resources. When moral failures occur, most companies turn to some form of ethics and compliance training. Traditional training in business ethics focuses on individual decision making and typically ignores the larger organizational context of making moral choices. To make organizations more resilient to moral failures, executives need to understand and shape the way responsibility is dispersed throughout the organization. “What-if” scenarios can help employees understand who the relevant decision makers are and what process they will use to make choices, even before unexpected situations arise. Managers and employees can work together to better understand the most likely moral failures in their organization and jointly create solutions.

Finally, executives need to understand the role of the fundamental attribution error in stakeholder perceptions. The fundamental attribution error is a pattern of thinking in which individuals attribute others’ behavior to character or traits while attributing their own behavior to situational forces. For example, if you see a person spill a cup of coffee, you are more likely to think that the person is uncoordinated or clumsy, an attribution about character; if you spill a cup of coffee yourself, you are likely to attribute it to situational factors, such as the cup being hot or your tripping over a shoelace. In short, the fundamental attribution error results from differences in the amount and type of information available to make sense of a behavior. Managers need to think not only about how they understand the actions and responsibility of their own company, but also about how other stakeholders with less information will view them, and thus when and how those stakeholders will hold the company responsible.

Careful attention to managing responsibility in organizations can allow companies to benefit from the use of authority while mitigating the risk of moral failures.

Bidhan Parmar, The Darden School of Business Administration, The University of Virginia