CASE 2: Development of a Consistent Key Account Strategy through Joint Customer Ratings and Strategies

Strategy phase: Analysis and development
Level: Company
Primary content: Facts, experiences, options, actions
Thinking types: Convergent
Main benefits of visualization: Social: eliciting and aligning cross-departmental ratings of key accounts
Visual format used: Elaboration technique: parameter ruler


Company Context and Strategic Situation:

This reinsurance company is a leading diversified multi-line reinsurer. It is active in over a hundred countries and employs approximately 800 people. Its total revenues amount to several billion dollars, derived from the US and European markets. The company has a diverse client base and is profitable. As part of a strategic initiative, the company has decided to invest more resources in the handling of its clients. Combining business unit knowledge about multi-line clients, or key accounts, is crucial to the company's ability to negotiate optimally with such clients (i.e., before renewals). To identify the true potential and importance of a multi-line client, the business units need to develop a common understanding of the client (through knowledge sharing), and they need to agree (efficiently and early) on generally acceptable terms regarding the client. This process of pre-agreement among the business units has not always worked smoothly in the past; in fact, sub-optimal knowledge sharing and pre-negotiation among business units has already led to the loss of substantial business.

A common client strategy is often difficult to reach due to another organizational issue: the flexible part of the reward system (e.g., the financial incentives given to client managers) means that underwriters (who assess the risks to be insured) are primarily interested in their business lines, whereas account managers mainly care about their geographical region. As a result, underwriters want to underwrite contracts with a client only when a certain profit margin is possible within their line. Account managers, by contrast, adopt a more global view of the client. Given these premises, the communication of knowledge among the involved parties becomes a key issue. It is described in the next section.
The following (anonymized) screenshot shows how, based on existing customer data (which the client managers brought to the joint workshop in Excel sheets), information was visually integrated in real time (via a laptop connected to a beamer) and then applied to a common rating of each client through a visual ruler. This joint rating laid the groundwork for the subsequent client strategy. The documented strategy decision was then captured visually through the same ruler application and appended with explanatory comments to record the rationale of the decision taken (see the subsequent screenshot). In this way, the gathered client managers could not only aggregate client data and assessments (opinions) into common decisions, but also visualize client profiles to represent their understanding of and common insight into a customer on a company-wide level.

Figure 1: A real-time joint client rating based on the ruler interface

Figure 2: Example of a jointly devised client strategy


Method Description: Parameter Ruler

The parameter ruler application is split into two sections. The left column designates assessment criteria or strategy dimensions that are rated or completed on the right-hand side (the sliders can be moved, edited, and annotated as well). The visual metaphor of a slide rule is employed to help managers establish a common rating schema for an issue, competitor, client, or supplier. To do this, the facilitator can ask the group which of the mapped criteria is the most important and move that criterion up accordingly. He or she can also ask for specific scales for each criterion. These scales are then entered into the empty fields of each slider. Each entry can be further defined and described through the comment box associated with each field (not visible in the screenshots). In this way, a group can have a detailed discussion of each criterion (and its weight) while also seeing the big picture, i.e., the overall profile of the current rating. If the ruler is used in combination with a Smartboard, the horizontal sliders can be moved simply by touching a slider and moving it left or right. In a virtual ruler session via Internet application sharing, the facilitator can allow such movements for all or for only selected meeting participants. The ruler is loosely based on Zwicky's (1969) morphological box and can also be used as a scenario tool; in that case, the fields on the left designate scenario sections, such as the political, social, and economic environment.
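The ruler's structure described above (named criteria on the left, a bounded scale with a movable slider and an optional comment on the right, ordered top-down by importance) can be sketched as a simple data model. This is a hypothetical illustration only; the case does not describe the actual application's implementation, and the criterion names and scale labels below are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Criterion:
    """One row of the parameter ruler: a criterion with its scale and slider."""
    name: str          # e.g. "Profit potential" (hypothetical label)
    scale: list        # labels entered into the slider's empty fields
    position: int = 0  # current slider position (index into the scale)
    comment: str = ""  # rationale captured in the field's comment box

    def move(self, position: int) -> None:
        """Move the slider left or right within the scale's bounds."""
        if not 0 <= position < len(self.scale):
            raise ValueError("slider position outside the scale")
        self.position = position

    @property
    def value(self):
        return self.scale[self.position]

@dataclass
class ParameterRuler:
    """A client profile: criteria ordered top-down by importance."""
    client: str
    criteria: list = field(default_factory=list)

    def promote(self, name: str) -> None:
        """Move the criterion the group deems most important to the top."""
        idx = next(i for i, c in enumerate(self.criteria) if c.name == name)
        self.criteria.insert(0, self.criteria.pop(idx))

    def profile(self) -> dict:
        """The 'big picture': the overall profile of the current rating."""
        return {c.name: c.value for c in self.criteria}

# Usage sketch with invented criteria and scales:
ruler = ParameterRuler("Client A", [
    Criterion("Profit potential", ["low", "medium", "high"]),
    Criterion("Relationship quality", ["weak", "solid", "strong"]),
])
ruler.criteria[0].move(2)              # group agrees on "high"
ruler.promote("Relationship quality")  # deemed the most important criterion
```

The key design point mirrored here is that each slider keeps both a discrete position on a shared scale and a free-text comment, so the detailed discussion and the overall profile live in one artefact.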



The main advantage of the parameter ruler is that it takes people away from their individual data and opinions and focuses them on common ratings, and thus on a collective perspective and synthesis that allows systematic, joint comparisons among clients. In this manner, it fosters the integration and alignment of knowledge through a jointly devised and adapted artefact: the final ruler profile. A disadvantage of this method, however, is that shy participants may not get the chance to voice their opposition and make their arguments known and visible. As in Case 1, groupthink should be actively avoided by a facilitator who seeks to involve all participants in the visual strategy dialogue. An alternative is to ask for individual ratings before the collective ratings are developed.


Case Learnings:

Interactive, real-time visualization can be used effectively to integrate strategic knowledge, combine diverse perspectives, and apply them to strategy development. The visualization tool, however, has to be used by an experienced facilitator who ensures that each participant's knowledge is adequately represented and that the resulting visualization is not a consensus illusion. This can be achieved by capturing the participants' individual ratings upfront.
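Capturing individual ratings upfront can be sketched as a small screening step: each participant pre-rates every criterion on the shared scale, and a large spread across the individual scores flags hidden dissent that the facilitator should discuss openly rather than average away. The spread threshold and the sample data below are illustrative assumptions, not part of the case.

```python
from statistics import mean, pstdev

def screen_ratings(ratings: dict, spread_threshold: float = 1.0) -> dict:
    """Summarize individual pre-ratings per criterion and flag dissent.

    `ratings` maps a criterion name to the list of individual scores
    (e.g. slider positions collected before the joint session). A
    population standard deviation above `spread_threshold` (an assumed
    cut-off) marks the criterion for open discussion.
    """
    summary = {}
    for criterion, scores in ratings.items():
        spread = pstdev(scores)
        summary[criterion] = {
            "mean": mean(scores),
            "spread": spread,
            "discuss": spread > spread_threshold,
        }
    return summary

# Illustrative pre-workshop ratings from four client managers
pre_ratings = {
    "Profit potential": [4, 4, 5, 4],      # broad agreement
    "Relationship quality": [1, 5, 2, 5],  # hidden dissent
}
result = screen_ratings(pre_ratings)
```

Surfacing the flagged criteria first gives quieter participants a documented stake in the discussion before the collective slider position is fixed.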

Last modified: Friday, 2 February 2007, 5:50 PM