Earlier, I briefly described the decision-making process and decisional and information-retrieval behavior. In addition to information-retrieval behavior, I will now examine the ways in which people can make mistakes when processing and interpreting information. It is important that business intelligence software keeps track of (and makes visible) which data and attributes a manager has been monitoring (Borgman, 1994). An important difference between people at the operational level and senior managers at the strategic level is that the latter demand more varied and non-routine information in order to make decisions. Such information is therefore more difficult to interpret and internalize.
Faulty decision making
Bias is a considerable contributory factor in faulty decision making, and it occurs when people are involved in acquiring and processing information. Borgman provides a clear overview of the most common biases relevant to information retrieval (Borgman, 1994). In the table below, I examine each bias and explain whether a bias-prevention mechanism could support the manager and, if so, how it should be implemented. In general, this can be accomplished by a user interface that keeps track of which information the manager has been viewing, compared against the decisions and actions undertaken.
Repeat information at different places
Moreover, a general prevention mechanism is to (arbitrarily) repeat information at different places in the information system, in different presentation forms. In addition, detailed information that has less impact on and importance for realizing the goals can be displayed in a form (e.g. a smaller font) that the user will perceive as less important. Finally, user interfaces should be created that support searching for evidence that undermines a hypothesis, rather than the other way round (Isenberg, 1984). A given hypothesis can never be fully validated, since not all occurrences can be checked; hypotheses can only be falsified (Popper, 1978). Interfaces built around this principle of 'counter-evidence' can prevent biases from arising, although such a mechanism would require a complete research study of its own.
The most common biases
Nevertheless, some high-level suggestions and recommendations are given below on how each bias could be prevented.
BIAS | PREVENTION MECHANISM
Hypothesis fixation: occurs when people persist in maintaining a hypothesis that has been conclusively shown to be false. Information that does not fit the consistent profile is downplayed or disregarded. | Difficult to prevent unless the system knows what the manager's hypothesis is. The system should support hypothesis input in an easy way; at a higher level, the hypothesis may already be known to the software through the modeled causal relations.
Confirmation bias: occurs when someone searches only for instances expected to be positive under the current hypothesis (VandenBosch, 1997). If a manager believes one product performs badly and is responsible for a loss, only that product is queried; if it does perform badly, the initial hypothesis is considered correct, even though other products may account for much more of the loss. | Can be prevented by also showing other products from the same product group. This becomes harder when products from another product group are also performing badly; if all products perform equally badly, they can be shown in one list. Comparable products can be selected because they generate the same amount of business or show the same development over time.
Perceived confirmation bias: produces errors similar to the ordinary confirmation bias. Whereas the confirmation bias looks only at instances expected to confirm the hypothesis, the perceived confirmation bias focuses only on features relevant to the initial hypothesis and not on features appropriate to the alternatives. | Can be prevented by a rule under which the software tracks whether the same features are retrieved from the alternative data sets.
Interpretation bias: emerges when a group of people is presented with the same information while each member favors a different initial opinion. After interpreting the information, the opinions do not change; in fact, the initial opinions become stronger. | Difficult to track and prevent.
Logical fallacy: a deviation from truth that occurs when people, unable to recall all details, fill in missing information and create a 'logical' reconstruction that may not be accurate. | Can sometimes be prevented by counteracting one of the Gestalt laws (Preece, 1994), the Law of Closure, which states that missing parts are automatically filled in.
Concrete information bias: occurs when concrete information overrules abstract information (summaries, statistical base rates). A customer-service manager who is contacted on the same day by two different customers, both angry about delivery delays, is less likely to trust statistics showing that the waiting period is at an all-time low. | Simple workaround: show the summarized information first, then the detailed information. Complex workaround: if the system knows which information the manager has viewed and which actions were taken, it can use the causal relations modeled in the business intelligence software to check whether a decision tackles the right problem.
Reliance only on positive hits: the assumption of a correlation between two events based only on the number of cases in which both occur; for example, concluding from the number of database 'hits' for the product/loss combination alone that event 1 (product loss) is interrelated with event 2 (company loss). | If the manager draws conclusions from perceived correlations and the system allows managers to input those conclusions, the system can compare them with other cases in the same domain.
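The peer-comparison rule against confirmation bias can be sketched as follows. The product records, the product-group field, and the 20% revenue band are illustrative assumptions, not part of any specific business intelligence product.

```python
# Illustrative sketch: when the manager queries one product, also return
# peers from the same product group with comparable revenue, so a query
# cannot silently confirm a single-product hypothesis.

PRODUCTS = [  # hypothetical sample data
    {"name": "A", "group": "widgets", "revenue": 100, "loss": 40},
    {"name": "B", "group": "widgets", "revenue": 95,  "loss": 55},
    {"name": "C", "group": "gadgets", "revenue": 90,  "loss": 60},
    {"name": "D", "group": "widgets", "revenue": 10,  "loss": 2},
]

def query_with_peers(name, products, revenue_band=0.2):
    """Return the queried product plus peers from the same group whose
    revenue lies within +/- revenue_band of the queried product's."""
    target = next(p for p in products if p["name"] == name)
    lo = target["revenue"] * (1 - revenue_band)
    hi = target["revenue"] * (1 + revenue_band)
    peers = [p for p in products
             if p["name"] != name
             and p["group"] == target["group"]
             and lo <= p["revenue"] <= hi]
    # Peers with the worst results first, so an even bigger loss-maker
    # cannot be overlooked.
    return [target] + sorted(peers, key=lambda p: p["loss"], reverse=True)

result = query_with_peers("A", PRODUCTS)
```

Here a query for product A also surfaces product B, whose loss is larger, instead of letting the manager's "A is the problem" hypothesis go unchallenged.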
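The 'reliance only on positive hits' row can be made concrete with a small numeric sketch. The figures are invented for illustration; the point is that the number of cases in which both events occur says nothing by itself, since all four cells of the contingency table are needed.

```python
import math

# Invented figures: a 2x2 contingency table for two events.
# event 1 = a product made a loss, event 2 = the company made a loss.
n11 = 80    # product loss AND company loss  (the "positive hits")
n10 = 20    # product loss, no company loss
n01 = 320   # company loss, no product loss
n00 = 80    # neither

# Looking only at the 80 positive hits suggests a strong link, yet the
# company-loss rate is the same whether or not the product made a loss:
rate_given_e1     = n11 / (n11 + n10)   # 0.8
rate_given_not_e1 = n01 / (n01 + n00)   # 0.8

# The phi coefficient uses all four cells; for this data it is zero.
phi = (n11 * n00 - n10 * n01) / math.sqrt(
    (n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))
```

Despite 80 positive hits, the two events are statistically independent in this data, which is exactly the comparison a system could run when a manager inputs a conclusion based on hit counts alone.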
Fill the screen with less information
As said before, for all these biases it is important to know which item or screen the manager has been looking at, for how long, and which information he focused on. When a screen is filled with a great deal of information, it is difficult to know exactly which information has been studied and which has not. A simple, though not always satisfactory, solution is to put less information on the screen, so that the business intelligence system can be more certain which information has been seen.
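A minimal sketch of such view tracking, assuming a hypothetical log of (item, seconds viewed) events and an arbitrary dwell-time threshold; a real BI product would instrument this at the user-interface layer:

```python
from collections import defaultdict

class ViewTracker:
    """Records which information items a manager has viewed and for how
    long, so later decisions can be matched against what was seen."""

    def __init__(self):
        self.seconds_viewed = defaultdict(float)

    def record(self, item, seconds):
        self.seconds_viewed[item] += seconds

    def seen(self, item, min_seconds=2.0):
        # Treat an item as "seen" only above a dwell-time threshold,
        # since brief exposure on a crowded screen means little.
        return self.seconds_viewed[item] >= min_seconds

tracker = ViewTracker()
tracker.record("sales_by_region", 12.5)
tracker.record("delivery_delays", 0.8)
```

With fewer items on screen, each recorded dwell time says more about what was actually studied, which is the point made above.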
Extra functions to prevent biases
Besides a tracking system, a few extra functions are required in the business intelligence software in order to prevent biases:
- input of hypotheses;
- input of conclusions;
- the system should know which information is not available (for instance, because of the tremendous cost of producing it) but is nevertheless part of the total.
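These three functions could be modeled along the following lines. The class and field names, and the simple completeness flag, are illustrative assumptions rather than an existing product's data model:

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    """A manager's stated hypothesis, so the system can later search
    for counter-evidence instead of only confirming instances."""
    statement: str
    related_items: list = field(default_factory=list)

@dataclass
class Conclusion:
    """A conclusion the manager has drawn, to be compared against
    other cases in the same domain."""
    statement: str
    based_on: list = field(default_factory=list)

@dataclass
class DataSet:
    """Marks whether a data set covers the whole population, so the
    system knows when relevant information is simply not available."""
    name: str
    complete: bool = True

h = Hypothesis("Product A causes the company loss", ["product_a_sales"])
d = DataSet("competitor_prices", complete=False)
```

An incomplete data set flagged this way lets the interface warn the manager that a conclusion rests on only part of the total.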
Interface is much more than just the outside
Once more, we see that the interface is more than merely the outside of the business intelligence system. Matching a hypothesis against other cases can be perceived as a function of the information system: an information function that collects data and transforms it into useful information for the manager. Another viewpoint, however, is to see the system as a collection of information items that must be communicated effectively to the manager so that proper decisions can be made. The actual calculations that transform data into information are a function, but the decision mechanism (the trigger) that starts these functions is 'hidden' in the interface to prevent biases, enabling effective communication by telling the manager the whole truth. A half-truth is a whole lie.