“In computer science, and specifically the branches of knowledge engineering and artificial intelligence, an inference engine is a computer program that tries to derive answers from a knowledge base. It is the "brain" that expert systems use to reason about the information in the knowledge base for the ultimate purpose of formulating new conclusions.” Wikipedia
Planning a journey requires at least some knowledge of the start position. Much the same observation applies to change initiatives or interventions intended to develop an organisation and its performance. The difficulties in establishing the start point – the 'current state' – of an organisational development journey are formidable. Here are some of the key challenges:
- If face-to-face interviewing is the main tactic used to collect data about the organisation's 'current state', the time and resources required can be considerable.
- The long time frames required to collect information generally mean that the 'current state' defined through the data collection process is no longer 'current' by the time all the required data are available and analysis is complete.
- If softer, attitude-, process- and relationship-orientated data are collected by interview by one person, then even exhaustive note taking is unlikely to prevent important insights being lost, as connecting relationships between 'bits' of information can be missed during analysis.
- If more than one person is involved, time frames may be shorter, but the risk of connections being missed increases rapidly with the size of the group employed.
- The human brain is not a computer's hard disk. It does not faithfully record input data without interpretation and leave it unchanged until someone wishes to access it. All people process incoming data through a wide variety of filters, including those that are experiential, emotional and cognitive.
- When the interviewer asks questions, data filtering and interpretation apply as the interviewee hears the questions – and again as responses are offered and heard by the interviewer. All this adds up to a high probability, if not a certainty, of data contamination.
- Another important issue here is that of limited listening skills. The ability to 'hear flags' that are offered by respondents is often limited. 'Flags' are expressions rather like a glimpse of the tip of a flag flying over the top of a hill to a military observer – maybe there is an enemy battalion, just the other side of that hill. Perhaps a little reconnaissance would be a good idea, before charging over the top!
- Research suggests that many people miss such flags 20 or more times per hour in typical business conversations. Consider, for example, a respondent comment such as: “Management style is very good here – we don't feel pressurised at all. Motivation is also pretty good, with good salaries, fringe benefits and a generous bonus scheme.” There are at least four flags flying here that, like all flags, demand probing to get at the hidden messages.
- Add to these issues the political considerations that often apply, in many hierarchical organisations, that govern what is and what is not 'OK to say', and the difficulties are compounded yet again.
Technology, indirect questions and inference engines
Indirect, observational and non-judgemental questions get over most of the problems of data contamination. Respondents are asked only to describe what is observed as they go about doing their jobs. Coherence tests are applied to the data generated, and the patterns of coherence and incoherence that emerge are the source of insights into the nature of the organisation. This type of analysis is time consuming and still risks missing important insights, but technology has the answer.
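One way to picture a coherence test is as a set of rules pairing observable items that ought to logically agree, flagging pairs whose answers diverge. The sketch below is purely illustrative – the item names, rating scale and tolerances are all invented for the example, not taken from any particular survey instrument:

```python
# Hypothetical coherence test: respondents rate observable statements on a
# 1-5 scale; each rule pairs two items that should logically move together,
# with a tolerance beyond which the pair is flagged as 'incoherent'.
# All item names and thresholds are illustrative assumptions.
COHERENCE_RULES = [
    ("meetings_start_on_time", "deadlines_are_met", 2),
    ("managers_ask_for_input", "staff_raise_problems_early", 2),
]

def coherence_flags(responses):
    """Return the rule pairs whose answers diverge beyond tolerance."""
    flags = []
    for item_a, item_b, tolerance in COHERENCE_RULES:
        gap = abs(responses[item_a] - responses[item_b])
        if gap > tolerance:
            flags.append((item_a, item_b, gap))
    return flags

respondent = {
    "meetings_start_on_time": 5,
    "deadlines_are_met": 1,       # diverges sharply from the item above
    "managers_ask_for_input": 4,
    "staff_raise_problems_early": 3,
}
print(coherence_flags(respondent))
# → [('meetings_start_on_time', 'deadlines_are_met', 4)]
```

The flagged pair is not itself a conclusion – it is the kind of incoherence pattern that, aggregated across a sample, becomes a source of insight into the organisation.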
Expert systems have been around for many years, and are one example of artificial intelligence. Their use for credit rating and diagnosis of health issues is common.
Inference engines extend basic expert systems with several additional layers of reasoning built in, generally expressed as 'rules' and often in many-to-many relationships. Moreover, inference engines generally operate by applying Bayesian probability theory, and produce multiple, complex outputs.
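The Bayesian reasoning involved can be pictured with a toy example: each rule links an observation to a hypothesis about the organisation, and each observation updates the engine's belief via Bayes' theorem. This is a minimal sketch of the principle, not a model of any specific product; the hypothesis, observations and probabilities are all invented for illustration:

```python
# Toy Bayesian update: belief in a hypothesis H about the organisation is
# revised as each piece of observational evidence arrives.

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior P(H | evidence) from a single piece of evidence."""
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / denominator

# Hypothesis H: 'decision-making is over-centralised'.
# Each rule: (observation, P(obs | H), P(obs | not H)) - all figures invented.
RULES = [
    ("small purchases need senior sign-off", 0.9, 0.3),
    ("meetings wait for the most senior person", 0.8, 0.4),
]

belief = 0.5  # neutral prior
for observation, p_given_h, p_given_not_h in RULES:
    belief = bayes_update(belief, p_given_h, p_given_not_h)
    print(f"after '{observation}': P(H) = {belief:.2f}")
# → after 'small purchases need senior sign-off': P(H) = 0.75
# → after 'meetings wait for the most senior person': P(H) = 0.86
```

A real engine chains thousands of such rules, with many observations bearing on many hypotheses at once – hence the many-to-many relationships mentioned above.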
Inference engines are themselves generally complex. Small engines may have as few as 2,000 or 3,000 rules; more complex products may need as many as 100,000. This very complexity leads to what are regarded as 'robust' systems: 'errors' in one or two rules make little if any difference to outputs.
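The robustness claim can be illustrated numerically: when many weak rules are combined (here in log-odds form, assuming independent evidence), corrupting a single rule barely shifts the result. The rule count and likelihood ratios below are invented purely to demonstrate the effect:

```python
import math

# Why many-rule engines are 'robust' (illustrative numbers only): combine
# 20 weak pieces of evidence in log-odds form, then corrupt one rule by
# reversing its likelihood ratio, and compare the two posteriors.

def posterior(likelihood_ratios, prior=0.5):
    """P(H | all evidence), combining independent evidence via log-odds."""
    log_odds = math.log(prior / (1 - prior))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)
    return 1 / (1 + math.exp(-log_odds))

clean = [1.1] * 20                 # 20 rules, each mildly supporting H
corrupted = [1.1] * 19 + [1 / 1.1] # one rule 'in error', pointing the other way

print(round(posterior(clean), 3))      # ≈ 0.871
print(round(posterior(corrupted), 3))  # ≈ 0.848
```

One faulty rule moves the posterior by only a couple of percentage points – the many correct rules dominate, which is the sense in which such systems are 'robust'.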
Inference engines take the data generated by indirect, observational questions and reason (infer) a series of outputs from those data.
Combining the use of inference engines with indirect, observational questions and coherence tests, and using Internet-based data collection processes, handles most if not all of the difficulties described above. Moreover, larger samples can be polled than would generally be regarded as economic in the case of manual data collection. This means that discovering 'current state' conditions is both fast and rigorous at the same time.
Inference engines also provide the opportunity to go beyond the insights generated through more traditional staff surveys. With few exceptions, the latter identify symptoms of problems rather than their causes.
Where multiple, possible causes of any one problem may exist, tools based on inference engines offer two possibilities. The first is the usual drill-down querying capabilities to locate differences in patterns throughout the whole sample. Combined with graphical reports, these are very powerful for stimulating open conversation about effects and their causes – and developmental actions to address them.
The second is the development of diagnostic tools that enable facilitated conversations to be targeted directly on to the possible root causes of organisational problems. As a generalisation, these will include structural design decisions, process design decisions and the behaviour of individual managers. Implicitly covered in these are the technical, political and cultural systems, both hard and soft, that are operating.
(There is no implication in these notes that one-to-one interviews or workshops of some sort should not be used, as part of discovery processes. If technology provides enhanced 'reach' to the discovery process, face-to-face conversations will always add 'richness'. Just watch out for data contamination!)