Friday, June 01, 2012

Representing different combinations of causal conditions

This week I attended a workshop on QCA (Qualitative Comparative Analysis). QCA is a useful approach to analysing possible causality in small-n situations, i.e. where there are not many cases to examine (e.g. villages or districts), and where perhaps only categorical data is available. Equally importantly, QCA enables the identification of different configurations of conditions associated with observed outcomes in a set of cases. In that respect it shares the ambitions of the school of Realist Evaluation (Pawson and Tilley). The downside is that QCA findings are expressed in Boolean logic, which is not exactly user friendly. For example, here is the result of one analysis:

Clue: in Boolean notation the symbol "+" means OR and the symbol "*" means AND. The letters in upper case refer to conditions present and the letters in lower case refer to conditions absent.
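As a minimal sketch of how this notation can be read mechanically (the conditions A, B and C here are hypothetical, not taken from the analysis above):

```python
# Sketch of QCA Boolean notation: upper case = condition present,
# lower case = condition absent, "*" = AND, "+" = OR.
# Hypothetical solution: A*b + C (A present AND B absent, OR C present).

def term_matches(term, case):
    """True if every letter in a '*'-joined term matches the case:
    upper case letters must be present, lower case letters absent."""
    for letter in term.split("*"):
        required = letter.isupper()
        if case[letter.upper()] != required:
            return False
    return True

def solution_covers(solution, case):
    """True if any '+'-separated term in the solution matches the case."""
    return any(term_matches(t.strip(), case) for t in solution.split("+"))

case = {"A": True, "B": False, "C": False}
print(solution_covers("A*b + C", case))  # True: A present and B absent
```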

Decision trees

In parallel I have been reading about and testing some data mining methods, especially classification trees (see recent blog). These are also able to identify multiple configurations of causal conditions. In addition they produce user friendly results in the form of tree diagrams, which are easy to read and understand. The same kind of decision trees can be used to represent the results of QCA analyses. In fact they can be used in a wide range of ways, including more participatory and ethnographic forms of inquiry (see Ethnographic Decision Models). From an evaluation perspective I think Decision Trees could be a very valuable tool, one which could help us answer the frequently asked question of "what works well in what circumstances". This is because they can provide summary statements of the various combinations of conditions that lead to desired outcomes in complex circumstances.

In the first set of graphics below I have shown how Decision Trees can represent four different types of causal combination. These relate to whether an observed condition can be seen as a Necessary or Sufficient cause. The graphic is followed by four fictional data sets, each of which contains one of the causal combinations shown in the graphic (highlighted in yellow). Double click on the graphic to make it easier to read.
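These Necessary/Sufficient distinctions can also be checked mechanically against a case table. A minimal sketch, using hypothetical cases rather than the fictional data sets in the graphic:

```python
# Sketch: classify a condition as necessary and/or sufficient for an
# outcome, given a list of cases. The data below is hypothetical.

def is_necessary(cases, condition, outcome="outcome"):
    """Necessary: the outcome never occurs without the condition."""
    return all(c[condition] for c in cases if c[outcome])

def is_sufficient(cases, condition, outcome="outcome"):
    """Sufficient: the outcome always occurs when the condition is present."""
    return all(c[outcome] for c in cases if c[condition])

cases = [
    {"quota": True,  "outcome": True},
    {"quota": True,  "outcome": True},
    {"quota": False, "outcome": False},
    {"quota": True,  "outcome": False},  # condition present, no outcome
]
print(is_necessary(cases, "quota"))   # True: no outcome without the condition
print(is_sufficient(cases, "quota"))  # False: one case has it but no outcome
```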

Implications for evaluation work

There has been a lot of discussion amongst evaluators of development projects about whether it is appropriate to talk about causal attribution versus causal contribution, and in the latter case, how causal contribution can be best described. Descriptions of particular conditions in terms of whether they are necessary and/or sufficient are one way of doing so, especially when made visible in particular Decision Tree structures.

When necessary or sufficient conditions (1,2,3) are believed to be present this should provide some focus for evaluation efforts, enabling the direction of scarce evaluation attention towards the most vulnerable part of an explanatory model.

It has been argued that the most common causal configuration is 4., where an intervention is a necessary part of a package but that package is not essential, and that other packages can also generate the same results. If so, this suggests the need for some modesty by development agencies in their claims about making an impact and some generosity of views about the importance of other influences.

How do decision trees relate to Theories of Change?


The comparator here is the kind of diagrammatic Theories of Change seen in Funnell and Rogers (2011) Purposeful Program Theory. A common feature of most of their examples is that they show a sequence of events over time, leading to an expected outcome. We could call them causal pathway ToC. In my view these would include LogFrames, although some people don't consider these as embodying a ToC.

I would argue that Decision Trees can also describe a ToC, but there are significant differences:

1. Decision Trees tend to describe multiple configurations that as a set can explain all observed outcomes. ToC, especially LogFrames, tend to describe a single configuration that will lead to one desired outcome. In doing so each part of the configuration appears to be necessary but not sufficient for the expected outcome.

2. Decision Trees describe configurations but not sequences. It is important to note that in Decision Trees there is no causal direction implied by relative positions in the branch structure, unlike in a ToC.  The sequence of conditions associated together along a branch could in theory be in any order. What matters is what conditions are associated with what.

3. Decision Tree models are testable. Unlike most causal pathway ToC (at least those that I know of), Decision Trees can be generated directly from one data set (i.e. a training set), and they can then be tested against another data set (i.e. test data) containing new cases with the same kinds of attributes and outcomes. These tests examine not only whether the predicted outcome happened when the expected attributes were present, but also whether the predicted outcome did not happen when the expected attributes were absent.

Causal pathway ToC are testable, by examining whether their implementation leads to the achievement of target values on performance indicators. The opposite possibility is also testable in principle, by observing if expected outcomes were absent when events in the causal pathway did not take place, via the use of a control group. However, compared to Decision Tree models, this kind of testing is much more laborious, and requires considerable upfront preparation.

Despite the differences there is also some potential inter-operability between Decision Tree models and causal pathway ToC:

1. An expected causal sequence of events in a ToC (e.g. in a LogFrame) could be represented in a Decision Tree, as a collection of attributes all located in one branch. Looking in the reverse direction, different branches of Decision Trees can be seen as constituents of separate causal pathways in ToCs that have a network rather than a chain structure.

2. While Logframes may be suitable for individual projects, Decision Tree models may be suitable for portfolios of projects, capturing the difference in contexts and interventions that are involved in different projects.

3. Decision trees have some compatibility with Realist Evaluators' ways of thinking about change. The Realist Evaluation formulation of "Context + Mechanism = Outcome" type configurations can easily be represented in the above tables by grouping conditions into two broad categories, Context conditions and Mechanism conditions, alongside the Outcome condition.
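One hedged sketch of what this grouping might look like in practice (the condition names and the category assignments are illustrative only):

```python
# Sketch: tag each condition in a case table as a Context or Mechanism
# condition, so a branch can be read as a "Context + Mechanism = Outcome"
# configuration. Condition names and categories are invented for illustration.

CONTEXT = {"post_conflict", "womens_status_high"}
MECHANISM = {"quota"}

def cmo(case):
    """Split a case's present conditions into Context and Mechanism parts,
    returning them with the observed outcome."""
    present = {k for k, v in case.items() if v and k != "outcome"}
    return sorted(present & CONTEXT), sorted(present & MECHANISM), case["outcome"]

case = {"post_conflict": True, "womens_status_high": False,
        "quota": True, "outcome": True}
print(cmo(case))  # (['post_conflict'], ['quota'], True)
```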

Decision tree analysis of QCA data set


 Decision Tree algorithms can be used as a means of triangulating the results generated by other methods such as QCA. 

The following table of data can be found in a paper on "Women’s Representation in Parliament: A Qualitative Comparative Analysis" by Krook (2010).

The values in this table were then converted to binary values, using various cut-off values explained in the paper, resulting in the table below.

In Krook's paper this data was analysed using QCA. I have taken the same data set and used RapidMiner to generate the following Decision Tree, which enables you to find all cases where women's participation in national parliament was high (defined as above 17%).

The same result was found via the QCA analysis:

Translated, this means:

IF Quotas AND Post-conflict situation THEN high % women in Parliament  [= far right branch]
IF Women's status is high AND Post-conflict situation THEN high % women in Parliament [=3rd from left branch]
IF Quotas AND NO post-conflict situation AND women's status is high THEN high % women in Parliament [= 3rd from right branch]
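Written out as explicit code, the three branches might look like this (the condition names are shorthand I have invented for those in Krook's table; values are binary, True/False):

```python
# The three branches above as explicit rules. Condition names are invented
# shorthand for those in Krook's data set.

def predicts_high_representation(case):
    quotas = case["quotas"]
    post_conflict = case["post_conflict"]
    status_high = case["womens_status_high"]
    return (
        (quotas and post_conflict)                          # far right branch
        or (status_high and post_conflict)                  # 3rd from left
        or (quotas and not post_conflict and status_high)   # 3rd from right
    )

print(predicts_high_representation(
    {"quotas": True, "post_conflict": False, "womens_status_high": True}))
# True: the third rule fires
```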

Assessing the performance of decision trees


Relative to causal pathway ToC, there are many systematic ways to assess the performance of Decision Trees.

 1. When used for description purposes

There are two useful measures of the performance of decision trees when they have been developed as summary representations of known cases:

1. Purity: Are the cases found at the end of a branch all of one outcome class (i.e. pure), or a mixture of classes?

2. Coverage: What proportion of all positive cases that exist are found at the end of any particular branch? In data mining exercises branches that have low coverage are often "pruned", i.e. removed from the model, to reduce the complexity of the model (and thus help increase its generalisability).
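Both measures are straightforward to compute. A sketch, using an invented leaf of four cases:

```python
# Sketch: purity and coverage of a tree branch, given the outcome classes
# of the cases that end up at that branch's leaf. Numbers are invented.

def purity(leaf_outcomes):
    """Share of the leaf's cases belonging to its majority class."""
    majority = max(set(leaf_outcomes), key=leaf_outcomes.count)
    return leaf_outcomes.count(majority) / len(leaf_outcomes)

def coverage(leaf_outcomes, all_positive_count, positive=True):
    """Share of all positive cases in the data set captured by this leaf."""
    return leaf_outcomes.count(positive) / all_positive_count

leaf = [True, True, True, False]  # 3 positive, 1 negative case at this leaf
print(purity(leaf))                           # 0.75
print(coverage(leaf, all_positive_count=10))  # 0.3
```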

QCA uses similar measures of consistency and coverage. See page 84 of the fsQCA manual

Decision Trees can also be compared in terms of their simplicity, with simpler being better. The simplest measure is the number of branches in the tree, relative to the total number of cases (fewer = better). Another is the number of attributes used in the tree, relative to all available (fewer = better).

2. When used for prediction purposes

After having been developed as good descriptive models, decision trees are often then used as predictive models. At that stage different performance criteria come into play.

The most important metric is prediction accuracy: the ability of the Decision Tree to accurately classify new cases. From what I have read, it seems that a minimum level of accuracy is 80%, but the rationale for this cut-off point is unclear. Both predictive and descriptive accuracy can be measured using a Confusion Matrix.
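A sketch of how a 2x2 Confusion Matrix, and the accuracy figure derived from it, can be computed (the actual/predicted values below are invented):

```python
# Sketch: a 2x2 confusion matrix for binary outcomes, and accuracy
# derived from it. The actual/predicted values are invented.

def confusion_matrix(actual, predicted):
    """Return (true positives, false positives, false negatives,
    true negatives) for paired binary outcome lists."""
    tp = sum(a and p for a, p in zip(actual, predicted))
    fp = sum((not a) and p for a, p in zip(actual, predicted))
    fn = sum(a and (not p) for a, p in zip(actual, predicted))
    tn = sum((not a) and (not p) for a, p in zip(actual, predicted))
    return tp, fp, fn, tn

def accuracy(actual, predicted):
    tp, fp, fn, tn = confusion_matrix(actual, predicted)
    return (tp + tn) / (tp + fp + fn + tn)

actual    = [True, True, False, False, True]
predicted = [True, False, False, False, True]
print(confusion_matrix(actual, predicted))  # (2, 0, 1, 2)
print(accuracy(actual, predicted))          # 0.8 - just at the cut-off
```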

"I wanted to add that a typical trade-off analysis is done with learners in general (and decision trees are no exception) that compares model accuracy within a data set to model accuracy at classifying new data. A more generalizable model would be more favorable for predictive analysis. A more accurate, specialized model would be good for understanding a particular data set. Limiting the tree-depth is (in my opinion) probably the fastest way to explore these trade-offs."[from rakirk on RapidMiner blog]

Greater descriptive accuracy risks what data mining specialists call "over-fitting" - that is, after a certain point is reached the descriptive model's ability to accurately predict outcomes in a new set of cases will start to diminish. (A classic trade-off between internal and external validity.)

Moore et al. (2001) provide criteria that mix both descriptive and predictive purposes. In their view "... the most desirable trees are:
1.  Accurate (low generalization error rates)
2.  Parsimonious (representing and generalizing the relationships succinctly)
3.  Non-trivial (producing interesting results)
4.  Feasible (time and resources)
5. Transparent and interpretable (providing high level representations of and insights into the data relationships, regularities, or trends)"

More information on decision trees, which is not maths intensive!

New software

PS July 2012: I have just found out about BigML, an online service where you can upload data, create Decision Tree models, test them and use them to generate predictions. So far it looks very good, although still under development. I have been offered extra invitation codes, which I can share with interested readers. If you are interested, email rick at

I have been experimenting with two data sets on BigML, one is the results of a 2006 household poverty survey in Vietnam (596 cases, 23 attributes), and the other is a randomly generated data set (102 cases, 11 attributes).

A Decision Tree model of the household poverty data has the potential to enable people to do two things:

  • Find classification rules that find households with poverty scores in a given range e.g. above a known poverty level. Useful if you want to target assistance to specific groups
  • Find the poverty score of a specific household with a given set of attributes. Useful if you wanted to see if they are eligible for a targeted package of assistance
Here is a graphic of the BigML Decision Tree model. It's unorthodox in that it does not display branches with negative cases, but this approach does simplify the layout of complex trees. On the right of the tree is the decision rule associated with the highlighted branch. The outcome it predicts (the leaf at the end of the branch) is the Basic Necessity Survey (BNS) poverty score for the households in that group (32 in the right side branch).

This tree has been minimally pruned, and shows branch ends containing 1% or more of all cases (i.e. 5 or more in this case). The highlighted branch shows one classification rule that accounts for about 8% of all households above the poverty line. All the green nodes in the tree account for around 92% of all households above the poverty line. The remainder will be found when the other colored "leaf" nodes are clicked on.

My main finding from this exercise is that there is no classification rule that accounts for a large proportion of cases. The largest is one rule (Bathroom + Motorbike + Pesticide pump + Stone-built house) that accounts for 31% of households above the poverty line. My interpretation is that this finding reflects the diversity of causal influences present, most probably the agency of the households themselves.

PS 15 July 2012: Although at the start of this blog I made a clear distinction between four types of situations, where a condition or attribute is necessary and/or sufficient, it could be argued that there are degrees of necessity. If a complex decision tree has 25 branches (or explanatory rules), as in the above example, a certain condition may be present in many of the branches (as a necessary but not sufficient part of a package that is sufficient but not necessary, i.e. INUS). For example, having a watch is a condition present in 4 of the 25 branches. One way of looking for conditions that are relatively necessary is to look at the upper levels of the tree. Having a bathroom is relatively necessary: it is a necessary part of 14 of the 25 branches. This is still a fairly crude measure; we also need to take into account what proportion of all the cases are covered by these 14 branches. In this example, the 14 branches cover 70% of all the cases (households). Having a stone-built house is not a necessary condition to be judged as not-poor, but it is a fairly necessary condition!
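This "relative necessity" idea can be sketched in code. The branch definitions and case counts below are illustrative, not the real 25-branch model:

```python
# Sketch: gauge how "relatively necessary" a condition is by counting the
# branches (rules) it appears in, and the share of cases those branches
# cover. Branches and counts below are invented for illustration.

branches = [
    {"conditions": {"bathroom", "motorbike"}, "cases": 40},
    {"conditions": {"bathroom", "watch"},     "cases": 25},
    {"conditions": {"stone_house"},           "cases": 20},
    {"conditions": {"bathroom"},              "cases": 15},
]

def relative_necessity(condition, branches):
    """Return (share of branches containing the condition,
    share of all cases covered by those branches)."""
    total_cases = sum(b["cases"] for b in branches)
    containing = [b for b in branches if condition in b["conditions"]]
    branch_share = len(containing) / len(branches)
    case_share = sum(b["cases"] for b in containing) / total_cases
    return branch_share, case_share

print(relative_necessity("bathroom", branches))  # (0.75, 0.8)
```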

PS 18 July 2012: One dimension of the structure of a Decision Tree is its "diversity". After Stirling (2007), diversity can be seen as a mix of variety (number of branches), balance (spread of cases across those branches) and disparity (distance between the end of each branch, measured by degrees - the number of intervening linkages). A rougher measure is simply the number of branches x the number of kinds of attributes making up all those branches. Diversity suggests, to me, a larger number of causes at work. How does this diversity connect to notions of complexity? Diversity and complexity are not simply one and the same thing. My reading is that complexity = diversity + structure (relationships between diverse entities). I need to go back and read / finish reading Page, S. (2011) "Diversity and Complexity", and "Diversity versus Complexity" by Shahid Naeem (2001).
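A sketch of how these diversity components might be computed. I use normalised Shannon entropy as one plausible balance measure; disparity is omitted because it depends on the tree's layout, and the case counts are invented:

```python
# Sketch of Stirling-style diversity components for a tree: variety
# (number of branches), balance (entropy of the spread of cases across
# branches), plus the rougher measure (branches x attribute kinds).
import math

def variety(branch_case_counts):
    """Number of branches."""
    return len(branch_case_counts)

def balance(branch_case_counts):
    """Shannon entropy of case shares across branches, normalised to [0, 1];
    1.0 means cases are spread perfectly evenly."""
    total = sum(branch_case_counts)
    shares = [c / total for c in branch_case_counts if c > 0]
    entropy = -sum(s * math.log(s) for s in shares)
    return entropy / math.log(len(branch_case_counts))

def rough_diversity(branch_case_counts, attribute_kinds):
    """The post's rougher measure: branches x kinds of attributes used."""
    return len(branch_case_counts) * attribute_kinds

counts = [50, 30, 20]  # invented: cases falling into three branches
print(variety(counts), round(balance(counts), 3), rough_diversity(counts, 5))
```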
