Friday, December 16, 2005

The "attribution problem" problem

I have lost count of the number of times I have seen people make reference to "the attribution problem" as though doing so was a magic spell that dispelled all responsibility to do anything, or to know anything, about the wider and longer term impacts of a project. Ritualistic references to the "attribution problem" are becoming a bit of a problem.

In the worst case I have seen an internationally recognised consultancy company say that "our responsibilities stop at the Output level". And while other agencies might be less explicit, this is not an uncommon position.

This notion of responsibility is very narrow, and misconceived. It sees responsibilities in very concrete terms: delivering results in the form of goods or services provided.

A wider conception of responsibility would pay attention to something that can have wider and longer term impact. That is the generation of knowledge about what works and does not work in a given context. Not only about how to better deliver specific goods or services, but about their impact on their users, and beyond. Automatically, that means identifying and analysing the significance of other sources of influence in addition to the project intervention.

Contrary to some people's impressions, this does not mean having to "prove" that the project had an impact, or working out what percentage of the outcome was attributable to the project (a concern one project manager recently expressed). Something much more modest in scale would still be of real value. Some small and positive steps forward would include: (a) identifying differences in outcomes within the project location [NB: not doing a with-without trial], (b) identifying different influences on outcomes across those locations, (c) prioritising those influences according to the best available evidence at the time, (d) doing all the above in consultation with actors who have identifiable responsibilities for outcomes in these areas, and (e) making these judgements open to wider scrutiny.

This may not seem to be very rigorous, but we need to remember our Marx (G.), who when told by a friend that "Life is difficult", replied "Compared to what?" Even if project managers choose to ignore the whole question of how their interventions are affecting longer term outcomes, other people in the locations and institutions they are working with will continue to make their own assessments (formally and informally, tacitly and explicitly). And those judgements will go on to have their own influences, for good and bad, on the sustainability and replicability of any changes. But in the process their influences may not be very transparent or contestable. A more deliberate, systematic and open approach to the analysis of influence might therefore be an improvement.

PS: On the analysis of internal variations in outcomes within development projects, you may be interested in the Positive Deviance initiative at http://www.positivedeviance.org/

Sunday, October 23, 2005

Impact pathways and genealogies

I have been working with three different organisations where the issue of impact pathways has come up. Note the use of the plural: pathways. Network models of development projects allow the representation of multiple pathways of influence (whereby project activities can have an impact), whereas linear / temporal logic models are less conducive to this view. They tend to encourage a more singular vision, of an impact pathway.

In one research funding organisation there was a relatively simple conception of how research would have an impact on people's lives. It would happen by ensuring that research projects included both researchers and practitioners. Simple as it was, this was an improvement on the past, where research projects included researchers and did not think too much about practitioners at all. But there was also room for improvement in this new model. For example, it might be that some research would have most of its impact through "research popularisers", who would collate and re-package research findings in user friendly forms, then communicate them on to practitioners. And there may be other forms of research where the results were mainly of interest to other researchers. This might be the case with more "foundational" or "basic" research. So, there might be multiple impact pathways, including others not yet identified or imagined.

Impact pathways can not only extend out into the future, but also back into the past. All development projects have histories. Where their designs can be linked back to previous projects these histories can be seen as genealogies. The challenge, as with all genealogical research, is to find some useful historical sources.

Fortunately, the research funding organisation had an excellent database of all the research proposals it had considered, including those it had ended up funding. In each proposal the staff had added short lists of other previous research projects they had funded, which they thought were related and relevant to this project proposal. What the organisation has now is not just a list of projects, but also information about the web of expected influences between these projects, a provisional genealogy which stretches back more than ten years.

I have suggested to the organisation that this data should be analysed in two ways. Firstly, to identify those pieces of research which have been most influential over the last 10 to 15 years, simply in terms of influencing many other subsequent pieces of research. They could start by identifying which prior research projects were most frequently referred to in the lists attached to (funded) research proposals. This is very similar to the citation analysis used in bibliometrics. These results would then need to be subject to some independent verification. Researchers' reports of their research findings could be re-read for evidence of the expected influence (including, but not only, their listed citations). The researchers could also be contacted and interviewed.
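This first analysis could be sketched in a few lines of code. The example below is a minimal illustration, not the organisation's actual database: the project IDs and link lists are invented, and in practice the data would be extracted from the proposals database itself.

```python
from collections import Counter

# Hypothetical data: each funded proposal, with the list of prior
# projects that staff judged related and relevant to it.
proposal_links = {
    "P2004-07": ["P1998-02", "P2001-05"],
    "P2004-11": ["P1998-02"],
    "P2005-03": ["P1998-02", "P2001-05", "P2003-09"],
}

# Count how often each prior project is listed -- the analogue of
# citation counting in bibliometrics.
influence = Counter(
    prior for priors in proposal_links.values() for prior in priors
)

# Most-listed prior projects are candidates for "most influential",
# pending the independent verification described above.
for project, count in influence.most_common():
    print(project, count)
```

The counts are only a starting point: a high count shows expected influence as judged by staff, which the re-reading and interviews would then test.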

The second purpose of a network analysis of past research would be to identify a sample of research projects that could be the focus of an ex-post evaluation. With the organisation concerned, I have argued the case for cluster evaluations, as a means of establishing how a large number of projects have contributed to their corporate objectives. But what is a cluster? A cluster could be identified through network analysis, as a group of projects having more linkages of expected influence between themselves than they have with other research projects around them. Network analysis software, such as UCINET, provides some simple means of identifying such clusters in large and complex networks, based on established social network analysis methods. Within those clusters it may also be of interest to examine four types of research projects, having different combinations of outward influences (high versus low numbers of links to others) and inward influences (high versus low numbers of links from others).
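As a very crude stand-in for the subgroup-detection routines in a package like UCINET, the sketch below treats the expected-influence links as an undirected graph and takes connected components as a first cut at "clusters". The project labels are invented, and real cluster detection would use more discriminating methods (components only separate groups with no links at all between them).

```python
# Hypothetical expected-influence links between research projects.
links = [("A", "B"), ("B", "C"), ("D", "E")]

# Build an undirected adjacency structure from the link list.
adjacency = {}
for a, b in links:
    adjacency.setdefault(a, set()).add(b)
    adjacency.setdefault(b, set()).add(a)

def components(adj):
    """Return the connected components of an undirected graph."""
    seen, clusters = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, cluster = [start], set()
        while stack:
            node = stack.pop()
            if node in cluster:
                continue
            cluster.add(node)
            stack.extend(adj[node] - cluster)
        seen |= cluster
        clusters.append(cluster)
    return clusters

clusters = components(adjacency)
print(clusters)
```

Each resulting cluster would then be a candidate unit for a cluster evaluation.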

Looking further afield, it may be of value for other grant making organisations to be more systematic about identifying linkages between the projects they have funded in the past and those they are considering funding now, and then encouraging prospective grantees to explore those linkages, as a way of promoting inter-generational learning between development projects funded over the years.

Saturday, October 22, 2005

Networks of Indicators

A few months ago I was working with a large scale health project that covered multiple regions within a large country. The project design was summarised in a version of the Logical Framework. Ideally a good Logical Framework can help by providing a simplified view of the project intentions, through the use of a narrative that tells how the Activities will lead to the Outputs, via some Assumptions and Risks, and how the Outputs will lead to the Purpose level changes, via some Assumptions and Risks, and so on. Running parallel to this story will be some useful indicators, telling us when the various events at each stage of the story have taken place.

That is of course in an ideal world. Often the storyline (aka the vertical logic) gets forgotten and the focus switches to the horizontal logic: ensuring there are indicators for each of the events in the narrative, and more!

Unfortunately, in this project, like many others, they had gone overboard with indicators. There were around 70 in all. Just trying to collect data on this set of indicators would be a major challenge for the project, let alone analysing and making sense of all the data.

As readers of this blog may know, I am interested in network models as alternatives to the use of linear logic models (e.g. the Logical Framework) to represent development project plans and their results. I am also interested in the use of network models as a means of complementing the use of the Logical Framework. Within this health project, and its 70 indicators, there was an interesting opportunity to use a network model to complement and manage some of the weaknesses of the project's Logical Framework.

Sitting down with someone who knew more about the project than I did, we developed a simple network model of how the indicators might be expected to link up with each other. An indicator was deemed to be linked to another indicator if we thought the change that it represented could help cause the change represented by the other indicator. We drew the network using some simple network analysis software that I had at hand, called Visualyzer, but it could just as easily have been done with the Draw function in Excel. I will show an "anonymised" version of the network diagram below.

When discussing the network model with the project managers, we emphasised that the point behind this network analysis of project indicators was that it is the relationships between indicators that are important. To what extent did various internal Activities lead to changes in the various public services provided (the Outputs)? To what extent did the provision of various Outputs affect the level of public use of those services, and public attitudes towards them (Purpose level changes)? And to what extent did these various Purpose level changes then relate to changes in public health status (Goal level changes)?

The network model that was developed did not fall out of the sky. It was the result of some reflection on the project's "theory of change": its ideas about how things would work, how various Activities would lead to various Outputs and on to various Purpose level changes. As such it remained a theory, to be tested with data obtained through monitoring and evaluation activities. Within that network model there were some conspicuous indicators that would deserve more monitoring and evaluation attention than others. These were (a) indicators that had an expected influence on many other indicators (e.g. govt. budget allocation), and (b) indicators that were being influenced by many other indicators (e.g. usage rates of a given health service).
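Once the links between indicators are written down, finding these two kinds of conspicuous indicators is just a matter of counting outgoing and incoming links. The sketch below uses invented indicator names and links, not the project's actual 70 indicators:

```python
from collections import Counter

# Hypothetical links between indicators: (from, to) means a change in
# the first indicator could help cause a change in the second.
links = [
    ("govt budget allocation", "staff trained"),
    ("govt budget allocation", "clinics equipped"),
    ("govt budget allocation", "drugs in stock"),
    ("staff trained", "service usage rate"),
    ("clinics equipped", "service usage rate"),
    ("drugs in stock", "service usage rate"),
]

out_degree, in_degree = Counter(), Counter()
for src, dst in links:
    out_degree[src] += 1
    in_degree[dst] += 1

# (a) indicators expected to influence many others
print("influences many others:", out_degree.most_common(1))
# (b) indicators expected to be influenced by many others
print("influenced by many others:", in_degree.most_common(1))
```

High out-degree indicators are upstream drivers worth watching early; high in-degree indicators are where many causal strands are expected to converge.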

The next step, on my next visit, will be to take this first rough-draft network model back to the project staff and refine it, so it is a closer reflection of how they think the project will work. Then we will see if the same staff can identify the relationships between indicators that they think will be most critical to the project's success, and therefore most in need of close monitoring and analysis. The analysis of these critical relationships may itself not be any more sophisticated than a cross-tabulation, or graphing, of one set of indicator measures against another, with the data points reflecting different project locations.
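An analysis of one such critical relationship could be as simple as computing a correlation between two indicator measures across locations. All the figures below are invented for illustration; the point is only that the arithmetic involved is elementary:

```python
# Hypothetical measures across five project locations:
# number of clinics equipped vs service usage rate (%).
clinics = [2, 4, 5, 7, 9]
usage = [30, 45, 50, 65, 80]

# Pearson correlation coefficient, computed from first principles.
mean_c = sum(clinics) / len(clinics)
mean_u = sum(usage) / len(usage)
cov = sum((c - mean_c) * (u - mean_u) for c, u in zip(clinics, usage))
var_c = sum((c - mean_c) ** 2 for c in clinics)
var_u = sum((u - mean_u) ** 2 for u in usage)
r = cov / (var_c * var_u) ** 0.5

print(round(r, 3))
```

A strong correlation across locations would be one piece of evidence (not proof) that the hypothesised link in the network model is worth taking seriously.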

Incidentally, the network model not only represented the complex relationships between each level of the Logical Framework, but also the complex relationships within each level of the Logical Framework. Activities happen at different times, so some can influence others, and even more so, when Activities are repeated in cycles, such as annual training events. Similarly, some Outputs can affect other Outputs, and some Purpose level changes can affect other Purpose level changes. The network model captured these, but the Logical Framework did not.

Wednesday, July 06, 2005

Fight institutional Alzheimer's

I have taken this headline and the following text from POLEX, CIFOR's Forest Policy Expert Listserver, run by David Kaimowitz. I am reproducing it in full because I strongly agree with David's conclusions. How many other bilateral or multilateral aid agencies have done something like this recently? If you know of others, let me know.

"They say the good thing about having Alzheimer’s disease is that you are always visiting new places and meeting new people. Many development agencies have apparently taken that to heart. Rapid staff turnover, weak efforts to save and share documents, and strong incentives to repackage old wine in new bottles keep many institutions from learning from the past.

That is why it is good to see the US Agency for International Development (USAID) invest in reviewing everything they have funded related to natural forests and communities during the last twenty-five years. The result is a three-volume report called USAID’s Enduring Legacy in Natural Forests: Livelihoods, Landscapes, and Governance by a Chemonics International team led by Robert Clausen. It provides an overview and ten country studies.

Back in the 1970s, USAID’s forestry activities focused mostly on fuelwood and promoting tree planting as part of watershed management projects. Later, growing concern about deforestation made them shift towards biodiversity conservation and protected areas. After that came a move towards market-based instruments such as forest certification, ecotourism, and tapping consumer demands for non-timber forest products. Over time, they have funded more NGOs and local governments and fewer national bureaucracies. And if the report’s authors have their way, the links between natural resources, democratization, and conflict prevention will soon be high on the agenda.

Through all that time and changes, some things remained the same. For example, it is still important to invest in forests for the long-term and get the technical aspects right. You need to work with specific farms, forests, and parks, but keep your eyes on larger landscapes. If no one invests in studying and monitoring forests and their products and services, when it comes time to justify investments or make decisions the data simply won’t be there. Projects need to focus more on ethnic and cultural issues. You ignore conflicts at your own risk.

People with advanced Alzheimer’s can be nice and well-intentioned, but they should not be running the show. If we don’t build up our institutional memory we will keep making the same mistakes, although we may give them another name. Let’s hope other agencies follow USAID’s lead and invest in learning from their own experience."


[If you would like to receive CIFOR-POLEX in English, Spanish, French, Bahasa Indonesia, or Nihon-go (Japanese), send a message to Ketty Kustiyawati at k.kustiyawati@cgiar.org]

regards from Rick, in Cambridge

Monday, May 23, 2005

Using "modular matrices" to describe programme intentions and achievements

There has been an interesting discussion about the pros and cons of Logical Frameworks on the MandE NEWS mailing list. One participant has expressed concerns about the unrealistic expectations many people have about the use of the Logical Framework. We should not expect it to do everything. It is supposed to be a summary, to be read alongside narrative accounts, which can be as detailed as needed.

My response was to point out that there was some usable middle ground between long narrative accounts and tables that attempted to summarise a whole programme in a four by four set of cells. The middle ground is what I now call a "modular matrix approach" (MMA). Google defines "modular" as follows: "Equipment is said to be modular when it is made of 'plug-in units' which can be added together to make the system larger, improve the capabilities, or expand its size".

So a Gantt chart can be seen as a modular unit, because it can build onto and extend the LogFrame. It can do this because it has one common dimension: a set of Activities. Another module that I have seen used in association with the LogFrame is a matrix of Outputs x Actors (using the outputs). Here the Outputs are the common dimension that links this matrix and a LogFrame.

In the last year or so I have experimented with a range of modules, some of which have proved more useful than others. Ideally, this development process would be a collective enterprise, such that what emerged was a public library of usable planning modules. Some, like the Logframe, would offer a very macro perspective. Others, such as an Activity x Activity module, can provide a more micro perspective on work processes within single organisations.

When developing new matrix modules I use the social network analysis convention, that cell contents should describe the relationship from the row actor to the column actor. The actors involved are listed down the left column and across the top row. In practice I also use documents (produced by actors) and events (involving actors). Such matrices allow the representation of networks of communications and influence, not just one directional chains of cause and effect.

A second important convention that I try to follow, implicit in the above description, is that the entities listed on the two axes of such matrices should be verifiable, either by interviewing them (if they are actors), reading them (if they are documents), or reading about them (if they are events). This will then allow us to establish if the links between them were planned, and eventuated, as described. There are probably other conventions that could be developed to ensure that matrix modules developed by different people are compatible, and can add value to the whole.
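To make the idea concrete, here is a minimal sketch of one module, an Outputs x Actors matrix, following the row-to-column convention described above. All the output and actor names, and the cell contents, are invented for illustration:

```python
# A tiny "module": an Outputs x Actors matrix. Each cell describes the
# relationship from the row entity (an Output) to the column entity
# (an Actor using that Output).
outputs = ["training manual", "field handbook"]
actors = ["district staff", "village committees"]

cells = {
    ("training manual", "district staff"): "used in induction courses",
    ("field handbook", "village committees"): "distributed at meetings",
}

# Print the matrix; "-" marks cells with no planned relationship.
print("\t".join([""] + actors))
for row in outputs:
    print("\t".join([row] + [cells.get((row, col), "-") for col in actors]))
```

Because the Outputs axis here could also appear in a LogFrame-derived matrix, the two would share a common dimension, which is what makes the modules "plug-in units" in the sense of the Google definition quoted above.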

For some recent practical experiments along these lines see this paper. In the near future I hope to provide a comprehensive summary of this approach, in a paper provisionally titled "From Logical to Network Frameworks: A Modular Approach to Representing Theories of Change". This paper will be publicised via the Network Evaluation and MandE NEWS mailing lists.

Saturday, April 02, 2005

Constructing "an auditable trail of intentions...."

A useful report on the current state of PRS monitoring systems has recently been produced by ODI (Lucas, Evans, Pasteur and Lloyd, 2004). PRS are national level Poverty Reduction Strategies, promoted by multilateral and bilateral aid agencies. In that report they argue for more attention to the severe capacity constraints facing governments who are trying to monitor their PRSs. Donors need to take "a less ambitious attitude as to what can be achieved and a willingness to make hard choices when prioritising activities". Later on, in discussions about the range of indicator data that might be relevant, they note that "Given scarce resources, a focus on budget allocations and expenditures may well be an appropriate response, particularly if it involves effective tracking exercises with mechanisms to ensure transparency and accountability…Linking these data to a small set of basic service provision indicators that can reasonably reflect annual changes could provide a reasonable starting point in assessing if the PRS is on track."

Meanwhile I have been working on the fringes of a PRS update process that is taking place in a west African country. While I agree with the line taken above, I am wondering now if even this approach is too ambitious! This will be the second PRS for the country I am working in. This time around the government has made it clear to ministries that their budgets will be linked to the contents of the PRS. This seems to have had some positive effect on their levels of participation in the planning groups that are drafting sections of the PRS. By now some draft versions of the updated PRS policies have been produced, and they have been circulated for comment within a privileged circle (including donors). Some attempts have been made at explicitly prioritising policy objectives, but only in one of five policy areas. Meanwhile there is a deadline approaching at high speed, for identifying and costing the programmes that will achieve these policy objectives. This is all due by the end of this month, April 2005. Then it is expected the results will feed into a public consultation and then into the national budget process starting in June. However, as yet there is no agreed methodology for the costing process. As the deadline looms, the prospects increase for a costing process that is neither systematic nor transparent (aka business as usual).

If the process of constructing the costings is not visible, then it becomes very difficult to identify the specific linkages between specific PRS policy objectives and specific items in the national budget. So while we can, on ODI's good advice, monitor budget allocations and expenditures, what they mean in terms of the PRS policy objectives will remain an act of literary interpretation. Something that could easily be questioned.

The IMF and UNDP have, I think, both had some involvement in costings of broad policy objectives, including the achievement of the MDGs. However, from what I can see these costings have been undertaken by consultant economists, primarily as technical exercises. I am not sure this is the right approach. The budgets of ministries are political resources. The alternative approach is to ask ministries to say how they will use their budget to achieve the various PRS policy objectives, and while doing so make it clear that their performance in achieving those selected objectives with their budget will be assessed. To do this we (or an independent agent) will need what can be described as "an auditable trail of intentions": from identifiable policy objectives to identifiable programmes, with identifiable budgets, to identifiable outputs and maybe even identifiable outcomes.

There is an apparent complication. This auditable trail will not be a simple linear trail, because a single policy objective can be addressed by multiple programmes, and a single programme can address more than one policy objective. Similarly with the relationship between a ministry's programmes and outcomes in poor people's lives. However, an audit trail can be mapped using a series of linked matrices (each of which can capture a network of relationships). These could include the following: a PRS Policy Objectives x Ministry's Programmes matrix, a Ministry's Budget Lines x Ministry's Programmes matrix, a Ministry's Programmes x Outputs matrix, and an Outputs x Outcomes matrix. This seems complex, but so is the underlying reality. As Groucho Marx said when his friend complained that life is difficult, "Compared to what?"
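Such linked matrices can be chained mechanically. Treating each matrix as a binary relation (a set of row-column pairs), composing two relations yields the indirect links, so the trail from policy objectives through programmes and outputs to outcomes can be traced end to end. All the identifiers below are invented placeholders:

```python
# Hypothetical link matrices, each stored as a set of (row, column) pairs.
objective_to_programme = {("O1", "Pr1"), ("O1", "Pr2"), ("O2", "Pr2")}
programme_to_output = {("Pr1", "Out1"), ("Pr2", "Out2")}
output_to_outcome = {("Out1", "Oc1"), ("Out2", "Oc1")}

def chain(left, right):
    """Compose two relations: keep (a, c) when some b links a to c."""
    return {(a, c) for a, b in left for b2, c in right if b == b2}

# Trace the auditable trail from policy objectives through programmes
# and outputs to outcomes.
objective_to_outcome = chain(
    chain(objective_to_programme, programme_to_output),
    output_to_outcome,
)
print(sorted(objective_to_outcome))
```

Any policy objective with no path through to an output or outcome would show up immediately as a gap in the composed relation, which is exactly the kind of missing connection described in the postscript below.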

Postscript: Parallel universes do exist. Proof: a five year national plan with lists of policy objectives in the front and lists of programs in the back (with their budgets), but no visible connections between the policy objectives and the programs & budgets.


Identifying the impact of evaluations: Follow the money?

Some years ago I was involved in helping the staff of a large south Asian NGO to plan a three-yearly impact assessment study. It was almost wholly survey based. This time around a colleague and I managed to persuade the unit responsible for the impact assessment study to take a hypothesis-led approach, rather than simply trawl for evidence of impact by asking as many questions as possible about everything that might be relevant. The latter is often the default approach to impact assessment, and it usually results in very large reports being produced well after their deadlines.

With some encouragement the unit managed to generate a number of hypotheses of the form "if X Input is provided by our NGO and Y Conditions prevail, then Z Outcomes will occur" (aka Independent variable + Mediating variable = Dependent variable). Ostensibly they were constructed after consultations with line management staff, to get their interest in and ownership of what was being researched. The quality of the hypotheses that were generated was not that great, but things went ahead. Questions were designed that would gather data about X, Y and Z, and cross-tabulation tables were constructed that would enable analysis of the results, showing with/without comparisons. The survey went ahead, the data was collected and analysed, and the report written up. The analytic content of the report was pretty slim, and not very well embedded in past research done by the NGO. But it was completed and submitted to management, and to donors. My inputs had ended during the drafting stage. The study then seemed to sink without trace, as so often happens.
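The with/without cross-tabulation at the heart of this approach is simple to construct. The sketch below uses a handful of invented survey records (training as the X Input, group performance as the Z Outcome), just to show the shape of the analysis:

```python
from collections import Counter

# Hypothetical survey records: did the group receive training (X),
# and did it perform well (Z)?
records = [
    {"trained": True,  "performed_well": True},
    {"trained": True,  "performed_well": False},
    {"trained": False, "performed_well": True},
    {"trained": False, "performed_well": False},
    {"trained": True,  "performed_well": True},
    {"trained": False, "performed_well": True},
]

# A simple with/without cross-tabulation of X against Z.
table = Counter((r["trained"], r["performed_well"]) for r in records)
for trained in (True, False):
    well = table[(trained, True)]
    total = well + table[(trained, False)]
    print(f"trained={trained}: {well}/{total} performed well")
```

In a real study the Y Conditions would be added as a third dimension, splitting the table into one comparison per mediating condition.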

A year or so later a report was produced on the M&E capacity building consultancy that I had been part of when all this had happened. In that report was a reference, amongst other things, to the impact assessment study. It said “The study also produced some controversial findings in relation to training, as it suggested that training was a less important variable in determining the performance of groups than had previously been thought. This finding was disputed at the time, but when [the NGO] had to make severe budget cuts in 2002-3 following the blocking of donor funds by the [government], training was severely cut. There is though still an urgent need for [the NGO] to undertake a specific study to review the relative effectiveness of different types of training.”