Analysing Thinking in Systems by Donella Meadows — Book Summary & Review
☞ Scroll down to skip the review and read the summary of the book.
Thinking in Systems: A Primer, first published in 2008 after the death of its author Donella H. Meadows (1941-2001), is a compilation of texts Meadows wrote between 1982 and 2001, compiled and edited by Diana Wright. The articles it draws on are grounded in conscious, progressive themes around environmental sustainability, reflecting Meadows’ background and clear passion for utilising systems studies for sustainable development and environmentally conscious living.
This was the first book I read on the topic of systems thinking, and it served as a great introduction to the basics of systems analysis from the viewpoint of a renowned and respected system dynamics practitioner. The author, Donella Meadows, outlines how the different parts of a system can interact, before providing examples of how the systems that we try to understand and control can actually be confounding, confusing, and impossible to fully grasp. Meadows warns us about how human behaviour is likely to cause many of the issues that plague the systems we exist in. Even our efforts to correct perceived problems in the system can have effects that are unexpected, and create outcomes that are the opposite to the ones we intended. Despite this, we are reassured that by learning about systems and the pitfalls we might encounter when attempting to manage them, we have a chance of making changes that benefit the system and the individuals within it. As actors in the system, we have a responsibility to create opportunities for positive change.
Donella Meadows is clearly a skilled and passionate communicator, her simplified but concise writing providing an accessible entry into Systems Thinking, especially when compared to some of the difficult, dated academic texts available on the subject. There are real-world examples given that are applicable today, showing a deep understanding of human nature and the parts of it that are so ubiquitous, they can to some extent be predicted with the help of models and equations. However, Meadows does not make the mistake of assuming that systems, whether physical or intangible, can be completely controlled. We learn that the models we use to understand systems are always constructs. This is a good jumping-off point to start thinking about the role of the individual or practitioner in relation to the definition, study, and shaping of systems.
Although there are some useful points in the book about the effect of behaviours in the system, a lot of the language can feel a bit detached from the idea of personal responsibility, describing the behaviour of ‘others’ or perhaps ‘the masses’ as they act without any self-awareness in these conceptual system models. I don’t think detachment was the intention, as the overall message of the book implies that we do have a responsibility. It may be a linguistic choice aimed at conveying an educational message without directly confronting the reader, or maybe an acknowledgement that the responsibility doesn’t lie with the typical reader. I suppose it is also a natural consequence of simplifying vastly complex situations. I wonder what the best ways are to talk about complex systems: ways that provide perspective, are easy to understand, and can inspire effective, meaningful action in those who have the power to take it…
In a way, this book was inspirational in the sense that it provided descriptive summaries of factors that affect systems and how the effects of those factors can be mitigated. I finished the book with an excitement for the subject, and a desire to learn more. I also wanted to learn more about how conclusions were arrived at regarding those factors — are they the only ones we have to worry about? To what extent are system archetypes inherent to all systems? To what extent do human social systems differ from any other types of system, and is it reasonable of systems ‘thinkers’ to assume that we can develop something like a science of systems that treats all systems the same? Or perhaps the objective of such a science would be to uncover the subtleties and nuances in systems, allowing us to develop unique solutions rather than generalisations. The generalisations in the book are useful for getting an overview of how systems can operate, and the kind of issues that might arise, but it does make me question how applicable and useful generalisations are when the purpose of studying systems typically appears to be finding ways to deal with complicated, messy situations that are each unique by nature.
The ideas in this book are quite idealistic, and it would be interesting to read more about how the ethical management of systems could be applied realistically. It seems to me that conceptualising and modelling systems can be useful for visualisation, but this can create limits for real-world application. I am keen to learn more about different methods, tools and techniques for visualising complex concepts and how effective they are in developing solutions to complex problems. It is clear that when presented with symbolic diagrammatic, spoken, or written representations of situations, it may be easier for people who have experienced similar situations in the past to form similar conceptualisations of the situation of interest. But how do we minimise misunderstanding when communicating across cultures, huge physical spaces and long timespans? To what extent does technology support or distort communication of models that could solve big problems in the world? These are some of the questions that come up for me when I consider the supposed ‘simplicity’ of some models.
Although the start of the book does describe systems in general using very simple models, the later chapters of Thinking in Systems are very much an analysis of human behaviour and how that informs the systems in which the behaviours exist. The behaviours are also created within systems, and we learn about how stocks and flows, ‘feedback loops’, system goals, and other factors can be adjusted to alter systems and therefore the behaviours within those systems. This gives some hope of opportunities for change in the systems that affect us, until you think too hard about the powerlessness of most individuals who exist in those systems. I would still recommend reading this book. It is useful for gaining a basic understanding of system dynamics and gives examples of things that people could be doing to improve systems. It provides useful concepts that we can all apply to our personal lives even if we do not identify as professional or academic ‘system practitioners’.
There were some concepts that I found easier than others to understand. For example, because of my experience with psychological dissociation, I already have lived experience that made me acutely aware of the ‘mental models’ we create of ourselves, our constructed realities and the translations our minds make of the world. This experience adds an additional level of ‘knowledge’ that will be helpful when analysing system models, and critiquing my own mental constructs for the evaluation of my own systems thinking processes.
On the other hand, I found it difficult to conceptualise systems themselves, specifically knowing how to identify feedback loops and establish boundaries for complex systems. It feels like no matter what, there will be a bias when creating a model, and that will affect the validity and effectiveness of the model as a tool for analysis. I look forward to learning more about how useful quantitative and qualitative data is collected, so that useful models can be built using that data. I would also like to know how effective models can be if they rely not on data, but are used simply as a tool to convey current understanding from different viewpoints. I will have to practise using various diagramming techniques, and look into case studies that demonstrate how diagrams are used in a variety of settings.
When I have a better handle on the basics, I am keen to explore ‘leverage points’, and how people can intervene in systems. Thinking in Systems provided a helpful summary of methods to begin thinking about, areas that I need to read more about, and reasons to dig deeper into Systems Thinking.
Book Summary
A system is an interconnected set of elements organised to achieve something, the parts persistently affecting each other and producing a different effect as a whole than they would individually. A conglomeration of elements without interconnections or functions is not a system.
The different system elements, or ‘stocks’, can be living or non-living, physical or intangible. To be part of a system, they must have intact interconnections, or ‘flows’, with different parts reacting to each other. A system typically has a consistent function or purpose which is a crucial determinant of system behaviour. The function / purpose is visible through the operation of the system, but can actually be the least visible part of a system.
From high to low visibility:
- Physical system elements
- Intangible system elements
- Physical interconnections between system elements
- Intangible interconnections between system elements
- Function / purpose of the system
To rank the importance of different parts of the system, imagine or test what would happen if you change each part individually.
An important function of almost every system is to ensure its own perpetuation. Mechanisms that perpetuate and maintain integrity of the system include self-maintenance, self-organisation, and self-repair. These are features that may be organic to the system, or built into it to support sustainable operation.
To understand, analyse, plan and construct systems, it is useful to create models that can be communicated to others with the help of ‘system diagrams’ or ‘system maps’. These can be simple drawings that identify stocks and flows, or they can be complex diagrams showing interconnections and interactions between the elements. To more effectively communicate your understanding of a system, it is useful to understand and utilise system principles to show what can happen when there are changes in the system.
Flows affect stocks, and flows are affected by the state of stocks. With regard to flows: if in-flow is greater than out-flow, the stock will increase. If out-flow is greater than in-flow, the stock will decrease. If in-flow and out-flow are equal, we get ‘dynamic equilibrium’.
Stock can increase with higher in-flow, but also with lower out-flow. Stocks are buffers to flows, and can cause different effects in the system: delays; shock absorption; stabilisation; opportunity for change or regulation.
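These accounting rules are simple enough to sketch in a few lines of code. Below is a minimal simulation of my own (not from the book, and the numbers are arbitrary) showing the three cases: growth, decline, and dynamic equilibrium.

```python
# Toy sketch: a stock updated each time step by in-flow minus out-flow.
def simulate_stock(stock, inflow, outflow, steps):
    """Return the stock level after `steps` time steps of constant flows."""
    for _ in range(steps):
        stock += inflow - outflow  # the basic stock/flow accounting rule
    return stock

# in-flow > out-flow: the stock increases
print(simulate_stock(100, inflow=10, outflow=5, steps=10))   # 150
# out-flow > in-flow: the stock decreases
print(simulate_stock(100, inflow=5, outflow=10, steps=10))   # 50
# in-flow == out-flow: dynamic equilibrium
print(simulate_stock(100, inflow=7, outflow=7, steps=10))    # 100
```

The same stock level can be reached either by raising in-flow or by lowering out-flow, which is the point the paragraph above makes about buffers and regulation.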
Stocks can be regulated by manipulating stocks and flows in a system — this involves a feedback process. A feedback loop is a communication of a change in stock that affects flows into that stock. Feedback loops communicate information that affects only future behaviour; they cannot correct the behaviours that drive the current feedback.
There are different types of feedback loop. A balancing feedback loop is ‘goal-seeking’ or ‘stability-seeking’, and can be a source of stability or a source of resistance to change. Reinforcing feedback loops multiply the input to stock, causing exponential growth or collapse of the system. In physical exponentially growing systems, there is eventually a constraint to growth, because physical systems cannot grow forever in a finite environment (one of the system archetypes is in fact the Limits to Growth archetype). A quantity growing exponentially toward a constraint or limit reaches that limit in a surprisingly short time. That is how exponential growth can quickly become exponential collapse after growth has reached its peak.
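To make the ‘short time’ claim concrete, here is a toy reinforcing loop of my own (the 7% rate and the limit of 1024 are invented for illustration): the in-flow is proportional to the stock itself, so the final approach from half the limit to the limit takes just one doubling time, a small fraction of the whole run.

```python
# Toy sketch: exponential growth toward a fixed limit via a reinforcing loop.
def steps_to_reach(stock, limit, growth_rate):
    """Count time steps until the stock reaches the limit."""
    steps = 0
    while stock < limit:
        stock += stock * growth_rate  # reinforcing loop: flow depends on stock
        steps += 1
    return steps

total = steps_to_reach(1, 1024, 0.07)        # from 1 up to the limit
final_leg = steps_to_reach(512, 1024, 0.07)  # from half the limit to the limit
# The last leg is a single doubling, roughly a tenth of the whole journey.
print(total, final_leg)
```

This is why constraints on exponential growth tend to arrive abruptly: most of the approach happens in the final few doublings.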
The strength of loops changes over time, causing growth, decline, or balancing of stocks. Dominant loops have a stronger impact on behaviour. We can call the changes in strength ‘shifting dominance’.
Competing balancing loops can cause imbalance or disequilibrium as flows have competing effects on stock. For example, heating a room while heat is leaking from the room results in the temperature not reaching the thermostat setting, and water poured in a bucket while water is leaking from a hole in the bucket causes a variable water level. Stock-maintaining balancing loops must have goals set to compensate for all in- and out-flows affecting stock.
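The thermostat example can be sketched as two balancing loops acting on one stock. This is my own toy model with made-up coefficients, not the book’s: the heating loop pulls the temperature toward the thermostat setting while the leak loop pulls it toward the outdoor temperature, so the room settles short of the goal.

```python
# Toy sketch: two competing balancing loops acting on one temperature stock.
def heat_room(temp, setting, outside, gain=0.2, leak=0.1, steps=200):
    """Iterate heating and leaking until the temperature settles."""
    for _ in range(steps):
        temp += gain * (setting - temp)   # heating loop seeks the goal
        temp += leak * (outside - temp)   # leak loop seeks the outdoor temperature
    return temp

final = heat_room(temp=10.0, setting=20.0, outside=0.0)
# Settles well below the 20-degree setting, because the goal does not
# compensate for the leak out-flow.
print(round(final, 1))
```

This illustrates the last sentence above: a stock-maintaining balancing loop must have its goal set to compensate for all in- and out-flows, here by setting the thermostat above the desired temperature.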
There are always delays in responses. Flows can only react to changes in stocks, not to flows, and there is a delay to register and process incoming information. When flow is adjusted, there is a delay and when there is a change in stock, there is a delay. This should be taken into account when constructing a system or analysing a system model.
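A delay in the information a balancing loop acts on can turn a smooth correction into overshoot and oscillation. The sketch below is my own toy model (hypothetical numbers): a stock is steered toward a goal using the stock level as it was perceived `delay` steps earlier.

```python
# Toy sketch: a balancing loop correcting toward a goal using stale information.
def run_with_delay(delay, steps=40, goal=100.0):
    """Return the trajectory of a stock corrected with `delay`-step-old data."""
    stock, history = 50.0, [50.0]
    levels = []
    for _ in range(steps):
        perceived = history[-delay] if len(history) >= delay else history[0]
        stock += 0.5 * (goal - perceived)  # correction based on old information
        history.append(stock)
        levels.append(stock)
    return levels

timely = run_with_delay(delay=1)  # reacts to the current stock: smooth approach
late = run_with_delay(delay=6)    # reacts to stale information: overshoots the goal
print(max(timely), max(late))
```

With no delay the stock glides up to the goal; with a six-step delay the same correction rule overshoots and oscillates, which is the risk the paragraph above warns about.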
System models must include all important flows so that dynamic systems analysis can be used to effectively test scenarios with different variables. For useful results, it is important to have a representative system model. To assess the representativeness, ask the following questions as part of your assessment:
- Are the driving factors likely to work this way?
- If so, would the system react this way?
- What is driving the driving factors?
- Does the model show realistic patterns of behaviour?
- What are the feedback loops affecting the system?
- Where is the boundary of the system?
- Are there models for other systems with similar feedback structures? It may be useful to compare as they may produce similar dynamic behaviours.
To manage the behaviour of systems, we have to recognise the latent behaviours of system structures and the conditions that result in those behaviours. We should also arrange the structures and conditions to reduce the possibility of destructive behaviours and encourage the possibility of beneficial ones.
By investing in renewable stock whenever possible, we are more likely to have sustainable, longer-lasting, productive systems. Systems with non-renewable stock can be classed as ‘stock-limited’ — stock is available all at once, extractable at any time, but is not renewed, so faster extraction shortens the lifespan of the resource. Renewable stock is ‘flow-limited’ — if the flow rate is finite and equal to the regeneration rate, then indefinite harvest or extraction is possible (if over-extracted, the stock can become non-renewable). Non-living renewable stock refills regardless of the state of the stock, because there is a steady input. Living stock regenerates itself through a reinforcing feedback loop (regeneration depends on the state of the living stock).
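The flow-limited idea can be illustrated with a toy harvest model (my own sketch with invented numbers, not the book’s): a living stock regenerates in proportion to its level, and the outcome hinges on whether the fixed harvest exceeds regeneration at the current stock level.

```python
# Toy sketch: a living renewable stock under a fixed harvest rate.
def harvest(stock, regen_rate, harvest_rate, steps=100):
    """Return the stock level after `steps` rounds of regeneration and harvest."""
    for _ in range(steps):
        stock += stock * regen_rate  # reinforcing regeneration loop
        stock -= harvest_rate        # fixed extraction out-flow
        stock = max(stock, 0.0)      # the stock cannot go negative
    return stock

sustainable = harvest(stock=100.0, regen_rate=0.05, harvest_rate=5.0)
overfished = harvest(stock=100.0, regen_rate=0.05, harvest_rate=8.0)
print(sustainable, overfished)
```

Harvesting exactly the regeneration flow holds the stock steady indefinitely; harvesting slightly more sends it into accelerating decline and collapse, the ‘over-extracted’ case where a renewable stock effectively becomes non-renewable.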
Reinforcement from feedback loops can create resilient systems, for example with fail-safes created through several feedback loops taking care of various supportive mechanisms. Feedback loops that restore or rebuild feedback loops are even more resilient; we can call this meta-resilience. Higher levels of meta-resilience are more effective still, as feedback loops learn, create, design, and evolve more complex restorative structures, for example in populations and ecosystems.
Unfortunately, the resilience of a system is not always visible until limits are exceeded, so those who manage systems may unknowingly sacrifice resilience for short-term productivity or stability. Stability differs from resilience: it is visible and measured by variation in the condition of the system, but it does not necessarily reflect resilience, which is the strength that allows a system to repair itself. When a system is able to make its own structure more complex, that is referred to as ‘self-organisation’. We can see examples of this in fractals, biological reproduction, and community action.
Another characteristic of systems that can be effective for self-maintenance is hierarchy. In a hierarchy, sub-systems support the larger system which coordinates and enhances sub-systems, making them stable, resilient and efficient as a whole structure. Issues of power can compromise the effectiveness of a hierarchical system — self-maintenance is inhibited if there is too much central control and upper levels of the hierarchy fail to serve the purposes of the lower levels. However, suboptimisation can occur if sub-system goals dominate at the expense of the total system. Each part can technically function as a system in itself, so it is important to develop relationships between sub-systems and minimise feedback delays. This will help to manage information efficiently and maintain functionality of the system.
The behaviour of systems is not always predictable, and there are a lot of things that can surprise us. This is partly because the mental models we use to make sense of the world do not represent the world fully, although they do have a strong congruence with it. As the systems we analyse are themselves models, we have to be aware of not just the unexpected features of systems, but also the way we think about and act within them. By being aware of our construction of systems as models, we can be more aware of our own biases and limitations when we create and analyse those models. Unfortunately, it is natural for most of us to operate with ‘bounded rationality’, making decisions with imperfect information that we further obfuscate through mistakes such as ignoring information, not thinking ahead, and misperceiving risk.
To counteract some of these issues we can:
- Consider who set system boundaries and why
- Keep in mind that boundaries are useful for narrowing down scope of study, but can also be adjusted to test outcomes and visualise potential effects of flows that might not have been considered important
- Reassess what is considered important, by discussing the system with a variety of stakeholders and looking out for unexpected behaviours
- Find appropriate boundaries for each new problem or purpose
- Don’t ignore non-linearities (causes without proportional effects) or other factors that may be difficult to calculate or predict
- Make sure to plan carefully to accommodate and minimise delays in systems that you manage
- Redesign systems to communicate more complete information in a timely manner, provide incentives for behaviour that benefits the system and the individuals, and reduce constraints and stresses that affect specific actors.
There are system structures that produce common patterns of problematic behaviour, and these structures must be changed to create opportunities. We can generally do this by reformulating the goals of the system or altering/adding feedback loops. Some of the structures are so common that they can be viewed as archetypal:
- Policy Resistance is when actors in a system, all with their own goals and bounded rationality, see a discrepancy between their own personal goals and the state of the system. They all pull in different directions, keeping the system in an unsatisfactory state for everyone. Any effort by one intensifies effort from others. This can be mitigated by letting go of narrow goals, and providing an over-arching goal that aligns the various goals of the sub-systems and the welfare of the entire system.
- Tragedy of the Commons is similar to Policy Resistance in that each actor in the system acts for their own benefit, over-using common resources in a reinforcing feedback loop that is exponentially destructive and harms all users. It may be helpful to appeal to the morality of users through education and exhortation, privatise the commons so that each actor reaps consequences of actions, or regulate commons with enforcement through policing and penalties.
- Drift to Low Performance is when low standards create low expectations which lead to low performance. A lower discrepancy between the state of the system and the desired/expected state results in less corrective action which causes continuous degradation. To prevent this, it’s important to keep standards absolute, regardless of performance, and also to make goals sensitive to best performance rather than worst.
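Drift to Low Performance lends itself to a small simulation. In this sketch of my own (invented coefficients), performance decays unless corrected toward a goal; if the goal is allowed to drift toward recent performance, the corrective pressure fades and the system sinks, while an absolute goal holds it near a steady level.

```python
# Toy sketch of the Drift to Low Performance archetype.
def run_drift(absolute_goal, steps=50):
    """Return final performance with a fixed or an eroding goal."""
    goal, perf = 100.0, 100.0
    for _ in range(steps):
        perf += 0.3 * (goal - perf) - 2.0   # correction minus constant decay
        if not absolute_goal:
            goal += 0.2 * (perf - goal)     # expectations drift toward actuals
    return perf

print(run_drift(absolute_goal=True), run_drift(absolute_goal=False))
```

The only structural difference between the two runs is whether the goal responds to performance, which is exactly the lever the archetype description above points at: keep standards absolute.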
- Escalation is when reinforcing feedback loops are set up by competing actors, each trying to outdo the perceived state of the others’ systems. The exponential build-up usually ends in destruction. The feedback loop must be interrupted by refusing to compete or by negotiating a new system with balancing feedback loops.
- Competitive Exclusion is a reinforcing loop of winners getting rewards that give even greater competitive advantage. Without feedback loops that prevent monopolies, or other equalising mechanisms that increase the advantage of the less privileged, the ‘winners’ in a system will always win. As individuals, for example in market situations, it may help to diversify and try to avoid direct competition.
- Addiction can be caused when an actor adjusts balancing feedback loops to bring the perceived state of stock to the goal state quickly and easily. This is usually a short-term solution that doesn’t deal with the cause of a problem, distracts from long-term solutions, and undermines the system’s self-maintenance. Because of this, more intervention is needed, which weakens capabilities even more. To minimise the effects of resulting dependence on quick fixes, there should be a focus on long-term restructuring with interventions that strengthen the system’s ability to self-maintain.
- Rule-beating is the distortion of a system through the appearance of rules being followed but the spirit of the law not being met, causing unexpected outcomes and behaviours. Rules can be revised and better explained, or redesigned to turn self-organisation capabilities in a positive direction that achieves the purpose of the rules.
- Wrong Goals can have unwanted outcomes so it is important to clearly specify the indicators and goals that reflect the real welfare of the system. Incentivise production of desired results, not just obedient following of rules.
Changes can be made to systems by identifying and acting on leverage points (in the right direction, and at the right pace). These are some intervention points where change can be made, listed from least to most effective:
- Numbers/parameters provide guidelines and limits but do not necessarily change behaviour, especially in big systems.
- Buffers are stabilising stocks that are big relative to their flows. Usually physical interventions, they are often not easy to change.
- Stock and flow structures can be amended, for example in physical systems with clear nodes of intersection.
- Delays in a feedback process are critical relative to the system’s rates of change; delay lengths are often difficult to alter, so it can be easier to slow the system’s rate of change instead.
- Balancing feedback loops can be underestimated in the functioning of a system as they appear opposed to goals and they require resources, but they have self-correcting effect and can keep stock near goal level.
- Reinforcing feedback loops can be weakened to reduce negative effects of exponential growth, or balancing loops introduced to counter their effects.
- Information flows can be improved to reduce human error and increase transparency, accountability, equity, and efficiency.
- Rules help to define the scope and boundaries of a system, and can be enforced with incentives and constraints.
- Evolution of system structure towards even greater capability of self-organisation increases resilience of the system.
- Goals that define the purpose and function of the system can maintain balance even if sub-systems are competitive.
- Paradigms are the shared societal beliefs about how the world works and what things mean, and although change can be heavily resisted, intervention at this level is transformational.
- Enlightenment is the transcendence of paradigms, freeing us from attachment to any paradigm and allowing us to choose any paradigm that achieves our purpose, with the understanding that purpose is also uncertain.
Understanding systems does not mean we act on what we know to be best for ourselves or our systems. We don’t understand everything, and by asking more questions, we uncover more mysteries. Even simple systems are hard to understand, and impossible to control. All we can do is learn from them and continue to design systems to the best of our knowledge, and with our values in mind.
- Analyse the system, look for interconnections, ask questions, watch behaviour, and try to learn the facts to reduce bias
- Expose your mental models by inviting others to challenge your assumptions and share theirs, create diagrams and test models scientifically against the evidence
- Respect and distribute information that is timely, accurate and complete
- Expand language and use it carefully, by being clear and truthful, accommodating of new understanding whilst being aware that the language we develop and use creates our realities
- Value what’s important, not just what’s quantifiable
- Make feedback policies for feedback systems, with processes designed to include learning and correction
- Work for the good of the whole, aiming to enhance total system properties and design hierarchies that serve lower levels of the system
- Respect the wisdom of the system, supporting the structures and forces that enable self-maintenance before making other changes
- Locate responsibility for factors that influence behaviour, and design intrinsic responsibility, accountability and correction into the system
- Learn through experimentation, testing and monitoring, and be humble enough to embrace errors and change course if needed
- Expand time horizons, watching the whole of the system in the short- and long-term
- Defy disciplines, learn from each other, and commit to solving the problem rather than being correct
- Expand boundary of caring, remember that all systems are connected
- Don’t erode the goal of goodness — maintain standards, expectations, and morality.