Ethical Systems Change

The “Two Loops” model by the Berkana Institute.

We always spend time and energy at the beginning of any initiative to ensure that we have the right objectives. Putting together this program for The Value Web has pushed us to get specific about what makes a set of objectives “right” or not, especially when you’re trying to make far-reaching changes with your efforts.

There are the obvious table stakes of “SMART” objectives – basically, are they well crafted and specific enough that you can tell whether you’re achieving them. There’s the design-thinking approach of feasibility/viability/desirability – but in a systems-change context this only raises the main question: desirable for whom?

The fact is that systems change happens all the time – sometimes unintentionally, but often very much with intent – and in many cases the systems we are trying to change are working exactly as someone else intended.

CV Harquail’s three questions to guide decision-making get closer to it: Who benefits? Who is left behind? Is this what we want?

These simple questions, for me, at least start to surface key issues in systems change – that benefits and impacts won’t be the same for everyone, and we must be conscious of the choices we make. It also connects well with my overall belief that we are exiting the age of externalities, where we can blithely write off the impacts of our actions as external to our value calculations.

So from my perspective, we can’t say what the “right” objectives are without having a declared point-of-view on the kind of systems change we are looking for.

The Berkana Institute has its “Two Loops” model which describes the move from an old paradigm to a new one. I think this model is useful in thinking through the non-linear move from one ruling set of norms to a new one. 

But being intentional about designing the shift from one system, or one paradigm, to another asks us what we plan to leave behind, what we want to bring into being, and why. It also raises the question of what right we, as individuals, have to make that change. I’ll address the “who” question separately. For now, I’ll focus on whether a set of objectives for driving systems change is “right” or not.

I offer these guidelines for whether a set of systems-change objectives are the right ones. I would be suspicious of any kind of change effort that:

  • Benefits an in-group at the expense of out-groups
  • Leverages weakness or imbalance to propagate its effects
  • Is based on limited definitions
  • Internalizes gain and externalizes harm

These are, I think, the kinds of “systems change” that – in Two Loops terms – represent the paradigm we should be leaving behind. Think through the accomplishment of your objectives to the fullest extent – what would result from achieving them? What would the second- and third-order effects be?

Keep in mind John Sterman’s admonition: “There are no side-effects – there are only effects.”

The test for objectives that represent “healthy systems change”, then, is whether they would create change that:

  • Is seen as improving the conditions for all parties involved
  • Is supported by the contributions of parties across the system
  • Is informed by networked knowledge from across the system
  • Does not create harmful consequences beyond its own scope

I’m curious about the tests that others use to ensure their efforts will create truly “positive” change – what rules do you use to keep yourself on track?

Designing Power in Systems Change

A model of power in systems.

Systems work involves finding leverage in the systems in which you’re operating, and understanding the dynamics between the various stakeholders.

Central to understanding the context you’re working in is being able to explore a few key questions:

  1. What is influencing the players to act the way they are acting?
  2. Among our stakeholders, how might they exert influence to bring about the change they want to make?
  3. What resistance might we experience in these efforts, and how will it manifest?

In many recent projects I have seen shadows of these questions, whether talking about climate governance, multi-stakeholder partnerships, food systems transformation or the societal effects of technology.

From a design standpoint, I found myself looking for a model both to interpret the dynamics I was seeing and to inform an intervention that might achieve maximum leverage.

So, while my projects were focused on exploring “the rules of the game”, what I ended up wanting to model was power: how were some actors able to translate their agendas into influence over the thoughts and actions of others? How could those working to improve the system exert their own influence?

In my research, I came across Suerie Moon’s exceptional paper Power in Global Governance: An Expanded Typology, which is well worth the read. While focused on public health, her combination and extension of other, past explorations of power are much more broadly relevant. I have adapted her typology for the first iteration of this model, which I have been using to analyze inbound influence, and to design systemic interventions.

But first, the element that stood out for me from Moon’s paper was her definition of power: the ability to make someone think or do something. Conversely, it is the ways in which others can make you think or do something.

In this post I’ll just outline the types of power; I’ll be exploring some of the clusters, dynamics and influences in other posts. The types of power explored in this model are as follows:

  1. Physical – this is motivation through real force, or the threat of force; the ability to give, or revoke, physical safety
  2. Economic – the control and use of resources to exert influence
  3. Cultural – the complex mix of norms, social position, tradition and shared social reality
  4. Institutional – rules, laws and the imagined structures which sustain them
  5. Moral – the ability to define shared understanding of right and wrong
  6. Expert – the structures, processes and social positions which establish facts and truth
  7. Narrative – influencing through the creation of language and framing of an issue or concept
  8. Network – the use of personal relationships to achieve an outcome

All of these avenues are bi-directional: they are both channels through which we ourselves are influenced and conduits through which individuals or groups can exert their own influence.

The relative influence of each type of power might vary in different systems, and concentrations or combinations of multiple types of power can lend significant influence to certain actors.
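
To make this concrete, here is one way the typology could be encoded for a quick stakeholder analysis – a hypothetical sketch with invented names, scores and a crude “sum the channels” measure, not a tool from Moon’s paper or the model as I actually use it:

    from dataclasses import dataclass, field
    from enum import Enum, auto

    # The eight channels of power from the typology above.
    class PowerType(Enum):
        PHYSICAL = auto()
        ECONOMIC = auto()
        CULTURAL = auto()
        INSTITUTIONAL = auto()
        MORAL = auto()
        EXPERT = auto()
        NARRATIVE = auto()
        NETWORK = auto()

    @dataclass
    class Actor:
        name: str
        # 0.0 (no influence) to 1.0 (dominant) on each channel.
        power: dict = field(default_factory=dict)

    def concentration(actor: Actor) -> float:
        """A crude proxy for combined influence across channels."""
        return sum(actor.power.values())

    # Invented example actors and scores, purely illustrative.
    regulator = Actor("regulator", {PowerType.INSTITUTIONAL: 0.9,
                                    PowerType.EXPERT: 0.6})
    media = Actor("media outlet", {PowerType.NARRATIVE: 0.8,
                                   PowerType.CULTURAL: 0.5})
    print(max((regulator, media), key=concentration).name)  # regulator

Even scoring this crudely makes it easier to spot the concentrations and combinations of power mentioned above.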

I have also added in “spiritual” power, which I will be exploring elsewhere. While you could argue for this as institutional, cultural, narrative or moral, I believe this to be a distinct, though currently underdeveloped element.

I have made no mention of the ethics of this as a design model, primarily because it is based, in part, on what I have already been observing “in the wild”, and my goal was thus to try to make it explicit.

Finally, a few interesting lenses to apply to these types of power are Marx’s base/superstructure; Yuval Noah Harari’s shared, fictional realities; Laloux’s organization types; and Maslow’s hierarchy of needs.

What questions come up when you look at this model? How does it apply to the dynamics in the systems you’re working in?

The Accidental Cyborg

As we close out a year that started with a conspiracy-fuelled assault on the US Capitol and is ending with all the buzz around the Metaverse; as we find ourselves wondering how fringe opinions became mainstream, and generals warn of impending civil war, I thought it would be helpful to lay out simply how we got here, from a technical perspective.

For those aghast at the vulnerability that log4j represents, have a seat; we have a worse one in our heads.

In our unmediated processing of the world, we build mental models from experience, which then allow us to navigate the world without much thought. If something doesn’t match the model, the model changes based on the new experience.


I’ll try to keep it short, but if you’re in a rush…


TL;DR: we accidentally became cyborgs, and our brains have been hacked through old, unpatched flaws that allowed others to tell us what to think. This is an outline of those human design flaws and how to exploit them.


First, the unpatched flaw in human cognition, laid out in a few steps:

1.) Humans operate mostly on autopilot, guided by mental models generated from infancy to make navigating the world easier

2.) Those models are created through hard, direct experience (that’s sharp, that hurts, that one burns, that is tasty)

3.) Once we have a mental model, we don’t revisit it again unless we have to…we just use it for autopilot (oops, there was one more stair on this staircase than I thought…attention!)

4.) Those mental models apply not just to the physical world, but also to the social and conceptual world (hence “what is the sound of one hand clapping”)

While original impressions are formed from direct experiences, systems can intervene, interacting with our mental models to reinforce them or shape them in new directions. Perceptions of frequency and availability are increasingly shaped by this interaction.
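
As a toy illustration of steps 1–3 above (entirely my own sketch; the “surprise threshold” is an invented stand-in for whatever actually triggers a revision):

    # We run on autopilot until an experience surprises us enough to
    # force a revision. The threshold and numbers are invented.
    SURPRISE_THRESHOLD = 0.5

    def act(model: dict, situation: str, outcome: float) -> None:
        expected = model.get(situation, 0.0)    # autopilot prediction
        if abs(outcome - expected) > SURPRISE_THRESHOLD:
            model[situation] = outcome          # "oops, one more stair": revise
        # otherwise the model is never revisited

    model = {"stairs_in_staircase": 12.0}       # learned from direct experience
    act(model, "stairs_in_staircase", 13.0)     # surprising enough to update
    print(model["stairs_in_staircase"])         # 13.0

Note that nothing below the threshold ever prompts a conscious second look – which is exactly the opening the hack below exploits.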


This flawed but useful system has served our species well enough (we thought up science, cities, supply chains and music), with some notable drawbacks (racism, wars, genocide, climate change).
While in the past this flaw left us vulnerable to xenophobia and charismatic leaders, it also created a much bigger vulnerability: an unprotected opening for a human/machine interface.

Mental models, and Kahneman’s whole construct of “System 1 and System 2” (reflexive use of mental models and considered thought, respectively), are based on a system of inputs and outputs which, especially in the social and conceptual world, is vulnerable to man-in-the-middle attacks (I’ve kept this expression gendered because, well, in this case the perpetrators generally fit the idiom).

Here’s the hack:


1.) In digitally mediated interactions (web searches, social sharing, reading, looking at photos, GPS navigation, shopping, adjusting your thermostat) you can watch patterns to find the edges of someone’s mental models (habits, sexual preference, gender identity, political views, cat/dog person)

2.) Staying within “n” degrees of that mental model allows injection of information that won’t trigger conscious thought

3.) Adding peripheral information allows exploitation of “availability and representativeness” heuristics (if I see it a lot, there must be a lot of it)

4.) Most people are like at least some other people; finding similar groups allows you to test how far you can push them.

As the information ecosystem becomes largely shaped by this man-in-the-middle system, new inputs are injected, and the perception of frequency and representativeness replaces direct experience in a self-reinforcing loop.
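
Here is a deliberately simplified sketch of that loop – my own illustration, with invented names, numbers and update rule:

    def edge_of_model(profile: dict, topic: str) -> float:
        """Step 1: infer where the target's mental model sits (-1.0 to 1.0)."""
        return profile.get(topic, 0.0)

    def inject(profile: dict, topic: str, step: float = 0.1) -> None:
        """Steps 2-3: serve content just inside the model's edge, then
        repeat it so perceived frequency does the persuading."""
        nudge = edge_of_model(profile, topic) + step   # within "n" degrees
        for _ in range(20):                            # "if I see it a lot..."
            profile[topic] = 0.9 * profile[topic] + 0.1 * nudge

    profile = {"topic_x": 0.0}
    for _ in range(12):    # step 4 at scale: repeat, month after month
        inject(profile, "topic_x")
    print(round(profile["topic_x"], 2))   # the stance has drifted far from 0.0

Each pass stays just inside the edge of the model, so no single input ever triggers conscious thought – yet the cumulative drift is large.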


And there it is – though this is obviously the simplified version.


I’ve laid this out in a few simple models – unmediated, mediated and captured.

As our information ecosystem becomes increasingly mediated through digital channels, it is important to understand that our cognition is co-evolving in response to our sources of stimuli. The natural environment we evolved in was indifferent to us, but the digital environment is not. Go ahead; test the matrix. Change your behaviour on the platforms you use, and see how it shifts.

Now, how the stimuli are served to the humans depends entirely on the priorities coded into the algorithm. What is it optimizing for? Is your GPS teaching you the fastest routes, or the ones that pass the most McDonald’s? Is your news feed optimizing for balanced viewpoints, or engagement? Are search results based on rigour and authority, or on sponsorship?
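
The difference is nothing more than the objective function. A toy example, with invented titles and scores:

    # Same feed, different priorities coded into the objective.
    items = [
        {"title": "balanced analysis", "engagement": 0.3, "balance": 0.9},
        {"title": "outrage bait",      "engagement": 0.9, "balance": 0.1},
    ]

    def rank(feed, objective):
        return sorted(feed, key=objective, reverse=True)

    print(rank(items, lambda i: i["engagement"])[0]["title"])  # outrage bait
    print(rank(items, lambda i: i["balance"])[0]["title"])     # balanced analysis

Same content, same users; only the objective differs.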


Unfortunately, for this software update, you’re going to have to patch yourself.