Ethical Systems Change

The “Two Loops” model, from the Berkana Institute

We always spend time and energy at the beginning of any initiative to make sure we have the right objectives. Putting together this program for The Value Web has pushed us to get specific about what makes a set of objectives “right” or not, especially when you’re trying to make far-reaching changes with your efforts.

There are the obvious table stakes of “SMART” objectives: are they well crafted and specific enough that you can tell whether you’re achieving them? There’s the design-thinking lens of feasibility/viability/desirability, but in a systems-change context that only raises the central question: desirable for whom?

The fact is that systems change happens all the time – sometimes unintentionally, but often very much with intent – and in many cases the systems we are trying to change are working exactly as someone else intended.

CV Harquail’s three questions to guide decision-making get closer to it: Who benefits? Who is left behind? Is this what we want?

These simple questions, for me, at least start to surface the key issues in systems change: benefits and impacts won’t be the same for everyone, and we must be conscious of the choices we make. They also connect with my overall belief that we are exiting the age of externalities, in which we could blithely write off the impacts of our actions as external to our value calculations.

So from my perspective, we can’t say what the “right” objectives are without having a declared point-of-view on the kind of systems change we are looking for.

The Berkana Institute has its “Two Loops” model which describes the move from an old paradigm to a new one. I think this model is useful in thinking through the non-linear move from one ruling set of norms to a new one. 

But being intentional about designing the shift from one system, or one paradigm, to another asks of us what we plan to leave behind, what we want to bring into being, and why. It also raises the question of what right we, as individuals, have to make that change. I’ll address the “who” question separately. For now, I’ll focus on whether a set of objectives for driving systems change is “right” or not.

I offer these guidelines for judging whether a set of systems-change objectives is the right one. I would be suspicious of any change effort that:

  • Benefits an in-group at the expense of out-groups
  • Leverages weakness or imbalance to propagate its effects
  • Is based on limited definitions
  • Internalizes gain and externalizes harm

These are the kinds of “systems change” that, in the Two Loops model, represent the paradigm we should be leaving behind. Think through the accomplishment of your objectives to their fullest extent: what would result from achieving them? What would the second- and third-order effects be?

Keep in mind John Sterman’s admonition: “There are no side-effects – there are only effects.”

Objectives that represent “healthy systems change”, then, would create change that:

  • Is seen as improving the conditions for all parties involved
  • Is supported by the contributions of parties across the system
  • Is informed by networked knowledge from across the system
  • Does not create harmful consequences beyond its own scope

I’m curious about the tests that others use to ensure their efforts will create truly “positive” change – what rules do you use to keep yourself on track?

The Accidental Cyborg

As we close out a year that started with a conspiracy-fuelled assault on the US Capitol and is ending with all the buzz around the Metaverse; as we find ourselves wondering how fringe opinions became mainstream, and generals warn of impending civil war, I thought it would be helpful to lay out simply how we got here, from a technical perspective.

For those aghast at the vulnerability that log4j represents, have a seat; we have a worse one in our heads.

In our unmediated processing of the world, we build mental models from experience, which then allow us to navigate without much thought. If something doesn’t match a model, the model updates based on the new experience.


I’ll try to keep it short, but if you’re in a rush…


TL;DR: we accidentally became cyborgs, and our brains have been hacked through old, unpatched flaws that allowed others to tell us what to think. This is an outline of those human design flaws and how to exploit them.


First, the unpatched flaw in human cognition, laid out in a few steps (with a toy code sketch after the list):

1.) Humans operate mostly on autopilot, guided by mental models generated from infancy to make navigating the world easier

2.) Those models are created through hard, direct experience (that’s sharp, that hurts, that one burns, that is tasty)

3.) Once we have a mental model, we don’t revisit it again unless we have to…we just use it for autopilot (oops, there was one more stair on this staircase than I thought…attention!)

4.) Those mental models apply not just to the physical world, but also to the social and conceptual world (hence “what is the sound of one hand clapping”)
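
To put that in the software terms this post is borrowing, here is a toy sketch of the autopilot loop. It’s a loose analogy rather than a cognitive model, and every name in it (MentalModel, perceive, and so on) is invented for illustration.

```python
# A toy sketch of the autopilot loop: experience fills a cache of
# expectations, and attention only fires when a prediction fails.
# Everything here is illustrative, not a real cognitive model.

class MentalModel:
    def __init__(self):
        self.expectations = {}  # situation -> expected outcome

    def perceive(self, situation, outcome):
        expected = self.expectations.get(situation)
        if expected is None:
            # Step 2: hard, direct experience creates the model
            self.expectations[situation] = outcome
            return "learned"
        if expected == outcome:
            # Step 3: a match never reaches conscious thought
            return "autopilot"
        # A mismatch (one more stair than expected) demands attention
        self.expectations[situation] = outcome
        return "attention!"

brain = MentalModel()
print(brain.perceive("stove", "burns"))  # learned
print(brain.perceive("stove", "burns"))  # autopilot
print(brain.perceive("stove", "cold"))   # attention!
```

The same cache-and-skip structure applies whether the model is about staircases or about what “people like me” believe – which is exactly what makes the social version exploitable.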

While original impressions are formed from direct experience, systems can intervene in our mental models, reinforcing them or steering them in new directions. Our perceptions of frequency and availability are increasingly shaped by this interaction.


This flawed but useful system has served our species well enough (we thought up science, cities, supply chains and music), with some notable drawbacks (racism, wars, genocide, climate change).
While in the past this flaw left us vulnerable to xenophobia and charismatic leaders, it has since created a much bigger vulnerability: an unprotected opening for a human/machine interface.

Mental models – and Kahneman’s whole construct of “System 1 and System 2” (reflexive use of mental models and considered thought, respectively) – are based on a system of inputs and outputs which, especially in the social and conceptual world, is vulnerable to man-in-the-middle (MITM) attacks (I’ve kept this expression gendered because, well, in this case the perpetrators generally fit the idiom).
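
To make the attack surface concrete, here is a minimal sketch of the difference between the two channels. All of the names and events below are invented; the point is only the extra party sitting between the world and the reader.

```python
# Minimal sketch: unmediated vs. mediated input channels.
# Names and events are invented for illustration.

def unmediated(world_events):
    # Direct experience: inputs arrive roughly as the world produced them.
    return list(world_events)

def mediated(world_events, intermediary):
    # Digitally mediated: an intermediary sits on the channel and decides
    # what actually reaches your mental models.
    return intermediary(world_events)

def hypothetical_feed(events):
    # One possible man-in-the-middle: silently drop some inputs and
    # inject others. The reader never sees the edit happen.
    dropped = {"city council vote"}
    return [e for e in events if e not in dropped] + ["promoted outrage"]

world = ["city council vote", "neighbour's new fence", "storm warning"]
print(unmediated(world))                   # what direct experience delivers
print(mediated(world, hypothetical_feed))  # what the cyborg actually gets
```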

Here’s the hack (sketched in code after the list):


1.) In digitally mediated interactions (web searches, social sharing, reading, looking at photos, GPS navigation, shopping, adjusting your thermostat) you can watch patterns to find the edges of someone’s mental models (habits, sexual preference, gender identity, political views, cat/dog person)

2.) Staying within “n” degrees of that mental model allows injection of information that won’t trigger conscious thought

3.) Adding peripheral information allows exploitation of “availability and representativeness” heuristics (if I see it a lot, there must be a lot of it)

4.) Most people are like at least some other people; finding similar groups lets you test how far each group can be pushed.
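
Steps 1 through 3 compress into a few lines of code. Everything below – the topic graph, the numbers, the function names – is invented for illustration; real recommender systems are vastly more sophisticated.

```python
# Toy sketch of the hack. All names, topics and numbers are invented.

def infer_model(click_history):
    # Step 1: watch behaviour to map the edges of someone's mental model.
    profile = {}
    for topic in click_history:
        profile[topic] = profile.get(topic, 0) + 1
    return profile

def within_n_degrees(profile, candidates, n=1):
    # Step 2: serve only items close enough to the existing model
    # that they won't trigger conscious thought.
    adjacency = {  # hypothetical topic graph
        "fitness": ["supplements"],
        "supplements": ["alternative medicine"],
        "alternative medicine": ["anti-vax"],
    }
    reachable = set(profile)
    for _ in range(n):
        for topic in list(reachable):
            reachable.update(adjacency.get(topic, []))
    return [c for c in candidates if c in reachable]

def inflate_frequency(feed, target, copies=3):
    # Step 3: repetition exploits the availability heuristic:
    # "if I see it a lot, there must be a lot of it."
    return feed + [target] * copies

# Step 4 (not shown): cluster similar users, then A/B test how far
# each cluster can be pushed before the injections trigger attention.

profile = infer_model(["fitness", "fitness", "supplements"])
feed = within_n_degrees(profile, ["supplements", "alternative medicine", "anti-vax"])
print(inflate_frequency(feed, "alternative medicine"))
```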

As the information ecosystem becomes largely shaped by this MITM system, new inputs are injected, and perceptions of frequency and representativeness replace direct experience in a self-reinforcing loop.
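
That capture loop can be simulated in a few lines. The numbers below are purely illustrative; the point is the shape of the curve, with perceived prevalence drifting away from reality even though no new facts arrive.

```python
# Purely illustrative numbers: a belief nudged a little each cycle
# because the feed keeps serving what the belief already accepts.

actual_prevalence = 0.05   # how common the idea actually is in the world
perceived = actual_prevalence
amplification = 1.3        # the mediator over-serves what you engage with

for week in range(1, 11):
    served = min(1.0, perceived * amplification)  # feed mirrors your model, louder
    perceived = 0.8 * perceived + 0.2 * served    # availability heuristic updates
    print(f"week {week:2d}: perceived prevalence {perceived:.3f}")

# Perceived prevalence climbs every cycle while actual_prevalence never
# moved: the mental model is now trained by the feed, not by the world.
```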


And there it is – though this is obviously the simplified version.


I’ve laid this out in a few simple models – unmediated, mediated and captured.

As our information ecosystem becomes increasingly mediated through digital channels, it is important to understand that our cognition is co-evolving in response to our sources of stimuli. The natural environment we evolved in was indifferent to us, but the digital environment is not. Go ahead; test the matrix. Change your behaviour on the platforms you use, and see how it shifts.

Now, how the stimuli are served to the humans depends entirely on the priorities coded into the algorithm. What is it optimizing for? Is your GPS teaching you the fastest routes, or the ones that pass the most McDonald’s? Is your news feed optimizing for balanced viewpoints, or for engagement? Are search results based on rigour and authority, or on sponsorship?
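
The difference between those worlds can be a single line in the objective function. Neither ranker below is any real platform’s code; they are hypothetical, just to show that “what gets served” falls directly out of “what gets optimized”.

```python
# Hypothetical rankers: the stimuli a human receives depend entirely on
# which objective the operator chose to optimize.

articles = [
    {"title": "Measured policy analysis", "engagement": 0.2, "rigour": 0.9},
    {"title": "You won't BELIEVE this",   "engagement": 0.9, "rigour": 0.1},
]

def rank_for_engagement(items):
    return sorted(items, key=lambda a: a["engagement"], reverse=True)

def rank_for_rigour(items):
    return sorted(items, key=lambda a: a["rigour"], reverse=True)

print(rank_for_engagement(articles)[0]["title"])  # "You won't BELIEVE this"
print(rank_for_rigour(articles)[0]["title"])      # "Measured policy analysis"
```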


Unfortunately, for this software update, you’re going to have to patch yourself.