The Barrier Transition Matrix

While discussing the Singularity Passage framework, I paid special attention to the barriers present in each quadrant of the scheme:

  • The barrier of consumer inertia in the Recombination Factory
  • The overlord rejection barrier in the Control Bureau
  • The niche market barrier in the Anti-Pain Laboratory
  • The failed expectations barrier in the Nonconformism Spark

Each such barrier is a figurative representation of a set of obstacles that the respective players need to overcome in order to move to the next stage towards the central attractor.

The logic of the Singularity Passage assumes that the players solve their "existential barriers" by decomposing them into more specific objectives, which can also be called barriers, but of a "production" nature: technological, financial, social, legal, etc.

But the framework itself does not say what exactly happens when one or more barriers are overcome, what new properties or categories of products and services this leads to, or, generally, how to think about it in detail.

To address this gap, there is another, "finer" tool: the Barrier Transition Matrix.


The Barrier Transition Matrix is inspired by an illustration drawn by Alex Kipman (until recently the head of mixed reality at Microsoft, and at that moment one of the "fathers" of Kinect and Hololens) in his interview for Fast Co.Design (2017). In the interview, he discussed how he reflects on the challenge of a "computer understanding the fullness of human experience".

Let me quote from the publication:

To understand just how computers might one day understand the totality of human existence, his argument can be broken down into a 3×3 box. On the X axis, you have input, output, and haptics. On the Y axis, you have human, environment, and object. Each square is an order of magnitude harder than the last. So tracking a human? That’s hard. But tracking environments, with all their nuances is 10 times harder than tracking people. And tracking objects, with all their textures and variances in context? That’s 100 times harder than people.

On the Y-axis of his drawing, Alex put input, output, and haptics.

So Kipman, being what he calls a “lazy” engineer, focused on the simplest square in his matrix to solve–the 1×1 problem, as he put it. Human input. That meant computers had to understand gestures and voice.
“People say I invented Kinect,” says Kipman. “I didn’t invent Kinect. I went through this table and identified the [easiest opportunity].”

In this reflection, the "2x2 cell" (or, as Alex calls it, the "10x10 problem") already corresponds not to the Kinect but to the Hololens, capable of mapping space into the eyes and ears of a person through holograms. Alex notes that by the time you approach the solution of the next cell, the previous one has already evolved: in fact, within 5-10 years, the bulky Kinect shrank to the size of an iPhone camera (the camera of the first Kinect for Xbox 360 in 2010 was based on technology from the Israeli company PrimeSense, acquired by Apple in 2013 to integrate its features into the iPhone camera).

Kipman's Matrix

It was 2018, and I was working at Microsoft. The company was in the midst of promoting the Digital Transformation theme, and a small group of us was developing the Inclusive Transformation Framework along with some new thinking tools. Kipman's Matrix was one of them.

As you can guess from the context, "Kipman's Matrix" is what I called a slight generalization of the drawing and the logic of Alex's reasoning from the article above.

Kipman's Matrix is "very simple":

  • We rotate the axes the way we are used to in Europe and pin the future to the upper-right corner. (To the Moon!)
  • For each of the axes, we define a key parameter for the development of the system.
  • On each of the axes, we put three marks to define the stage or state we want to achieve. And we assume that, from today's point of view, it would take 10 times more effort to reach each next mark.

Done. All that remains is to fill in the cells. Et voila!
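For readers who think in code, the recipe above can be sketched as a tiny data structure. This is purely illustrative: the function, the dictionary layout, and the effort rule are my own assumptions about how to formalize the drawing, not part of the framework itself.

```python
# A minimal sketch of a Kipman-style 3x3 matrix (illustrative only).
# Each axis gets three marks; each next mark is assumed to be 10x harder.

def kipman_matrix(x_axis, y_axis):
    """Build an empty grid keyed by (x_mark, y_mark), annotated with
    the assumed order-of-magnitude effort relative to the first cell."""
    grid = {}
    for i, x in enumerate(x_axis["marks"], start=1):
        for j, y in enumerate(y_axis["marks"], start=1):
            grid[(x, y)] = {
                "effort": 10 ** (max(i, j) - 1),  # 10x per step outward
                "idea": None,  # to be filled in during the session
            }
    return grid

# Example axes, borrowed from Kipman's interview:
x = {"name": "interaction", "marks": ["input", "output", "haptics"]}
y = {"name": "subject", "marks": ["human", "environment", "object"]}

m = kipman_matrix(x, y)
m[("input", "human")]["idea"] = "Kinect"  # the 'lazy engineer' 1x1 cell
print(m[("haptics", "object")]["effort"])  # prints 100
```

The `max(i, j)` rule encodes the intuition that a cell is as hard as its hardest axis mark, so the diagonal cells 1x1, 2x2, and 3x3 come out at 1, 10, and 100 units of effort.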

Example. Human-machine vision

Let's take a look at an example. Kipman was describing the evolution of "Total Experience" in general, and we will try to delve into one particular aspect: how "human-machine vision" technology might evolve.

This example was created in 2018; I added diagonal lines to highlight where the technology stood then and by the end of 2022.

So we have two main development vectors:

  • Seeing Features – how well we understand where and how a particular person is looking and what they see.
  • Interface Adaptability – how well the digital world is able to adapt and respond to that knowledge.

On each of these axes, we can distinguish several stages with an order of magnitude increase in complexity (as it seems to us):

  • Tracking the movement of the eyes is difficult, but understanding the gaze direction is "10 times more difficult." And understanding what and how a person actually sees is 100 times more difficult.
  • It is difficult to interpret a person's gaze in an interface, but it is even more difficult to turn the gaze into a means of controlling it. And making the world as a whole responsive to sight is 100 times more difficult.

Then we sequentially fill in the cells, trying to understand what each new opportunity gives us and what product or property we can pack it into.

So we move from eye-controlled interfaces to more advanced gaze trackers, to "tunnel worlds" with focal rendering, to more advanced APIs (most likely available in helmets with built-in eye trackers) for adapting to gaze direction, to neuro-vision trackers, Castaneda-style "crow worlds" controlled in dreams, the ontology of an individual's vision as an API, and "sleepy worlds".

Usually, for such a study, we are given some kind of context in which to discuss and consider the evolution. For example, "human-machine interaction" in general; but we could also talk in the context of a particular mixed-reality platform, like Meta's "Quest-Horizon" pair. Or we could speculate about what a player in one of the Singularity Passage quadrants could do with these breakthroughs. Depending on that context, expert knowledge will be projected into various forms and assumptions.

Something is missing here

Despite the outward simplicity and interpretability of the finished matrix, in practice, compiling Kipman's Matrix turns out to be a non-trivial task requiring a certain amount of expertise.

The imaginary (and real) Alex Kipman, drawing the matrix for mixed reality, possesses several capabilities that he implicitly relies on:

  • He knows the state of the industry as a whole and somehow knows which barrier in a given direction will come next. Based on this, he can name the next target state. Generally speaking, this is non-trivial knowledge, given the demand for an order-of-magnitude complication. The proximity of Microsoft Research is very welcome here, but more on that later.
  • (Although it is not in the interview or his picture, we know it from other talks.) He formulates product hypotheses about what kind of experience can be assembled once we overcome the barriers. That is, he does not just talk about achieving new technical characteristics, creating new algorithms, etc., but can package them into a product. Here again, product-building experience alone is not enough; this is an inner knowledge of why you need to overcome a particular barrier.

Along with this, we, following Alex, are forced to limit ourselves by putting on the "lazy engineer" hat. The transition to each next stage takes 5-10 years of technology development, and the entire matrix, accordingly, describes a horizon of 10-20 years, assuming the first cell is achievable "tomorrow". Therefore, we can dream about what will happen afterward, but we can hardly describe it in terms of specific products and their versions.

It is no coincidence that Alex's matrix is "empty", and in my example above, the farther in time we go, the more "blurred" the terms I use.

If you noticed, in this and the previous section I emphasized fuzzy statements. We went through all of these challenges in our own experience while working with customers and drawing matrices during ideation and product discussion sessions.

The following "updates" to the original model are based on our attempts to overcome these "implicit" issues. So let's discuss how we refined Kipman's Matrix and how it became the Barrier Transition Matrix.

The Barrier Transition Matrix

Barriers and solutions evolution

The first step is to deal with the barriers and to understand how they are resolved and where the order-of-magnitude increase in complexity comes from.

We look at all the barriers from today's point of view. In this sense, their actual complexity decreases over time as the technological landscape evolves. What today seems 100 times more difficult than we can afford will, in some 5-10 years, seem like a challenge just one order of magnitude harder. But new long-term objectives will appear on the horizon, which will again be 100 times more difficult. By convention, the matrix is infinite.

Frankly speaking, we do not know which barrier is the correct one to take as next in the complexity line. (Unless we are talking about a problem with specific measurable parameters, e.g., power consumption; that is a relatively trivial case to formulate.) But in general, we can peek! To do this, we just need to answer the question: where is work currently underway on the barrier that is much more complex and, accordingly, separated from mass practical application by the next 5-10 years?

The answer to this question leads us to a simple sequence:

  • The α-barriers of today are solved by technologies, products, and services available on the open market. That is, we can open a browser and ask in a search box (or, e.g., ChatGPT) how to solve the problem and, with some probability, find a ready-made solution. Perhaps even a free or open-source one, when it comes to software. Or the approach is described in a book with real cases. Or we know from history that someone has already done and implemented it.
  • The β-barriers of +T time are solved in the closed R&D vaults of corporations and startups spun off from scientific laboratories or those same corporations. You cannot buy a ready-made solution on the market (without acquiring a company or a team), but, most likely, you can spot the first signals that a practical solution is achievable in investment and patent activity (as well as in the labor market).
  • The γ-barriers of +2T time are solved in relatively open academic circles. The realization that a barrier exists emerges from research into extreme cases (manifestations of mismatches), which you can learn about from scientific publications. But it is too early to talk about generalizing the technology or transitioning to a production cycle; basically, we are talking about niche solutions, the accumulation of an experimental base, and theoretical research.
  • The δ-barriers of +3T time and beyond are not so much resolved as discussed, in science fiction and philosophical writings. At best, we can expect that people who grew up on these works will, by their mature academic age, approach formulating the corresponding problems as subjects of their research.

Since there is little practical use in δ-barriers from today's state, we leave them outside the matrix scope.
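The same ladder can be written down as a small lookup table. The code below is only my illustration of the α-β-γ-δ sequence; the "evidence" strings paraphrase the list above, and the field names are my own.

```python
# Where evidence of work on each barrier generation lives today
# (illustrative sketch; horizon is measured in steps of T ~ 5-10 years).

BARRIER_GENERATIONS = {
    "alpha": {"horizon": 0,
              "evidence": "open market: products, open source, books, known cases"},
    "beta":  {"horizon": 1,
              "evidence": "closed corporate/startup R&D: investments, patents, hiring"},
    "gamma": {"horizon": 2,
              "evidence": "academic circles: publications on extreme cases and mismatches"},
    "delta": {"horizon": 3,
              "evidence": "science fiction and philosophical writings"},
}

def in_matrix_scope(generation):
    """delta-barriers and beyond are left outside the matrix scope."""
    return BARRIER_GENERATIONS[generation]["horizon"] < 3
```

The point of the table is the substitution described next: instead of guessing barrier complexity ourselves, we read it off from where society is currently working on the problem.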

Let me once again draw your attention to the elegant substitution we just made: replacing our own ignorance with mankind's global knowledge (to the extent accessible to us), which already takes the diversity of thought into account.

To sum up: for a given direction, we get three generations of barriers, which we can formulate based on our understanding of the following questions:

  • What is ready-made on the market?
  • What corporations are working on and what are promising startups?
  • What modern research is carried out at key universities and research centers?

The barrier card in the Barrier Transition Matrix

Cherry on the cake: if we take into account the connection between restrictions and mismatches (in Inclusive Design and Spectral Thinking), it becomes clear that barriers are basically mismatches as well, so we can invert them as drivers or trends. Thus, behind the objective of overcoming each identified barrier, we can name the driver (demand) that pushes us to do it.


The second big difficulty we faced in practice is filling in the cells of the matrix. Kipman's Matrix didn't give us clear instructions on how to do this, except that movement should proceed diagonally from the bottom left to the top right, following the flow of time.

All the same, the question remained unanswered: suppose some barriers are overcome and we have reached the indicated states on the axes. So what? How should we apply it? How do we mix the states on the axes?

And here the right question yields a quite obvious answer: we need to specify the zero element and explicit rules of "evolution".

Thus, we place the initial "basic" element (checkpoint 0), which we already have today, and develop it further in the matrix, adding new features or effects to reach new states. It's time to recall the rules and experience of playing the classic alchemy game.

What is important: although the main progress occurs diagonally (in Kipman's original, in the 1x1, 2x2, and 3x3 cells), we understand that there are also side development paths that "pump" only one of the evolution vectors, and they have the right to exist. It often turns out that these are niche products serving more as a product wrap around a technology (innovation) than as a product whose value bundles several breakthroughs.
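The "alchemy" rule can be made explicit in a few lines. This is my own notation, not a canonical part of the model: a cell's content combines the previous checkpoint with the achievements unlocked on one axis (side path) or both (diagonal step).

```python
# Illustrative "alchemy" of cell filling: checkpoint N+1 combines
# checkpoint N with the new achievements on the axes.

def combine(base, x_feature=None, y_feature=None):
    """Describe a new cell: the base element plus zero, one (side path),
    or two (diagonal step) newly unlocked achievements."""
    parts = [base] + [f for f in (x_feature, y_feature) if f]
    return " + ".join(parts)

checkpoint_0 = "pointer/touch interfaces with heatmaps"
# Diagonal progress: both vectors advance at once.
checkpoint_1 = combine(checkpoint_0, "eye tracking", "gaze-aware UX research")
# Side path: only one vector advances; typically a niche product wrap.
niche = combine(checkpoint_0, x_feature="affordable eye tracking")
```

The design choice here mirrors the text: each checkpoint is fixed as a given and becomes the `base` for the next round of combination.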

The Grand Union

We are now ready to combine the original Kipman's Matrix, our understanding of barrier evolution, and the "logic" of alchemy into a single schema:

Two matrix-filling sequences are recommended:

"Context - details", from the industry context: 0-(α-β-γ)-1-2-3. First, we describe the reference state as of today (0) and barriers along both axes, noting transitional achievements along the way (new technologies, features, etc.). Then we fill out diagonal layers of the matrix cells from 1 to 3.

This approach works best if you know the industry well and have a strong picture of what's going on in it, from what's available on the market to the most secretive and specialized academic research. You can immediately describe the frame for further discussion and concentrate on the wave logic of the evolution of your object of interest.

As a side note: intuitively, we could extend lines 14-15 and 16 beyond the matrix, expanding the diagonal layers into subsequent barriers and achievements. This is a correct intuition, but only within the linear logic of sequential evolution. The catch is that in a large, diverse, and competitive development landscape (this is a substantial precondition!) there will always be someone who combines two linearities into a more complex exponential result without waiting until it becomes obvious to everyone. For example, cells 14-15 will be obvious when we overcome the δ-barriers, and 16 after passing the ε-barriers.

Iterative, from the product: 0-α-1-β-2-γ-3. First, we describe the reference state as of today (0) and the nearest "low-hanging" barriers (α) that open state (1) in just one jump. Next, we identify the β barriers and fill in submatrix (2). Then the γ barriers, and fill in submatrix (3). Along with the barriers, we note transitional achievements (new technologies, features, etc.).

This approach works better if you are moving not from knowledge of the industry (sorry) but from knowledge and vision of your product. Then each updated vision of the target object's evolution (the next submatrix) suggests the next barrier if you want to develop further. Raise the stakes!
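To make the difference between the two orders concrete, here they are written out as explicit sequences. This is an illustrative sketch; the labels simply mirror the notation above.

```python
# The two recommended filling orders for the Barrier Transition Matrix.

# "Context - details": describe all barrier generations first,
# then fill the diagonal layers 1..3.
CONTEXT_DETAILS = ["0", "alpha", "beta", "gamma", "1", "2", "3"]

# Iterative, from the product: alternate the nearest barrier and the
# submatrix it opens, raising the stakes on each round.
ITERATIVE = ["0", "alpha", "1", "beta", "2", "gamma", "3"]

# Both orders visit the same items; only the moment at which you commit
# to the next barrier differs.
assert sorted(CONTEXT_DETAILS) == sorted(ITERATIVE)
```

In other words, the industry-first route front-loads all the barrier research, while the product-first route interleaves it with filling the matrix.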

Example. Human-machine vision

I used the iterative approach to fill in this matrix.

Let's try to put the familiar example we discussed above into the Barrier Transition Matrix and see what changes.

Checkpoint 0 (steps 1-2-3). This is where the mainstream technological sector is today: pointer and touch interfaces, where we have a fair amount of insight into where users point or click and how they react to our sites and apps through heatmap images (but we still don't know what exactly they look at).

Checkpoint 1 (steps 4-5-6). 4) Through specialized agencies (or, if you're big enough, internal UX research labs) you might use eye trackers and neuromarketing tools to iteratively improve user experience and more reliably plan advertising campaigns. This is still a delayed cycle: you understood something about customers and their reactions, updated the interface, layout, and visuals, and rolled out the new version. Repeat.

5, 6) At this stage, you have probably heard that there are people with disabilities who control sites with their eyes using special equipment; live sites that react to attention exist only in your best dreams, and deeper, more personalized analysis and adaptation are beyond ambition. But there are startups that promise to reduce the cost of tracking with conventional and mobile cameras.

Checkpoint 2 (steps 7-8-9-10-11). 7) Means of adapting to the peculiarities of vision become the norm, first in games and then in operating systems. For some software solutions, gaze control becomes a regular feature, and eye-movement trackers are built into advanced virtual- and mixed-reality helmets.

8) We observe the first experiments using such helmets to create heatmaps of physical spaces (e.g., in retail, for laying out shelves); neuromarketing is standard practice not only for advertising but also for high-budget media products; there are new cases of using gaze trackers in medicine, especially in children's care and the early diagnosis of deviations.

9) Truly live interfaces and ad units react to users' emotional reactions by recognizing and parsing the webcam stream. Game characters "understand" where the player is looking and can comment on it. Early examples of "breaking the fourth wall" technology appear in media products, allowing AI to adjust the image to an individual viewer's location and point of view.

10) Enthusiasts dream of more cost-effective focal rendering that takes the human gaze into account, and of creating living worlds that are "aware" of the presence of the human gaze and able to respond to it. Our interfaces are still made of stone.

11) We still operate with statistical representations and do not know how an individual human (or other creature) sees: what they notice, what they are staring at, whether they are actually looking or it is just a frozen gaze, whether their brain really saw something or reconstructed it from memory.

Checkpoint 3 (steps 12-13-14-15-16). 12) Focal rendering is added to MR/VR helmets (at first in pro versions, then within 2-3 years in the mass consumer segment), and new micro-scenarios emerge that shift interaction to another level: remote assistants know exactly where a user is looking; game NPCs make proper eye contact. You can "feel" the world with your eyes.

13) A new generation of tools and techniques for diagnosing brain and mental disorders related to vision emerges, which is also reflected in educational practices and specialists' qualifications. The first studies on reconstructing the visual models of other creatures, based on analysis of the optics of their eyes and the neural networks of their brains, are published.

14) The paradigm of sixth-sense interfaces gradually claims its place in the sun, reacting not only to gaze but also to non-verbalized intention and attitude. The "world of Avatar" becomes possible, reacting to the presence, sight, and intention of a person. New game engines let you build "animation under the gaze" out of the box; "liveness" is a basic property of computer models.

15) At the operating-system level, APIs are capable of adapting to and compensating for features of vision and reaction, and a standardized ontology of vision appears. 3D apps and games get a new layer of information to react to: the focus map. Eye tracking and visual-surface tracking are built into manned vehicles.

16) Sleepy worlds, controlled without body movements by the power of mind and gaze, supplemented by voiceless commands, dynamically rebuilt with the help of AI generation algorithms. Psychotherapists can tap into patients' dreams and visions and manipulate the setting.

Let us now discuss what has changed (or not) in the transition from the Kipman matrix to the Barrier Transition Matrix:

  • The general dynamics and logic of moving into the future remained the same. Moreover, I admit, the initial Kipman's Matrix filled in back in 2018 turned out to be a generally good starting point. It was convenient to peek into, and filling it took much less time than completely filling the Barrier Transition Matrix. In this sense, if you urgently need to draft a vision with experts, rather than explain why it is right, start with Kipman's Matrix.
  • There is a system in place now. We are forced to stop at every step and think not just about what prevents us from moving to the next cell, but about where we are moving at all and what will push us toward this transition. Such clarifications are expected given the structure of the matrix, but I will emphasize again: we first try to describe the general logic and ambition of the changes, and only then land them in a specific context. In this sense, the Barrier Transition Matrix gives a more verified and balanced result.
  • The onion-layer approach. The explicit logic of "alchemy", in which the results of the previous checkpoint become the basis for the next layer, is more convenient than the abstract logic of transition through conceivable obstacles in Kipman's Matrix. When filling out Kipman's Matrix, you need to be prepared for a question like: "OK, what will the next step be that is an order of magnitude more difficult?" Often this cannot be answered; experts simply offer incremental additions or rephrase the previously achieved result. The Barrier Transition Matrix is fractal and self-repeating: we explicitly fix the achievement of the previous stage and treat it as a given for the next iteration. It's a mental trick, but it works.

Appendix A. Narrowing time

In practice, for making business decisions (including product ones), the use of the Barrier Transition Matrix sometimes raises the question of narrowing the time horizon.

The question is literally:

Is it possible to use steps of not 5-10 years but 2-3 years, so that in total the matrix describes not 10-20 years but 4-6, aligning with the traditional long-term planning horizon of most technology companies?

Yes, but you will not like the full answer because of its "impracticality".

Look: we initially proceeded from the request for an order-of-magnitude complication, which in everyday language means that we cannot do it tomorrow but assume we will be able to at the next technological turn. In the context of the Singularity Passage framework, such a turn often means several parallel transitions in different quadrants at once.

For example, if we are discussing the development of AI for, say, generating beautiful avatars, then the underlying technology, like Stable Diffusion, should not only crawl out of the depths of the laboratories but also be ready, at the same time, to: 1) aim at reshaping entire segments of the illustration market across a wide range of scenarios where shorter timelines and accessibility to the average user are a blue dream (Nonconformism Spark); 2) breathe new life into outdated products like presentation designers (it's no coincidence that Microsoft is building the new Designer app) (Recombination Factory); 3) cause a ton of resentment among designers and illustrators, whose work turns out to be not only devalued but also unregulated by copyright when reused for training those same neural networks (Control Bureau).

In other words, you cannot move to a new cycle alone; you need to wait for general readiness. An ideal technology pulls several barriers together at once.

Now, what happens if we narrow the search window? Our window of perception narrows to one, at most two, quadrants. Accordingly, the amplitude of the transition may be sufficient in one segment and seem quite a breakthrough to you personally, but not to the market as a whole. In other words, I'm not saying you can't do this; it just has a consequence: your innovation will be local and limited in impact.

And one more thing: when we thought about the evolution of barriers, we tied ourselves to the "places" where answers are sought. Now that we have decided to shrink the timeline by half, it means we are no longer interested in γ (academic research), and we should divide the α-β segment into intermediate stages.

Then we leave out of scope the rumors and speculation that some company is probably preparing another virtual reality helmet and has started hiring people for it (what a thriller!), and instead begin looking at closer and more concrete prospects:

  • The α+ barriers are faced by enough companies and individuals that among them there are some who have not just recognized these issues but also built a private solution, and probably then opened it to the world. We can tell they hit the right barrier from the community reaction (e.g., the number of stars on GitHub).
  • The α++ barriers are captured in the strategic foresight directing the big-money whales: corporations and investment funds. We see their body movements in M&A transactions and can guess which barrier this or that company is trying to solve; OR through defensive reactions, when one or another player files a patent application or starts lobbying for standards or bills; OR we see a synchronous spin-up of a new topic by several unrelated players (web3, for example).

The rest of the guide for medium-term planning works the same way as for long-term planning.

Appendix B. Barrier Transition Matrix for Singularity Passage Barriers

In the discussion and barrier examples above, I mainly used technological cases as references. Yet at the beginning we started with the barriers that arise in the quadrants of the Singularity Passage, and I mentioned that "production" barriers can be of a different nature, and that the Passage barriers themselves are also barriers (just by definition).

Question: how can we use the Barrier Transition Matrix to analyze the top-level problems of participants in different quadrants?

The answer is an applied one: I compiled a table of where you can monitor the barriers faced by the participants of the various quadrants at different stages:

Accordingly, if your point of view or focus of attention is aimed not only at the development of technologies or products but also at the evolution, in this context, of the actions of the players of one of the Singularity Passage quadrants, then you can put the appropriate barriers on one of the axes of the Barrier Transition Matrix.

Appendix C. It will not be enough; everything is more difficult for us!

Another rare request: what to do if there are more than two development vectors and, accordingly, more than two groups of barriers? For example, in the example about the development of human-machine vision, we could additionally single out a line about neural interfaces, or biochemical effects on vision, or the augmentation of vision into ranges and directions atypical for humans. Or, for example, we might want to understand what corporations will invest in, and where the nonconformist revolution, with its dream of Superman vision, will lurk.

For such cases, there are two answers: a simple one and a complex one.

The simple answer: combine and construct the cube

If you have three vectors, then you successively fill in three Barrier Transition Matrices, cyclically changing the axes. Pay attention to the repetition of the axes: having filled in the barriers once, the second time you simply copy them into the next matrix.

In general, this can be imagined as a 3D cube, but in practice it is very difficult to fit a combination of three evolving factors into consciousness at once, so we work with projections onto the faces of the cube, not its interior.

In practice, one can also use the hexagonal matrix representation:

Obviously, if one day you decide to work with four directions, you will have not three but six matrices.
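The count of matrices is simply the number of axis pairs, which a few lines of code make obvious. This is an illustrative sketch; the vector names are hypothetical placeholders.

```python
# With n development vectors, the "cube" approach fills one matrix per
# pair of axes: C(n, 2) matrices in total.
from itertools import combinations

def projection_matrices(vectors):
    """One Barrier Transition Matrix per axis pair; barriers on a
    repeated axis are filled once and copied into the next matrix."""
    return list(combinations(vectors, 2))

print(len(projection_matrices(["vision", "neuro", "biochem"])))         # prints 3
print(len(projection_matrices(["vision", "neuro", "biochem", "aug"])))  # prints 6
```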

The complex answer: fold and multiply

The second option is to first fill in the Barrier Transition Matrix along the main development trajectory, and then use the diagonal checkpoints as a ready-made development vector for a new matrix. The second vector then provides some additional context that should either ground the development or give it an additional shade.

Accordingly, two scenarios are most typical:

  • A restriction or projection from another logic is imposed on the main development vector P. For example, you described the development logic of generative text algorithms (GPT-3-4-5+), but now you want to understand how it will land in the corporate player's quadrant (Recombination Factory), or how the state (or a monopolist like Google) will react to it.
  • An additional boost (engine) coming from outside is superimposed on the main development vector P. For example, you described the development logic of artificially grown protein crops (meat), but your parent company is considering an M&A deal, believing it will produce a synergistic effect. Then the new vector C is the basic development logic of that addition, and on these two vectors, P and C, we extend the description of the synergistic effects.

Further, this process can be repeated with new additions or projections. Geometrically, this process can be represented as a movement along the diagonal of a cube.

One more answer

An important interpretation follows from the scheme above: after we have projected the development vector P, we "apply" to it the force of the event vector C, which deflects the trajectory toward the new vector Q (1*, 2*, 3*).

Likewise, if you have a system development vector P(1, 2, 3) as a given (right diagram), you can "put" it in the context of other trends, say A and B, and interpret the empty cells as the result of an impact on, or a deflection of, the development trajectory.




The Barrier Transition Matrix framework, templates, and this tutorial were created by Constantin Kichinsky and are distributed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license.

How to specify a license:

The "Barrier Transition Matrix" framework by Human Spectrum Lab, Constantin Kichinsky, CC BY-NC-SA 4.0