There’s a bar in London powered by AI: in certain zones, facial recognition technology will identify who got to the bar first, and line them up in a virtual queue, circling faces on a screen so that bar staff can work out who to serve next. The service is offered as a reasonably cheap subscription for the bar, with some data (generated by the company that makes the system) about how it can increase profit and volumes of alcohol sold.
I’m interested in it from a different perspective: the evolution of dominant narratives, in this case, the specific social context of queueing. Growing up and entering the social scene was often a journey of learning how to queue and get served at the bar, with a whole range of concomitant social behaviours, rituals, and cultural connotations.
For my American friends who have not visited the UK, it may be worth mentioning that the experience of a bar is very different on either side of the pond: in the UK, bar staff are paid a wage (usually minimum wage), and your position in the serving queue may be a function of aesthetic appeal as much as order of entry. In the US, where bar staff are tipped, you can more readily buy your way to good service. Certainly I have found that the bar area in the US is a far more social scene than here in the UK, partly because bar staff have a vested interest in making you feel welcome, but also because remaining seated at the bar is more accepted and normalised.
Here in the UK, I have to resort to prominently holding a ten-pound note, catching people’s eyes, and engaging in the time-honoured ritual of pointing to my neighbour and saying ‘he was here first’. I once waited forty minutes to be served on New Year’s Eve.
Facial recognition and crowd filtering sort all this out (although imagine if the system introduced an overlay of who tipped best, and hence imposed a taxonomy within the queue). Time at the bar could be reinvested: you could read a book, check your emails, or even talk to a stranger, with no need to keep pushing to the front, or figuring out who smiled at you last. In such a world, what do we lose, and what do we gain?
This is the social context of technology: it facilitates and enables, it reinforces and promotes difference and inequality, it provides fairness and democratisation, as long as you can afford it. But certainly it is challenging and fracturing dominant narratives of the past, and providing space for new ones to emerge. In my own writing, I am finding the language of Dominant Narratives, describing existing social scripts and behaviours, particularly useful for understanding this.
Last week I shared the final piece of writing about Apollo, a piece that was harder to write, describing the accident on Apollo 1 and the deaths of the three astronauts: I used this to consider the complexity, risk, and humility (or arrogance) of both systems and leadership.
Exploring Learning Science
Since then, I’ve been focussed on the new work around Learning Science, which is the first module I’m building out fully for the new Modern Learning Capability Programme.
This first piece aims to set the context: that ‘Learning Science’ is a broad discipline, based upon multiple established sciences, and that our role is not to ‘master’ it, but rather to curate a space of interest.
The second piece encourages individuals to consider the Organisational philosophy of learning, and to recognise and reflect upon reductionist, constructivist, and emergent approaches. I suspect it is a clearer indication of where this work is evolving.
This was a short piece, written at the end of a busy day, but often brevity equates to clarity, and I rather like it: it considers how we are each, ultimately, an island, holding our own personal understanding of ‘meaning’, and making occasional voyages to share this with others.
This final piece starts to build out the Learning Science work: it begins to explore existing disciplines, and to make explicit links back to what it means in practice. This area will be my focus next week.
What I’m Reading
I’m halfway through Serhii Plokhy’s book ‘Chernobyl: History of a Tragedy’, the first definitive account of the nuclear accident, from 1986 right through to 2018. It’s a fascinating read, and three early perspectives I’m taking away are these:
- The level of ignorance and misunderstanding around the risks of radiation is staggering: the reactors were built, and run, on a highly vernacular model. The complexity was managed by committee and process, not by distributed capability and comprehensive understanding.
- The paralysis of the system in responding to the disaster is a good illustration of how one frame can persist well beyond the point at which it is broken: in this case, nobody admitted the reactor had exploded, despite the clear evidence in front of their eyes. The older frame, that this was just a fire, persisted for many hours. This illustrates the risk of speaking out against a dominant narrative.
- Complexity is additive, a phrase I started to explore in the Apollo writing: a system is not necessarily ‘simple’ or ‘complex’; rather, the layers of complexity create meta effects. That may be either insight or the blindingly obvious, but it is perhaps something I will think on further.
In The News
AI in the NHS
This piece, which includes an overview of a few applications, provides a timely reminder that AI is most certainly moving mainstream. Aside from the medical diagnostic applications, I thought the point about how AI may help identify patients more likely to miss appointments is a good illustration of the way technology is increasingly disruptive of social norms (or Dominant Narratives), as I mentioned in the introduction.
The AI Bar
And here is the press release about Datasparq’s AI powered bar.
This piece, behind the hyperbole and fear, speaks clearly to how technology will disrupt education. I share it not specifically in terms of the pedagogy, but rather as commentary on how prepared both the education and technology sectors are to exploit this. I suspect the next decade will see the widespread intrusion of disruptive, global players, possibly with established entities retreating to remain simply as brand names, owned by tech.
#WorkingOutLoud on the Certifications
As you can see from my writing above, I have started the work on Learning Science with energy. I have felt a bit daunted about building out the Modern Learning Capability Programme: although I have all the material and structure in outline, which I use with small groups, this is a programmatic piece, and will be offered at greater scale.
I am particularly excited that I will be publishing the full curriculum and materials as I go: this will take the form of a series of Social Age Guidebooks, the first of which will be ‘The Learning Scientist Guidebook’. As with the other Guidebooks, these will each be sub-10k words, and have a practical focus.
So far, I feel I am holding my head above the complexity, but yesterday I did find that I lost my way a bit.
The core idea is this: we will explore the landscape of Learning Science, with a view to understanding what each discipline looks at and where it is heading, and then curate our own personal discipline as Learning Scientists. This is the key for me: it helps us move from using the word ‘Learning Science’ as a totem to practical ideas about what we explore, what it will inform, how it is limited, and where it may fail us.
What I'm Thinking About
I’m mainly considering my own learning, in two contexts: often we work within an existing domain of knowledge, creating new meaning, or applying what we know to the matter in hand. But with the Apollo writing, and now the Learning Science, I am consciously trying to fracture or expand my own domain. This takes me to a place that is both more exciting and more frightening, because certainty comes with familiarity, and in this strange land it is harder to be certain.
I felt this in my writing on neuroscience yesterday: I know my way around this area, I am comfortable with the language, and I have an underlying conceptual model of how it all fits together, but there is, of course, a gulf between the type of knowledge we need to hold in order to understand something, and the language we need to explain it.
In practice, what this means is the writing sometimes ties itself in knots, as a very visible representation of my thinking taking shape. And sometimes it falls down a rabbit hole altogether.
The main risks are either staying too high for too long, describing the challenge ad infinitum without ever getting to detail, or conversely, falling into radical detail and losing momentum for the overall journey.
Part of the reason I am focussed on writing a new Social Age Guidebook out of this work is that it will force me to work to a 10k-word overview: if I use half of that up with a structural description of the brain, or an interesting aside on imaging technology, I will fail.
The other challenge is that, whilst I consider my understanding to be incomplete in ways that I know, it is, naturally, incomplete in ways that I do not yet understand. Comprehension is always a series of false summits, on a mountain that is infinitely tall.
Or as Terry Pratchett used to say, learning helps you to become ignorant on a whole new level.
Still: I am enjoying the stretch, and better still, enjoying my new plateau of ignorance.
If you enjoyed this, please sign up and share here: