
Tuesday, April 14, 2026

Lessons from the Far Side of the Moon

“human in the loop”

I am sure you have seen and heard this phrase; it has become almost a default in conversations about AI and work. I have been thinking about it for months. It is supposed to be reassuring: even as AI takes over more and more, humans are still somewhere in the system, overseeing, validating, and intervening when needed. But in the last few weeks, the more I have mulled over this phrase, the less reassured I have felt.

But something shifted last week.

Image credit: NASA
Seen during Artemis II’s lunar flyby on April 6, 2026, the Moon and Earth align in the same frame, each partially illuminated by the Sun.

On April 10, 2026, four astronauts completed a mission to fly around the far side of the Moon. They went further from Earth than any humans in over 50 years. On their way back, for about 40 minutes, they were completely cut off from NASA mission control. 

There was no signal and no loop; it was all human.

While I am not 100% sure, I bet the team at NASA focused a lot on how to build and nurture the competencies of the four astronauts for this 40-minute blackout, when the Orion module would be cut off entirely. All these efforts would go into preparing the humans to respond to unknown and unpredictable situations, and into trusting that the humans inside the spacecraft would know what to do.

Image credit: NASA
The Artemis II crew – (clockwise from left) Mission Specialist Christina Koch, Mission Specialist Jeremy Hansen, Commander Reid Wiseman, and Pilot Victor Glover – pause for a group photo inside the Orion spacecraft on their way home.

NASA did not send AI to the Moon; it sent humans, because fundamentally there are many things where humans are not just meant to be in the loop but meant to be in the lead. I was reassured by a mission where humans were not simply a checkpoint or validation point in the system; they were the ones leading the design and the decisions. And if you have seen the pictures and the beautiful, poetic reflections of the astronauts on what they were seeing, feeling, and learning, you will agree that humans are not just validators; we are value creators.

It was Accenture CEO Julie Sweet who said, "AI future should be human in the lead", and I love this reframing. The difference between loop and lead is not a subtle one. It is a powerful way to think about how organizations must use AI without losing their own ability to think, learn, grow, and become more intelligent.

When an organization like NASA sends humans to space through missions like Artemis II, it is not because machines are incapable. It is because, in environments where there is ambiguity and uncertainty and the consequences are as real as it gets, we can't afford to outsource our judgement.

So, as AI reshapes our work and our world, instead of asking whether or not we keep humans in the loop, we have to think more intentionally about how to keep humans in the lead! This means designing roles, systems, training, and work so that humans continue to build their capability and thinking, and so that they continue to feel, express, question, and engage with each other.

I have said this before: I don't think the real risk of AI is replacement. I think it is the erasure of our unique fingerprints, the breadcrumbs each of us leaves behind that reveal who we are through our choices, decisions, and connections with other humans.

In the race to AGI, if we focus only on making AI more intelligent, we may end up designing systems and organizations that make us less intelligent. Which also means that perhaps the real work ahead is not just building smarter AI, but redefining human intelligence itself.

Tuesday, March 3, 2026

Learning and Development in the New World

I finally read Donald H Taylor's L&D Global Sentiment Survey (GSS) 2026 report last week, and I haven't been able to put it down. No, it didn't have all the answers, but what it did say confirmed something I have been watching build for years.

Donald calls it the “New World”. 

For me, the most striking data point in the entire report was that the word "human" appeared in 64 challenge responses this year, for example in comments like: "how to properly utilize AI while not removing the human element". The word "human" didn't appear at all in 2022 or 2023.

Donald's read on this: "On the surface, this comment is about design, but it could also be about something more profound: about L&D's very sense of identity coming under threat from AI." (p.21).

I think that's exactly right.

Donald notes that "The biggest riser this year is 'Showing value', up from #7 last year to #5 this year, with its highest-ever share of the vote. Anecdotally, it seems L&D is feeling the pressure to justify its existence."

Yes. We don't have a technology or AI problem; we have an existential problem about who L&D is and what our role is in this New World.

Donald frames L&D's new role as "a shift from content creation to performance consulting, from training delivery to capability ecosystems, and from reactive service to proactive partnership" (p.17).

I agree. But I don't think this shift will happen through better technology or more AI. Instead, it will happen through bringing even more humanity into our work. I think the shift is contingent on L&D practitioners who know their craft deeply enough to make it human, visible, and connected to outcomes that matter to the business.

The survey's conclusion lands on a note of cautious optimism: "while we have no map, we do have a direction." (p.22)

Yes, we may be entering a New World, and the directions to get there may look intriguing. But after 25+ years in this field, working at the intersection of workforce development, competency assessment, and learning design across industry sectors, I can say that the destination for L&D has always been the same.

Does the worker know more, perform better, and contribute more fully as a result of what L&D did? Everything else is just the method we chose to get there.

The GSS 2026 report is available at https://donaldhtaylor.co.uk/research_base/global-sentiment-survey-2026/

Well worth your time.

#LearningAndDevelopment #WorkplaceLearning #TalentDevelopment #GSS2026 #HumanFirst #PerformanceConsulting #FutureOfWork

Wednesday, February 18, 2026

RPL: Where Rigour Meets Relationships

Image by DAMIAN NIOLET from Pixabay

This week, I read an article that really resonated with me, and I say that as someone who has spent more than two decades designing and implementing learning and RPL/PLAR (Recognition of Prior Learning) systems internationally and across multiple Canadian industry sectors.

"The Art and Science of Facilitating RPL – Why recognition is a craft; not a checkbox"
https://hbta.edu.au/the-art-and-science-of-facilitating-rpl/

Vanessa Solomon articulates something here that many of us in this field have observed and felt: RPL is fundamentally a practice that requires empathy, critical thinking, and professional judgement, not just a mechanical exercise in evidence mapping.

The framing of RPL as an interplay between art and science is useful because it allows us to think more about how much of the RPL practice is an act of facilitation, coaching and mentoring beyond knowing the technical standards, reviewing the evidence criteria and mapping competencies to credits. The instructional designer in me was genuinely chuffed to see a nod to Marzano and using instructional design theory to elevate RPL as a discipline rather than simply a process.

I have worked across many sectors, from early childhood education to automotive trades to the digital economy, and have found that if we focus only on rigour without relationship, we get compliance but no advocacy. If we build relationships without much rigour, we get paper credentials that are not valued. To get to meaningful recognition, we need both rigour and relationships.

Where I'd like to push this further is the focus on the system. How do we build RPL tools, processes, and assessor support structures that help us strike the right balance between rigour and relationship, between art and science? A lot of good RPL work is more upstream than we imagine. I am talking about how competencies are drafted, communicated and 'lived in' by the sector. I am also thinking about the quality principles for the design of assessment tools and processes to be inclusive and usable.

Those caveats aside, I'd love for Vanessa's article to be read widely and for people to hear the point that she makes referring to "Marzano’s core insight: that effective professional practice is never purely technical or purely relational, it is both."

Full article by Vanessa Solomon at HBTA:
The Art and Science of Facilitating RPL – Why recognition is a craft; not a checkbox https://hbta.edu.au/the-art-and-science-of-facilitating-rpl/ #RPL #PLAR #VPL #Recognition #PriorLearning #RecognitionOfPriorLearning #Skills #LifelongSkills #WorkforceDevelopment #Validation #SkillsDevelopment