Joshua Huang

New Era of Skilling



Amid trills and whistles, toils and travails, we are relearning what it means to learn.

By joshuahuang@microsoft.com


In a recent ruling on TikTok’s divestiture, the Supreme Court wrote in its per curiam opinion, “we are conscious that the cases before us involve new technologies with transformative capabilities. This challenging new context counsels caution on our part…. we should take care not to ‘embarrass the future.’” As we formulate our thoughts for this article, it becomes clear that it is in our best interest to follow such advice.


What makes the current AI revolution particularly arresting—and fundamentally different from, say, the Internet in the ’90s—rests on the question: What exactly is being disrupted? Steam engines, petroleum, and microprocessors, when first introduced, disrupted the old ways in which we worked. But this time, the disruption targets what is uniquely us, what we hold on to as ours: the ‘know-how,’ the creativity, and the finesse. This invokes a fascinating thought: historically, the acquisition of cognitive ability meant hiring additional humans or upskilling existing ones. The former – adding headcount (brains) – no longer rings true: we now rent or purchase cognitive labor on a consumption basis.

Then by extension, what about the latter, skilling? In five years, will it become purely a matter of hardware upgrade? Will an optimal ratio of human to AI play out empirically?

In one aspect, human skilling has been a solution to scarcity. It is about constrained optimization – how societies, firms, and individuals assess tradeoffs subject to the resource constraints they face: within the proverbial Edgeworth Box, possible exchanges of time, human intelligence, or problem-solving ability endlessly reposition to form optimal tangencies, or, in academic terms, the Contract Curve. Given the needs of the employer, city, or nation, we upskill the incumbent workforce when it is marginally utility-improving.

[Visual Illustration of Edgeworth Box]

Now we argue that this box will have no lid, as the non-human counterpart offers intelligence that is becoming infinitely cheap to consume. The collective Pareto allocation at a macro level can take shapes we have never seen before.
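For readers who want the textbook form of the framing above: the Contract Curve is the set of allocations where the two parties’ marginal rates of substitution coincide, subject to fixed endowments (symbols here are the standard ones, not the authors’):

\[
MRS^A = \frac{MU^A_x}{MU^A_y} \;=\; \frac{MU^B_x}{MU^B_y} = MRS^B,
\qquad \text{s.t.}\quad x_A + x_B = \bar{X},\;\; y_A + y_B = \bar{Y}.
\]

The “box with no lid” claim amounts to letting one endowment, say \(\bar{Y}\) (machine intelligence), grow without bound: one side of the box is no longer a binding constraint, and the feasible set of tangencies expands accordingly.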

In another aspect, the two-decade-old system of the web-centric digital world – the system with which we acquire, supply, and market information (or do everything with the internet) – is under pressure. GenAI-based tools that enable other primary sensory organs to interact with the internet directly are poised to challenge the existing ‘middle layers’ between information and human brains. We may no longer be dependent on web browsers, keyboards, or touch screens. The shift to new platforms and the departure from conventional web-hosted or app-based environments can usher in a completely new paradigm of learning.

To take this one step further, as data aggregators go away[1], the entire landscape of online IP (intellectual property) can unravel (e.g., when no one reads your website, who will keep writing for it?) – and this in turn can erode an important long-run input to AI: human-generated data. Now what about SaaS platforms? Would the entire app ecosystem be under threat too? We are getting ahead of ourselves. New protocols or regulations are sure to come into play. But the bottom line is: if your organization still plans to measure learning with web-2.0-era telemetry (page visits, module completions, certifications), be mindful that such data may increasingly be a mirage of decade-long behavioral inertia.

[Visual Illustration of Web-centric interactions]

 

Recap Skilling: A First Principles Approach

 

We want to invite readers to pause for a second and define “skilling,” not in terms of a departmental function or the charter for L&D initiatives, but in terms of its essence. If we applied a first principles approach—stripping away assumptions—what would we find as the irreducible core of skilling? What is its fundamental promise?

At its core, we argue, skilling promises three things.

[Visual Illustration of the 3 Core Promises of Skilling – Source, Absorb, Retain]

The first, and perhaps the most unintuitive, is the improvement in information sourcing. Consider software developers: much of the daily workflow involves searching for solutions, cloning references, and verifying them. They turn to Stack Overflow, Reddit, documentation, and other people’s code. The hard-earned skill here is to rapidly, accurately, and precisely pinpoint the information that solves the problem at hand. We can all probably recall certain college lectures aimed at building exactly these abilities.

The second promise of skilling is the maximization of the ‘initial download’ – how much knowledge a learner absorbs in the first critical moments after exposure to new material. This is what we have in mind when we design curricula, package developer guides, or run a webinar. We want the audience to learn as much as possible, given their aptitude, attention span, and willingness. A good example is how many companies have recently gamified their technical learning materials to improve cognitive intake per sitting. New information-delivery modalities, such as podcasts, short-form videos, and click-through labs, have sprung up to match changing human preferences.

The third is retention. This is why you might decide to reread your marginalia. It is about converting what is initially absorbed into something truly yours. Retention enables application; it transforms knowledge into judgment, expertise, and mastery over time. When we describe someone as experienced, we are often referring to his or her cumulative retention of essential information. In modern skilling, common practices to enhance retention include assessments, simulations, and hackathons, and we use test scores and certifications as measurements.

As we will illustrate next, all three promises should undergo change.

 

Reassess Skilling: What’s set to change

 

Fueled more by vague hints of wonderment than by a steady assessment of purpose, and led more by intuition than by reason, in the next paragraphs we lay out a handful of predictions. This is the section most likely to embarrass our ‘future selves,’ but the hope is to logically deduce some possible future scenarios that show us how to chart an early course forward.

1.     Divergence in relative proportions of memory systems  

In some workplaces today, machines already ingest, comprehend, and organize audio, visual, and textual information from our business meetings and reproduce it on demand. If you think about it, we are adding additional units of memory, like a battery pack. This lessens our need for the Phonological Loop, a subtype of active memory that temporarily stores spoken and written language. In turn, it frees up bandwidth for instantaneous reasoning; it also invokes a different memory system (known in academia as the Episodic Buffer) to digest the post-meeting summary report – our brain becomes a temporary workspace that combines pieces of information of different formats into one coherent ‘chunk’ for processing. As we more widely adopt these practices of bringing AI agents into work and other daily activities, the asymmetry of skill requirements among memory systems will likely widen.

Similar developments can occur in passive, long-term memory systems. We have long relied on external storage – web-browser bookmarks, USB drives, carvings on a rock – to enhance what are called Declarative Memories. This is the memory of ‘what’ – the facts and information we want to consciously recall and verbalize – and perhaps how most people would define human memory. We don’t foresee this going away, but we argue the depth required of the declarative memory system will become increasingly shallow. By analogy, instead of needing to remember a door number (while machines memorize the content within the room for us), we can expect AI to offer a more intuitive retrieval system, so that we only need to remember the name of the apartment building.

Imagine we fetch memory by commanding: “A group of people and I repaired the prototype drone that was shelved last year. I saved sundry diagrams and test footage somewhere; pull all the schematics and flight videos, and frankly any other materials for which I was a contributor, and walk me through them as you watch me rebuild a new one.” In this example, the person does not need to access memory at the level of ‘folder names,’ ‘types of assets,’ or ‘which month.’ We offload our deeper memory nodes to external systems, knowing we can intuitively query for granular retrieval later.

[Visual Illustration of memory system dependency]

It is important to note that what is also happening here is the transition from declarative memory (what) to procedural memory (how) – the memory system for skills, tasks, and routines, often hard to verbalize after the fragmented declarative memories condense and re-form over time. In other words, the necessity of ‘remembering what’ can start to wane.

Many cloud storage platforms today offer tools to automatically re-classify stored data, for example from ‘hot’ to ‘cold.’ It is reasonable to project that AI will also help each of us monitor and auto-allocate our memories, transitioning them from one system to the next, as a way to optimize our overall cognitive output.
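The tiering analogy above can be made concrete with a toy sketch. The rule below classifies items purely by time since last access; the tier names and thresholds are our own illustrative assumptions, not any vendor’s actual lifecycle policy.

```python
import time

# Toy sketch (illustrative only): a recency-based tiering rule, analogous to
# how cloud storage platforms reclassify objects from 'hot' to 'cold'.
# The windows below are assumed values, not a real platform's defaults.
HOT_WINDOW = 30 * 24 * 3600    # accessed within 30 days  -> 'hot'
COOL_WINDOW = 180 * 24 * 3600  # accessed within 180 days -> 'cool', else 'cold'

def classify(last_access: float, now: float) -> str:
    """Assign a tier based on seconds elapsed since last access."""
    age = now - last_access
    if age <= HOT_WINDOW:
        return "hot"
    if age <= COOL_WINDOW:
        return "cool"
    return "cold"

def retier(items: dict[str, float], now: float) -> dict[str, str]:
    """Re-classify every stored item; returns {item_name: tier}."""
    return {name: classify(ts, now) for name, ts in items.items()}

now = time.time()
memories = {
    "meeting_summary": now - 2 * 24 * 3600,       # touched 2 days ago
    "drone_schematics": now - 90 * 24 * 3600,     # 3 months ago
    "old_flight_footage": now - 400 * 24 * 3600,  # over a year ago
}
print(retier(memories, now))
# {'meeting_summary': 'hot', 'drone_schematics': 'cool', 'old_flight_footage': 'cold'}
```

An AI memory assistant, as speculated above, would replace the fixed recency thresholds with a learned estimate of when each ‘memory’ should migrate between systems.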

 

2.     The Economics of Skilling

Over the past century, there has been a sustained effort to actuarially price the value of employee upskilling. In the 1950s, the Nobel-recognized Human Capital Theory treated training as an investment in human capital, not dissimilar to an ROI calculation for machinery. As statistical models matured two decades later, institutions forecasted the lifecycle value and productivity curves of an upskilled workforce. Fast forward to the 1990s: coinciding with globalization and the initial wave of the internet boom, major consulting firms helped reshape how many nascent organizations quantify the value of skilling, notably with McKinsey’s framework on Workforce Transformation as a Strategic Asset. More recently, big data has enabled L&D leaders to apply machine learning techniques to relate skilling to attrition, promotion, and performance, and to tie financial measurements (NPV, IRR) to such efforts.

Now a new round of calculations is underway. Workers and firms will face shifting cost structures and labor relationships. As the marginal cost of integrating AI plummets, the firm’s combined labor isocost line pivots to allow higher quantities of AI deployment within the same budget. The aggregate production possibility frontier (PPF), as well as returns to scope (human + AI), will almost certainly improve, as we can now effectively ‘trade’ with a new supplier of intelligence that holds an extreme comparative advantage. As noted above, the Edgeworth Box will have no ceiling. On the flip side, monopsonistic tendencies can strengthen in the labor market should employers find it easy to substitute for lower- and mid-level skills and recalibrate overall wage levels.
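The isocost pivot can be written out explicitly (a standard formulation; the wage symbols are ours). With budget \(C\), human labor \(L_H\) at wage \(w_H\), and AI labor \(L_{AI}\) at price \(w_{AI}\):

\[
C = w_H L_H + w_{AI} L_{AI}
\quad\Longrightarrow\quad
L_{AI} = \frac{C}{w_{AI}} - \frac{w_H}{w_{AI}}\,L_H .
\]

As \(w_{AI} \to 0\), both the intercept \(C/w_{AI}\) and the slope magnitude \(w_H/w_{AI}\) grow without bound: the line pivots outward along the AI axis, so the same budget buys arbitrarily large quantities of AI deployment while the affordable quantity of human labor stays capped at \(C/w_H\).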

At an individual level, we, the new symbiotic learners, may also begin to make different intertemporal choices when it comes to learning. The initial commitment required by many new knowledge domains is vastly reduced: we can research and articulate legal precedent, converse in multiple languages, or design a neural-net experiment without first investing five years of our lives in study. In a way, the new economics of skilling re-molds our collective risk profiles regarding the investment of time and energy in formal education.

Much of the above, in our humble opinion, can and will be computed, some at the societal level and more prolifically within commercial organizations. Inferring the future economics of employee skilling from your company’s own telemetry and unique preferences is sound advice; you want to be ahead of the curve before the market or your competitors define the math for you.

 

3.     Learner Behavioral Change

Since the dawn of the internet, pioneers of website UX have tirelessly run A/B tests to improve the discoverability and navigation of content for users[2]. We invented different page types (product listing page, ‘editor’s choice’ page, search results page, product detail page, etc.) to highlight hero information or surface long-tailed inventories; we endlessly refined the information hierarchies of our pages and apps. Moreover, we designed and scaled countless digital marketing channels to serve information to the right cohort of audience at the right time; businesses were created and ‘religions’ were born in the pursuit of better SEO (search engine optimization) – the art and science of ranking your website higher in Google’s search results. In short, we have become very good at making information easy to source and to take home.
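The A/B testing discipline mentioned above typically reduces to comparing two conversion rates for statistical significance. A minimal sketch, using only the standard library and the classic pooled two-proportion z-test (the page names and counts below are hypothetical):

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Z-statistic for the difference between two conversion rates,
    using the pooled standard error, as in a classic A/B test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: variant B of a product listing page vs. control A.
z = two_proportion_z(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"z = {z:.2f}")  # |z| > 1.96 -> significant at the 5% level (two-sided)
```

Every page type and information hierarchy the paragraph describes was, at some point, validated by a calculation of roughly this shape; the open question is whether the underlying behavioral assumptions survive the shift to screenless interaction.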

These interface-specific techniques for web and mobile users can start to lose traction with growing behavioral acclimation to hands-free and screenless modes of interaction. Emerging technologies now enable systems to interpret human posture and movement through radio frequency (RF) signals, allowing machines to sense presence and behavior without cameras or touch. At the same time, digital clones—representing fragments of an individual’s voice, memory, and cognitive patterns—are beginning to serve as intermediary agents, interacting with AI systems on behalf of users. These partial identity proxies, imbued with a portion of one’s knowledge and behavioral traits, open new possibilities for more efficient, personalized machine interaction. Voice-based commands remain another key channel, reinforcing this shift toward more ambient, multimodal interfaces.

These advances might also render our existing beliefs about human learning patterns obsolete. For centuries, we sought authoritative learning paths; knowledge was handed down by experts who told distinct and complete stories. We learn microeconomics separately from macroeconomics, horticulture separately from corporate finance; we are accustomed to reading a 300-page textbook cover to cover because that is assumed to be the right sequence of information intake. These notions should be challenged. With AI as the teacher, one can infer that human authority will still set a start point and an end point of learning, but how an individual progresses during that ‘semester’ can permutate into infinite paths. We are now capable of branching out at every sentence of a ‘book’ to a different sentence of a different book: we are now unstoppable learners.


Closing

So here we are, at the edge of something vast. Not just a new chapter in learning, but a new grammar of cognition. The way we source, absorb, and retain knowledge is undergoing a tectonic shift, and with it, the very architecture of skilling. Amid this, one truth persists: the future will not be inherited by the most informed, but by the most adaptable. As Alvin Toffler once wrote, “The illiterate of the 21st century will not be those who cannot read and write, but those who cannot learn, unlearn, and relearn.” If skilling once meant filling a vessel, it now means equipping a compass. And as the maps grow stranger, we might find the truest north not in mastery, but in the courage to learn again.

End



[1] Case in point is the drastic 95% decline in paid subscription users for Chegg.com in the 3 months following the launch of ChatGPT. Chegg is a data aggregator for learning materials.

[2] Users: person(s) seeking information or in the process of learning something.
