Last year, a plant manager told me about an operator who caused a $200,000 quality escape. When he pulled the training records, everything checked out. LMS: complete. Work instructions: acknowledged. The operator had even been "certified" in the procedure.
What the records didn't show — couldn't show — was that the operator had watched a training video on a Monday morning and clicked "complete" without absorbing a word of it. The system said trained. Reality said otherwise.
That gap — between what your systems say and what your workers can actually do — is costing manufacturers millions. And the industry is spending billions to make it wider.
$8.6 Billion. Zero Proof of Competency.
The connected worker market is valued at $8.6 billion today. It's projected to hit $20 billion by 2030. That's a lot of smart glasses, tablets, digital work instructions, and real-time guidance platforms heading to the shop floor.
And almost none of it answers the question that actually matters: Can your worker do the job?
This isn't a knock on connected worker technology. Digital work instructions reduce errors. Guided procedures help new workers ramp faster. Real-time data connectivity is genuinely valuable. These tools have earned their place.
But they were built to guide workers, not verify them. There's a difference. A big one.
What "Trained" Actually Means in Most Plants Today
Walk through a typical manufacturing operation's training stack:
The LMS tells you a worker completed a course. It doesn't tell you they understood it. It definitely doesn't tell you they can apply it. It tells you they clicked through slides and passed a quiz — probably multiple choice, probably on content they've seen before. Completion is the metric. Competency is assumed.
Digital work instructions tell you a worker acknowledged a step. They saw the instruction. They tapped "done." Whether they executed it correctly is invisible to the system. The step is marked complete because the worker said so.
Connected worker platforms tell you a worker was present, logged in, and active. Activity is tracked. Guidance was displayed. But the platform has no idea if the worker followed that guidance well, poorly, or not at all. It's time-on-task data dressed up as training data.
Every one of these systems is answering a different question than the one you need answered.
They're answering: Did they go through the process?
You need to know: Can they do the job?
The Blind Spot That's Hiding in Your Quality Data
You've seen it in your NCRs. Operator error traces back to a worker who "was trained on that." Pull the records — yep, trained. Certified, even.
But was the training verified? Or was it just logged?
There's a retiring machinist at every plant I've ever talked to. Forty years of knowledge in his hands. He knows how to feel when a part is right, how to listen for a machine that's about to give trouble, how to run a procedure in a way that keeps it from failing downstream. None of that is in your LMS. And when he walks out the door, it's gone.
The connected worker platforms were supposed to help with knowledge transfer. Capture the expert. Build the work instruction. Display it to the next worker. And they do — to a point. They capture the what. They display the how-to.
But they never close the loop. They never verify that the next worker actually absorbed it.
You've digitized the instructions. You haven't digitized competency verification.
The HoloLens Wake-Up Call
If you're running Microsoft Dynamics 365 Guides on HoloLens, you already know what's coming. HoloLens is sunsetting in December 2026. Microsoft hasn't announced a hardware successor. D365 Guides customers are being orphaned — mid-investment, mid-deployment, mid-promise.
It's an uncomfortable position. You built workflows around a platform. Trained workers to use it. Proved ROI to leadership. And now the hardware that underpins the whole thing has an expiration date.
The natural question is: what do we migrate to?
But the better question is: what were we actually trying to accomplish?
D365 Guides told workers what to do. That was the value proposition — display the procedure, guide the steps, reduce errors through in-the-moment instruction. It was "tell" technology.
If you're replacing it, don't just replace "tell" with more "tell." This is the moment to ask whether you want workers who can follow a guided procedure — or workers who can actually perform it.
The Shift: From Tell to Show
There's a fundamental difference between a system that tells a worker what to do and one that asks a worker to show they can do it.
Every connected worker platform on the market is, at its core, tell technology. Step 1: do this. Step 2: do that. Confirm complete. Move on.
That's not useless. For genuinely complex, rarely-performed procedures, live guidance reduces errors in the moment. But it doesn't build competency. It builds dependency.
If a worker needs a guided checklist to get through a procedure they've done 500 times, that's not competency. That's a crutch. And on the day the tablet battery dies, or the glasses fog up, or the system goes down — you'll find out whether your workers actually know what they're doing.
Competency means: the worker can perform the procedure correctly, without being told each step, because they know it.
Proving competency means: having them show you.
That's the shift.
How skillia.AI Works
The premise is simple. The execution is precise.
An expert records the procedure. Not a training video — a performance recording. The expert, doing the job correctly, in full. This becomes your SOP baseline.
That recording is reviewed and approved. Not by AI. By your team. Your process engineers, your quality leads, whoever owns that procedure. They watch it. They say: yes, this is what correct looks like. That approval is the ground truth.
Workers record themselves performing the same procedure. Using a phone or smart glasses. No special setup. They show their work.
AI compares the worker's performance against the approved SOP recording. Motion by motion. Technique by technique. It generates a competency score: what they did right, what they missed, where they deviated.
You get verified proof of competency — with video evidence attached. Not a checkbox. Not a quiz score. An actual demonstration, scored against an actual expert, with the footage to back it up.
For recertification, same process. Worker records. AI scores. You know.
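skillia.AI hasn't published its scoring internals, so treat the following as a minimal sketch of one plausible way motion-by-motion comparison could work: dynamic time warping (DTW) over pose-keypoint sequences extracted from the two videos. Every name, the 17-keypoint layout, and the score formula here are illustrative assumptions, not the product's API.

```python
# Illustrative sketch only: skillia.AI's actual scoring model is not public.
# One plausible approach: extract pose keypoints from each video with a
# pose-estimation model, then align and compare the two motion sequences
# with dynamic time warping (DTW). All names and formulas are hypothetical.
import numpy as np

def frame_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Mean Euclidean distance between two pose frames of shape (K, 2)."""
    return float(np.linalg.norm(a - b, axis=1).mean())

def dtw_cost(expert: np.ndarray, worker: np.ndarray) -> float:
    """DTW alignment cost between two pose sequences of shape (T, K, 2).

    DTW tolerates pace differences: a worker who performs the same motions
    a little slower or faster isn't penalized for timing alone."""
    n, m = len(expert), len(worker)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = frame_distance(expert[i - 1], worker[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # worker lags behind
                                 cost[i, j - 1],      # worker runs ahead
                                 cost[i - 1, j - 1])  # frames match up
    return float(cost[n, m] / (n + m))  # normalize by maximum path length

def competency_score(expert: np.ndarray, worker: np.ndarray,
                     tolerance: float = 0.05) -> float:
    """Map alignment cost to a 0-100 score; `tolerance` is plant-specific."""
    return 100.0 * float(np.exp(-dtw_cost(expert, worker) / tolerance))

# Stand-in data: 120 frames of 17 keypoints. In practice these sequences
# would come from pose estimation on the SOP and worker recordings.
expert_seq = np.random.rand(120, 17, 2)
worker_seq = expert_seq + np.random.normal(0, 0.01, expert_seq.shape)
print(f"Competency score: {competency_score(expert_seq, worker_seq):.1f}")
```

Run the same comparison segment by segment against each SOP step and you get more than a single number: you get the per-step breakdown of exactly where the worker deviated.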
Why We Don't Trust AI — And Why That's the Point
Here's something we say openly at skillia.AI: we don't trust AI to define what "correct" looks like.
AI doesn't know your process. It doesn't know your tolerances, your customer requirements, your institutional knowledge about why step 7 matters even though it seems redundant, or what a good torque application actually looks and sounds like on your specific assembly.
Your experts know that.
So the SOP recording — the baseline everything is compared against — is human-verified. Your expert performs it. Your team approves it. AI doesn't set the standard. It measures against it.
This matters because it's the only way to make the competency verification trustworthy. If AI decided what "good" looked like, you'd have one black box judging another. That's not defensible in an audit. It's not something you can stand behind in a quality review.
But when an expert-approved recording is the standard, and AI is measuring performance against that human-verified baseline? That's defensible. That's auditable. That's something a quality director can put in front of a customer and say: here's the proof.
The AI is the measurement tool. Your experts are the judges. That's the right division of labor.
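To make that division of labor concrete, here's a minimal sketch (hypothetical names, not skillia.AI's actual API) of what an approval gate looks like in code: the scorer simply refuses to run against any baseline a human hasn't signed off on.

```python
# Hypothetical sketch: the AI scorer will only measure against a baseline
# that a named human has approved. AI never defines "correct" on its own.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SOPBaseline:
    procedure_id: str
    recording_uri: str                 # the expert's performance recording
    approved_by: Optional[str] = None  # set only after human review

def score_against(baseline: SOPBaseline, worker_recording_uri: str) -> float:
    if baseline.approved_by is None:
        raise ValueError(
            f"SOP {baseline.procedure_id} has no human approval; "
            "refusing to score against an unverified baseline.")
    # The motion-by-motion comparison sketched earlier would run here.
    raise NotImplementedError
```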
What This Looks Like in Practice
A new operator is onboarded to a critical assembly procedure. Instead of clicking through an LMS module and getting certified on paper, they:
- Watch the expert SOP recording once
- Practice the procedure
- Record themselves performing it
- Receive an AI-generated score showing exactly where they hit the standard and where they didn't
- Repeat until they're ready for final certification
At final certification, they record again. The score is logged. The video is attached. HR has a competency record with evidence. Quality has a baseline for that worker on that procedure. If there's a future NCR, you can pull the certification footage and see exactly what they were shown and how they performed.
That's not a completed training course. That's verified competency.
For an experienced worker recertifying after a procedure change: same flow. Show us you know the update. We'll score it against the new SOP. Fifteen minutes, verified, done.
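And the record behind each of those certifications? Here's a hedged sketch of what an evidence-backed competency record could contain. The schema is hypothetical, not skillia.AI's actual data model; the point is that every field is checkable and the footage is attached, so an auditor can replay exactly what was scored.

```python
# Hypothetical schema for an evidence-backed certification record.
# Not skillia.AI's actual data model; illustrative only.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class CompetencyCertification:
    worker_id: str
    procedure_id: str
    sop_revision: str            # which version of the SOP was the standard
    sop_approved_by: str         # the human who approved that baseline
    score: float                 # AI-generated score against the baseline
    passing_threshold: float     # plant-defined pass bar for this procedure
    evidence_video_uri: str      # the worker's recorded performance
    deviations: tuple[str, ...] = ()  # where the worker missed the standard
    certified_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    @property
    def passed(self) -> bool:
        return self.score >= self.passing_threshold

# Usage: the record a quality director would pull during an NCR
# investigation or a customer audit.
record = CompetencyCertification(
    worker_id="OP-4417", procedure_id="ASM-230", sop_revision="C",
    sop_approved_by="J. Alvarez, Quality Eng.", score=94.2,
    passing_threshold=85.0,
    evidence_video_uri="s3://plant-records/certs/OP-4417-ASM-230-C.mp4")
print(record.passed)  # True, with the footage to back it up
```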
The Question You Should Be Asking
You've invested in connected worker technology. Maybe you're getting real value from it. The guidance is reducing errors. Workers are finding procedures faster. That's real.
But when your best process engineer retires, and his knowledge is now in a digital work instruction — have you verified that the next worker can actually execute it the way he could?
When a critical customer quality issue traces back to an operator, and you need to demonstrate that your workforce was properly qualified — can you show video evidence of that operator's last competency certification?
When you're onboarding 50 new workers this quarter to hit a production ramp — are you confident that "training complete" means they can do the job?
If any of those answers is "not really" — that's the gap.
Is Your Connected Worker Competent?
We can prove it either way.
If your workers are truly competent, skillia.AI will confirm it — with evidence you can show a customer, an auditor, or a regulator. That's a quality story worth telling.
If there are gaps, you'll find them before they show up as quality escapes, safety incidents, or failed audits. Finding gaps is the whole point.
The connected worker market is building the most sophisticated shop-floor guidance systems in history. That's good. But guidance without verification is still just hope.
You don't have to hope your workers are competent. You can know.