

It's the question every skeptical estimator asks during a demo.
You see the AI identify beams on a clean CAD PDF. It's fast. But you've been in this industry long enough to know that "demo drawings" aren't real life. Real life is a scanned set from 1995 with coffee stains. Real life is an engineer who invents their own symbols. Real life is a last-minute revision where the architect changes the naming convention for the third time.
So you ask the practical question: "What happens when I feed it my messy projects? Does it break, or does it learn?"
If you're using traditional software, it breaks. Static software only knows what it was programmed to know on day one.
Machine Learning (ML) works differently. Platforms like LIFT aren't just tools; they improve with use. The more you feed the system real projects, the more accurate it becomes.
In this guide, we'll explain how machine learning actually works in steel fabrication, why human estimators are the primary trainers of the system, and how the software you use today performs better six months from now.
If you want to see how this learning engine fits into your complete estimating workflow, start with The Ultimate Guide to Steel Estimating: Best Practices for Fabrication Success. That guide covers the full process from drawings to final bid, while this article focuses on a specific question: how LIFT learns from your projects and improves over time.
To understand why LIFT improves, consider why older software doesn't.
Traditional estimating software relies on rules written by programmers. A command might read: "If two parallel lines are 12 inches apart, flag it as a beam."
This works on perfect drawings. But introduce a crooked line, a revision cloud that overlaps the edge, or unusual spacing, and the rule fails. The software cannot adapt. It will keep failing until a programmer rewrites the code.
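If you're curious what that brittleness looks like in code, here's a toy sketch of a hand-written rule (a hypothetical simplification, not any vendor's actual parser). Note how the rule assumes perfectly straight lines and exact spacing:

```python
# Illustrative sketch of a brittle, hand-written detection rule.
# (Hypothetical simplification; real CAD parsers are far more involved.)

def is_beam_rule(line_a, line_b, spacing_in=12.0, tol=0.1):
    """Rule: two parallel horizontal lines ~12 inches apart => beam."""
    (ax1, ay1), (ax2, ay2) = line_a
    (bx1, by1), (bx2, by2) = line_b
    a_flat = abs(ay1 - ay2) < 1e-6        # rule assumes perfectly straight lines
    b_flat = abs(by1 - by2) < 1e-6
    gap_ok = abs(abs(ay1 - by1) - spacing_in) < tol
    return a_flat and b_flat and gap_ok

# A clean CAD drawing passes the rule...
clean = is_beam_rule(((0, 0), (240, 0)), ((0, 12), (240, 12)))
# ...but the same beam on a slightly crooked scan fails it.
scanned = is_beam_rule(((0, 0), (240, 0.4)), ((0, 12.1), (240, 12.5)))
```

A fraction of a degree of scanner skew is enough to make `is_beam_rule` return `False`, and no amount of re-running it will change that. That's the failure mode the next sections are about.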
LIFT wasn't programmed with rules. It was trained on examples.
Starting in 2021, SketchDeck.ai collaborated with AISC-certified fabricators to feed the system real structural steel drawings. We didn't tell the computer "look for parallel lines." We showed it 50,000 examples of a W12x26 beam, clean, messy, rotated, and faint, and labeled them all as beams.
The system's Neural Network learned to recognize the pattern of a beam, regardless of noise or variation. It learned to tell a structural line from a scanner artifact.
This is the core difference: static software gets worse as projects get more complex. Learning systems adapt.
You don't need a PhD in computer science to understand the mechanics. Three concepts drive the engine.
Convolutional Neural Networks (CNNs) are the architecture used for reading drawings. Think of them as digital filters that process your PDF in layers.
This hierarchical approach lets LIFT "see" a drawing the way a junior estimator does, building from lines to meaning.
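To make that first layer concrete, here's a toy sketch of what a single convolutional filter does: a small grid of numbers slides over pixel values and responds strongly wherever its pattern (here, a vertical edge) appears. Deeper layers combine those responses into shapes, then symbols. This is a hypothetical minimal example, not LIFT's actual architecture:

```python
# Toy sketch of a CNN's first layer: a 3x3 filter slides across the image
# and "fires" where its pattern appears. (Illustrative only.)

# 5x5 "drawing": a dark vertical line (1s) on a white background (0s).
image = [[0, 0, 1, 0, 0] for _ in range(5)]

# Vertical-edge filter: responds to a light-to-dark transition.
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]

def convolve(img, ker):
    out = []
    for r in range(len(img) - 2):
        row = []
        for c in range(len(img[0]) - 2):
            acc = sum(img[r + i][c + j] * ker[i][j]
                      for i in range(3) for j in range(3))
            row.append(acc)
        out.append(row)
    return out

feature_map = convolve(image, kernel)
# Positive values mark the left edge of the line, negative values the
# right edge, and flat regions produce zero.
```

A learned filter like this doesn't care whether the line is an inch off or slightly faded; it responds to the pattern, which is why trained models tolerate noise that hard-coded rules can't.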
Transfer Learning explains why LIFT works specifically for steel.
The model started with foundational understanding of geometry (pre-trained on millions of generic images) and was then specialized for structural steel documents. It's like hiring an architect and teaching them your trade. They already know how to read plans; now they're learning steel details.
The system doesn't learn from all data equally. It uses Active Learning to focus on what it doesn't know.
If LIFT is 99% confident about a beam, seeing it again teaches it nothing. But if it encounters an unusual connection and is only 40% confident, that's valuable data. The model targets these gaps for learning.
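The mechanics of active learning are simple enough to sketch: route only low-confidence detections to a human, and let those become the next round of training data. The field names and the 0.8 threshold below are illustrative, not LIFT's actual API:

```python
# Sketch of the active-learning idea: uncertain detections go to a human
# for labeling; confident ones don't. (Names and threshold are illustrative.)

def select_for_review(detections, threshold=0.8):
    """Return detections the model is unsure about; those become training data."""
    return [d for d in detections if d["confidence"] < threshold]

detections = [
    {"label": "W12x26 beam",        "confidence": 0.99},  # already mastered
    {"label": "HSS column",         "confidence": 0.97},
    {"label": "unusual connection", "confidence": 0.40},  # worth learning from
]

review_queue = select_for_review(detections)
# Only the 0.40-confidence connection reaches the estimator; the confident
# detections would teach the model nothing new.
```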
This is where you become central to the system. The biggest misconception about AI is that it replaces human judgment. The opposite is true: human feedback is what powers the machine.
Consider GPS mapping apps. When you drive and encounter a road closure that isn't on the map, you take a detour. The app notices. When enough drivers do the same, the app learns: "Road closed." Users make the map better.
LIFT uses the same approach: Human-in-the-Loop (HITL) architecture.
When you run a takeoff, LIFT generates a "draft" with detections. Most are correct (95–99%). Some may be wrong: a missed beam hidden under text, or a mislabeled connection.
You spot the error and correct it. To you, that's a quick fix. To the system, that's a training signal.
Every correction tells the AI: "You missed this pattern. Here's what it actually looks like."
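Structurally, that feedback loop can be sketched in a few lines: each correction both fixes the takeoff in front of you and banks a labeled example for the next training run. The data shapes here are hypothetical, purely to show the two-for-one nature of a correction:

```python
# Sketch of the human-in-the-loop cycle: one correction is both a fix to
# the takeoff and a new labeled example. (Field names are hypothetical.)

training_examples = []  # accumulates labeled data for the next retrain

def apply_correction(detection, corrected_label):
    """Record the estimator's fix and bank it as a training signal."""
    training_examples.append({
        "region": detection["region"],     # the pixels the model saw
        "predicted": detection["label"],   # what the model thought it was
        "actual": corrected_label,         # what the human says it is
    })
    detection["label"] = corrected_label   # fix the takeoff itself
    return detection

det = {"region": (120, 340, 60, 14), "label": "text"}
apply_correction(det, "W12x26 beam")  # estimator: "that's a beam under text"
# The takeoff is corrected now, and the mistake becomes curriculum later.
```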
SketchDeck CEO Daniel Kamau described this relationship in a technical briefing:
"We can train our machine learning models to improve whenever drawing quality is low... It's like a junior estimator preparing a set for you, and you're adjusting the elements."
Your messy drawings aren't a problem. They're the curriculum that trains the system for the next messy drawing.
A practical concern: drawing conventions, and the projects behind them, change over time.
A static AI tool from 2021 would be accurate for 2021 drawings. By 2025, as conventions shift, accuracy would decline. This is Model Drift.
Continuous Learning prevents this degradation. LIFT is cloud-based and continuously retrained on new data from across the industry. If a new notation for "Moment Connections" emerges in California, and estimators there correct the AI to recognize it, the model learns that pattern. Your software gains capabilities it didn't have when you started.
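Here's a toy illustration of the difference between a model trained once and one retrained continuously. The decay and recovery rates are made up purely to show the shape of the curves, not measured figures:

```python
# Toy illustration of model drift vs. continuous learning.
# (Decay/recovery rates are invented for illustration, not measurements.)

def accuracy_over_time(years, retrain_each_year):
    acc = 0.95
    history = []
    for _ in range(years):
        acc -= 0.03                      # conventions drift away from the training data
        if retrain_each_year:
            acc = min(0.95, acc + 0.03)  # retraining on fresh corrections recovers it
        history.append(round(acc, 2))
    return history

static_model = accuracy_over_time(4, retrain_each_year=False)
learning_model = accuracy_over_time(4, retrain_each_year=True)
# The static model slides a little further every year; the retrained one holds steady.
```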
For a practical view of how these improvements show up in your bid cycle (reduced cycle times, improved win rates, and better risk checks), pair this article with The Ultimate Guide to Steel Estimating, which maps these gains across your complete workflow.
Theory matters less than what happens on the job.
Fabricators using the Human-AI workflow report efficiency that compounds over time. Teams adjust to the review process, and the model handles more edge cases.
These gains come from two sources: the system's baseline accuracy (95% handled without human input) and the speed of human review (5% corrected and learned).
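The arithmetic behind that split is worth seeing once. The per-element times below are hypothetical round numbers; the point is how review time compares with doing the whole takeoff by hand:

```python
# Back-of-the-envelope math for the 95/5 split.
# (Per-element times are assumed round numbers, not measured data.)

elements = 1000
manual_sec_per_element = 30   # assumed pace for a fully manual takeoff
review_sec_per_element = 30   # assumed pace for fixing one flagged item

manual_total = elements * manual_sec_per_element          # every element by hand
review_total = elements * 0.05 * review_sec_per_element   # only the 5% needing fixes

hours_saved = (manual_total - review_total) / 3600
```

Under these assumptions, reviewing the 5% takes a twentieth of the time of the manual takeoff, which is where the compounding efficiency in the reports above comes from.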
If you want to see how these time savings work on complex, multi-phase projects, How Steel Estimators Handle Complex Projects Without Burning Out: 5 Workflow Strategies That Cut Takeoff Time by 80% shows real examples of the workflow in action.
Be clear about limits. LIFT gets better at detection (finding the beam) but depends on you for judgment (knowing if it can be built).
The system learns to recognize patterns: the pixels, lines, and text on your drawings.
It cannot learn your business logic: the judgment calls about whether a detected member can actually be fabricated, shipped, and erected.
The goal is "Co-Pilot," not "Auto-Pilot." The system handles quantification, getting faster and more accurate each month. You handle the engineering, logistics, and strategy: the parts that actually win profitable work.
When you evaluate software, don't just look at what it does today. Look at where it goes.
Traditional software is a depreciating asset. It's best the day you buy it, then slowly becomes outdated until you pay for a new version.
Machine Learning software is an appreciating asset. The more you use it, the more data it processes, the more feedback it receives, the more valuable it becomes.
LIFT currently processes millions of structural elements. Every correction by an expert estimator adds to a collective intelligence that lifts the entire user base.
The question isn't "does it work?" It's "will it grow with my team?"
Ready to see how LIFT handles your specific drawings?
Book a Demo with SketchDeck.ai and run a live takeoff on one of your past projects. See the speed, check the accuracy, and decide for yourself if your workflow is ready for a change.
Bring your messiest PDF. Let's see what the system has learned and what you can teach it next.
