AI for Engineering Productivity

I Tested Claude AI on Real Mechanical Engineering Tasks. Here's What Happened.

We tested Claude AI on real mechanical engineering tasks including material selection, tolerance analysis, and design review. Here's what worked and what didn't.

5 min read

Michelle Ben-David

Product Specialist, Leo AI

Mechanical Engineer, B.Sc. · Ex-Officer, Elite Tech Unit · Aerospace & Defence · Medical Devices

Michelle Ben-David is a mechanical engineer and Technion graduate. She served in an IDF elite technology and intelligence unit, where she developed multidisciplinary systems integrating mechanics, electronics, and advanced algorithms. Her engineering background spans robotics, medical devices, and automotive systems.

BOTTOM LINE

Claude AI performs reasonably on basic calculations and general technical questions but fails on tasks requiring engineering-specific data, standards compliance, organizational context, and CAD file access. For mechanical engineers, the gap between general AI and purpose-built engineering AI shows up in every real-world task.

Every engineer I know has tried feeding engineering problems into ChatGPT or Claude at least once. Some swear by it for quick calculations. Others gave up after the first hallucinated material property. The truth, as usual, is more nuanced than either camp admits.

I spent two weeks testing Claude AI on actual mechanical engineering tasks that come up in my daily work. Material selection, tolerance stackup analysis, standards lookups, design review feedback, and part search. Here's what I found.

Task 1: Material Selection for a Pressure Vessel Component

I asked Claude to recommend a material for a pressure vessel component operating at 450°F with a design pressure of 500 psi. I specified ASME Section VIII requirements.

Claude recommended 316 stainless steel, which is a reasonable starting point. It provided some general properties and mentioned corrosion resistance. But the yield strength it cited was for room temperature, not 450°F. It didn't reference the ASME allowable stress tables. It didn't mention that Section II Part D provides the actual design values. And it didn't ask about the corrosive environment, which would significantly affect material selection.

An experienced engineer would catch these gaps. A junior engineer might not. And that's the danger: the answer sounds complete and authoritative when it's actually missing critical design considerations.

When I ran the same question through Leo AI, it pulled the allowable stress values directly from ASME Section II Part D for the specified temperature, cited the exact table and edition, and flagged additional considerations including sensitization risk at that temperature range and alternative materials worth evaluating.

IN PRACTICE

Customer Quote

It's the only AI for Mechanical Engineers that actually understands CAD, PLM, and the realities of enterprise design work. With Leo, our team improves design quality, reduces mistakes, and shortens time-to-market. - Uriel B., Field Warfare and Survivability Specialist

Task 2: Tolerance Stackup on a Four-Part Assembly

I described a simple four-part assembly with bilateral tolerances and asked Claude to calculate the worst-case stackup.

Claude set up the problem correctly and got the arithmetic right. This is where general AI does well - it's basically algebra. But when I asked it to recommend tolerances for a press-fit interface in the same assembly, it gave values that would work for a clearance fit, not a press fit. It confused the ANSI tolerance classes.

This is the pattern with Claude on engineering calculations. The math is usually right, but the engineering judgment around the math is unreliable. Knowing which formula to apply matters as much as computing the answer.
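The arithmetic Claude got right is simple enough to sketch. Here's a minimal worst-case stackup over a chain of bilateral tolerances, alongside the root-sum-square (statistical) alternative; the dimension values are hypothetical, not the actual assembly from the test:

```python
# Worst-case tolerance stackup for a linear chain of dimensions.
# Each entry is (nominal, ± bilateral tolerance); values are hypothetical.
dims = [
    (25.00, 0.10),
    (12.50, 0.05),
    (40.00, 0.15),
    (8.00, 0.05),
]

nominal = sum(d for d, _ in dims)           # nominals add: 85.50
worst_case_tol = sum(t for _, t in dims)    # worst case: tolerances add directly, ±0.35

# Root-sum-square stackup, often used when worst case is too conservative
rss_tol = sum(t**2 for _, t in dims) ** 0.5

print(f"Nominal: {nominal:.2f}, worst case: ±{worst_case_tol:.2f}, RSS: ±{rss_tol:.3f}")
```

The worst-case sum is the part that "is basically algebra"; choosing between worst-case and RSS, and picking the right fit class for each interface, is the judgment layer where the general model slipped.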

Task 3: Finding an Existing Part in Our Library

This is where general AI falls apart completely. I asked Claude to find a bracket in our PDM system that could support a 200N cantilevered load within a specific envelope.

Claude couldn't do it. It has no access to PDM systems, no integration with SolidWorks PDM, Windchill, or Teamcenter. It offered to help me calculate the required dimensions for a bracket, but it couldn't search our existing designs.

This matters because part reuse is one of the biggest efficiency levers in engineering. Engineers spend roughly 35% of their time reinventing parts that already exist somewhere in the organization. An AI that can't access your design history can't help with that.

Leo AI, connected to our PDM, found four existing brackets that met the load and envelope requirements in under two minutes. Two of them had been designed by a different team that I didn't even know had worked on a similar project.

Task 4: Design Review Feedback

I showed Claude a screenshot of a sheet metal part and asked for DFM feedback.

Claude identified some general sheet metal design principles: minimum bend radii, hole-to-edge distances, and grain direction considerations. The advice was generically correct but not specific to our part. It couldn't reference our specific manufacturing capabilities, our preferred vendors' equipment limits, or our internal design standards.

More importantly, Claude was looking at a screenshot. It had no access to the actual CAD data: wall thicknesses, bend relief dimensions, or K-factor values. Its feedback was limited to what it could visually interpret from an image, which is like asking a consultant to review a design by looking at a photograph of it from across the room.
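To make concrete why K-factor access matters: the K-factor feeds directly into bend allowance, which determines the flat-pattern size of the part. A minimal sketch of that calculation, with hypothetical example values:

```python
import math

def bend_allowance(angle_deg: float, inner_radius: float,
                   thickness: float, k_factor: float = 0.44) -> float:
    """Arc length of the neutral axis through a bend:
    BA = theta_rad * (R + K * t), where K typically falls
    around 0.3-0.5 depending on material and tooling."""
    return math.radians(angle_deg) * (inner_radius + k_factor * thickness)

# Example: 90-degree bend, 2.0 mm inner radius, 1.5 mm sheet, K = 0.44
ba = bend_allowance(90, 2.0, 1.5)
print(f"Bend allowance: {ba:.3f} mm")
```

None of these inputs are recoverable from a screenshot, which is exactly the limitation the image-based review ran into.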

Task 5: Standards Lookup

I asked Claude for the ASME Y14.5 definition of true position and how to calculate it for a pattern of holes.

Claude provided a mostly correct explanation. It defined true position accurately and showed the basic formula. But it cited ASME Y14.5-2009 when the current standard is the 2018 edition, and it didn't mention key differences between the editions that affect how datum reference frames are established for pattern tolerances.

This is a recurring issue: Claude's training data has a time lag, and it doesn't distinguish between current and superseded standards. For engineers whose work needs to comply with specific editions, that gap matters.
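For readers unfamiliar with the formula in question: the diametral true position of a hole is twice the radial deviation of its actual axis from the theoretically exact position. A minimal sketch, with hypothetical measured offsets:

```python
import math

def true_position(dx: float, dy: float) -> float:
    """Diametral true position: twice the radial deviation of the
    actual axis from its theoretically exact position,
    TP = 2 * sqrt(dx^2 + dy^2)."""
    return 2.0 * math.hypot(dx, dy)

# Example: measured axis is off by 0.05 mm in X and 0.12 mm in Y
tp = true_position(0.05, 0.12)
print(f"True position: {tp:.3f} mm")  # 2 * sqrt(0.0025 + 0.0144) = 0.260
```

The formula itself is the easy part; the 2009-vs-2018 differences Claude missed concern how the datum reference frame is established for the hole pattern, which changes what dx and dy are measured relative to.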

The Verdict

Claude AI is useful for general technical reasoning, basic calculations, and work that doesn't require specialized engineering data or organizational context. It's a decent first-pass tool when you already know enough to validate the output.

For anything requiring accurate material data, standards compliance, access to your design history, or engineering-specific judgment, purpose-built engineering AI delivers what general models can't. The difference isn't subtle. It's the difference between a competent generalist and a domain expert who knows your systems, your standards, and your data.

Test Leo AI on Your Tasks

Engineering AI that knows your systems.

Run your own comparison. Try Leo AI free and see how purpose-built engineering AI handles the tasks that general AI can't.

Schedule a Demo →

#1 New AI Software Globally - G2 2026

Enterprise-grade security

Trusted by world-class engineering teams

Subscribe to our engineering newsletter

Be the first to know about Leo's newest capabilities and get practical tips to boost your engineering.

Need help? Join the Leo AI Community

Connect with other engineers, get answers from our team, and request features.

#1 New Software Globally, All Industries · #12 AI Tool Worldwide (G2 2026)

Contact us

160 Alewife Brook Pkwy #1095

Cambridge, MA 02138

United States


© 2026 Leo AI, Inc.
