By Jocelyn Sarrantonio, PE | Technical Director at MeyerFire

If you follow me on LinkedIn, you may have noticed that I've started taking my technical QA/QC job a little too seriously, pointing out errors in the AI slop that has now become the norm on the platform. You know the ones I'm talking about.

[Image: A very FAKE example of the type of AI-generated infographics that circulate online]

I can't scroll by without trying to spot the errors. Why bother? Why do I care so much? At first, it was just about accuracy. Someone on the internet was wrong! About something I know about!

[Image: One of my favorite xkcd comics (https://xkcd.com/386/)]

Now it's become kind of a sick sport. I'm not on a crusade against AI. I certainly use it as a tool in my work life; AI is the best proofreader around! What bothers me is when AI slop is presented as helpful material, yet it's riddled with errors. What is that teaching anyone?

UNOBVIOUS ERRORS

It should not come as news to anyone that AI struggles with accuracy. I'm not proud to say I've gotten frustrated with a computer that gets commodity classification or nuanced (copyrighted) code interpretations wrong.

Remember the "Will Smith eating spaghetti" videos that used to circulate as proof that AI wasn't quite there yet? They were easy to laugh at because the mistakes were so obvious. Now the errors aren't so noticeable. The hands have five fingers, the spaghetti looks real, and I find myself wondering if Will Smith recorded a video to troll us. The mistakes are getting much harder to spot.

[Image. Full disclosure: I used Gemini to alter this already-AI image to make my point.]

AI is not the villain; it's just a tool that people use to present their ideas. People said the same things about Photoshop, the internet, and computers. I wasn't around, but maybe they said the same thing about the typewriter! Tools are there to help the person wielding them. But here is the point: we are our work. Our output reflects us.
When the materials are targeted to be basic or fundamental, the audience may only have a vague background in the content; they can't simultaneously learn and error-check. So how do they know what's accurate within an AI diagram, and what's incorrect? And when errors are pointed out, it really grates on me when the response is, "Ignore those details! Just take the main point." When people are learning something new, how are they supposed to know what's correct and what's an AI hallucination? The wrong conclusions may stick, and they're just trying to learn!

CALLING IT OUT

This is not new, even though AI is. I'm sure we have all sat through a presentation where it was clear the speaker wasn't prepared, or a webinar where someone slipped up and said the wrong thing. If you're sitting in that room, either in real life or virtually, how do you handle that? Theoretically, the speaker would be receptive to constructive criticism in the right environment. Scrutiny makes content better. To me, it's the same thing as commenting on AI slop, but is that really the best strategy? Is that helping anyone?

WE ARE WHAT WE 𝚂̶𝙿̶𝙴̶𝚆̶ POST

Just like in engineering: when we affix an engineering seal to a drawing, a set of calculations, or a report, it doesn't mean we personally drew every line or wrote every word, but it does mean we're responsible for the output. And we are required to stand behind it once it's out in the world. There are formal processes for formal documents, like responding to permit comments and RFIs. But informally, or when there are no processes, we still have to stand behind our work. Whether it's stamped or posted, we own what we put into the world. It reflects us.

I won't equate an infographic with an engineering report, but if you post it, share it, or stamp it, it's yours. You can't blame ChatGPT any more than you'd blame a drafter or an intern. And owning your output is also respectful of your audience's time.
If it's truly not worth your time to put together, why should we expect someone else to spend their time reading it? If you're willing to put your name on it, you're responsible for it, whether you typed it or prompted it.

DEALING WITH IMPERFECTION

Now, I'm not perfect. We're not perfect. None of us is. Mistakes are an assumed part of the process; that's why there are built-in checks and layers to construction. Because this stuff is that important. We're dealing with life safety, often for an unaware public.

And I assure you, we at MeyerFire certainly make mistakes too. It turns out it's really hard to produce mistake-free content, no matter how many eyes are put on it. And we truly (truly, truly) appreciate the superfans who challenge us when something isn't correct, so we can fix it for the next learner. At the time of this writing, we have six outstanding PE practice questions and two videos that need correction on our site.

As engineers, we won't hit perfection, but how we respond to mistakes says a lot about how we operate. Just ask my 11-year-old daughter, who gave me the death stare when she was rehearsing her upcoming presentation and I pointed out that Italy actually uses euros instead of dollars. Or my husband, who is probably sick of me reviewing his materials. Or me, who has definitely flubbed a few lines in recordings for my courses!

TAKING THE HIT

So often in consulting culture, we'll say we don't have time for QA/QC or time to train the new hires; we're barely keeping up with our workload. Quality work takes time, and we all know there's a noticeable difference in our work when we're prepared versus when we're unprepared. But scrutiny makes content better. Ultimately, I'd rather take a friendly edit from my peers, my family, or my boss than a comment from a stranger. But I have to be ready for both if I'm truly owning my output.

I would often tell my team that when you're reviewing a submittal, you can tell a lot by how the information is presented.
The same goes for a set of drawings. If you see mistakes on the small stuff, or very obvious errors, you start peeling back the layers and find mistakes everywhere. On the other hand, if you can tell someone took pride and care in how the information is presented, that care and attention likely carries through to the technical aspects, not just the visuals.

HOW TO ADDRESS IT?

So, what is the correct way to call out errors?
We know how to do it with permit submittals or submittal reviews, but how would you address an error in an in-person training? What about a webinar? And bad AI on LinkedIn? Is it a public correction or a private message? Or silence? Does it even matter?

When we work in life safety, I think it does. I'll stand on that hill. We are our work output, and it's important to stand behind our work, whatever it is. Especially in construction and life safety, the details matter, and affixing our engineering stamp to work means a high level of professional accountability. If you're willing to put your name on it, it's your responsibility. AI doesn't change any of that; it's just made it easier to forget.