Have a quick question for you: what is the #1 challenge to improving your and your team's technical capability?
This includes improving skills (production, software, reports, communication) as well as technical knowledge for you and your team. What are the biggest barriers to improving?
Two weeks ago, we pondered: where is AI today, in March of 2025? How do the baseline, now-popular large language models (LLMs) compare to a practicing Fire Protection Engineer? Do the models themselves make much of a difference? That's both an easy and a difficult question to answer, and it raises more questions downstream, too.

A FAIR DISCLAIMER

For a little context, I'm not arguing that AI is replacing humans in fire protection. I'm not losing sleep over our industry adapting to changes in technology. I'm not trying to hype AI, and I'm not arguing for more use of ChatGPT in our practice. I am monitoring the ability of AI LLMs against our industry benchmarks, and, as with everything else, I favor finding ways for us all to adapt, improve, and make use of resources for our industry.

AI VERSUS THE FIRE PROTECTION P.E. EXAM

Here's what common AI LLMs score on a practice Fire Protection P.E. Exam today, with 70% correct being an approximation for a passing score:

[Chart: practice-exam scores by model. Source: MeyerFire 2025. Test conducted twice with each of the models shown, using a simple prompt, on March 20, 2025, against a full-length practice Fire Protection P.E. Exam.]

There are a few ways I look at this.
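For anyone curious how a benchmark like this can be run, below is a minimal sketch of the general approach: send each multiple-choice question to a model with a simple prompt, collect the letter answers, and score the results against an answer key and the roughly 70% passing threshold. The exam file name, its fields, and the model name are hypothetical placeholders for illustration; this is not the exact script or prompt behind the results above.

```python
# A minimal sketch, assuming a JSON file of multiple-choice questions with an
# answer key and the OpenAI Python SDK. Hypothetical file name, fields, and
# model; not the exact script or prompt behind the MeyerFire results above.
import json

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

PASSING_SCORE = 0.70  # rough approximation of the passing threshold


def ask_model(question: str, choices: dict[str, str], model: str) -> str:
    """Send one multiple-choice question and return the model's letter answer."""
    options = "\n".join(f"{letter}. {text}" for letter, text in choices.items())
    prompt = (
        "Answer the following multiple-choice question. "
        "Reply with only the letter of the best answer.\n\n"
        f"{question}\n{options}"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    # Take the first character of the reply as the chosen letter.
    return response.choices[0].message.content.strip()[:1].upper()


def score_exam(path: str, model: str) -> float:
    """Run every question in the exam file and return the fraction answered correctly."""
    with open(path) as f:
        exam = json.load(f)  # e.g. [{"question": "...", "choices": {"A": "..."}, "answer": "B"}, ...]
    correct = sum(
        ask_model(q["question"], q["choices"], model) == q["answer"].upper()
        for q in exam
    )
    return correct / len(exam)


if __name__ == "__main__":
    score = score_exam("practice_fpe_exam.json", model="gpt-4o")
    verdict = "pass" if score >= PASSING_SCORE else "not yet"
    print(f"Score: {score:.0%} ({verdict} at a 70% threshold)")
```

Re-running a script like this whenever a new model is released makes it straightforward to chart how scores trend toward the passing line over time.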
IT DIDN'T PASS... TODAY

First, I find it somewhat interesting that, despite a strong foundational knowledge of math and strong overall ability, models like ChatGPT's o1 don't already pass the exam. The exam tends to steer away from practical, the-industry-needs-you-to-know-this knowledge and instead lives in a theoretical world of hand-calculated but impractical application. That would seem to favor an AI model that understands theory well.

ENCROACHMENT

Second, the progressive 4.0, 4.5, and o1 models are quickly encroaching on a passing score. The dates below the models in the chart are when each model was introduced. Are we six months away from a model that does pass the exam? If not, a year away? Or does simply crafting a better prompt (we kept ours as straightforward as possible) get AI over the hump? Either way, the capabilities of AI specific to fire protection engineering are making up ground quickly. Even with the same AI model, I'll be interested to run this periodically and watch how the results change over time.

PRACTICING ENGINEER

Third, the exam itself isn't easy. It covers a very wide range of subjects, with lots of theory, lots of math, and many things that an experienced practicing engineer wouldn't be readily capable of answering at any given moment. Just because someone (say, like myself) passed the exam ten years ago doesn't mean I could pick it up and pass it today without studying up. The exam, like any, reflects a snapshot in time, and even though I work in exam prep all the time, I simply don't carry around all the top-of-mind knowledge needed to pass it on any given day.

So, while the LLMs are not passing the exam, are they actually comparable to a walking, licensed FPE today? Perhaps. Maybe not the walking part, but the knowledge part? Possibly.

WHAT WE ACTUALLY SHOULD KNOW

This brings up a reasonable question. If we have capable tools (now or soon) that provide instant context or feedback (albeit with varying levels of quality), what knowledge becomes unimportant for us to carry with us, and what knowledge becomes more important to have? What is it that we actually should know?

When calculators were first mass-produced and readily available, education went through a crisis. Do we continue to promote memorizing math facts if the answer is available instantly and with complete accuracy? Do we still even study multiplication and division tables? Does memorization remain important in industry when every professional using math has a calculator at their side? Some fought calculators vehemently; others adopted and adapted. Using calculators is now a relatively minor and trivial part of K-12 education. In some environments, it's a must (graphing abilities within Calculus or arrays in linear algebra); in others, it's banned (fourth-grade multiplication tests). There's a place for calculators and a place to exclude them.

I feel that AI is in the same spotlight today. AI is begging us to reassess what we should know and carry around with us as professionals. Do memorized facts about standards become less important over time (a pull station needs to be no more than 5 feet from the exit), and do higher-level skills like thinking analytically and creatively, communicating, leading, and relating to others become far more important? I think it's possible.
HIGHER-LEVEL WORK

When we're relieved of mundane memory tasks (just as the calculator relieved humanity of rote memorization), where does that leave us in terms of what we should know? What new, personalized, or differentiated skills should we develop? Is code analysis more important now? The ability to reason? The ability to adapt? To relate and motivate? Will we each be able to grow in new areas and develop far more skill than we previously thought possible? That's what we're seeing, just with today's AI.

BETTER TESTING

If what we value is the ability to conduct a code path, provide quality engineering judgment, or discern truth from AI hallucination, how can we test for that? If AI is good at (or becomes great at) anything written (e.g., multiple-choice tests), how do we as educators step up our game and truly evaluate relevant knowledge? What relevant knowledge should we value in the new age of AI? We're at a crossroads regarding the future of what we deem valuable as fire protection professionals - not a crisis, but a crossroads. How can we test relevant skills and knowledge? More importantly, how should we test relevant skills and knowledge?

IDEAL ASSESSMENT

Do we test beyond what we know AI can handle (for now), or do we exclude AI from testing environments (even when we know it will be regularly used in the industry)? Or, better yet, do we revamp how we test and assess skill? Can we move past written exams and freely consider how assessment could be more telling, less biased, and more authentic to the learner? Is that a situational assessment? Virtual simulations? Hands-on assessment? A project-based portfolio? Peer review?

It'll be interesting to tinker with and monitor over time, both at the university level in formal education and in professional learning environments. I think there are many new possibilities for what we can now do. Perhaps just as important is questioning our long-standing assumptions about the skills and knowledge we want professionals to have, seeking out and developing those, and validating them through better means. Plenty of doors have opened since LLMs came onto the stage 30 months ago, and it's up to us to use them for the better.
AUTHOR

Joe Meyer, PE, is a Fire Protection Engineer out of St. Louis, Missouri, who writes & develops resources for Fire Protection Professionals.