What AEC Companies Need to Know: A Guide to Responsible AI Adoption

By Machinery Asia | October 17, 2024

Jeff Albee, vice president of Stantec

With the world scrambling to adopt artificial intelligence (AI), architecture, engineering, and construction (AEC) firms are under immense pressure to keep up and integrate AI into how they operate and what they deliver to customers. Mistakes in AEC can have devastating consequences, so before companies rush to adopt AI, they must ensure they do so in a compliant and error-free manner.

It’s hard to overstate AI’s transformative potential for architecture, engineering, and construction. It can streamline project workflows, reducing both labor costs and the after-hours effort of architects, scientists, and engineers. As in nearly every other industry, excitement over AI’s transformative promise has set off a race among competitors eager to move faster while reducing long-term costs.

AI jargon is making its way onto corporate marketing pages as CTOs and CIOs find themselves under pressure to let their customers know they’re at the forefront of technological advancement. Whether it’s a simple productivity boost with the help of Copilot or an in-house solution built to address a specific engineering problem, AEC companies are scrambling to make sure they can tell investors and customers that they are leveraging AI.

The reality is that the rush to adopt AI can lead to over-reliance on systems that are not fully understood or properly evaluated. This is very risky in the AEC world, where legal and safety compliance is mandatory and quality standards are non-negotiable.

The consequences of not properly evaluating and implementing AI could be catastrophic, potentially leading to engineering failures or other serious problems that could endanger lives.

The problem for companies (and the customers who use them) is that our industry’s understanding of how to bring AI systems under the compliance umbrella is relatively immature, and the processes that drive AI and machine learning (ML) remain a black box, often left unexplained to the consumers of the results these services produce.

As companies use more generative AI tools to prepare customer deliverables, questions arise about the extent to which AI-generated content should be disclosed, reviewed, and measured. For example, if an AI model generates part of a design scheme, who is responsible for ensuring that these design elements meet regulatory and safety standards? Should there be an ingredient label revealing that AI was employed in the creation of the work? A warning label? And if so, how should a customer distinguish between a widely available AI system (such as Microsoft’s Copilot) that is tried and trusted and a perhaps lesser-known proprietary model?
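To make the disclosure question concrete, an "ingredient label" could be a small structured record attached to each deliverable. The sketch below is purely illustrative; none of these field names come from a published standard, and the record simply captures the questions raised above: which tool was used, what it produced, and who reviewed the output.

```python
from dataclasses import dataclass, field

@dataclass
class AIDisclosure:
    """Hypothetical 'ingredient label' for an AEC deliverable that used AI.

    All field names are assumptions for illustration, not part of any
    regulatory framework.
    """
    tool_name: str            # e.g. a widely available assistant or an in-house model
    tool_type: str            # "commercial" or "proprietary"
    sections_generated: list  # which parts of the deliverable AI produced
    human_reviewer: str       # the engineer accountable for the AI output
    standards_checked: list = field(default_factory=list)  # codes reviewed against

    def label(self) -> str:
        """Render a short disclosure line for the deliverable cover sheet."""
        return (f"AI-assisted: {self.tool_name} ({self.tool_type}) generated "
                f"{len(self.sections_generated)} section(s); reviewed by "
                f"{self.human_reviewer}.")
```

A record like this makes the commercial-versus-proprietary distinction explicit and names the accountable reviewer, which is where liability questions tend to land.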

This lack of clarity could lead to scientific errors, undisclosed design flaws, or worse, creating a massive potential liability for AEC companies.

To navigate the complexities of AI adoption, AEC companies must look to established standards and frameworks that provide guidance on quality and compliance. Fortunately, several organizations, both in the United States and internationally, are developing guidelines to help companies manage their use of AI responsibly.

The White House, for example, has issued a Blueprint for an AI Bill of Rights, which outlines five principles to protect people from potential harm from AI. These principles include ensuring that AI systems are safe and effective, that people have the right to know when AI is being used, and that AI systems do not exacerbate discrimination. Similarly, the European Union’s AI Act aims to regulate the use of AI by categorizing applications based on their level of risk, with stricter requirements for high-risk applications such as those in the AEC sector.

Additionally, the US National Institute of Standards and Technology (NIST) provides one of the most comprehensive frameworks for understanding and managing the use of AI. The NIST AI Risk Management Framework (AI RMF) emphasizes four key functions: map, measure, manage, and govern.

Map: Identify and analyze AI system use cases and associated risks. This includes understanding the intended function of the AI system and the context in which it will be used.

Measure: Evaluate AI system performance and reliability against established benchmarks. This step involves assessing the accuracy, robustness, and fairness of the AI system.

Manage: Implement strategies to mitigate identified risks and maximize the benefits of the AI system. This includes developing contingency plans and continuously monitoring system performance.

Govern: Establish policies, procedures, and practices to ensure AI systems are used ethically and meet regulatory requirements. This also involves fostering a culture of accountability and transparency within the organization.

These features provide a roadmap for AEC companies to assess the quality and compliance of their AI systems, ensuring they are not only effective but also secure and reliable. As regulatory bodies increasingly turn their attention to AI, compliance with these standards will likely become mandatory. Companies that proactively align their AI practices with these frameworks will be in a better position to adapt to future regulatory changes and maintain their competitive advantage.

We don’t have to wait for the first catastrophic failure to find out. Using these frameworks now will open the door for companies to understand the broader risks posed by AI.

From this understanding, companies can begin asking more strategic questions. As they do, they can start to unlock the use of AI in AEC and move broadly toward models of all kinds that are anchored more in data than in theory.

AI can push the industry in new directions, but only if the industry first pushes AI responsibly.

Jeff Albee is Stantec’s vice president and director of digital solutions

