May Winfield is global director of commercial, legal and digital risks at Buro Happold
It’s impossible to have missed the explosion in both the excitement around and the availability of new AI-powered generative tools in recent months. The future of AI promises dramatic changes in the way we live and work, and generative AI tools could improve access to information, education, health and transportation.
“Any input to ChatGPT, OpenAI and similar AI tools could be considered published in the public domain”
These tools currently include ChatGPT, OpenAI, Microsoft Bing, Google’s Bard and others. The technology is evolving rapidly, and its potential to enhance and expand what we do is significant. Among all the press releases, hype, buzzwords and noise, there are undoubtedly huge potential benefits for the construction industry in using AI to improve safety, cost and time, reduce errors and even overcome some aspects of the labor shortage.
Examples of where generative AI can be used include taking meeting minutes, creating and summarizing documents and presentations, creating RACI matrices, obtaining answers to questions, testing additional AI-powered tools, producing images, checking designs, and health and safety warning systems.
However, do you really know what you’re getting yourself into when you enter your data into ChatGPT or other similar tools? What are the risks of relying on AI-powered tools to perform a design or compliance check, or to prepare your estimate, proposal or presentation for a client?
Some of the key risks and issues in using generative AI in construction can be broadly divided into a few categories:
Inaccurate results
Do you question the accuracy of Google search results or ChatGPT answers? Possibly not. There seems to be an innate tendency in many of us to trust the accuracy and validity of results generated by technology. However, the validity of these results inevitably depends on the original data input. As the saying goes, “garbage in, garbage out.”
For example, AI-powered generative design tools are used to detect and mitigate clashes in 3D models. While this can deliver huge time and cost savings by reducing rework and redesign, it also depends on model elements being labeled correctly so that clashes can be detected. Over the years, I have seen serious clashes go undetected because, for example, a column was labeled as a beam within a model.
If you use an AI tool to produce estimates or other details for a bid or client proposal, there may be errors in the original figures or inaccurate details summarized in the results. These inaccuracies or errors may not be apparent without manual checks and review, yet one remains responsible for the consequences of using the results: we cannot simply blame the AI tool as a defense against the resulting delay, wasted costs or lost bids. AI tools can save time and cost and help improve accuracy and quality, but they require rigorous quality control and risk management procedures to mitigate the issues highlighted above.
Copyright
There has been an ongoing debate within the legal community as to whether AI-generated images and products are owned by the user, the creator, the programmer or the AI itself. The law on this is, unsurprisingly, relatively untested, although the common view seems to be that where a party uses AI tools to assist them, copyright remains with the human creator in the normal way. However, images and content created solely by the AI tool, without human input, may not attract such copyright protection.
Confidentiality
Once you enter data into generative AI tools, you cannot retrieve or delete it. You effectively lose control of it and, in some cases, may lose exclusive ownership of the data. Anything entered into ChatGPT, OpenAI and similar AI tools could be considered published in the public domain. Indeed, a number of news articles have reported how some Samsung employees entered confidential work-in-progress code into ChatGPT.
Entering client or project information is also likely to be an express breach of confidentiality agreements or contractual obligations, for which there could theoretically be a claim for damages by the party whose data has been entered in this way. There is then a question mark over whether such a claim would be covered under a standard professional indemnity policy.
These are important questions that we need to address with professional advisors and insurance brokers, while keeping a close eye on what data is fed into any AI tool.
This article is an opinion only and is not legal advice. Independent professional advice should be obtained before taking any action.
