- Adopting AI can pose very real financial and security risks, and training an AI can prove very expensive, especially if the task being automated is complex and requires an advanced AI.
- Fostering a culture of transparency around the risks of AI will help drive industry application and protect consumer goods companies and customers from the potential pitfalls of this evolving technology.
Consumer goods companies need to understand the technical, financial, and organisational requirements of any AI application to reliably assess the level of risk that application represents, an industry expert said.
Rory Gopsill, Senior Consumer Analyst at GlobalData, said that consumer goods companies need to fail – and fail fast – in their AI initiatives to gain the necessary know-how.
“Consumer goods companies need to consider how an AI should be trained to enable it to function cost-effectively. They also need to consider which delivery model is the most suitable from a data security and infrastructure cost point of view.”
According to GlobalData’s Q1 2024 Tech Sentiment Poll, over 58% of respondents believe AI will significantly disrupt their industry.
However, consumer goods companies should remember that the technology has limitations and risks. Chatbot failures caused Air Canada and DPD financial and reputational damage, respectively, in the first quarter of 2024. DeepMind’s own CEO warned against AI overhype in April 2024.
Gaining the know-how
“Industry professionals remain bullish about AI’s potential to disrupt numerous industries, including consumer goods. In reality, adopting AI can pose very real financial and security risks. Training an AI can prove very expensive, especially if the task being automated is complex and requires an advanced AI,” Gopsill said.
Moreover, if an AI application requires training data that is commercially sensitive or confidential, he said that a company may choose to train the AI in a private cloud environment rather than a less secure public cloud. Purchasing and maintaining the necessary IT infrastructure for this would be very expensive and organisationally demanding.
“Consumer goods companies need to be aware of these (and other) risks when choosing to develop AI applications. If they are not, their AI initiatives could fail with serious consequences. For example, sensitive data could be exposed, development costs could outweigh the application’s benefits, the quality of the AI application could be diminished, or the project could simply never get finished,” he said.
Understanding these risks, he said, will enable consumer goods companies to fail early and safely, and to learn from that failure.
“This will equip them with the knowledge to implement AI in a way that is safe and profitable. Fostering a culture of transparency around the risks of AI will help drive industry application and protect consumer goods companies and customers from the potential pitfalls of this evolving technology.”