With artificial intelligence (AI) comes the opportunity for advanced automation across all processes. This has been music to the ears of a manufacturing industry whose ability to advance has been slowed by manual processes and operations—many of which can be augmented and automated through AI. As AI adoption gains momentum, so does AI regulation, leaving manufacturers a small window of opportunity to align their AI practices.
AI excellence
When AI and machine learning (ML) emerged, they offered a cornucopia of value for manufacturers. The promise of predictive, prescriptive, and automated insights and actions was compelling, and use cases extended to advanced asset management, predictive maintenance, and anomaly detection. The results are improved decision-making, agility, and speed.
A fantastic example is how Italian high-end brake manufacturer Brembo’s manufacturing department uses AI-infused analytics for machine tools’ predictive maintenance and lifespan prediction. The analysis reaches a granular level, such as clustering the cooling curves of its aluminium foundry.
Predictive and prescriptive maintenance relies on advanced equipment monitoring: AI/ML models draw data from sensors that surface information on appearance, vibration, temperature, or noise to predict failure and prescribe remedial actions. Another example is asset management optimisation via digital twins, AI-infused virtual representations of physical products used to predict potential outcomes.
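To make the sensor-monitoring idea concrete, here is a minimal sketch (not any vendor’s actual system) of one simple form of anomaly detection used in predictive maintenance: flagging a vibration reading that deviates sharply from a machine’s recent baseline. The function name, window size, and threshold are illustrative assumptions.

```python
# Illustrative sketch: flag vibration readings that deviate sharply from a
# machine's recent baseline -- a simple rolling z-score anomaly detector.
from statistics import mean, stdev

def flag_anomalies(readings, window=20, threshold=3.0):
    """Return indices of readings more than `threshold` standard
    deviations from the baseline of the previous `window` values."""
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Steady vibration around 1.0 mm/s, then a sudden spike at index 25
readings = [1.0, 1.02, 0.98, 1.01, 0.99] * 5 + [4.5]
print(flag_anomalies(readings))  # -> [25]
```

In production systems this role is typically filled by trained ML models rather than a fixed threshold, but the principle of comparing live sensor data against learned normal behaviour is the same.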
We know AI works for manufacturing; research from McKinsey shows that companies leading with AI improve performance by 10% or more, making it a distinct competitive advantage.
AI growth drives regulation
As much as AI can make a notable difference, the public is still understandably suspicious of it, fearing it could threaten job security or cause social displacement. Even findings such as research by The Harris Poll and Google Cloud, showing that 64% of manufacturers rely on AI in day-to-day operations, with 25% devoting more than half of their overall IT spend to it, do little to quell fears.
This has led to a move to reclaim AI from the hands of tech companies, supported by governments and policymakers who have plans afoot to regulate its use to reduce the potential for harm. This leads to the question: what is the responsible use of AI? The bottom line is that AI needs to become auditable, transparent, and interpretable for it to be trustworthy.
How can this be done? In manufacturing, an operations manager should understand why AI is, for example, showing potential defects through a visual representation. They can’t assume the AI is right. AI also needs to be simplified and democratised, allowing everyone from data scientists to the workforce to understand it. The potential for flawed AI to cause accidents and injury exists, especially when a human who can override it is not kept in the loop.
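The human-in-the-loop principle can be sketched in a few lines. The names and thresholds below are hypothetical, not a real API: the point is that a prediction carries its evidence for auditability, and that low-confidence calls are routed to an operator instead of being acted on automatically.

```python
# Hypothetical sketch of keeping a human in the loop: a defect prediction
# carries its supporting evidence, and only high-confidence predictions are
# acted on automatically; the rest go to an operator who can override the AI.
from dataclasses import dataclass

@dataclass
class DefectPrediction:
    part_id: str
    defect_probability: float
    evidence: dict  # feature -> contribution, kept for auditability

def route(prediction, auto_threshold=0.95):
    """Act automatically only when the model is highly confident;
    otherwise escalate to a human for the final call."""
    if prediction.defect_probability >= auto_threshold:
        return "reject_part"          # decision logged with its evidence
    return "send_to_operator_review"  # human makes the final call

p = DefectPrediction("A-1042", 0.72,
                     {"surface_contrast": 0.40, "edge_irregularity": 0.32})
print(route(p))  # -> send_to_operator_review
```

Recording the evidence alongside every decision is what makes the system auditable after the fact, and the escalation path is what keeps a human able to override flawed AI.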
Regions put their foot down
The European Union (EU) is forging ahead with its draft Artificial Intelligence Act (EU AIA), and member states will be required to abide by it in the next two years. Two years is not as long as you think, and manufacturers would do well to start looking at what the proposed legislation (currently awaiting member state input) requires. Remember GDPR? It snuck up on everyone, and the EU AIA promises to do the same: every EU citizen, regardless of where they live, will be protected by the Act.
The proposed regulations target high-risk uses of AI, for example, systems deployed in safety environments, critical infrastructure, employment, education, border control, law enforcement, and financial services. Non-compliance could result in fines of up to €30M or 6% of worldwide annual turnover (whichever is higher), and €10M or 2% of turnover if a company is considered to have given misleading or incorrect information on its AI landscape.
Manufacturers must also remember that using personally identifiable data from employees, business partners, suppliers, or customers supporting an automated decision falls under Article 22 of the GDPR. And in the US, AI guidelines have been proposed on an agency-by-agency basis, with The Federal Trade Commission recently stating publicly that it is “Aiming for truth, fairness, and equity in your company’s use of AI”.
In its National AI Strategy, the UK has stated that it wants Britain to be a global AI superpower and has partnered with the Alan Turing Institute to get input on AI technical standards and governance. The UK’s approach is aligned with ethics, fairness, non-discrimination, and trustworthiness.
For manufacturers, the time to build transparency and auditability into AI systems and processes is now. The rules don’t just apply to people working on-site and in-country. They will follow a company, its people, and its customers no matter where they are serviced or where products are used.
Preparing for AI regulations
Preparation, as with anything, is and will be key. The reality for many manufacturers is that AI has become so embedded in processes that separating them will be like trying to separate curry from rice.
Start by identifying the stakeholders who need to be involved in the AI regulatory compliance process. If you already have a GDPR task team, that is a great place to start, but don’t forget that you will need Diversity, Equity, and Inclusion (DEI) representation, as those groups may be disproportionately affected by flawed AI.
Next, contact partners, third-party technology providers, and suppliers and map out where AI is used or can be found and document it. And get your executive team onboard. Many business leaders have a Hollywood view of AI, so educate them on how it is used to benefit manufacturing and its legal, ethical, and responsible use.
Another good idea is to set up an Analytics Centre of Excellence. You should have one already, but if not, put it in place and mandate it to create a better understanding of AI projects. Lastly, stay informed and watch how potential regulations unfold—knowing is half the battle.
Don’t panic and use AI
AI in manufacturing is not about to exit the scene because of proposed regulations. But it will need to be accountable, traceable, and transparent. No one expects you to ditch automation. Instead, this should be used as an opportunity to get the best out of what is already a transformative and disruptive (in a good way) technology.