Having an AI-ready culture where digitalisation is perceived as an augmentation of human abilities and where strong governance is embedded from the outset will give projects a greater chance of success.
Positive customer engagement is critical for AI applications in the business-to-consumer (B2C) market. If customers do not trust AI, they may be reluctant to provide their data, even though ‘narrow’ AI is often invisible to the end user. For this reason, communicating the objectives and outcomes of AI solutions could make them more trustworthy and reliable in the eyes of consumers: the risks of sharing data and concerns about privacy need to be outweighed by social gains and rewards from the company.
It is important to clarify the benefit of using customer data in both the short and long term, and to promote data sharing in line with the business-to-government (B2G) and business-to-business (B2B) guidance for the private sector. This includes the re-use of data under commercial and non-commercial conditional data-sharing agreements, which need not be unlimited.
Utilities need to show that they will safeguard consumer data. The European Commission has set out guidelines for trustworthy AI, drafted by an expert group, which state that three components are necessary: (1) it should comply with the law, (2) it should fulfil ethical principles and (3) it should be robust. The guidelines list seven key requirements that AI systems should meet in order to be trustworthy: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.
There is much to be gained from digital transformation, as long as utilities treat consumers with respect and in an inclusive and ethical way. Some consumers will engage proactively or passively with new flexibility tools offered through smart meters, while more vulnerable, less tech-savvy customers may be left behind. As progress moves beyond affluent early adopters of renewables to the majority of the population, there is an increased risk of discrimination. Early use cases should prioritise those that support the most vulnerable customers, and should monitor for unintended bias.
Lack of trust can also be highly detrimental in B2B relationships. Wider governance mechanisms should be set up to address the ethical issues and to ensure transparency about how algorithms are used, building trust among both the public and utilities’ workforces.
Data owners, AI providers and customers could set up their own ‘fair codes of conduct’ in order to achieve transparency, accountability, causality and fairness, which are key features for the development of AI. Indeed, the transparency and explainability of decisions are key factors in their acceptance, and they can help to detect anomalies and therefore reduce the risk of bias. Moreover, the quality of AI-based decisions depends closely on the quality and quantity of available data.
For this reason, data donor schemes and data monetisation mechanisms, always based on consent management and on clear, transparent rules defined within a well-specified scope and purpose, should be welcomed and considered in the codes of conduct. Another key factor in increasing the availability of data is having architectural and technical solutions for exchanging and sharing data across European borders.
With a more informed understanding of models, end users might more readily accept the products and solutions powered by AI, while growing regulatory demands might be more easily satisfied. However, effective messaging can be complex and highly dependent on a host of variables and human factors, precluding anything resembling a ‘one-size-fits-all’ approach. Intelligibility is an area of cutting-edge, interdisciplinary research, building on ideas from machine learning, psychology, human-computer interaction and design. Achieving intelligibility in practice through a prescriptive regulatory model would be unworkable.
Any ‘explanation’ of how a deep neural network works needs to be simple enough for a consumer or other end user with no training or expertise in data science to understand. A governance standard would allow teams to debug systems efficiently when end users experience issues around fairness. Additionally, given that research is the principal source of solutions for increasing the intelligibility of machine learning models, policymakers should ensure appropriate funding and investment in this area, as well as foster cooperation with the private sector.
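To make the notion of intelligibility more concrete, the sketch below illustrates one widely used, model-agnostic explanation technique: permutation feature importance, which reports how much a model's accuracy degrades when each input is shuffled. The data and the linear 'model' are entirely hypothetical, invented for illustration; this is not a method prescribed by the guidance discussed here.

```python
# Illustrative sketch only: permutation feature importance on toy data.
# The model and features are hypothetical stand-ins for any opaque predictor.
import random

random.seed(0)

# Toy data: feature 0 (say, temperature) drives the target; feature 1 is noise.
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(200)]
y = [3.0 * x0 + 0.1 * random.gauss(0, 1) for x0, _ in X]

def predict(rows):
    """A fitted linear model standing in for a black-box predictor."""
    return [3.0 * x0 + 0.0 * x1 for x0, x1 in rows]

def mse(pred, truth):
    return sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth)

def permutation_importance(model, X, y, feature, n_repeats=10):
    """Average rise in error when one feature column is shuffled:
    larger values mean the model relies more on that feature."""
    base = mse(model(X), y)
    rises = []
    for _ in range(n_repeats):
        col = [row[feature] for row in X]
        random.shuffle(col)
        Xp = [list(row) for row in X]
        for row, v in zip(Xp, col):
            row[feature] = v
        rises.append(mse(model(Xp), y) - base)
    return sum(rises) / n_repeats

importances = [permutation_importance(predict, X, y, j) for j in range(2)]
```

An explanation of this form ("the prediction depends mostly on temperature, and barely on feature 1") is the kind of simple, end-user-facing summary the paragraph above describes, without exposing the model's internals.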
Further guidance on explainability has been prepared by the UK’s Information Commissioner’s Office and the Alan Turing Institute. Several international organisations have adopted AI principles, including the Organisation for Economic Co-operation and Development and the World Economic Forum, and consulting firms such as Deloitte have published trustworthy AI frameworks.