Building trust in AI to foster hydrogen development

Political and business leaders have been descending on Bletchley Park in the UK to analyse the challenges associated with Artificial Intelligence (AI).

The AI Safety Summit 2023 has been considering the risks of AI, especially at the frontier of development, and discussing how they can be mitigated through internationally coordinated action.

Particular safety risks arise at the ‘frontier’ of AI: highly capable general-purpose AI models, including foundation models, that can perform a wide variety of tasks and match or exceed the capabilities of today’s most advanced models, as well as specific narrow AI that could exhibit harmful capabilities.

The Bletchley Declaration, signed by 28 governments – complemented by the ‘11 Guiding Principles’ adopted by the G7 – stated, “We are especially concerned by such risks in domains such as cybersecurity and biotechnology, as well as where frontier AI systems may amplify risks such as disinformation. There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models.”
