What CYB would like to see at the Government’s AI Summit
Across the business world, all eyes this week will be on the UK Government’s landmark AI Safety Summit. The two-day meeting at Bletchley Park will discuss the threats and opportunities that come with the widespread advancement of artificial intelligence – and with politicians and policymakers from around the world due to attend, there is likely to be significant movement on how AI could, and should, be implemented.
At Caution Your Blast Ltd (CYB) we are no different – we will be watching intently. Now that we are working in the AI space with Large Language Model (LLM) tools for conscientious and concerned clients, we are acutely aware that AI technology needs to be implemented ethically and responsibly.
Ahead of the summit, members of the CYB team from various disciplines share the outcomes and policy directions they would like to see emerge from it.
Action on the environmental cost - Ben Stewart, Founder and Managing Director
An important priority for the summit, alongside considering the many risks to people, is that AI providers should be required by UK law to publish statistical information on the ecological cost of every query run via their APIs. I strongly believe that every use of AI needs an impact measure: how much energy was used, where that energy was sourced from, how much water was consumed, and so on. We cannot blindly use any tool – we must assume every tool has impacts, and we must assess those impacts as we use it. Otherwise, as a company committed to sustainability, net zero carbon and net biodiversity gain, we cannot responsibly use AI. Without clear measurements we would eventually be forced to constrain our use to experimental investigations only, and would actively advise against the use of AI until the impacts are clear to users.
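To make this concrete, here is a minimal sketch of the kind of per-query impact record an AI provider's API could return alongside its usual response. This is entirely hypothetical – no provider exposes such a schema today, and the field names and units are our own illustrative assumptions:

```python
from dataclasses import dataclass

# Hypothetical per-query impact record an AI provider's API could return.
# Field names and units are illustrative assumptions, not an existing schema.
@dataclass
class QueryImpact:
    energy_wh: float              # energy consumed serving the query, in watt-hours
    energy_source_mix: dict[str, float]  # e.g. {"renewable": 0.6, "fossil": 0.4}
    water_l: float                # cooling water attributed to the query, in litres
    co2e_g: float                 # estimated grams of CO2-equivalent emitted

# A consuming organisation could then sum these records across all its
# queries to report – and constrain – its overall footprint from AI usage.
```

With something like this in place, an organisation's sustainability reporting could treat AI usage the same way it treats travel or electricity: measured, attributed and budgeted.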
Risk mitigation, transparency and education - Jen Duong, Full Stack Developer
The first thing I would like to see is the immediate adoption of the EU's AI Act, which imposes different rules according to the level of risk – unacceptable, high and limited – as well as rules for the use of generative AI. We should be able to ban AI systems that pose a threat to people.
And while it is highly unrealistic that the development of artificial general intelligence (AGI) will cease – Pandora’s box is already open – there should be a move towards more transparency about how AI models are trained. The introduction of a licence to conduct training on smaller models, and the need for case-by-case approval of each training run, should be considered. The latter would make it easier to revoke access for abuses and non-agreed usage.
We also need transparency on the harm being done by AI right now – not just the potential for harm in the future. This is particularly important for women and people of colour who suffer the consequences of biased systems – we need specific information on the current harms and how they will be eradicated.
Finally, I would like to see a government education programme which aims to improve digital literacy, including in social media and AI.
A plan to tackle bias and robust data control - Rachael Grant, Senior Performance Analyst
I would like the summit to address the issues contributing to discrimination, specifically by delving into the nuanced challenges faced by each marginalised group and setting out plans to address those challenges group by group. These challenges must be understood individually and thoroughly to minimise their impact and ensure inclusivity.
I would also want to see a robust plan for data control. This should include clearly defined plans to maintain data quality, respect privacy and uphold transparent data management processes that can be implemented across the board. These expectations reflect a dedication to transparency, data integrity and accuracy.
Specifics on the role of government in the curation of AI tech - Richard Grove, Director of Digital
I strongly agree with the Ada Lovelace Institute’s Regulating AI in the UK Report, which advocates three pillars for effective AI regulation in the UK: coverage, capability and urgency.
These are hugely significant. Where are the safeguarding minimums, and what will the legal requirements that keep us safe look like? These questions become even more complex if regulation is made contextual or sector-specific, as per the current UK proposals. We’ve seen what happens when curation is put in the hands of big tech companies – I’d like to see government learn from what did and didn’t work then and bring those lessons into this crucial area. I hope the outcome is an extremely positive one for the UK!
A move towards user-centred design - James Alden-John, Product Manager, Give A Little®
I would like to see the UK Government encouraging its departments, and the public sector more widely, to maintain a user-centred design (UCD) approach whilst delivering AI solutions. In many cases this will lead to the conclusion that AI is actually not the right tool for the job, and that is fine. There are so many efficiency and cost savings still to be made by simply applying UCD and agile software development methodologies (which often involve very little direct technological investment) that the government should not get too distracted by AI as a silver bullet for all inefficiencies.
AI is a fantastic tool when applied to the correct problems. When applied incorrectly, it will cause more work and more inefficiencies. Unlocking AI’s full potential will require the same rational UCD way of thinking.
Consideration of usage and the impact on people - Katie Alden-John, Head of User Research
One thing that is unclear at the moment is exactly how we decide to use AI – how, when and where it is used and whether the impact, especially on people, is fully considered. We know that the impact on the most marginalised in society has the potential to be extremely dangerous, but there seems to be no framework in place for the conditions in which AI technology can be used.
Finally, I would like to see a move towards involving the people impacted by AI in its development.