Author: George Woodhams

AI, Robotics, and the Future of Discovery: Insights from Evidence Week in Parliament

Imperial’s Dr Sam Cooper briefs Parliamentarians  

From climate modelling to drug discovery, AI is reshaping science by accelerating data analysis, enhancing modelling, and introducing new forms of automated discovery. With the UK government expected to publish an AI for science strategy imminently, this year’s Evidence Week was an opportune time to meet with Parliamentarians to discuss what policies and investment are needed to accelerate AI-enabled scientific discovery in the UK.

Evidence Week is the flagship annual science engagement event of the charity Sense About Science. The week is built around a series of events designed to bridge the gap between scientific research, the policymaking process and the public. At this year’s event, Dr Sam Cooper, Associate Professor of Artificial Intelligence for Materials Design, took his team to Westminster to brief Parliamentarians on how AI and robotic laboratories can reshape the UK’s innovation landscape and accelerate progress in clean energy and advanced materials.

Three key takeaways 

  • Large language models (LLMs) can accelerate and unify exploration across scientific domains. In a 2024 paper, Dr Cooper and colleagues found that LLMs can extract experimental parameters and insights from scientific literature with up to 90% accuracy, reducing the manual burden on researchers and accelerating hypothesis generation.

    Diagram of LLM capabilities and potential materials-science-related applications, as seen in G. Lei, R. Docherty and S. J. Cooper, Materials science in the era of large language models: a perspective, Digital Discovery, 2024, 3, 1257–1272. https://pubs.rsc.org/en/content/articlehtml/2024/dd/d4dd00074a
  • Combining AI with robotics is key to solving the reproducibility crisis in science. Traditional lab work is slow, inconsistent and hard to reproduce. Automated labs like Imperial’s DIGIBAT project generate reliable, high-quality datasets that allow AI models to test and learn faster—making science more trustworthy and efficient. 
  • Infrastructure investment is vital to stay competitive. LLMs and other AI systems can revolutionise materials R&D, but only if they’re trained on data produced by automated labs. Public investment in compute, robotics and open data will help the UK remain a global leader in frontier science.
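The literature-extraction idea in the first takeaway can be sketched in a few lines of Python. This is a minimal illustration, not the pipeline from the paper: the prompt wording, the JSON keys, the `query_llm` function and its stubbed reply are all invented for demonstration, and a real system would call an actual LLM API in place of the stub.

```python
import json

# Hypothetical prompt template; the study used its own prompts and models.
PROMPT_TEMPLATE = (
    "Extract the experimental parameters from the following excerpt as JSON "
    "with keys 'material', 'temperature_c' and 'sintering_time_h'.\n\n{excerpt}"
)

def query_llm(prompt: str) -> str:
    """Stub standing in for a real LLM API call (mocked reply for illustration)."""
    return '{"material": "LLZO", "temperature_c": 1100, "sintering_time_h": 6}'

def extract_parameters(excerpt: str) -> dict:
    """Ask the model for structured parameters and parse its JSON reply."""
    reply = query_llm(PROMPT_TEMPLATE.format(excerpt=excerpt))
    return json.loads(reply)

params = extract_parameters(
    "Pellets of LLZO were sintered at 1100 °C for 6 hours before testing."
)
print(params["material"], params["temperature_c"])  # LLZO 1100
```

Returning structured JSON rather than free text is what makes this kind of extraction checkable against human annotation, which is how accuracy figures such as the 90% above can be measured.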

From research to policy 

Dr Cooper’s discussion with Parliamentarians focused on how policymakers can create the right environment for responsible AI-enabled science. He outlined three priority actions: 

  1. Fund national-scale robotic science facilities. Treat high-throughput automated labs as critical infrastructure—on par with supercomputers or particle accelerators—to generate the reproducible datasets that drive innovation. 
  2. Establish a UK-wide materials data platform. Develop open, standardised databases so that data from robotic labs can be shared, reused and used to train AI models across academia and industry.  
  3. Expand AI talent pipelines. Support doctoral programmes, cross-sector apprenticeships and retraining initiatives that blend materials science, machine learning and automation. 

Real-world impact 

Automating the design, execution and analysis of experiments can compress research cycles from months to weeks. By pairing robotics with AI, scientists can explore thousands of potential materials faster and more reliably, accelerating breakthroughs in batteries, catalysts and sustainable manufacturing.

In 2024, Dr Cooper spun out a company from Imperial. Polaron uses generative AI to design and optimise the microstructure of materials (such as battery electrodes), enabling manufacturers to explore thousands of variants in days instead of years.

Polaron’s models have shown a boost of over 10% in battery energy density, and their technology is also finding applications in pharmaceuticals, composites for wind turbines, alloys for jet engines and even food-texture optimisation.

Dr Sam Cooper speaking with Lord Clement-Jones (far left) at Evidence Week in Westminster. Lord Clement-Jones is Co-Chair of the APPG on AI.

Looking ahead 

It was fantastic to hear Parliamentarians ask thoughtful questions about access to facilities, data sharing and how regulatory frameworks should adapt to AI-driven experimental work. Parliament will continue to play a crucial role in scrutinising whether the UK government’s actions are realising the potential of the UK’s existing strengths in AI for science. By acting now, through establishing national facilities, open data infrastructure and talent programmes, the UK can secure its position as a world leader in AI for science and deliver the materials and technologies that underpin a sustainable future.

To learn more about how AI is revolutionising scientific discovery across Imperial, please contact George Woodhams.  

 

From Theory to Practice: How 20 Civil Servants Are Building AI Capability Across Government

Nine months ago, twenty senior UK Civil Servants set out to answer a deceptively simple question: how can government harness AI effectively and responsibly? Through the Imperial Policy Forum’s AI Policy Fellowship, they have been developing solutions that matter—from pilot AI tools for the criminal justice system to mapping pathways toward sovereign AI capability.

Owen Jackson, Director of Imperial Policy Forum speaks to our 2025 cohort of AI Policy Fellows.

Drawing on Imperial’s community of researchers working at the cutting edge of explainable, secure, and trustworthy AI, Fellows have grappled with real challenges: What is the public perception of government’s use of AI? How can AI applications be adapted to safety-critical systems? How do you build capability without creating new vulnerabilities?

At the programme’s final in-person session—one of four held throughout the Fellowship—participants presented their findings and wrestled with these questions alongside Imperial academics and government stakeholders. What emerged wasn’t just a collection of projects, but a shared understanding of the challenges and opportunities for adopting AI in public service.

Emerging Reasoning Capabilities

Opening the session, Professor Tom Coates delivered a presentation on the rapid evolution of AI reasoning models over the course of the nine-month Fellowship. He explored how advances in prompt engineering and reinforcement learning are enhancing the reasoning capabilities of large language models (LLMs), offering new ways to improve performance.

Participants then discussed how these emerging techniques could be responsibly and effectively applied within the public sector, considering both their potential and the challenges of real-world implementation.

Embedding Evaluation

As Fellows have discovered, adopting AI isn’t just about choosing the right tool. It’s about knowing whether it’s having the desired impact. Colleagues from i.AI, the government’s AI incubator, and the Evaluation Task Force—a joint Cabinet Office and HM Treasury team championing evaluation best practice—joined the session to provide their expert insights and guidance.

Professor Tom Coates is also the course lead for our AI Fundamentals programmes for civil servants.

Their message was clear: robust evaluation must be embedded throughout the AI adoption lifecycle, not bolted on afterwards. The discussion yielded five practical principles that Fellows are already applying in their departments:

1. Establish clear benchmarks before deployment. Robust benchmarks are the foundation for measuring impact. Baseline data—such as processing time, staff effort, or error rates—should be captured before implementation to enable comparison. In the public sector, this can be difficult where no clear “ground truth” exists and processes vary. However, even benchmarks that acknowledge uncertainty remain essential for reporting savings and efficiencies transparently.

2. User feedback is complementary to, not distinct from, model evaluation. User feedback provides insight into usability, accessibility, and operational fit. Formal model evaluation provides quantitative evidence of accuracy, performance, and impact. Both are needed: user insights highlight implementation risks and unintended effects, while evaluation establishes whether observed changes can be attributed to the AI system.

3. Distinguish between monitoring and evaluation. Routine monitoring (e.g. data collection and trend analysis) is essential but not sufficient. Evaluation requires analytical assessment to determine whether observed changes were caused by the AI intervention and whether those changes represent value for money.

4. Use random sampling and experimental methods where feasible. Randomised approaches remain the most reliable way to assess causal impact. The data produced by AI systems can enable cost-effective use of controlled trials or sampling. Properly designed evaluations strengthen confidence in findings, particularly when estimating time or cost savings.

5. Design user surveys with precise and measurable questions. Surveys can complement quantitative data but should focus on specific, time-bound measures (e.g. “How long did this task take last week?” rather than “How much time does this usually take?”). Questions about time spent are more reliable than those about time saved, which are often affected by perception bias.
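The randomised comparison described in principle 4 can be sketched in a few lines. The sketch below simulates invented task-completion times for a hypothetical "AI-assisted" process and the existing one, then estimates the average time saved with a rough 95% confidence interval; all numbers and distributions are made up purely for illustration.

```python
import math
import random
import statistics

random.seed(42)

# Simulated task-completion times in minutes (invented for illustration):
# cases are randomly assigned to the existing or the AI-assisted process.
control = [random.gauss(mu=30, sigma=5) for _ in range(200)]  # existing process
treated = [random.gauss(mu=24, sigma=5) for _ in range(200)]  # AI-assisted process

# Estimated minutes saved per task: difference in mean completion times.
effect = statistics.mean(control) - statistics.mean(treated)

# Standard error of the difference in means, and an approximate 95% interval.
se = math.sqrt(statistics.variance(control) / len(control)
               + statistics.variance(treated) / len(treated))
low, high = effect - 1.96 * se, effect + 1.96 * se

print(f"Estimated minutes saved per task: {effect:.1f} "
      f"(95% CI {low:.1f} to {high:.1f})")
```

Reporting the interval alongside the point estimate, rather than a single headline figure, is what allows savings to be claimed transparently even when the baseline data carry uncertainty (principle 1).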

Fellows’ projects

To conclude the day, Fellows shared insights and learning from their projects. Presentations from the Department for Energy Security and Net Zero, the Department for Business and Trade, the Food Standards Agency, i.AI and the Department for Science, Innovation and Technology covered the following topics:

  • The economic opportunity for export and import of AI for health in the UK
  • The potential for explainable AI (XAI) to be tailored to safety-critical applications
  • How to adapt Large Language Models (LLMs) for evidence synthesis in government
  • The potential impact that AI will have on the Civil Service workforce
  • How to holistically assess the UK’s AI capability and explore the potential of decentralised compute

For the Imperial academics who mentored these projects, the Fellowship has been equally valuable. By working directly with Fellows tackling live policy challenges, researchers gained new perspectives on how cutting-edge AI research can inform, shape, and strengthen government decision-making. Dr Ahmed Fetit, one of the mentors on this year’s Fellowship, observed that through his discussions with policy officials, he “began to see not only the immediate challenges of deploying new technologies in the NHS, but also their long-term consequences on the healthcare system”. He further shared that:

“the experience deepened my appreciation of the need to bridge technical innovation with the realities of policymaking through dialogue”.

What Comes Next

Although this marked the final in-person day of the 2025 Fellowship, the work continues. The Imperial Policy Forum will keep collaborating with our Fellowship alumni as their projects evolve, translating research insights into tangible policy impact.

It’s clear that the UK government must build AI capability that’s both ambitious and accountable, innovative and trustworthy. This Fellowship has demonstrated that it’s possible, but only when policy expertise and technical knowledge align.

Applications are now open for the 2026 AI Policy Fellowship. Join a growing network of innovators at the intersection of AI and public policy and help shape the future of responsible AI in government.

Prime Minister’s AI Advisor and Parliamentary Secretary for the Cabinet Office attend AI hackathon at Imperial

Last week, the Imperial Policy Forum hosted a two-day hackathon organised by 10 Downing Street’s Data Science team. Building on the success of previous hacks focused on clean tech and generative AI, this year’s hackathon was open to registrations from the public for the first time.

Over a hundred and fifty technologists, researchers and civil servants took part in the hackathon.

Over a hundred and fifty technologists, researchers and civil servants worked in multidisciplinary teams to tackle some of the most pressing challenges facing the UK. Examples included missed GP appointments, which cost the NHS millions and lengthen waiting lists, and early years education gaps that shape children’s long-term opportunities.

With input and advice from AI industry partners and data science experts from across the public sector, teams used a range of capabilities and data sets to develop novel technological solutions at pace.

Imperial were delighted to welcome Jade Leung, the Prime Minister’s recently appointed AI Advisor, and Josh Simons, Parliamentary Secretary for the Cabinet Office, to judge the solutions developed by a shortlisted group of six teams.

Jade Leung, the Prime Minister’s recently appointed AI Advisor and Josh Simons, Parliamentary Secretary for the Cabinet Office, joined us as judges of the hackathon.

The judges were particularly impressed by three teams, who will now have an opportunity to present their solutions to wider stakeholders in No 10. The solutions developed by these teams included:

  • An online chatbot and voice-over-IP tool to triage primary care patients
  • A data dashboard for national decision makers to view local insights of social cohesion
  • A platform to streamline housebuilding by providing an overview of building and regulatory requirements

The event demonstrated the potential for open collaboration to push the boundaries of how AI can be used for public good. The Forum looks forward to hearing more about how these exciting proposals develop as we continue to support government’s AI adoption journey through our AI Fundamentals and AI Policy Fellowship programmes.

Empowering civil service leaders to harness AI’s potential

In January, the UK government set out an ambitious vision for how artificial intelligence can revolutionise public services. Recognising AI’s potential to drive efficiencies, fuel economic growth, and introduce innovative new services, the Blueprint for Digital Government lays the groundwork for a smarter, more responsive state.

But to make this vision a reality, civil service leaders must be equipped to oversee AI adoption confidently and responsibly. That’s why the Imperial Policy Forum is thrilled to welcome twenty senior civil servants into its AI Policy Fellowship Programme, a unique initiative designed to bridge the gap between cutting-edge AI research and public sector leadership.

Now in its third year, the Fellowship pairs participants with academic mentors who will guide them in designing bespoke research projects. Over the next nine months, Fellows will have access to Imperial’s world-leading AI experts and domain specialists already harnessing AI to tackle major societal challenges.

AI opportunities across government

The 2025 cohort represents a diverse mix of senior leaders from sixteen departments, including the Cabinet Office, Ministry of Defence, Department for Education, Home Office and Department for Science, Innovation and Technology. Among them are technical leaders driving technology adoption and data analysis, as well as senior policy and strategy officials tackling some of the country’s most pressing policy challenges.

A classroom of AI Policy Fellows at Imperial College London.

Some of this year’s projects will explore how AI can enhance government capabilities in areas such as energy resilience, cybersecurity, and environmental protection. Others will focus on shaping the governance and regulatory frameworks needed to ensure the UK government’s adoption of AI is secure, ethical, and trusted. 

Looking ahead

As well as their project work, the Programme will feature four in-person sessions at Imperial College campuses, giving Fellows the opportunity to connect, share challenges, and identify opportunities for collaboration. These sessions will include academic-led lectures and workshops to enrich their research and shape meaningful outputs to be shared at the Fellowship’s conclusion.

As the UK government develops its AI policy from ambition to action, this year’s programme couldn’t be timelier. The Fellows will come together for the first time in early March. Follow this blog for insights and learnings from their journeys in the months ahead.