Ambitious initiatives have delivered lasting change before. The creation of the Government Digital Service (GDS) and programmes like Digital Outcomes and Specialists (DOS) and G-Cloud disrupted the traditional ‘Big IT’ model. They brought agility, innovation, and competition, helping the public sector work smarter. The Prime Minister’s AI push could have a similar impact, unlocking innovation and strengthening the UK economy.
Scrumconnect has led large-scale digital transformation across the UK public sector. Our expertise in data and AI, combined with our talented team of leading data scientists, engineers, and AI specialists, has helped public sector teams turn ambitious plans into reality. We’ve already supported some great public sector clients, including DWP, DfE, CDDO, MoJ and others, on this journey.
The future of AI in government is taking shape, and we’re excited to be part of the conversation, some of which our people share in this edition of AIdeas. But I’d also love to hear your thoughts and learn how the AI Opportunities Action Plan is shaping your digital priorities. Please get in touch!
The fact is that AI is no longer a distant concept, it’s here. And now is the time for AI to reshape government and public services, driving efficiencies, cutting costs, and improving citizen services.
AI relies on high-quality, accessible data. But without it, even the best AI initiatives will struggle to deliver meaningful change.
The UK government recently unveiled its AI Opportunities Action Plan, accepting all 50 recommendations made by tech entrepreneur and Advanced Research and Invention Agency chair, Matt Clifford CBE. It hopes to position Britain as a global leader in artificial intelligence, enhancing public services and driving economic growth.
In announcing the plan, Prime Minister Sir Keir Starmer emphasised AI’s potential to transform the UK public sector, revolutionising sectors like healthcare, education, and infrastructure. He highlighted its role in improving efficiency and service delivery across the public sector. This means improved citizen services, at a lower cost to taxpayers, via a more tech-enabled and integrated public sector.
It makes sense. The UK government has established one of the most extensive datasets in the world. From health records and traffic patterns, to land use and education outcomes, public sector data holds untapped potential to drive innovation and Britain’s ambitions for AI leadership. However, much of this data remains locked in silos, underutilised, or fragmented across departments. Breaking down these data barriers is essential to fuel Britain’s AI-driven tomorrow.
AI thrives on data. But it isn’t just about quantity because data quality and accessibility are often much more impactful. When datasets are interoperable and shareable across government departments, they can reveal patterns, identify inefficiencies, and unlock solutions that are both scalable and transformative.
Asylum processing is probably one of the most politically-charged tasks currently facing the public sector. It involves vast levels of document verification, background checks, evidence analysis, and assessing eligibility based on UK and international laws.
Information and data from multiple agencies such as border control, social services, and the NHS is routinely required.
The process is highly complex. So when these datasets are fragmented or inaccessible, delays and inefficiencies become inevitable, resulting in negative headlines and a corresponding strain on downstream services.
By enabling better data interoperability and access, a cohesive system that supports faster, more accurate decision-making can be created.
Not only would this improve the efficiency of asylum processing, but it would also demonstrate how AI innovation and data integration can transform one of the most administratively complex areas in the whole of the UK public sector.
The National Data Library, part of the Action Plan, hopes to address these types of challenges by creating a secure, centralised platform for data. This will reduce duplication and enable the kind of interoperability needed to transform service delivery.
But remember, as data is opened up for AI use, safety and responsibility must take priority. The moment AI interacts with sensitive information, particularly personal or identifiable data, clear ethical frameworks and robust safeguards must have already been agreed and implemented.
Ensuring AI is not only effective but also fair, secure, and complies with privacy laws is essential because without this, even the most ambitious AI initiative risks resistance and could face severe legal challenges.
What the public sector needs to know.
DeepSeek, a China-based AI developer, is shaking up the market with the unveiling of affordable, open source large language models (LLMs) that run on minimal processing power.
This has caused a severe market disruption which presents notable opportunities and challenges for public sector organisations in the UK, as well as the wider AI ecosystem.
The way forward requires thoughtful attention from CIO’s and data leaders.
DeepSeek’s innovation lies in its ability to deliver advanced AI capabilities with significantly lower computational requirements.
By reducing energy consumption and operating costs, DeepSeek makes cutting-edge AI accessible to organisations that have traditionally been constrained by limited budgets and infrastructure.
For the public sector, this presents a game changing opportunity to adopt AI at scale, and comes hot off the heels of the UK Government’s AI strategy.
But don’t be guided by unfettered optimism. Public sector organisations must honestly assess their real state of readiness, determining whether they possess the technical expertise and organisational frameworks needed to effectively integrate and manage such systems.
There is no need to dive feet first into DeepSeek (or any other LLM’s) for the time being. The technology isn’t going to disappear anytime soon.
DeepSeek says that it is committed to open source development. This further amplifies its disruptive potential, because unlike proprietary platforms, DeepSeek’s open source model allows for extensive customisation, enabling organisations to tailor solutions to their unique needs.
Gone are the days of relying on the proprietary-LLM's view of the market, their roadmaps and their timelines. We are now entering the era of rapid innovation, customisation and community.
But pretty much any open source project brings its share of risk. Security vulnerabilities and a lack of robust oversight could expose sensitive data to exploitation, particularly in sectors that handle confidential or highly sensitive data, or those that will rely on the LLM for a mission-critical use case.
Operating within China’s regulatory framework, DeepSeek faces heightened scrutiny over data security and geopolitical concerns, raising questions about its suitability for adoption in the UK public sector.
Questions about data sovereignty and compliance with data regulations like GDPR are critical. The potential for sensitive information to be exposed to unauthorised access, either through vulnerabilities or state influence, has not yet been tested and demands stringent safeguards.
Localising data storage within the UK and conducting regular third-party audits can mitigate some of these risks, though these measures may increase operational costs. There is also the potential of hidden ‘back door’ access which may expose vulnerabilities to sensitive data or even the wider estate.
To maximise the benefits of DeepSeek while addressing its risks, we recommend the following:
Strengthen data sovereignty: If you aren’t doing so already, reshore, store and process all of your data within UK borders to maintain control and to ensure protection against external interference. Invest in secure, localised infrastructure to manage ultra-sensitive data.
Validate open source claims: Engage a specialist partner to undertake an independent third-party audit of DeepSeek’s open source framework and codebase. Evaluate its architecture to confirm it aligns with your organisation’s long-term needs.
Maintain a diverse ecosystem: AI is evolving rapidly, and as DeepSeek has shown, can be severely disrupted without prior warning. Consider maintaining a diverse ecosystem of AI providers to prevent having an over-reliance on a single vendor.
Build robust test environments: Set up secure sandboxes to rigorously test DeepSeek’s models before releasing to production. Simulate real-world scenarios to assess performance, scalability, and security risks. Use these controlled environments to stress-test the technology against mission-critical use cases and potential cyber threats.
Assess existing models that have established trust: Many use cases don’t need advanced reasoning. Look at the wider ecosystem and assess their suitability to your needs. Small language models (SLM’s), for example, are highly efficient and backed by hundreds of well-researched, open source models.
Implement advanced monitoring: Deploy tools for continuous monitoring of anomalies, unauthorised access, or performance degradation. Pair this with automated auditing solutions to ensure compliance with data protection standards and to maintain acceptable functionalities.
Integrate secure DevOps: Enhance and tailor your software development lifecycle with secure DevOps methods to handle this new approach to open source AI.
Wait and see: Even if it’s on an initial proof of concept basis, do not be tempted to use DeepSeek’s open source, low operating cost, model as a reason to rush into deployment. The technology holds promise, but assess it as you would any other enterprise product.
In the years since the 2013 publication of the UK government’s ground-breaking strategy to become digital by default, many public sector departments have successfully invested in new GOV.UK digital services.
Scrumconnect has been privileged to co-design and build many digital services with departments such as the Ministry of Justice, Department for Work and Pensions, and Department for Education, to name a few.
We’ve seen first-hand the huge benefits for departments that embrace digital transformation, including significant time and budget savings, gains in productivity, and improved service levels.
One example is DWP’s Winter Fuel Payments digital service, which went live in 2020. This innovation enabled the department to decommission legacy technology and automate processing for 18 million data records, meaning payments for 12 million citizens were made earlier and in less time.
Our approach to this followed the Cabinet Office 7-point guidance for Ethics, Transparency, and Accountability Framework for Automated Decision-Making.
Further to these savings, digital services such as Get Your State Pension have embraced user-centred design, resulting in frictionless, easy-to-use, and accessible citizen-government interactions online.
Key to public sector digital transformation is the ability to automate data and decisions, coupled with straight-through processing principles.
Online interactions with the public reduce manual processing by government staff.
Waiting times are reduced.
Workloads are reduced.
Real-time outcomes are provided for users.
This means that staff can focus on more complex cases, driving efficiency.
Looking ahead, with the government now releasing its AI Opportunities Action Plan, artificial intelligence is set to play an increasing role in public sector innovation.
AI brings the potential for more intelligent and nuanced automation and decision-making capabilities in digital services.
Some use cases are particularly compelling.
In 2023, the government reported:
£8.3bn in overpayments of benefits
£1.4bn in underpayments to some of the UK’s most vulnerable citizens
By applying AI to benefit eligibility and entitlement processes, alongside the new National Data Library that will serve as an integrated and accessible data infrastructure that runs across the public sector, fraud and error can be massively curtailed.
Data analytics has already driven a £1.3bn reduction in fraud since 2022, but there’s still more room for improvement.
While AI is undoubtedly set to become a gamechanger in the delivery of public services, it continues to be a rapidly evolving technology — and in some respects, novel.
This means that innovation is often experimental, meaning some high-profile projects could fail while others thrive, providing valuable lessons for future progress.
With AI comes responsibility and public apprehension.
The rapid evolution of large language models like ChatGPT and generative AI tools such as DALL-E, in addition to sinister use cases like deepfake, highlights the pace of change and the uncertainties around ethical use.
Recently, the Centre for Data Ethics and Innovation (CDEI) published its latest wave of the Public Attitudes to Data and AI: Tracker Survey.
The research, undertaken by Savanta, reveals the continued trend of a growing awareness of AI among the British public:
97% of respondents were aware of the technology.
71% of people said that they felt they could confidently explain what AI is.
Nearly one third expressed concern about its societal impact.
Concerns include:
Potential misuse
Loss of human oversight
Machines replacing human jobs
This underlines the need for transparent and ethical AI adoption in the public sector.
The government has responded to these concerns by taking proactive steps to guide AI’s development and adoption responsibly.
The AI Policy Directorate (part of DSIT) is mandated to strengthen the UK’s position as a leader while addressing public unease.
The CDDO’s guidance on Generative AI in Government provides ethical guardrails to ensure innovation aligns with public expectations.
The Ministry of Defence has developed a strategy underscoring how AI will enhance strategic advantage and protect the UK’s security, while reaffirming that critical decisions involving the use of force will continue to remain under human judgment.
Some practical movements towards greater AI adoption in the public sector include:
HMRC: advanced AI added to its Connect system, which analyses over 55 billion data items to uncover tax evasion.
Rural Payments Agency: uses AI in its CROME dataset to classify land use and crop types.
GOV.UK chatbot: a large language model currently in testing, designed to resolve complex queries from the public in seconds.
Additionally, a new AI accelerator upskilling programme is in place to help civil servants in digital delivery and operations roles to become machine learning engineers. This will further increase AI capability across government departments.
Winning over society’s trust is a long-term game. By focusing on responsible innovation and transparency, initiatives like these will continue to address public concerns and benefit the implementation of the Action Plan.
Through thoughtful and ethical applications, AI will become increasingly accepted, transforming services and creating a more connected, efficient, and fair experience for everyone.
The delivery of the Action Plan requires close cross-government and private-sector collaboration.
By partnering with organisations like Scrumconnect, and continuing to foster a culture of responsible innovation, the government is not just addressing today’s challenges but building a foundation for future-ready public services.
It means transforming how citizens are served, from cradle to grave. And it will create a more connected, efficient, and equitable society for generations to come.
The potential use cases are vast — most will be routine, but many are impactful too.
For example, the public sector will be able to rely on AI to analyse its data sources to proactively tackle common blockers and the way that it responds to national incidents or crises.
Imagine if weather caused both the M1 and M6 motorways to close, causing major disruptions to the critical national backbones that connect many of Britain’s largest cities.
AI could help keep the nation moving by:
Helping National Highways plan its response
Prioritising cleanups and repairs
Automatically rerouting traffic via alternative roads
By rapidly analysing historical data on freight movement, commuter patterns, and congestion levels, AI could also find other non-road transportation and connection possibilities.
This would allow civil servants to proactively contain the economic fallout of the disruption, protecting critical supply chains and maintaining workforce productivity.
The power of data-driven AI lies not only in its ability to respond to incidents but also in its potential to transform day-to-day interactions.
By analysing patterns and identifying efficiencies in routine activities, the public sector can deliver services that feel seamless, proactive, and citizen-centric.
This isn’t just a step forward in efficiency, it’s a once-in-a-generation opportunity to rethink how public services are offered.
Proactivity, enabled by data-driven AI, is Public Sector 2.0.
The moment the quality of data is nailed, it becomes a powerful tool to improve efficiency and service delivery.
And in this sense, data can help the public sector to move from reactive to proactive service delivery.
Synthetic data is rapidly becoming the smart choice for organisations needing high-quality data without the risks of handling real-world datasets.
Unlike mock data, which is often manually created and lacks depth, synthetic data is algorithmically generated to closely mimic real-world information.
This makes it invaluable for training AI models, testing applications, and gaining business insights without the compliance and security risks tied to real data.
The difference between synthetic and mock data is significant.
Mock data may suffice for basic testing but cannot replicate the complexity of real-life datasets needed for AI training or strategic decision-making.
Synthetic data is engineered to reflect actual data patterns, ensuring AI models learn from realistic examples and business insights are grounded in accurate simulations.
For CIOs, this distinction is crucial when shaping their data strategies.
Synthetic data, when executed well, can:
Train AI models with precision
Enhance decision-making
But if poorly designed, it risks introducing biases and distortions, leading to flawed models and unreliable insights. The challenge lies in generating, validating, and implementing synthetic data with precision.
Governments, financial institutions, and tech firms are already harnessing synthetic data to overcome real dataset limitations. The UK has seen some of the most ambitious implementations.
Example: Office for National Statistics (ONS)
In a pilot study, the ONS successfully explored how synthetic data could create datasets that mimicked real data without compromising confidentiality.
Policymakers could analyse economic trends without risking regulatory breaches.
The effectiveness of synthetic data depends on its accuracy.
AI-generated datasets must be rigorously validated against real-world data to ensure they reflect genuine statistical properties and relationships.
If synthetic data fails to do so, AI models trained on it will produce flawed predictions and unreliable insights.
Bias is another concern.
If the original dataset contains biases, the synthetic version can replicate or even amplify them.
Without proper oversight, this can result in skewed insights and unfair decision-making.
Organisations must implement robust fairness testing and continuous monitoring to mitigate these risks.
Despite these challenges, synthetic data is gaining traction as a scalable, flexible, and ethical solution for AI development, business intelligence, and secure data sharing.
Its success depends on a structured approach that balances innovation with responsibility.
Synthetic data isn’t just an alternative to real-world datasets — it is an enabler of innovation, security, and efficiency in AI development and business intelligence.
But its success depends on precision and taking the right approach to help unlock the range of benefits it brings.
Put simply, synthetic data is already transforming industries, so now is the time to identify use cases and start small scale PoCs.
The leaders who act today will be the ones benefitting from tomorrow’s wave of data-driven progress.
We work closely with you to take a user centred approach to identify high-impact opportunities and navigate challenges. Our approach ensures clear project scope, measurable goals, and strong business cases for investment.
We work alongside you from concept to business as usual, ensuring your AI innovations drive the meaningful outcomes you expect.
We build robust and scalable data pipelines to efficiently process and manage large datasets.
Our engineering solutions ensure reliable data flow, enabling seamless integration and high performance.
AI success depends on high-quality data.
We clean, structure, and refine both synthetic and real-world datasets while securely integrating multiple sources.
Our analysis uncovers patterns and optimises data for smarter decisions.
We develop and train bespoke AI models tailored to your needs.
From PoC’s to production systems, we apply cutting-edge algorithms and techniques like ML, NLP and Gen AI to maximise accuracy and efficiency.
Our rigorous testing process ensures AI models are reliable and bias-free.
We validate performance in real-world scenarios, fine-tuning accuracy based on feedback to maintain consistent and effective results.
We seamlessly integrate AI into your existing processes and infrastructure.
Our approach prioritises security, compliance, and scalability, ensuring smooth implementation with minimal disruption.
We ensure seamless data management through automated processes and continuous monitoring.
Our Data Ops solutions enhance data reliability and streamline operations for maximum efficiency.
Everything from infrastructure to tooling, we safeguard sensitive data with encryption, access control, and AI-driven threat detection.
Our secure frameworks ensure compliance, prevent breaches, and enable safe data sharing and handling.
AI requires continuous oversight.
We monitor performance, resolve issues, and provide updates to keep your models optimised.
Our support includes training for both technical and non-technical teams, ensuring long-term success.
scrumconnect.com
There’s more online.