Data and AI Insights from Scrumconnect Consulting




THE UK GOVERNMENT’S AI OPPORTUNITIES ACTION PLAN IS SET TO TRANSFORM THE PUBLIC SECTOR

The fact is that AI is no longer a distant concept; it's here.
And now is the time for AI to reshape government and public services, driving efficiencies, cutting costs, and improving citizen services.


IT ALL BEGINS WITH DATA

The government’s AI Opportunities Action Plan hinges on breaking down data barriers.

The UK government recently unveiled its AI Opportunities Action Plan, accepting all 50 recommendations made by tech entrepreneur and Advanced Research and Invention Agency chair, Matt Clifford CBE. The plan aims to position Britain as a global leader in artificial intelligence, enhancing public services and driving economic growth.

In announcing the plan, Prime Minister Sir Keir Starmer emphasised AI's potential to transform the UK public sector, revolutionising areas such as healthcare, education, and infrastructure. He highlighted its role in improving efficiency and service delivery, which means improved citizen services, at a lower cost to taxpayers, via a more tech-enabled and integrated public sector.

It makes sense. The UK government has established one of the most extensive datasets in the world. From health records and traffic patterns, to land use and education outcomes, public sector data holds untapped potential to drive innovation and Britain’s ambitions for AI leadership. However, much of this data remains locked in silos, underutilised, or fragmented across departments. Breaking down these data barriers is essential to fuel Britain's AI-driven tomorrow.

AI thrives on data. But it isn’t just about quantity because data quality and accessibility are often much more impactful. When datasets are interoperable and shareable across government departments, they can reveal patterns, identify inefficiencies, and unlock solutions that are both scalable and transformative.
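As a toy illustration of why interoperability matters, the sketch below joins two invented departmental record sets on a shared citizen identifier; the pattern it surfaces is visible only once the datasets are linked. All names, fields, and figures here are hypothetical.

```python
# Illustrative sketch: two hypothetical departmental datasets joined on a
# shared citizen identifier. Every record and field name is invented.

# Records held by a hypothetical housing team, keyed by citizen ID.
housing = {
    "C001": {"borough": "Camden", "overcrowded": True},
    "C002": {"borough": "Ealing", "overcrowded": False},
    "C003": {"borough": "Camden", "overcrowded": True},
}

# Records held by a hypothetical health team for the same citizens.
health = {
    "C001": {"respiratory_referrals": 3},
    "C002": {"respiratory_referrals": 0},
    "C003": {"respiratory_referrals": 2},
}

# The join is only possible because both datasets share an identifier.
joined = {
    cid: {**housing[cid], **health[cid]}
    for cid in housing.keys() & health.keys()
}

# A pattern neither dataset reveals alone: referrals among overcrowded households.
referrals = sum(
    rec["respiratory_referrals"] for rec in joined.values() if rec["overcrowded"]
)
print(referrals)  # 5
```

Without a shared identifier and compatible formats, this one-line join becomes a manual reconciliation exercise, which is exactly the cost of siloed data.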

Take asylum application processing, for example.

Asylum processing is probably one of the most politically charged tasks currently facing the public sector. It involves vast levels of document verification, background checks, evidence analysis, and assessing eligibility based on UK and international laws.

Information and data from multiple agencies such as border control, social services, and the NHS are routinely required.

The process is highly complex. So when these datasets are fragmented or inaccessible, delays and inefficiencies become inevitable, resulting in negative headlines and a corresponding strain on downstream services.

Better data interoperability and access would create a cohesive system that supports faster, more accurate decision-making.

Not only would this improve the efficiency of asylum processing, but it would also demonstrate how AI innovation and data integration can transform one of the most administratively complex areas in the whole of the UK public sector.

Enter the National Data Library.

The National Data Library, part of the Action Plan, hopes to address these types of challenges by creating a secure, centralised platform for data. This will reduce duplication and enable the kind of interoperability needed to transform service delivery.

But remember, as data is opened up for AI use, safety and responsibility must take priority. The moment AI interacts with sensitive information, particularly personal or identifiable data, clear ethical frameworks and robust safeguards must have already been agreed and implemented.

Ensuring AI is not only effective but also fair, secure, and compliant with privacy laws is essential; without this, even the most ambitious AI initiative risks resistance and severe legal challenges.


The transformative potential of an AI-supported public sector.

AI has the power to transform the delivery of public services while significantly reducing the cost and effort of doing so. By breaking down silos and enabling greater cohesiveness, AI-enabled innovation has the potential to shape a government that is both efficient and responsive to citizen needs.

The use cases in the UK public sector alone are vast. The result is faster, more transparent services for citizens, with the ability to redirect savings to critical areas of public need.

Duplication between departments can quickly be identified and eliminated, while overlapping programmes and inefficiencies can be flagged automatically. It could even unearth redundant bureaucracy or duplicate policy areas buried within the complexity of the day-to-day running of government. Put simply, AI-driven public services have the potential to streamline operations, ensuring resources are used, and budgets spent, where they are most needed.

But beyond improving the efficiency of the public sector, AI can also enable a shift from reactive to proactive public service delivery. A unified view of citizen data eradicates repetitive paperwork, speeds up processes, and delivers more accurate outcomes.

Britain’s ambition to lead in artificial intelligence hinges on how data is managed, shared, and applied.

The UK government’s AI Opportunities Action Plan marks a pivotal step towards a smarter, more efficient public sector.

By accepting all 50 recommendations, the government has signalled a strong commitment to harnessing technology as a way of enhancing services and driving national economic growth.

However, it isn’t enough to just invest in cutting-edge algorithms. Success will hinge on an ambitious rethinking of how government data is managed, shared, and applied across departments.

Regardless of where data might sit, it should be seen as a national asset that enables progress rather than obstructs it. The plan’s emphasis on breaking down silos and fostering collaboration between departments is critical to unlocking the full potential of the government’s ambition.

The question is no longer whether to act, but how quickly we should act. This demands leadership, technical innovation, and cross-sector collaboration to lay the groundwork that helps make data an enabler, rather than an obstacle.

The future of AI in the public sector hinges on the decisions taken today, and on treating data as a strategic national asset.


DEEPSEEK OR DEEPSHOCK?

What the public sector needs to know

DeepSeek, a China-based AI developer, is shaking up the market with the unveiling of affordable, open source large language models (LLMs) that run on minimal processing power. This has caused severe market disruption, which presents notable opportunities and challenges for public sector organisations in the UK, as well as the wider AI ecosystem.

The way forward requires thoughtful attention from CIOs and data leaders.

Efficiency is an opportunity. But don’t rush in.

DeepSeek’s innovation lies in its ability to deliver advanced AI capabilities with significantly lower computational requirements.

By reducing energy consumption and operating costs, DeepSeek makes cutting-edge AI accessible to organisations that have traditionally been constrained by limited budgets and infrastructure.

For the public sector, this presents a game changing opportunity to adopt AI at scale, and comes hot on the heels of the UK Government’s AI strategy.

But don’t be guided by unfettered optimism. Public sector organisations must honestly assess their real state of readiness, determining whether they possess the technical expertise and organisational frameworks needed to effectively integrate and manage such systems.

There is no need to dive headfirst into DeepSeek (or any other LLM) for the time being. The technology isn’t going to disappear anytime soon.

Open source is a game changer.

DeepSeek says that it is committed to open source development. This further amplifies its disruptive potential, because unlike proprietary platforms, DeepSeek’s open source model allows for extensive customisation, enabling organisations to tailor solutions to their unique needs.

Gone are the days of relying on proprietary LLM vendors’ view of the market, their roadmaps and their timelines. We are now entering an era of rapid innovation, customisation and community.

But pretty much any open source project brings its share of risk. Security vulnerabilities and a lack of robust oversight could expose sensitive data to exploitation, particularly in sectors that handle confidential or highly sensitive data, or those that will rely on the LLM for a mission-critical use case.

Data security and sovereignty.

Operating within China’s regulatory framework, DeepSeek faces heightened scrutiny over data security and geopolitical concerns, raising questions about its suitability for adoption in the UK public sector.

Questions about data sovereignty and compliance with data regulations like GDPR are critical. The potential for sensitive information to be exposed to unauthorised access, either through vulnerabilities or state influence, has not yet been tested and demands stringent safeguards.

Localising data storage within the UK and conducting regular third-party audits can mitigate some of these risks, though these measures may increase operational costs. There is also the potential for hidden ‘back door’ access, which could expose sensitive data, or even the wider estate, to compromise.


RECOMMENDATIONS

Strengthen data sovereignty:
If you aren’t doing so already, reshore, store and process all of your data within UK borders to maintain control and to ensure protection against external interference. Invest in secure, localised infrastructure to manage ultra-sensitive data.

Validate open source claims:
Engage a specialist partner to undertake an independent third-party audit of DeepSeek’s open source framework and codebase. Evaluate its architecture to confirm it aligns with your organisation’s long-term needs.

Maintain a diverse ecosystem:
AI is evolving rapidly and, as DeepSeek has shown, the market can be severely disrupted without prior warning. Consider maintaining a diverse ecosystem of AI providers to avoid over-reliance on a single vendor.

Build robust test environments:
Set up secure sandboxes to rigorously test DeepSeek’s models before releasing to production. Simulate real-world scenarios to assess performance, scalability, and security risks. Use these controlled environments to stress-test the technology against mission-critical use cases and potential cyber threats.

Assess existing models that have established trust:
Many use cases don’t need advanced reasoning. Look at the wider ecosystem and assess its suitability for your needs. Small language models (SLMs), for example, are highly efficient and backed by hundreds of well-researched, open source models.

Implement advanced monitoring:
Deploy tools for continuous monitoring of anomalies, unauthorised access, or performance degradation. Pair this with automated auditing solutions to ensure compliance with data protection standards and to maintain expected functionality.

Integrate secure DevOps:
Enhance and tailor your software development lifecycle with secure DevOps methods to handle this new approach to open source AI.

Wait and see:
Even if it’s on an initial proof of concept basis, do not be tempted to use DeepSeek’s open source, low-operating-cost model as a reason to rush into deployment. The technology holds promise, but assess it as you would any other enterprise product.
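As an illustration of the ‘implement advanced monitoring’ recommendation above, a minimal sketch of continuous anomaly detection might flag any metric that drifts several standard deviations from its recent baseline. The window size, threshold, and latency metric below are illustrative assumptions, not a production design.

```python
# A minimal sketch of continuous anomaly monitoring: flag observations that
# sit far outside the rolling baseline. Window and threshold are illustrative.
from collections import deque
from statistics import mean, stdev

class LatencyMonitor:
    """Flags samples more than `z_threshold` standard deviations from recent history."""

    def __init__(self, window: int = 50, z_threshold: float = 3.0):
        self.history = deque(maxlen=window)  # rolling baseline of recent samples
        self.z_threshold = z_threshold

    def observe(self, latency_ms: float) -> bool:
        """Record a latency sample; return True if it looks anomalous."""
        anomalous = False
        if len(self.history) >= 10:  # need enough samples for a stable baseline
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(latency_ms - mu) / sigma > self.z_threshold:
                anomalous = True
        self.history.append(latency_ms)
        return anomalous

monitor = LatencyMonitor()
baseline = [100.0, 102.0, 98.0, 101.0, 99.0] * 4   # 20 normal samples
flags = [monitor.observe(x) for x in baseline]
print(any(flags))            # False: baseline traffic is not flagged
print(monitor.observe(400))  # True: a sudden spike is flagged
```

In practice this logic would feed an alerting pipeline rather than print to a console, and would sit alongside the automated compliance auditing the recommendation describes.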

BETTER SERVICES, HAPPIER CITIZENS

Thoughtful and ethical AI unlocks more connected, efficient, and fair experiences for everyone.

In the years since the 2013 publication of the UK government’s ground-breaking strategy to become digital by default, many public sector departments have successfully invested in new GOV.UK digital services.

Scrumconnect has been privileged to co-design and build many digital services with departments such as the Ministry of Justice, Department for Work and Pensions, and Department for Education, to name a few.

We’ve seen first-hand the huge benefits for departments that embrace digital transformation, including significant time and budget savings, gains in productivity, and improved service levels.

One example is DWP’s Winter Fuel Payments digital service, which went live in 2020. This innovation enabled the department to decommission legacy technology and automate processing for 18 million data records, meaning payments for 12 million citizens were made earlier and processed faster. Our approach followed the Cabinet Office’s seven-point Ethics, Transparency and Accountability Framework for Automated Decision-Making.

Further to these savings, digital services such as Get Your State Pension have embraced user-centred design, resulting in frictionless, easy-to-use, and accessible citizen-government interactions online.

Key to public sector digital transformation is the ability to automate data and decisions, coupled with straight-through processing principles. Online interactions with the public reduce manual processing by government staff, removing waiting times, reducing workload, and providing real-time outcomes for users. It means that staff can focus on more complex cases, driving efficiency.
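The straight-through processing principle can be sketched as a simple decision function: clear-cut cases complete automatically in real time, while anything ambiguous routes to a caseworker. The eligibility rules and field names below are invented for illustration, not drawn from any real service.

```python
# Illustrative straight-through processing sketch: automated checks either
# produce an instant outcome or route the case to a human caseworker.
# The eligibility rules and field names are invented assumptions.
from dataclasses import dataclass

@dataclass
class Application:
    applicant_id: str
    age: int
    weekly_income: float
    identity_verified: bool

def decide(app: Application) -> str:
    """Return an instant decision where possible, else refer for manual review."""
    if not app.identity_verified:
        return "manual_review"   # cannot automate without a verified identity
    if app.age < 18:
        return "rejected"        # clear-cut rule: instant outcome
    if app.weekly_income <= 200.0:
        return "approved"        # clear-cut rule: instant outcome
    return "manual_review"       # borderline cases go to a caseworker

print(decide(Application("A1", 34, 150.0, True)))   # approved
print(decide(Application("A2", 34, 150.0, False)))  # manual_review
```

The design point is the fallthrough: automation handles the unambiguous majority instantly, and everything else is escalated rather than guessed at, which is what frees staff for the complex cases.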

Taking a look forward.

Looking ahead, with the government now releasing its AI Opportunities Action Plan, artificial intelligence is set to play an increasing role in public sector innovation.

AI brings the potential for more intelligent and nuanced automation and decision-making capabilities in digital services.

Some use cases are particularly compelling. In 2023, the government reported £8.3bn in benefit overpayments and £1.4bn in underpayments to some of the UK’s most vulnerable citizens. By applying AI to benefit eligibility and entitlement processes, alongside the new National Data Library serving as integrated, accessible data infrastructure across the public sector, fraud and error can be massively curtailed. Data analytics has already driven a £1.3bn reduction in fraud since 2022, but there’s still more room for improvement.

But while AI is undoubtedly set to become a game changer in the delivery of public services, it remains a rapidly evolving and, in some respects, novel technology. This means innovation is often experimental: some high-profile projects could fail while others thrive, providing valuable lessons for future progress.

With AI comes responsibility and public apprehension.

The rapid evolution of large language models like ChatGPT and generative AI tools such as DALL-E, in addition to sinister use cases like deepfakes, highlights the pace of change and the uncertainties around ethical use.

Evolving public attitudes.

Recently, the Centre for Data Ethics and Innovation (CDEI) published its latest wave of the Public Attitudes to Data and AI: Tracker Survey.

The research, undertaken by Savanta, reveals the continued trend of a growing awareness of AI among the British public - 97% of respondents were aware of the technology. But scepticism persists. While 71% of people said that they felt they could confidently explain what AI is, nearly one third expressed concern about its societal impact. Issues such as potential misuse, loss of human oversight, and machines replacing human jobs remain top concerns, underlining the need for transparent and ethical AI adoption in the public sector.

The government has responded to these concerns by taking proactive steps to guide AI’s development and adoption responsibly. The AI Policy Directorate, part of the Department for Science, Innovation and Technology (DSIT), is mandated to strengthen the UK’s position as a leader in the field while addressing public unease. Practical frameworks, such as the Central Digital and Data Office’s (CDDO) guidance on Generative AI in Government, provide ethical guardrails to ensure innovation aligns with public expectations. And on AI’s role in national defence, the Ministry of Defence has developed a strategy that underscores how AI will enhance strategic advantage and protect the UK’s security, while reaffirming that critical decisions involving the use of force will remain under human judgment for the foreseeable future.

Practically, there have been some movements towards greater adoption of AI-enabled technologies in the public sector. The hope is that these innovations will drive value but will also demonstrate the responsible use of AI in digital public services.

Some examples include:

  • HMRC has added advanced AI to its Connect system, giving it access to over 55 billion data items and growing. The AI continuously analyses data from banks, social media, and online marketplaces to uncover and flag tax evasion.

  • The Rural Payments Agency is using AI in its CROME dataset to classify land use and crop types.

A large language model chatbot that will sit on the GOV.UK platform is currently in testing. Once publicly switched on, the chatbot will guide users and resolve complex queries from the public in seconds.

On top of this, a new AI accelerator upskilling programme is in place to help civil servants in digital delivery and operations roles to become machine learning engineers. This will further increase the levels of AI capability across government departments.

Winning over society’s trust is a long-term game, but by continuing to focus on responsible innovation and transparency, initiatives like these will continue to address public concerns and benefit the implementation of the Action Plan.

Through thoughtful and ethical applications, AI will become increasingly accepted, transforming services and creating a more connected, efficient, and fair experience for everyone.

The delivery of the Action Plan requires close cross-government and private-sector collaboration. By partnering with organisations like ours, and continuing to foster a culture of responsible innovation, the government is not just addressing today’s challenges but building a foundation for future-ready public services. It means transforming how citizens are served, from cradle to grave. And it will create a more connected, efficient, and equitable society for generations to come.


REACTIVITY? THAT’S PUBLIC SECTOR 1.0

The potential use cases are vast - most will be routine, but many are impactful too. For example, the public sector will be able to rely on AI to analyse its data sources, proactively tackle common blockers, and improve how it responds to national incidents or crises.

Imagine severe weather closing both the M1 and M6 motorways, disrupting the critical national backbones that connect many of Britain’s largest cities. AI could help keep the nation moving by helping National Highways plan its response, prioritising cleanups and repairs, and automatically rerouting traffic via alternative roads. And by rapidly analysing historical data on freight movement, commuter patterns, and congestion levels, AI could find other non-road transportation and connection possibilities. This would allow civil servants to proactively contain the economic fallout of the disruption, protecting critical supply chains and maintaining workforce productivity.

The power of data-driven AI lies not only in its ability to respond to incidents but also in its potential to transform day-to-day interactions. By analysing patterns and identifying efficiencies in routine activities, the public sector can deliver services that feel seamless, proactive and citizen-centric.

This isn’t just a step forward in efficiency, it’s a once-in-a-generation opportunity to rethink how public services are offered. Proactivity, enabled by data-driven AI, is Public Sector 2.0.

Once its quality is assured, data becomes a powerful tool to improve efficiency and service delivery. In this sense, data can help the public sector move from reactive to proactive service delivery.


GOODBYE, MOCK. HELLO, SYNTHETIC.

Mimicking real world datasets to accelerate AI training.

Synthetic data is rapidly becoming the smart choice for organisations needing high-quality data without the risks of handling real-world datasets.

Unlike mock data, which is often manually created and lacks depth, synthetic data is algorithmically generated to closely mimic real-world information.

This makes it invaluable for training AI models, testing applications, and gaining business insights without the compliance and security risks tied to real data.

The difference between synthetic and mock data is significant. While mock data may suffice for basic testing, it cannot replicate the complexity of real-life datasets needed for AI training or strategic decision-making. Synthetic data is engineered to reflect actual data patterns, ensuring AI models learn from realistic examples and business insights are grounded in accurate simulations.
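The distinction can be made concrete with a small sketch: mock data is a handful of hard-coded placeholders, while synthetic data is sampled from a distribution fitted to the real dataset, preserving its statistical shape. The “real” values and the Gaussian assumption below are purely illustrative.

```python
# Illustrative sketch: synthetic data sampled to preserve the statistical
# shape of a real dataset, versus hand-written mock placeholders.
# The "real" values and the normal-distribution assumption are illustrative.
import random
from statistics import mean, stdev

random.seed(42)  # reproducible for the example

# A stand-in for a sensitive real-world dataset (e.g. processing times in days).
real = [random.gauss(30, 5) for _ in range(1_000)]

# Mock data: hand-written placeholders with no statistical fidelity.
mock = [1.0, 2.0, 3.0]

# Synthetic data: sampled from a distribution fitted to the real data,
# so it mimics real patterns without exposing any real record.
mu, sigma = mean(real), stdev(real)
synthetic = [random.gauss(mu, sigma) for _ in range(1_000)]

print(round(mean(real)), round(mean(synthetic)))    # similar means
print(round(stdev(real)), round(stdev(synthetic)))  # similar spread
```

Real generators model far richer structure than a single Gaussian, of course, but the principle is the same: the synthetic set inherits the real data’s statistics, which the mock list never could.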

For CIOs, this distinction is crucial when shaping their data strategies. Synthetic data, when executed well, can train AI models with precision and enhance decision-making. But if poorly designed, it risks introducing biases and distortions, leading to flawed models and unreliable insights. The challenge lies in generating, validating, and implementing synthetic data with precision.

Synthetic is already making an impact.

Governments, financial institutions, and tech firms are already harnessing synthetic data to overcome real dataset limitations. The UK has seen some of the most ambitious implementations.

The Office for National Statistics (ONS), for example, has been at the forefront of this shift. In a pilot study, the ONS explored how synthetic datasets could mimic real data without compromising confidentiality, enabling policymakers to analyse economic trends without risking regulatory breaches.

The effectiveness of synthetic data depends on its accuracy. AI-generated datasets must be rigorously validated against real-world data to ensure they reflect genuine statistical properties and relationships. If synthetic data fails to do so, AI models trained on it will produce flawed predictions and unreliable insights.

Bias is another concern. If the original dataset contains biases, the synthetic version can replicate or even amplify them. Without proper oversight, this can result in skewed insights and unfair decision-making. Organisations must implement robust fairness testing and continuous monitoring to mitigate these risks.

Despite these challenges, synthetic data is gaining traction as a scalable, flexible, and ethical solution for AI development, business intelligence, and secure data sharing. Its success depends on a structured approach that balances innovation with responsibility.

Synthetic data isn’t just an alternative to real-world datasets; it is an enabler of innovation, security, and efficiency in AI development and business intelligence.

But its success depends on precision and taking the right approach to help unlock the range of benefits it brings.

Put simply, synthetic data is already transforming industries, so now is the time to identify use cases and start small-scale proofs of concept (PoCs).

The leaders who act today will be the ones benefitting from tomorrow’s wave of data-driven progress.