Attendees: 10 (including host). 5 cancelled on the last day.
FtW copies giveaway: 2
Summary: Theme: The Cost of Automation & The Cost of Automation Failures. Working framing: Britain is buying surveillance with money it doesn’t have, on rails that don’t work, from a state that won’t tell us what it’s doing.
Logos Circle 8 took the Big Brother Watch report Suspicion by Design (July 2025) as its starting point and used it to open a wider question:
What happens when a state with a 50-year track record of IT failure, the highest tax burden since 1948, and collapsing public trust decides to industrialise algorithmic profiling of its poorest citizens?
The data discussion converged on a single, hard-to-dispute fact: the DWP’s Targeted Case Review programme, which is being scaled toward roughly 20 million people, is flagging benefit recipients as suspected fraudsters even though four in five reviewed claimants turn out to be receiving the correct amount of Universal Credit. The circle spent significant time unpacking what this means morally, fiscally, and operationally.
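The scale of that base rate is easier to grasp as back-of-envelope arithmetic. The 20 million figure and the four-in-five rate come from the discussion above; the projection below is purely illustrative, not DWP data.

```python
# Illustrative base-rate arithmetic for the Targeted Case Review figures.
# Both inputs are taken from the notes above; the breakdown is a
# back-of-envelope projection, not an official DWP estimate.

reviews_planned = 20_000_000   # approximate scale the programme is heading toward
correct_amount_rate = 4 / 5    # reviewed claimants already paid correctly

already_correct = int(reviews_planned * correct_amount_rate)
possible_errors = reviews_planned - already_correct

print(f"Reviewed while already correct: {already_correct:,}")   # 16,000,000
print(f"Reviews that could find an error: {possible_errors:,}") # 4,000,000
```

In other words, at the cited rate, scaling to 20 million reviews means roughly 16 million people investigated whose claims were already right.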
The most consequential outcome of the meeting was a shift in posture. The circle moved beyond a pure campaigning frame (“call out the abomination”) toward an action-oriented question: what could Logos actually build for the people on the receiving end of these systems? A working concept emerged for a privacy-preserving reporting tool - a kind of structured whistleblower channel for benefit claimants caught in the algorithmic net - which could feed a journalist-facing evidence base and, in a later iteration, support a “social loan” mechanism for people whose payments are suspended during review.
Key Data:
Macro fiscal context (GBTT, ONS / OBR / DWP / DfT, Jan–Feb 2026)
- UK national debt: £2.84 trillion (98.3% of GDP).
- Tax burden: 37.1% of GDP - highest since 1948.
- Welfare bill 2025–26: £323.1bn (23.6% of all government spending).
- Civil service headcount: 554,000 - ninth consecutive annual rise.
- Debt per person: £42,300.
The automation programme
- £3.35bn in UK government AI contracts since 2018 (FT).
- £573m in AI contracts awarded in 2025 alone.
- £20bn/year in IT transformation spend (Public Accounts Committee).
- £694m four-year Teleperformance contract for Targeted Case Review support.
- 6,000 staff funded for TCR through 2030 (Autumn 2024 Budget).
50 years of failure
A back-of-envelope tally was presented: NHS NPfIT (£10bn wasted, abandoned 2013); Post Office Horizon (900+ wrongful convictions, at least four suicides); Home Office PNC replacement (5+ years late, £400m+ overrun); Libra magistrates’ courts IT (£146m → £319m); Rural Payments (£75.8m forecast → £350m actual); DfT Shared Services (£57m saving promised, £81m extra cost delivered); 2020 A-level grading; Home Office sham-marriage screening. Cumulative estimated cost of state IT failures since the 1970s: ~£40bn (C4Change).
The DWP’s algorithmic black box
- The Universal Credit Advances ML model profiles over 1 million advance claims per year.
- Six fairness analyses (Dec 2023 – Jul 2024) showed statistically significant bias along age, reported illness (disability proxy), nationality, and couple status - every round.
- The DWP’s own DPIA rated the impact of a biased model 4/5 in severity, then claimed undisclosed mitigations brought residual likelihood to 1/5.
- Across the four pilot models (Capital, Living Together, Self-Employed Earnings, Housing), every disclosed fairness analysis found disproportionality along protected or proxy characteristics.
- The General Matching Service operates across 16+ datasets and 400+ matching rules with no published DPIA or EIA, despite running 700,000 hits per year by 2016/17.
- Risk Review Team: 188,000 UC claims paused April 2020 – March 2023; only ~7,000 reinstated. Suspensions of up to 11 months on record in Hansard.
- Targeted Case Reviews: 4 out of 5 reviewed claimants were already receiving the correct amount. Error-detection rate halved between 2022 and 2025 as the programme scaled.
- Housing Benefit Accuracy Award Initiative: 585,800 reviews flagged 2020–2023; ~204,000 resulted in payment changes against a forecast of ~392,000. Approximately 188,000 people went through intrusive checks for nothing.
Trust and legacy context
- 23% of UK adults trust government to act in the public interest (Ipsos, January 2026), down from 41% in 2019.
- 76% say public services have got worse over the last five years.
- £4.9bn written off in fraudulent Bounce Back Loans; £22bn total fraud and error in COVID schemes.
The most generative thread was the move from critique to construction. The chapter began asking what Logos could build that would not exist in five years’ time without it: something privacy-preserving, low-friction, and structured enough to be useful to journalists and litigators.
Several design constraints surfaced:
- The tool should not require submitters to prove they were wrongly flagged: given the four-in-five base rate, most submissions in any meaningful sample will come from people who were flagged in error.
- The data should be structured for downstream use: not just narrative testimony, but fields that map to model inputs, demographic categories, suspension durations, and outcomes.
- Privacy preservation should be treated as the core requirement, not a feature: submitters are the people most exposed if their participation leaks.
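The "structured for downstream use" constraint can be sketched as a submission record. This is a hypothetical illustration only; every field name below is an assumption for discussion, not an agreed schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FlagReport:
    """One claimant's structured account of being flagged.

    Illustrative sketch of the design constraints discussed:
    no proof of wrongful flagging is required, and the fields
    aim to map onto model inputs, demographics, suspension
    durations, and outcomes rather than free text alone.
    All names are hypothetical."""
    benefit_type: str                  # e.g. "Universal Credit"
    flag_source: Optional[str] = None  # which programme, if known (e.g. "TCR")
    suspension_days: Optional[int] = None   # how long payments were paused
    outcome: Optional[str] = None      # e.g. "reinstated", "reduced", "pending"
    demographics: dict = field(default_factory=dict)  # age band, nationality, ...
    narrative: str = ""                # testimony, kept separate from the fields

# Example submission a journalist-facing dataset could aggregate over:
report = FlagReport(
    benefit_type="Universal Credit",
    flag_source="TCR",
    suspension_days=120,
    outcome="reinstated",
    demographics={"age_band": "25-34"},
)
```

Keeping the narrative separate from the coded fields is what would let journalists and litigators aggregate across submissions without re-reading every testimony.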
A post-meeting conversation - picked up in the cab home from the venue - added a further idea: a small social-loan facility attached to the reporting tool, providing limited financial support to claimants whose payments are suspended during a review, repayable once the review confirms the original DWP flag. This would not be a crowdfunding mechanism; it would be closer to a structured benefit-of-the-doubt instrument, with the reporting data as the underwriting signal.
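A toy version of that underwriting arithmetic, using only the four-in-five base rate cited earlier: following the note above, a loan becomes repayable when the review confirms the original flag. The cohort size and loan amount are hypothetical assumptions, not a costed proposal.

```python
# Toy arithmetic for the social-loan idea. Only the four-in-five
# base rate comes from the notes; cohort and loan size are invented
# for illustration.

cohort = 1_000               # suspended claimants receiving a bridging loan
loan_gbp = 400               # hypothetical loan per claimant (£)
flag_confirmed_rate = 1 / 5  # complement of the four-in-five "already correct" rate

repayable = int(cohort * flag_confirmed_rate)
outstanding = cohort - repayable

print(f"Loans expected to become repayable: {repayable}")   # 200
print(f"Loans expected to stay outstanding: {outstanding}") # 800
```

The point of the sketch is that the reporting data itself supplies the base rate, which is what makes it usable as an underwriting signal.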
Full report here: Notion
