Stephen Bustin PhD FRSB MAE
26 October 2025
FEC: The True Currency of British Science
Why Britain’s universities now prize overheads more than ideas.
Once upon a time, British universities judged scientists by the quality of their work. Now they judge them by the profitability of their grant overheads. The acronym FEC (Full Economic Costing) has quietly replaced IQ, h-index, and perhaps even integrity as the metric that really matters. It sounds like an accounting detail, but it defines what research is possible, who gets promoted, and which discoveries are made or, more often, not made.
FEC was meant to ensure that universities recovered the real costs of doing research: the electricity, technicians, and infrastructure that keep the lights on while the microscopes hum. In practice it has turned into a game of financial engineering. When a Research Council grant pays 80 percent of FEC, the award includes estates and indirect costs, overhead income the university treats as free money to feed the administrative ecosystem. Charitable or industrial grants, which rarely carry such generous top-ups, are therefore treated as philanthropic eccentricities. One can publish eleven papers in a year, generate novel diagnostics, and still be told one's "income contribution" is inadequate because the wrong funding stream paid for it.
This is not a conspiracy; it is worse than that. It is policy. Managers are rewarded for “grant capture”, not discovery. Academics are encouraged to write funding applications rather than papers, since every unsuccessful proposal still demonstrates “engagement with strategy”. Entire offices exist to massage FEC spreadsheets, justify “pathways to impact”, and extract maximum overhead from minimal science. The result is a research culture that treats intellectual curiosity as a cost centre and bureaucracy as a growth industry.
Britain’s research councils continue to trumpet “excellence”, but excellence now means compliance: correctly formatted budgets, cross-cutting themes, and enough buzzwords to reassure a civil servant that innovation has been risk-assessed into submission. Real innovation, of course, is inconveniently unforecastable. If PCR had required an FEC justification, it would still be under review.
Stephen Bustin PhD FRSB MAE
12 October 2025
The £1,000 Test That Should Cost £5: Bringing Reason Back to Prostate Cancer PCR Testing
A critique of overpriced, opaque molecular tests and the case for a transparent, same-day PCR alternative.
In May 2025, the office of former President Joe Biden confirmed that he had been diagnosed with an aggressive prostate cancer that had already metastasised to bone. He is reportedly receiving hormone and radiation therapy, but the disclosure has reignited a familiar public discussion: how, in an era of precision medicine, do so many prostate cancers still escape early detection while thousands of men continue to undergo invasive biopsies that reveal only indolent disease? The story has also drawn attention to a quieter problem: the growing dependence on expensive, opaque molecular assays that claim to guide diagnosis but rarely withstand scrutiny. The Biden case highlights what happens when decades of technological progress fail to translate into rational clinical practice.
The limits of current triage tools
For decades, prostate-cancer diagnosis has relied on serum PSA testing and imaging. PSA is a sensitive but profoundly non-specific marker; its elevation can reflect benign hyperplasia, inflammation, or even recent exercise. MRI has improved lesion localisation, but interpretation remains subjective and resource-intensive. The combination of these methods identifies many men who do not need intervention while missing others whose disease is already advanced. Each biopsy carries not only the immediate risks of bleeding and infection but also the psychological weight of uncertainty, repeated testing, and overtreatment. Biden’s illness is an extreme outcome, but it underscores a larger systemic failure: diagnostic methods that detect structural abnormalities more readily than biological aggressiveness. The critical distinction between indolent and clinically significant disease is molecular, not morphological.
Why a molecular approach is needed
Gene-expression profiling offers a direct measure of tumour behaviour. RNA from prostate-derived cells, shed into urine, carries a transcriptomic fingerprint that can distinguish between benign, indolent, and aggressive states. Several commercial assays exploit this concept, combining multiple RNA targets into risk scores that inform biopsy decisions. These tests have proven that molecular data can refine clinical judgement, but they remain expensive, slow, and opaque. Most require centralised processing, bespoke reagents, and analysis pipelines that cannot be independently verified. The result is limited accessibility and persistent uncertainty about analytical robustness.

The current commercial landscape is also fragile. Urine-based testing presents real technical obstacles: RNA in urine is unstable, concentrations are low, and degradation begins almost immediately after collection. Even with stabilising buffers, recovery is inconsistent and results depend critically on how the sample is handled. Despite this, several marketed assays are promoted as definitive molecular solutions, although few disclose the analytical information required to evaluate them. Reaction efficiencies, reference-gene stability, and detection limits are rarely reported, and the algorithms that generate risk scores are proprietary. The outcome is a set of assays that claim quantitative accuracy without demonstrating it.

Cost compounds the problem. A single commercial urine test can exceed a thousand pounds once sample shipping, central processing, and licensing fees are included. The underlying chemistry remains routine PCR, which, when properly optimised, costs only a few pounds per reaction. The barrier is not science but the business model that has grown around it. The result is a diagnostic ecosystem that rewards secrecy and scale rather than transparency and reproducibility.
Why PCR remains the most realistic platform
Real-time PCR remains unmatched in clinical versatility. It is quantitative, sensitive to single-copy targets, and already embedded in hospital laboratories. The technology itself is not the constraint; the challenge lies in its consistent execution. Most failures in molecular diagnostics stem from inconsistent sample handling, poorly validated reference genes, or unverified reaction efficiencies rather than from the instrumentation. Our work focuses on developing a same-day RT-qPCR workflow specifically adapted for urine. The aim is to stabilise RNA at the point of collection, reverse-transcribe and amplify it using a streamlined protocol, and deliver quantitative results within hours. The entire process can be performed on standard qPCR instruments using well-established chemistries. The goal is not to reinvent PCR but to apply it rigorously through precise temperature control, validated oligonucleotide design, and transparent analysis parameters that yield reproducible data rather than decorative graphs.
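To make "unverified reaction efficiencies" concrete, here is a minimal sketch, in Python, of one standard check: Cq values from a ten-fold dilution series are regressed against the log of the input, and efficiency follows from the slope as E = 10^(-1/slope) - 1. The copy numbers and Cq values below are invented purely for illustration.

```python
# Minimal sketch: estimate qPCR amplification efficiency from a ten-fold dilution series.
# The copy numbers and Cq values are invented for illustration only.
import numpy as np

copies = np.array([1e6, 1e5, 1e4, 1e3, 1e2])   # serial dilution of a quantified standard
cq = np.array([18.1, 21.6, 24.9, 28.2, 31.7])  # Cq measured at each dilution

# Standard curve: regress Cq against log10(input); a slope near -3.32 means ~100% efficiency.
slope, intercept = np.polyfit(np.log10(copies), cq, 1)

# Efficiency from the slope: E = 10^(-1/slope) - 1, where 1.0 is perfect doubling per cycle.
efficiency = 10 ** (-1.0 / slope) - 1.0

# Linearity of the dilution series is part of the same check.
r = np.corrcoef(np.log10(copies), cq)[0, 1]
print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}, R^2 = {r ** 2:.4f}")
```

An assay whose slope drifts far from -3.32, or whose dilution points no longer sit on a straight line, is exactly the kind of hidden variable that makes Cq values from different assays incomparable.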
Proof of principle and the path to clinical translation
Early laboratory studies demonstrate that this approach is feasible. Using well-defined reference materials and controlled RNA inputs, we have shown that stable, interpretable expression patterns can be derived from urinary RNA in a single working day. The next phase is a clinical pilot, conducted with urology colleagues, to test reproducibility under routine conditions. Key variables include sample stability, assay precision, and correlation with biopsy and MRI findings. The broader objective is not simply another assay but a demonstration that molecular diagnostics can be decentralised without loss of analytical integrity. A same-day result would allow a urologist to discuss molecular risk with a patient before deciding on biopsy, integrating molecular, imaging, and clinical data into one informed decision rather than a sequence of disconnected tests.
Reframing the conversation
Biden’s diagnosis, and the public response to it, remind us that prostate-cancer detection still relies on imperfect surrogates. The challenge is not only to identify cancer earlier but also to distinguish which cancers matter. That distinction can only be made molecularly. The continuing reliance on slow, centralised tests reflects inertia rather than necessity. PCR, when applied with proper methodological discipline, remains the most powerful and democratic of molecular tools. It is rapid, inexpensive, and transparent. Its limitations are human: poor experimental design, casual interpretation, and the illusion that automation equates to accuracy. When those are corrected, PCR delivers data of clinical quality in minutes rather than days. A same-day molecular triage test is not speculative research but an achievable correction to a flawed diagnostic status quo. By focusing on accuracy, speed, and reproducibility, such assays could spare thousands of men needless biopsies while ensuring that those who truly need intervention are identified early. That is how diagnostic technology should serve medicine, quietly, efficiently, and with evidence rather than theatre.
Stephen Bustin PhD FRSB MAE
30 September 2025
The Five Biggest Mistakes in PCR Setup
PCR seems simple: mix template, primers and master mix, place the tube in a thermal cycler, come back an hour later, collect Cq values and publish the results. In reality, a handful of recurring errors undermine most published experiments. These five pitfalls are so pervasive that much of the qPCR literature is built on results that look convincing but are in fact artefacts.
1. Using poor-quality template
PCR cannot rescue bad input. Contaminated, degraded, or variable nucleic acids create scatter that no number of cycles can fix. Quantifying expression from unchecked RNA is guesswork dressed up as measurement.
2. Contamination - the invisible enemy
Carryover from previous runs, aerosolised amplicons, or careless pipetting generate false positives indistinguishable from real signal. Negative controls that are not consistently flat mean the data cannot be trusted. Pretending otherwise is self-deception.
3. Misdesigned or unvalidated primers
Primers that dimerise, anneal nonspecifically, or amplify genomic DNA yield misleading results. Software predictions are useful, but not enough. Every primer pair must be validated empirically for specificity, efficiency, and product identity.
4. Ignoring controls
No-template controls, no-RT controls, and positive controls are not optional. Without them, you cannot distinguish biology from artefact. The absence of controls makes the dataset uninterpretable, however “clean” the amplification plots look.
5. Misusing numbers: Cq values are not results
Cqs are intermediate measurements, not data to analyse directly. They are logarithmic quantities, so reporting them with means ± SD is misleading. The correct approach is to convert Cqs into copy numbers using the actual efficiency of each assay, then propagate error properly. Otherwise, the numbers give an illusion of precision while hiding substantial uncertainty.
Worked Example: When Efficiencies Differ
Suppose you compare a target gene and a reference gene and, for illustration, assume that each assay's standard curve places a single copy at Cq 40. Both genes return Cq = 25, fifteen cycles above that intercept. On the surface, they look identical.
Target gene efficiency = 100% → copy number = 2^15 ≈ 32,768
Reference gene efficiency = 90% → copy number = 1.9^15 ≈ 15,181
Although the Cqs are the same, the calculated copy numbers differ by roughly 2.2-fold purely because of efficiency.
If you analyse this with ∆∆Cq and assume equal efficiency, the conclusion is "no difference." In fact, you have introduced a systematic error of more than two-fold.
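For readers who want to reproduce the arithmetic, the sketch below (in Python) implements the same calculation under the illustrative assumption used above, namely that each assay's standard curve places a single copy at Cq 40. The helper copies_from_cq and the triplicate Cq values are invented for this example; the point is simply that copy numbers must be computed with each assay's actual efficiency, and that replicates should be averaged on the copy-number scale, not as raw Cqs.

```python
# Minimal sketch of the worked example above (illustrative numbers only).
# Assumption: each assay's standard curve places a single copy at Cq 40.

def copies_from_cq(cq: float, efficiency: float, single_copy_cq: float = 40.0) -> float:
    """Efficiency-corrected copy number: N = (1 + E) ** (single_copy_cq - Cq)."""
    return (1.0 + efficiency) ** (single_copy_cq - cq)

# Identical Cqs, different efficiencies, very different copy numbers.
target_copies = copies_from_cq(cq=25.0, efficiency=1.0)     # 100% efficient assay, ~32,768
reference_copies = copies_from_cq(cq=25.0, efficiency=0.9)  #  90% efficient assay, ~15,181

print(f"target    ~ {target_copies:,.0f} copies")
print(f"reference ~ {reference_copies:,.0f} copies")
print(f"apparent fold-difference ~ {target_copies / reference_copies:.1f}")  # ~2.2

# Replicates belong on the copy-number scale: summarising raw Cqs with mean +/- SD
# treats a logarithmic quantity as if it were linear and understates the real spread.
replicate_cqs = [24.8, 25.0, 25.3]  # invented triplicate
replicate_copies = [copies_from_cq(c, efficiency=1.0) for c in replicate_cqs]
print(f"triplicate mean ~ {sum(replicate_copies) / len(replicate_copies):,.0f} copies")
```

Nothing here is exotic: the error in the worked example is simply what happens when the efficiency term is dropped.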
Why this matters
These mistakes are not cosmetic. They determine whether an experiment produces data that can withstand scrutiny or collapses at the first attempt to replicate. PCR is a powerful tool, but only if its limitations and sources of error are faced head-on.
16th September 2025
Peer Review: Science’s Necessary but Flawed Gatekeeper
A guide for the curious reader: what peer review is, what it is not, and why it still matters
Most people take “peer-reviewed” as a shorthand for “reliable.” Journalists use it as a badge of quality. Politicians invoke it to bolster policy. But what actually is peer review, and why does it matter?
At its simplest, peer review is a checking system. When a scientist completes a piece of research and wants to publish it, the journal editor sends it to other experts, the “peers.” Their job is to read the manuscript, spot errors, ask questions, and suggest improvements. In theory, this ensures that flawed work is corrected before it enters the scientific record. At its best, peer review is collaborative quality control: the sharp eyes of colleagues helping refine an idea until it is clearer, stronger, and more useful.
That is the ideal. In practice, things are more complicated. Reviewers are human. They are busy, opinionated, and bring their own biases. Sometimes reviews are careful, constructive, even generous. Other times they are hasty or dismissive. Reviewers may lack familiarity with some methods, allowing weak results to slip through. Incomplete reporting can go unchallenged when it should have been flagged. And occasionally reviews cross the line into hostility or ridicule, more personal attack than scientific critique. When such comments are passed on unfiltered, they can do real harm. Authors may be demoralised, valuable work delayed, and younger researchers discouraged from staying in science.

Different models of peer review try to address these problems. The most common is single-anonymous: reviewers know the author's name, but the author does not know the reviewer's. This protects referees but leaves room for bias. Double-anonymous (often called double-blind) hides both identities, reducing the influence of reputation or gender but not eliminating it. Open review publishes names and reports alongside the paper, which increases accountability but can make reviewers more cautious, especially if they are junior and critiquing senior figures. Each system has advantages and drawbacks, and none is perfect. What matters most is not the model but whether editors take their responsibility seriously, ensuring that reviews are fair, evidence-based, and respectful.
So what can be done? Some journals already screen reviews before sending them on. If a report is rude, vague, or inaccurate, the reviewer is asked to revise it. Others use structured forms to keep comments focused. Training schemes help newcomers understand what is expected. And patterns of reviewer behaviour can be tracked, so repeat offenders are not invited back. These are not radical reforms. They are practical steps that many journals could take tomorrow.

Even with safeguards, it is important to remember that peer review is not a guarantee of truth. Papers that have passed review can still be wrong. Methods may be described poorly, data may be incomplete, and conclusions may overreach. Retractions happen. High-profile mistakes remind us that "peer-reviewed" is a filter, not a seal of perfection.

Why keep it then? Because, flawed as it is, peer review remains the best system science has for collective error-checking. It slows down bad ideas, improves good ones, and provides a forum for debate. Crucially, it is built on the goodwill of the scientific community. Most reviewers give their time without reward, motivated by a shared incentive to uphold standards.

For the general reader, the takeaway is this: treat "peer-reviewed" as a mark of effort and scrutiny, not as a guarantee. Science is a human enterprise, and like all human systems, it is messy. But by valuing fairness, accountability, and professionalism within peer review, we can make it closer to what it ought to be: a process that strengthens science rather than undermines it.
13th September 2025
Britain’s Vanishing Science Base
Brexit, broken funding and empty slogans are dismantling the very system that once made Britain a science powerhouse.
It is 2025 and the UK's claim to be a "science superpower" looks increasingly hollow. Pharmaceutical companies are pulling research and development out of Britain, taking with them skilled jobs, clinical trials and access to new medicines. The press releases talk of "global portfolio optimisation" and "strategic realignment." The reality is more blunt: fewer opportunities for patients to benefit from cutting-edge therapies, and fewer secure careers for the scientists who once made the UK a hub for innovation.

This retreat has been years in the making. Brexit wiped out the UK's net gain from EU research funding and erected barriers that make it harder for postdocs to come here and harder for ours to go there. Universities face a funding crisis, while young researchers are stuck with insecure contracts, low salaries and unmanageable debt. Now, with pharma retreating, one of the few routes out of that insecurity is closing. Start-ups, much touted as the solution, are not a substitute: they are fragile, rarely internationally competitive, and do not generate jobs on the scale that industry once did. The result is a slow hollowing out. Britain still has world-class universities and laboratories, but without stability and investment, their discoveries will be developed elsewhere. And soon it will not just be that our scientists leave. It will be that a generation is never educated here in the first place.
The irony is that my own generation was given every opportunity. We paid no tuition fees, received grants to study, and entered a system that still offered the prospect of a secure academic or industrial career. What do first-year students see today? Debt, short-term contracts, shrinking prospects and now the disappearance of the industrial jobs that once offered an alternative to academia. And our Prime Minister, ministers and MPs, all of whom enjoyed these benefits, keep the drawbridge pulled up behind them.
Yet decline is not inevitable. The solutions are straightforward, if politically inconvenient. Rejoin Horizon Europe fully and restore the freedom of researchers to move in and out of the UK. Stabilise university funding and stop relying on overseas student fees as a survival strategy. Replace insecure short-term contracts with sustainable research careers. Make the UK worth pharma’s while again by offering a predictable regulatory environment, faster trial approvals through the NHS, and a clear national framework for drug pricing. And above all, stop fetishising start-ups as if they can replace the scale and stability of industrial R&D.
Britain still has extraordinary scientists, laboratories and ideas. But without a government prepared to move beyond empty slogans and offer a genuine vision for science, those strengths will be wasted. Knowledge is universal and does not belong to one country. Yet Britain has always been known for attracting the best minds and working with them to move science forward while also enriching this country. How can we abandon all that?
8th September 2025
When Bad Science Fuels a Public Health Crisis
It is 2025 and children are dying of measles in Europe and the United States.
Once-eliminated diseases are in danger of returning. And in a staggering policy reversal, some US states are now starting to roll back long-standing childhood vaccine mandates.
The consequences will be catastrophic, but they did not come out of nowhere. They are the aftershocks of a failure that began more than two decades ago, when a single scientific paper helped ignite a global wave of vaccine hesitancy.
In 2002, a peer-reviewed publication claimed to have found traces of measles virus in the gut tissue of children with autism. The result appeared to bolster the false belief that the MMR vaccine might somehow cause autism. The findings were based on laboratory tests, but the study rested on a combination of flawed data, contamination, and selective, misleading reporting. The results were never replicated, while hundreds of subsequent studies, involving hundreds of thousands of children, demonstrated the opposite. And yet, the damage was done.
Two decades later, we are still living with the consequences. In the United States, vaccine refusal is now a mainstream political identity and measles outbreaks have returned there as well as in Europe. And globally, trust in public health guidance has been fractured. That a single study could ignite such long-lasting harm might seem absurd. But the real absurdity is that the problems behind that paper were not unique. Scientists knew then, as they do now, that many biomedical studies rest on shaky foundations.
Generally, the issue is not fraud. It is more insidious than that. It is the quiet corrosion of scientific standards under the weight of institutional pressure. Today, scientists are judged not by the quality of their work but by the quantity: how many papers they publish, how much funding they bring in, how frequently they are cited. Time-consuming work, such as running controls, validating methods and verifying results, often comes second to speed and impact. Coupled with the tendency of funders to reward novelty and the publicity generated by bold claims, this means that scientific rigour is often overlooked.
Conventional media have not helped: scientific findings are now press-released before they are peer-reviewed, preliminary results are treated uncritically as breakthroughs, and established expertise is given the same airtime as beliefs hatched in the pub. And then comes the backlash, amplified and accelerated by social media, which thrive on outrage, distrust and conspiracy. In that environment, science no longer appears cautious and self-correcting. It looks contradictory, politicised and incoherent. And these cracks are then seized on and exploited by reckless, narcissistic politicians.
Take the COVID-19 pandemic. The scale-up of testing was, by any measure, an extraordinary achievement. Within weeks, laboratories around the world were detecting a novel virus with impressive speed and accuracy. But beneath the surface, a longstanding issue re-emerged: the technology powering those tests, the polymerase chain reaction or PCR, was already known to be problematic. For years, researchers had warned that PCR was frequently misused. Poor test designs, weak validation and opaque reporting had become common. Together with a group of international experts, I helped publish a set of guidelines in 2009 on how to conduct and report these types of experiments. Despite nearly 20,000 citations, they remain widely ignored. During the pandemic, those technical shortcomings collided with political urgency and media sensationalism. False positives, conflicting test results and confusing terminology all fuelled suspicion. The fact that most tests worked well became irrelevant. What lingered in the public imagination were the failures and these were, and indeed still are, exaggerated and weaponised.
This is what happens when science fails to fix its own house. The credibility we take for granted can erode quickly. Not because the public are anti-science, but because we gave them too many reasons to be sceptical. Fixing it will not be easy. But it starts with humility. With a willingness to look critically at the tools we use and the culture we have built around them. And with an understanding that scientific integrity is not just about avoiding fraud. It is about valuing precision over prestige, transparency over trendiness and trustworthiness over speed. We cannot afford to get this wrong again.