
Are TREs a bad idea?

19 February 2026

Trusted Research Environments — or TREs — are often pitched as the future of safe data access. Governments love them, big research programmes promote them, and policy papers talk about them like they’re the silver bullet for patient privacy.

But when you look past the branding, TREs come with some serious drawbacks. In fact, in many everyday research settings, they can slow innovation, complicate collaboration, and introduce more barriers than benefits.

So let’s break down the problems — plainly, practically, and with evidence.

1. TREs make research slower and more bureaucratic

One of the biggest issues with TREs is that they multiply the amount of admin researchers need to do, especially when a project requires data from more than one environment. Rather than pulling approved data into a single workspace, researchers often need:

  • separate access applications
  • separate technical checks
  • separate governance approvals
  • separate log‑ins
  • separate restrictions on what tools they can use

This can add months to a project timeline. [phgfoundation.org]

If your research involves multimodal data (e.g., imaging + genomics + clinical records) stored across different TREs, the situation becomes even worse: you might be stuck stitching together results in awkward formats because the systems don’t talk to each other. [phgfoundation.org]

In fast‑moving fields — like cancer research, infectious disease, or AI model development — TRE-related delays aren’t just annoying. They can make a study unviable.

2. They’re often “too closed” to be genuinely useful

To make data “safe,” TREs lock it down. But sometimes they lock it down too much.

TREs typically restrict:

  • moving data out
  • bringing external tools in
  • running custom code
  • exporting large outputs
  • collaborative work across institutions

This means researchers may not be able to:

  • use modern analytics tools
  • reproduce previous analyses
  • validate methods with external teams
  • do cross‑centre quality assurance

Several policy reviews note that TRE “closedness” directly inhibits analysis across environments and makes combining results slow and expensive. [phgfoundation.org]

Ironically, TREs can end up hurting research quality in the name of protecting the data itself.

3. TREs don’t actually “build trust” — they remove the need for it

Here’s something surprising: the whole “trusted” part of Trusted Research Environments doesn’t really hold up under ethical scrutiny.

A Journal of Medical Ethics analysis argues that TREs don’t build trust at all — they eliminate the vulnerability that trust requires, because everything is locked down, audited, and restricted. [jme.bmj.com]

Trust comes from transparency, professionalism, and accountability — not from designing a system where trust becomes irrelevant because no one has meaningful autonomy. TREs solve a technical risk but create a social one: the public is asked to “trust the system” rather than the people running it.

This makes the name itself somewhat misleading.

4. TREs can block innovation at scale

TREs were never designed for huge, modern datasets like:

  • whole-genome sequences
  • national imaging repositories
  • drug‑discovery screens
  • population‑scale multi‑omics

According to a PHG Foundation briefing, TREs cause major problems for large‑scale and multimodal datasets, especially in genomics, where data volumes hit petabytes. Even just storing these datasets centrally creates massive energy use, financial cost, and sustainability concerns. [phgfoundation.org]

As big datasets get bigger, TREs become a bottleneck — not a solution.

5. They don’t play well together (yet)

A huge issue is that TREs across organisations or nations aren’t federated. This means:

  • No unified login
  • No shared technical standards
  • No standardised governance
  • No consistent analytical tooling

So researchers must effectively operate in multiple incompatible mini‑ecosystems, repeating work each time. [phgfoundation.org]

Several expert groups argue that TREs are not future‑proof unless they become interoperable — and currently, most are not. [phgfoundation.org]

6. TREs are expensive, resource‑intensive, and hard to scale

Setting up a TRE requires:

  • secure hosting
  • specialised software
  • strict monitoring
  • trained administrators
  • ethics and access management teams
  • continual auditing and compliance checks

This makes TREs disproportionately expensive for smaller institutions, charities, NHS services, or academic groups.

Even at the national level, building multiple TREs has created fragmented, duplicated systems rather than a single, efficient platform. Many groups (including ARDC in Australia) acknowledge that TREs require heavy coordination just to stay functional. [ardc.edu.au]

So… are TREs always bad?

Not at all.

TREs are a good fit when:

  • handling highly sensitive data
  • dealing with small, well‑defined research teams
  • working within a single institution’s dataset
  • operating where clear audit trails and tight control are essential

But for most everyday research, especially multi‑centre or data‑intensive studies, the downsides far outweigh the supposed benefits.

TREs can be:

  • too slow
  • too fragmented
  • too heavily locked down
  • too technically limiting
  • too bureaucratic

And although they protect data, they can choke the research that data is supposed to support.

Final thought

TREs aren’t a magic answer — and in most circumstances, they’re not even a good one. They solve important privacy concerns, but at the cost of agility, collaboration, innovation, and, ironically, genuine trust.

If we want research systems that are safe and scientifically productive, we need to think beyond simply locking data away — and build environments that support both privacy and progress.