Academic partnerships

Research with real data

A continuous, multi-signal youth-wellbeing dataset across 19 institution types in a single measurement framework. Built for academic partners who want to ask questions current data cannot answer.

Use the open API · Propose a study
Public API today: Pulse aggregates · no key · 60 req/min · weekly cadence
19 institution types: one measurement framework · secondary, PRU, special, FE, hospital, boarding, youth service, children’s home and more
No editorial control: publish what you find · we do not negotiate co-authorship
Strong anonymisation: k=50 · ε=1.0 · no individual student data, ever

The gap

Most youth-MH research runs on snapshots. The world doesn’t.

NHS England’s MHCYP has been published as annual follow-up waves since 2020. The most recent wave is from November 2023. APMS runs roughly every seven years (2023/24 most recent, Part 2 published November 2025). Both are gold-standard prevalence studies. Neither carries weekly cadence, and neither places mainstream secondary, PRU, special, hospital, boarding, FE, and youth-service settings inside the same instrument.

Pulse is a continuous complement, not a replacement. It cannot answer the prevalence questions MHCYP is built for. It can answer cadence and cross-setting questions that MHCYP cannot.

Getting access

One live tier. Two honest pathways.

Public API today. Beyond that, no researcher portal, no accreditation workflow, no automated approval pipeline. We will not pretend otherwise. Anything past Tier 1 is negotiated case by case.

01 · Live · public API

Start with the Pulse API.

National, regional, and institution-type wellbeing aggregates, updated weekly. No API key. No registration. 60 requests per minute per IP. The methodology JSON is served live alongside the data.

k=50 · ε=1.0 Gaussian · 4-week rolling · 19 institution types · 9 English regions plus the devolved nations · Read the API reference
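For orientation, a minimal client sketch. The base URL here is an assumption (only the /v1/pulse/methodology route is named on this page), and the throttle simply respects the published limit of 60 requests per minute per IP:

```typescript
// Sketch of a polite client for the public Pulse API. The base URL is an
// assumption; only /v1/pulse/methodology is a documented route. No key or
// registration is needed, but the limit is 60 requests per minute per IP.
const BASE = "https://api.example.org"; // assumed, not documented

function pulseUrl(path: string): string {
  return new URL(path, BASE).toString();
}

// Minimal sliding-window throttle: at most `limit` requests per rolling window.
function makeThrottle(limit = 60, windowMs = 60_000) {
  const stamps: number[] = [];
  return (now: number): boolean => {
    // Drop timestamps that have aged out of the window.
    while (stamps.length && now - stamps[0] >= windowMs) stamps.shift();
    if (stamps.length >= limit) return false; // caller should wait
    stamps.push(now);
    return true;
  };
}

const allow = makeThrottle();
if (allow(Date.now())) {
  // fetch(pulseUrl("/v1/pulse/methodology")) — the methodology JSON is
  // served live alongside the data.
}
```

The throttle takes the clock as an argument rather than reading it internally, which keeps the rate-limit logic testable without waiting on real time.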
02 · Proposal · email-based

Want institution-level or longer time-window aggregates?

There is no /v1/research/* route today. Anything beyond the public Pulse aggregates is negotiated case by case. Send a one-page proposal: research question, ethics-committee status, data needed, intended outputs, named PI. We will reply within ten working days.

No portal · no accreditation workflow · no fee · email [email protected] · Propose a partnership
03 · Proposal · joint study

Want to co-design a longitudinal cohort or intervention study?

For matched cohorts, custom aggregation pipelines, or tracking specific interventions over time, we work alongside you as data engineers. You hold the research design and the publication. Your name on the paper. We do not retain editorial control.

Same proposal route · separate scoping conversation if it fits · honest about capacity · Open a conversation

Tier 1 is real code. Tiers 2 and 3 are honest pathways. We chose to label them as proposals because there is no portal, no automated approval pipeline, and no commercial gate. If we ship coded researcher access in future, this panel will be the first thing that changes.

Open empirical questions

Six questions nobody has had the data to answer.

Each is testable with the cadence and cross-setting breadth Pulse provides. We have no published findings on any of them. They are research questions for academic partners to lead, not claims we are making about results.

When in the year do things actually get worse?

Everyone assumes January and exam season. No-one has weekly population-level data across a full academic year. A continuous signal could map the real risk calendar and let pastoral teams prepare instead of react.

Do specific interventions actually change outcomes?

A school adopts a Mental Health Support Team. A local authority changes its pastoral policy. Did wellbeing measurably shift in the affected cohort, against matched controls? Pulse cadence and 19-institution-type coverage make this question testable for the first time.

What does a PRU do differently from a mainstream school?

Pupil referral units, special schools, boarding schools, hospital schools. Same age groups, very different environments. A common measurement framework lets you compare across settings without different surveys, different teams, different definitions.

When someone says they are fine, are they?

Students type one thing and say another. Text and voice carry different signals, including emotional tone. When the two disagree, which one predicts what happens next? That question has not had data behind it before.

Can a crisis be visible before the referral?

Cognitive shifts often precede distress: catastrophising, withdrawal, disengagement. If those shifts are detectable early enough at population level, intervention timing could move forward. The dataset exists to test whether that is real or romantic.

Where is the right alert threshold?

A boarding school at 10pm is not the same as a secondary school at 2pm. Setting alert thresholds is a calibration problem with real costs in both directions: missed cases and pastoral-team alarm fatigue. Cohort-aware thresholds are an open empirical question.

We have no published findings on any of these. They are open empirical questions we would like academic partners to lead. If you propose a study on one of them, we will support it without claiming co-authorship or editorial control.

Five Safes posture

Two of five satisfied in code. Three by partnership today.

The Five Safes (Desai, Ritchie & Welpton, 2016) is the de facto framework for trusted research environments in the UK, used by ONS, UK Statistics Authority, HDR UK, and ADR UK. We map honestly against each pillar. Safe Data and Safe Outputs are enforced in the publish pipeline. Safe People, Projects, and Settings are partnership-negotiated until their workflows ship.

The Five Safes · Desai, Ritchie & Welpton, 2016 · ONS / UKSA / HDR UK
  • Safe People · Partnership
    Are the people accessing the data trustworthy? No researcher accreditation workflow is shipped. Tier-2 access is granted case by case after a written proposal and ethics-committee evidence. The aim is to align with UKSA Researcher Accreditation in future.
  • Safe Projects · Partnership
    Is the proposed use of the data appropriate? No project-approval pipeline. Each proposal is reviewed against our published purposes (research on youth wellbeing for public benefit). We turn down anything outside that scope and document the reasoning in the reply.
  • Safe Settings · Partnership
    Does the access environment prevent unauthorised use? No Trusted Research Environment is shipped. The public Pulse API meets Safe Settings by being public. Tier-2 and Tier-3 work currently happens via secure file transfer to the institution named in the data-sharing agreement.
  • Safe Data · Satisfied
    Is the data itself protective enough? Yes. k-anonymity at k=50, differential privacy at ε=1.0 (Gaussian mechanism, calibrated noise from a cryptographically secure generator), 4-week rolling smoothing, and three-week cell-suppression hysteresis. Verified in apps/api/src/temporal/aggregation-activities.ts and packages/core/src/aggregation/k-anonymity.ts.
  • Safe Outputs · Satisfied
    Are the outputs themselves disclosure-safe? Yes. The publish step enforces the same k=50 floor on every released cell. Cross-tabulation is capped at three dimensions. Suppressed cells return null, not blank, so absence cannot be inferred from gaps. Verified in the same pipeline.
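As an illustration of how the two code-enforced pillars interact, a sketch (not the production pipeline, which lives in the files named above; the cell shape is an assumption):

```typescript
// Illustrative sketch of Safe Data and Safe Outputs as code, not the
// production pipeline. The ε=1.0 Gaussian noise is assumed to have been
// applied upstream, before this publish step.
const K_FLOOR = 50;       // minimum cohort size per released cell
const MAX_DIMENSIONS = 3; // cross-tabulation cap

interface RawCell {
  dims: string[]; // e.g. ["region", "institution-type"] — assumed shape
  n: number;      // underlying cohort size
  value: number;  // already-noised aggregate
}

// A cell below the k=50 floor is published as null rather than dropped,
// so its absence cannot be inferred from a gap in the output.
function publishCell(cell: RawCell): number | null {
  if (cell.dims.length > MAX_DIMENSIONS) {
    throw new Error("cross-tabulation beyond three dimensions is refused");
  }
  return cell.n >= K_FLOOR ? cell.value : null;
}
```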

Two of five satisfied in code. Three handled by partnership today. We will not relabel a row until the corresponding workflow is shipped and auditable.

Honest by design

What we do, and what we will never do

01

We support replication.

Methodology, anonymisation parameters, and the live JSON at /v1/pulse/methodology are versioned and dated. Any change is announced. The same query at the same week should return the same value, forever.

We don’t selectively report.

No internal pre-screening of releases for tone. Negative findings, null findings, and methodology limitations all get the same publication channel as positive ones.

02

We disclose limitations.

Self-selection bias of opted-in institutions, sample composition, k=50 suppression effects, ε=1.0 noise on small cells. Documented inline with every figure.

We don’t hide the gaps.

Suppressed cells return null, not blank, so a researcher can see where coverage is thin. Pre-publication state is admitted on every page that carries our numbers.
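Because suppression arrives as an explicit null rather than a missing key, coverage can be measured instead of guessed. A sketch, assuming a flat record of cells per weekly release:

```typescript
// Sketch: suppressed cells arrive as null, not as missing keys, so thin
// coverage is directly countable. The record shape is an assumption.
type Week = Record<string, number | null>;

// Fraction of cells suppressed in a weekly release: a simple coverage signal
// a researcher can plot before deciding whether a cut of the data is usable.
function suppressionRate(week: Week): number {
  const cells = Object.values(week);
  if (cells.length === 0) return 0;
  const suppressed = cells.filter((v) => v === null).length;
  return suppressed / cells.length;
}
```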

03

You publish what you find.

No editorial review. No right of first refusal. No embargo unless you ask for one. If your work shows that something we built is not working, we want that out in the literature.

We don’t hold the pen.

Authorship is yours. We are happy to be named in acknowledgements when we contributed materially to data engineering. We do not negotiate for co-author lines or for veto rights over wording.

This isn’t a partnerships brochure.

It’s how the research relationship is built.

These are children. We don’t get to be careless.

Every young person who talks to Poyntr is trusting us with something private. That trust extends to anyone who touches the data. Four rules apply, and they are not flexible.

Your ethics committee signs off first. Before any access beyond the public Pulse API, your institution’s research ethics committee approves the work. We do not shortcut this. If your university says no, the answer is no.

Your methods section names the anonymisation. Any publication using Pulse data includes the methodology version returned by /v1/pulse/methodology, plus the limitations we publish. We supply boilerplate you can drop in. Hiding methodology is not allowed.

Re-identification ends the relationship. Attempting to work backwards from aggregate data to identify a student or a school terminates the agreement immediately. Not after a warning. Not after a conversation.

Publish whatever you find. No editorial review, no embargo unless you ask, no veto. If the data shows something we built is not working, we want that in the literature. Public benefit beats reputational comfort.
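The methods-section rule above can be sketched as a helper that builds the citation line from the live methodology JSON. The field names here are assumptions; only the route /v1/pulse/methodology and the parameters k=50 and ε=1.0 appear on this page:

```typescript
// Sketch only: turns the methodology JSON into the line a methods section
// needs. The field names (version, k, epsilon) are assumptions.
interface Methodology {
  version: string; // assumed field name
  k: number;
  epsilon: number;
}

function methodsLine(m: Methodology): string {
  return (
    `Aggregates from the Pulse API, methodology version ${m.version} ` +
    `(k-anonymity k=${m.k}; differential privacy ε=${m.epsilon}).`
  );
}
```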

Start a conversation

If you’re working on something that could help young people.

One-page proposal: research question, ethics-committee status, data needed, intended outputs, named PI. Reply within ten working days. No commercial pitch attached.

Propose a study

[email protected]