Are data trusts an appropriate business model for the developing world?
When Julie Parker was asked to join the advisory board of Insight, an organization that oversees a slew of NHS eye scans and other images for research, she accepted without hesitation. The retired insurance adjuster lives with macular degeneration, which robs sufferers of their central vision. This gave her a particular interest in the type of breakthroughs this data could produce.
Parker’s unpaid role points to a new data management model designed to ease privacy concerns. These have intensified in recent years as the role of tech giants in aggregating and mining personal data has come under increasing scrutiny.
Parker is part of a team reviewing applications for the eye scan dataset from NHS, academic researchers and industry. The team acts as a go-between, judging applications against published criteria, including whether each application poses a significant risk to individuals’ privacy and whether the outcome strikes a “balance between the public good, scientific discovery and the creation of value”.
In this way, Insight acts as a “data trust” – and the use of such trusts has been singled out by The Lancet and Financial Times Commission on Governance of the Future of Health 2030 as a promising model for the years ahead. There are, however, questions about how and by whom key decisions on data sharing are made – and whether a model that originated in the wealthy West is appropriate for the developing world.
Insight is one of nine data hubs funded by Health Data Research UK (HDR UK), an independent grouping of healthcare and research organizations that seeks to establish best practice for the ethical use of British health data. David Seymour, its chief executive, says the positives and risks of using data have been “highlighted and amplified” by Covid-19. The pandemic has not only increased the amount of information available, but has also led to faster and larger-scale data sharing under the demands of the public health emergency.
The main risk, suggests Seymour, concerns “public perception and understanding . . . because when decisions are made quickly, they may not always be communicated as transparently, and the public involvement in these decision-making processes is not always as strong as it should be”.
The data trust model, he says, can be key to providing a needed layer of reassurance to the public, “not just about who gets access, but under what terms that access is granted.” Seymour sees data trusts as part of a larger approach known as trusted research environments. HDR UK recently defined principles and best practices for these “data havens” which provide researchers with a single location to access valuable datasets “similar to a secure reference library”.
Parker says she and her colleagues “ask a lot of questions” about every request they receive from research teams. Although they have yet to deny a request outright, a couple have been sent back for more information. “We don’t reject outright, because that would be wrong. But we want to be reassured [that the data will be safe]”, she says.
Jack Hardinges, program manager for data institutions at the Open Data Institute (ODI), a nonprofit that advocates an “open and trusted data ecosystem”, says it’s important not to define the concept of a data trust too narrowly. Hardinges, who is responsible for ODI’s work on data stewardship, suggests it has come to be defined in a way that is “specific and niche, in that it is about creating a particular type of trust relationship and that trustees manage the data on behalf of a group of individuals”.
He notes that other approaches to data stewardship are also emerging, such as that taken by Open Humans, a US-based organization that allows individuals to donate data from wearable devices such as Fitbits, or medical records, “and . . . make sure it’s used to research a particular condition or cause. It’s about bottom-up empowerment of individuals to exercise control over data about them rather than giving that control to someone else.”
Hardinges adds that for data trusts: “Who does the trusteeship around the data is important. We shouldn’t inherently trust it just because it’s called a data trust.”
Such caveats may be even more relevant in the developing world. Amandeep Gill, one of the moving minds behind the International Digital Health & AI Research Collaborative (I-Dair) – which is developing a global platform “to enable inclusive, impactful and responsible research on digital health and AI for health” – says the principal question is: “How is the thing governed and in whose name?”
In Africa and Asia, there are fears that data will be passed on to Western researchers without a clear route for the people who generated the information to benefit. Gill has seen such sensitivities rise in recent years. “There’s a risk that this whole conversation about data trusts will turn into, ‘Give us your data and we’ll solve your problems for you,'” he says. “And there could be a sort of neocolonial tinge to it.”
The resulting backlash risks fueling “a form of data localization [or] data nationalism,” adds Gill. To avoid this, I-Dair pursues a “distributed and decentralized approach – almost like the confederation of data assets”.
An example of I-Dair’s work involves nationally owned datasets covering antimicrobial resistance. Authorities in Singapore and India, for example, retained sovereignty over their data but agreed to share it for research after mutually defining the problem the data is supposed to solve and jointly working on an algorithm to analyze it. “The AI that’s being developed is also done collaboratively, so you can trust it,” says Gill.
Like Hardinges, Gill puts forward a model in which citizens come together to generate the data necessary for a goal to which they themselves subscribe. An example of this approach in Europe is Midata, based in Switzerland.
Midata board member Dominik Steiger describes his organization as “a data trust organized like a cooperative.” The idea that citizens or patients should have a say in how their data is used is rooted in the notion that personal data is a resource or an asset. “And that the expectation, or rights, of people to . . . decide what happens to their data should be integrated into the data ecosystem,” says Steiger.
Some 20,000 people have shared their data with Midata, but not all of them will choose to participate in every project. In one example, people were given an app to record pollen allergy symptoms. “It’s citizen science and the data that people generate will belong to them. . . then they consent to it being used anonymously in an allergy study,” says Steiger. He suggests that this model can offer a quintessentially European approach to data management as an alternative to the behavior of American tech giants.
In Europe, he adds, “there is a strong tendency to seek more reliable solutions, which better represent or allow the participation of individuals. We have an answer, which fulfills these criteria, and we hope that it will inspire such models.”