Mind the app - considerations on the ethical risks of COVID-19 apps
[22 April update: at the Digital Ethics Lab (OII, University of Oxford) we have elaborated a list of 16 questions to check whether an app is ethically justifiable; the full article, open access, is available here.]
There is a lot of talk about apps to deal with the pandemic. Some of the best solutions use the Bluetooth connection of mobile phones to detect contact between people, and therefore to estimate the probability of contagion.
In theory, it may seem simple. In practice, there are several ethical problems, not only legal and technical ones. To understand them, it is useful to distinguish between the validation and the verification of a system.
The validation of a system answers the question: "are we building the right system?". The answer is no if the app
- is illegal, for example, the use of an app in the EU must comply with the GDPR; mind that this is necessary but not sufficient to make the app also ethically acceptable, see below;
- is unnecessary, for example, there are better solutions;
- is a disproportionate solution to the problem, for example, there are only a few cases in the country;
- goes beyond the purpose for which it was designed, for example, it is used to discriminate against people;
- continues to be used even after the end of the emergency.
The verification of a system answers the question: "are we building the system in the right way?". Here too the difficulties are considerable.
For once, the difficult problem is not privacy (or personal data protection, to be more precise). A Bluetooth-based app can use anonymous data, recorded only on the mobile phone, used exclusively to send alerts in case of contact with infected people, in a non-centralised way. It is not easy, but it is feasible (see also here). Of course, it is trivially true that there are and there might always be privacy issues. The point is that, in this case, they can be seriously mitigated and made much less pressing than other issues. However, once (or, if one is more sceptical than I am, even if) privacy is taken care of, other ethical difficulties need to be resolved. They concern the effectiveness and fairness of the app.
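Before turning to those, the decentralised design just described can be made more concrete. Below is a minimal Python sketch of the general idea, loosely in the spirit of proposals such as DP-3T; the class, the method names and the use of purely random ephemeral IDs are illustrative assumptions of mine, not the specification of any actual protocol.

```python
import secrets


class Phone:
    """Toy model of a decentralised contact-tracing client.

    Illustrative only: real protocols derive rotating ephemeral IDs
    from daily seeds; here each broadcast is simply a random token.
    """

    def __init__(self):
        self.own_ids = []       # ephemeral IDs this phone has broadcast
        self.heard_ids = set()  # IDs overheard nearby, stored locally only

    def broadcast(self):
        """Emit a fresh random ephemeral ID over Bluetooth."""
        eid = secrets.token_bytes(16)
        self.own_ids.append(eid)
        return eid

    def observe(self, eid):
        """Record an ID overheard from a nearby phone; it never leaves the device."""
        self.heard_ids.add(eid)

    def check_exposure(self, published_ids):
        """Match published IDs of infected users against the local record."""
        return any(eid in self.heard_ids for eid in published_ids)


# Toy run: Alice and Bob meet; Bob later tests positive.
alice, bob = Phone(), Phone()
alice.observe(bob.broadcast())        # contact logged only on Alice's phone

server_published = list(bob.own_ids)  # Bob voluntarily publishes his IDs

# Alice's phone matches locally; no central register of contacts exists.
print("Alice exposed:", alice.check_exposure(server_published))  # True
```

The key privacy property is that the server only ever sees the ephemeral IDs of users who test positive and choose to upload them; the contact graph itself stays on the handsets.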
To be effective, an app must be adopted by many people. In Britain, I am told that it would be useless if used by fewer than 20% of the population. According to the PEPP-PT, real effectiveness seems to be reached around the threshold of 60% of the whole population. This means that in Italy, for example, the app should be consistently and correctly used by something between 11m and 33m people, out of a population of 55m. Consider that in 2019 Facebook Messenger was used by 23m Italians. Even the often-mentioned app TraceTogether has been downloaded by an insufficient number of people in Singapore (12% at the time of writing).
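The arithmetic behind these figures is straightforward; here is a quick sketch, using only the population size and the two thresholds quoted above:

```python
# Back-of-the-envelope adoption thresholds for Italy, using the figures
# quoted above: a 55m population, a 20% floor below which the app is
# reportedly useless, and the 60% effectiveness threshold attributed
# to the PEPP-PT.
population = 55_000_000

floor = int(population * 0.20)      # minimum useful adoption
effective = int(population * 0.60)  # reported effectiveness threshold

print(f"Minimum useful adoption: {floor:,} people")      # 11,000,000
print(f"Effective adoption:      {effective:,} people")  # 33,000,000
```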
Given that it is unlikely that the app will be adopted so extensively just voluntarily, out of goodwill, social altruism or responsibility, it is clear that it may be necessary to encourage its use. But this only shifts the problem, because the wrong, positive incentives to use a voluntary app – for example, using the app as a kind of passport, to allow users back to work or to have privileged access to some services – have at least two drawbacks:
- they tempt one to game the system or cheat, for example, by leaving the phone in a drawer, if a "clean" record is required to have access to some services, or by providing the wrong kind of self-assessment, if appearing to be at risk guarantees earlier medical treatment;
- they are unfair if only those who have the app have access to some services. This may create first- and second-class citizens. As an example, consider that, according to the latest DESI report of the EU Commission (data are available only until 2016, see here), only 44 % of Italians aged between 16 and 74 have "at least basic digital skills". And not everybody has, or can afford, a mobile phone equipped with Bluetooth.
Therefore, the positive incentives should be independent of the use of the app, that is, they should not be endogenous (what is offered by the app), but exogenous (what is offered for the app).
A typical exogenous incentive mechanism is a lottery. Installing and running the app may be like buying a lottery ticket, and periodical prizes could be given to "app tickets", in order to gamify the process and incentivise people to get as many "tickets" (apps downloaded and activated) as possible. Those tempted to "game the system" – e.g. by having people near them adopt the app to have "more tickets" – would actually be reinforcing the positive trend, ensuring more widespread adoption of the app. And a lottery mechanism could avoid unfair discrimination in terms of advantages or other kinds of facilitation unavailable to those who are on the wrong side of the digital divide. These are all good outcomes.
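For concreteness, here is a minimal sketch of such a draw; the function name, the notion of an "active installation" and the draw logic are assumptions of mine, meant only to illustrate the exogenous character of the incentive:

```python
import random


def periodic_draw(active_app_ids, n_prizes, seed=None):
    """Pick prize winners uniformly among currently active installations.

    Eligibility depends only on the app being installed and running, not
    on any health status, so the incentive is exogenous: it rewards
    adoption itself and leaks no information about contagion risk.
    """
    rng = random.Random(seed)
    return rng.sample(active_app_ids, k=min(n_prizes, len(active_app_ids)))


# Example: a weekly draw with two prizes among five activated apps.
winners = periodic_draw(
    ["app-001", "app-002", "app-003", "app-004", "app-005"],
    n_prizes=2,
    seed=42,
)
print("This week's winners:", winners)
```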
If some kind of lottery-based incentive mechanism is not adopted or does not work sufficiently well, the other problem is the potentially unfair nature of a voluntary solution that is used by a limited community (see 2 above). The app works much better the more widespread it is, and it is most widespread where there is more digital literacy and wider ownership of mobile phones equipped with Bluetooth. Therefore, by making the app voluntary and linking benefits or advantages to it, there is a very concrete risk of privileging the already privileged, and their residential areas. The following summary from the EU "Digital Economy and Society Index Report 2019 - Human Capital" is worth quoting in full:
"In 2017, 43 % of the EU population had an insufficient level of digital skills. 17 % had none at all, as they either did not use the internet or barely did so.
According to the digital skills indicator, a composite indicator based on the digital competence framework for citizens*, 17 % of the EU population had no digital skills in 2017, the main reason being that they did not use the internet or only seldom did so. This represents an improvement (i.e. decrease) of 2 percentage points compared to 2016. The share of EU citizens without basic digital skills, in turn, went down by 1 percentage point (to 43 %). However, these figures imply serious risks of digital exclusion in a context of rapid digitisation. There are proportionally more men than women with at least basic digital skills (respectively, 60 % and 55 %). In addition, only about 31 % of people with low education levels or no education have at least basic digital skills. 49 % of those living in rural areas have basic digital skills compared with 63 % in urban areas. There are still major disparities across Member States. The share of people with at least basic digital skills ranges from 29 % in Bulgaria and Romania (despite noticeable progress in both these countries in 2017) to 85 % in Luxembourg and 79 % in the Netherlands."
If the app is not carefully planned and designed, the digital divide may become a biological divide. It may be objected that a small percentage of people voluntarily adopting the app is better than nothing. I doubt it, given that such a small percentage tends to live together in the same areas. It will be great, but only for them (benefits and positive incentives included), and useless for all the others. Indeed, the unfairness of a voluntary app adopted by a small percentage of the population may be used to support the case for a mandatory solution, that is, negative incentives. Where the primary benefit is collective, not individual, it may be contextually ethical to require that those who can afford more do more, that is, that those who can adopt a digital solution are asked to adopt it. This "negative" incentive (the mandatory app) could "positively" discriminate in favour of the better-off (the digitally educated owners of a Bluetooth-enabled mobile phone) to help, now as a very large group, the worst-off. Metaphorically, a compulsory app would work like a tax: those who can contribute more digitally may be required to do so. Consider that in some cultures this may be a de facto social outcome, rather than a legal requirement, whenever peer pressure and expectations make a behaviour so strongly expected as to be almost compulsory.
In all this, there is a final temptation one must resist: a merely political solution, in the following sense.
Validation and verification are of course related. Above all, they are related by a fundamental twofold relation:
- lack of validation should stop one from building the app; and
- poor or failed verification should force one to reconsider the validation of the app in the first place (e.g., its necessity).
In plain English: if it turns out that one cannot build it rightly, perhaps one should not build it at all in the first place (it is not the right solution to build) or change approach completely, because of the positive + negative + opportunity costs incurred by the adopted solution. This relation helps to explain why the app may be criticised in some cases as a mere political solution.
When a solution is labelled as merely political (“it does not matter whether the system works or not, even if it does not, we must show that we are doing something”), then this means that the backward link between verification and validation is severed. In plain English again: it does not matter if (mind: not whether) one cannot build the app rightly — in particular, it does not matter if the app is totally or largely ineffective — one should still build it (validation is ok) because the validation does not concern the system but the signalling process of building the app, and even an ineffective app is still successful in signalling that something has been tried and done.
Sometimes severing validation and verification is ethically justified. It is often justified in cases of high uncertainty (the "just in case it works" kind of scenario). And we may be facing one of these cases. In general, politicians love to think the V/V severance offers a win-win situation: if it works, great; if it does not, one has still shown goodwill and care, that one has tried everything, that one cannot be blamed for not trying, etc. Of course, the temptation is to treat all costs — which are the reason against severing validation and verification — as externalities (they affect the future, the next government, etc.). I think this is not one of those cases — that is, it is not a case in which we are ethically justified in severing validation and verification — because the costs (all kinds of them) are high and cannot be "externalised" (even assuming that were a good practice, which it is not): they will hit a population and its government pretty soon, potentially making the whole problem worse.
Therefore, one should avoid the risk of transforming the production of the app into a signalling process. To do so, the verification should not be severed from, but must feed back into, the validation. This means that if the verification fails so should the validation, and the whole project ought to be reconsidered. It follows that a clear deadline by when (and by whom) the whole project may be assessed (validation + verification) and, if necessary, be terminated, improved, or even simply renewed as it is, is essential. This level of transparency and accountability should be in place.
Technology can help, substantially. But an app by itself will not save us. And the wrong app may be worse than useless, as it will cause ethical problems and potentially exacerbate health-related risks, e.g. by generating a false sense of security, or deepening the digital divide. A good app must be part of a wider strategy, and it needs to be designed to support a fair future. This should be possible. But if it is not, better do something else, avoid its positive, negative and opportunity costs, and not play the political game of merely signalling that something (indeed anything) has been tried.
It is clear that we are entering some uncharted areas of digital ethics. The way forward may lie in designing the right policies that incentivise the adoption of the app (voluntary, mandatory or a mix of the two), and/or in a different architecture of the app (e.g. more centralised, using GPS data, etc.), and/or in the nature of the hardware required (think of a cheap, freely-distributed Bluetooth-based tracker, like those that one can attach to one's keys to find them at home), and/or in how the app is used (think of an app-hub, able to support a whole family through only one mobile phone, in connection with other Bluetooth trackers). But any solution should take care of its ethical implications, and be flexible enough to be improved rapidly, to rectify potential shortcomings and take advantage of new opportunities, as the pandemic develops. I remain optimistic: it is possible to find a good solution, and efforts to devise it are certainly worth making, but I am aware it won't be easy, especially since a digital solution may not be proposed again to the public if there is an initial failure. I am reminded of one of those chess problems which are hard enough to make you doubt there might be a mistake in how they are stated (for the chess player: I recall once, as a kid, solving a problem set up by my father by finally realising the obvious: White's third and last move was to promote the pawn to a Knight, not a Queen, thus checkmating the King).
Floridi's post presents very important considerations for this debate. These are hard questions. My own take:
1. Incentive systems can be OK if they are *not* connected to a negative prediction of contagion (they could be connected to a prediction of positive contagion, e.g. priority in access to clinical tests for those identified as at risk) and if the unfairness problem is more or less solved (see 2).
2. Unfairness could be mitigated by distributing discounts for newly purchased cell phones (with proof of app download, not conditional on keeping the app) and/or purpose-designed cheap contact-tracing Bluetooth devices for free.
3. Some degree of what Floridi identifies as "unfairness" should be tolerated if digital tracing proved important to protecting democratic freedom as a whole (I sketch this argument in *) - a Rawlsian priority of liberty.
4. The politicians' attitude of "let's do it, maybe it will work better than we can predict" is not irrational if significant downsides can be avoided before they become too big (this favours a voluntary solution rather than a compulsory one - legal obligations generate huge social inertia).
* https://medium.com/@loimichele/covid-19-contact-tracing-apps-beyond-the-health-vs-privacy-trade-off-efe0ddbab2ec
Thank you and sorry for the late acknowledgement.
Floridi's post outlines excellent considerations for the ongoing debate around the usefulness of those apps.
I believe that by merely automating manual contact tracing in the digital realm we have indeed completely forgotten to think about incentives properly. I want to point out another very consequential terra incognita here, which is that we are still in a mitigation phase (rather than suppression), since we don't yet have enough testing kits.
A mathematical perspective on the current situation should shed some light on why this is consequential. In this excellent Twitter thread
https://twitter.com/gro_tsen/status/1252581933835575297?s=20
it is pointed out - through a mathematical understanding of epidemiological propagation on networks - that what matters, once the epidemic has taken off, in order to reduce the attack rate is disparity across the population in the risk of being infected, rather than disparity in the risk of infecting others (superspreadee vs superspreader).
Both risks mathematically average out the same over the entire population, and this is the power of this insight. While amateur epidemiologists worldwide obsess over the average infection/reproduction rate (and some will think about the variance in the infection rate across the population), this insight suggests instead obsessing over reducing the *variance* in infectee rate.
This is tied very quickly to reducing disparity (economic mostly at this stage) when facing the epidemic.
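A toy simulation can illustrate the claim. The model below, its parameter values and the one-step infectious period are my own simplifying assumptions, not calibrated to COVID-19: holding the mean fixed, spreading out the risk of being infected changes the final attack rate, while spreading out the risk of infecting others largely does not.

```python
import random


def attack_rate(susc, infect, contacts=10, beta=0.25, seed=1):
    """Discrete-time SIR in a well-mixed population of len(susc) agents.

    susc[i]   -- relative risk of agent i being infected on contact
    infect[i] -- relative risk of agent i infecting others on contact
    Agents are infectious for exactly one time step (toy assumption).
    """
    rng = random.Random(seed)
    n = len(susc)
    state = ["S"] * n
    for i in range(10):                 # seed the outbreak
        state[i] = "I"
    while "I" in state:
        newly_infected = []
        for i in range(n):
            if state[i] != "I":
                continue
            for _ in range(contacts):   # random contacts this step
                j = rng.randrange(n)
                if state[j] == "S" and rng.random() < beta * infect[i] * susc[j]:
                    newly_infected.append(j)
            state[i] = "R"              # recover after one step
        for j in newly_infected:
            state[j] = "I"
    return state.count("R") / n


N = 10_000
flat = [1.0] * N
spread = [0.2, 1.8] * (N // 2)          # same mean (1.0), high variance

print("uniform population:        ", attack_rate(flat, flat))
print("unequal risk of infection: ", attack_rate(spread, flat))
print("unequal infectiousness:    ", attack_rate(flat, spread))
```

Under these assumptions, the run with unequal susceptibility ends with a noticeably lower share of the population infected than the other two, which is the asymmetry the thread points to.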
This has huge consequences for how you would want to act on the data that those systems log, and consequently for how you would want to design them. Maybe you would want to treat logged encounter events as "tips", logged by one person with the consent of the other.
This feels to me like a much more productive way to think about incentives around these apps.
My impression, though, is that the engineers designing PEPP-PT and DP-3T, the two main protocols competing in Europe at the moment, have thus not been given the right specs. Hopefully their protocols are sufficiently adaptable to the changing political framing around them.
@podehaye
Thank you and sorry for the late acknowledgement.
A reflection from Australia:
The thing which concerns me most, in many ways, is the burden of security, maintenance and upkeep that a government takes on when introducing these kinds of technologies. In Australia, we are assured that if we install the Government's CovidSafe App our "privacy is protected by law", but only the most naive of us could ever accept that such a rich treasure trove of personal data would not eventually find itself breached or accessed by extrajudicial methods - the latter being a resounding concern where the US-based Amazon is the service provider for data storage.
As a closing remark, there exists some convoluted irony at work here: the self-same logical, mathematical and material (indefinite) systemic-extensibility which generates pandemics is also the core symmetry which fuels the Cybersecurity wars we all find ourselves unwittingly ensconced in. That which we seek to protect ourselves from (through information technology) is also precisely and simultaneously that which leaves us all wide open to data theft or exploitation - the core symmetry and principle of organic self-replication is also the principle of logical extensibility which sees us trading one organic, material or concrete vulnerability for an abstract digital one.
Thank you and sorry for the late acknowledgement.