Recently, UNESCO released its action plan to regulate social media platforms. The guideline is a 59-page document outlining a series of goals and steps that should be taken by Member States.
On the surface, the paper seems harmless enough. But as with most things, the devil is in the details.
Going through this, the things that come to mind are the CRTC, and Bills C-11 and C-18. There's a strong reluctance here to accept any sort of Government interference with media access.
The paper talks about the importance of having independent media, with a diversity of perspectives. Nothing wrong with that. However, a few places raise the idea of subsidizing "independent" media, presumably with Government funds. While a viable media is important, this creates an obvious conflict of interest.
There are also several mentions of online media being used in ways to help advance the U.N. Sustainable Development Agenda, a.k.a. Agenda 2030. It’s unclear what would happen if online platforms were used in ways to undermine its implementation.
There are repeated calls to use digital platforms to respect and protect human rights. This is fine in principle, but what those rights actually are is left undefined, and presumably arbitrary.
Paragraph 38 talks about the need for an ongoing relationship between digital platforms and "credible" news sources. Of course, the term credible is left undefined. It's also unclear what voice, if any, media outlets not considered credible would have.
Paragraph 45 gets into the topic of “compliance mechanisms”. It’s rather chilling, as it mentions the possibility of regulators making final decisions with respect to the rules on platforms.
Paragraph 49 addresses the idea of having checks and balances. This sounds fine, until one asks what structures would have to be put in place to begin with.
Paragraph 52 covers “investing” in so-called independent media, in order to make it more sustainable. If the only way that independents can survive is by getting bailout money, then that would convert them into Government employees. No need to ban critics when they can simply be bought off.
Paragraph 54 talks about having: (a) national; (b) regional; and (c) global governance systems put in place, to safeguard freedom of expression, access to information, and other human rights. There’s also a brief mention about limiting expression to protect human rights.
Perhaps the most interesting sections are paragraphs 68-73, which outline how an "independent regulator" would work. Of course, how independent can it be when it reports to the very people it's supposed to keep an eye on?
68. In statutory regulation, official regulatory authorities, though constituting part of the executive state apparatus, should be wholly independent of the government and be primarily accountable to legislatures for fulfilment of their mandates. This applies to existing regulatory bodies that have a legitimate interest in content on platforms (such as electoral management bodies, advertising authorities, child protection entities, data and privacy commissions, competition bodies, etc.), as well as any new dedicated or coordinating regulatory instances that may be established.
69. With regard to all statutory bodies engaging in platform regulation, either solely or jointly, periodic review should be performed by an independent body reporting directly to the legislature. Statutory interventions should also be subject to review in the courts if authorities are believed to have exceeded their powers, acted unreasonably, or acted in a biased or disproportionate manner.
70. Official regulatory authorities need to be independent and free from economic, political, or other pressures. Their power and mandate should be set out in law. They should also comply with international human rights and promote gender equality standards.
71. Official regulatory institutions must have sufficient funding and expertise to carry out their responsibilities effectively. The sources of funding must also be clear, transparent, and accessible to all, and not subject to the governmental discretion.
72. Governing officials or members of the official regulatory institutions working on the issue of content on platforms should:
a. Be appointed through a participatory, transparent, non-discriminatory, and independent merit-based process.
b. Be accountable to an independent body (which could be the legislature, judiciary, an external council, or an independent board/boards).
c. Include relevant expertise in international human rights law and the digital ecosystem.
d. Deliver an annual public report to an independent body—ideally the legislature—and be held accountable to it, including by informing the body about their reasoned opinion.
e. Make public any possible conflicts of interest and declare any gifts or incentives.
f. After completing the mandate, for a reasonable period, not be hired or provide paid services to those who have been subject to their regulation, in order to avoid the risk known as “revolving doors”.
73. The official regulatory authorities should be able to request that digital platforms provide periodic reports on the application of their terms of services, and take enforcement action against digital platforms deemed non-compliant with their own policies or failing to fulfil their responsibilities to safeguard freedom of expression and access to information and diverse cultural content. They should be able to establish a complaints process and issue public recommendations that may be binding or non-binding and be empowered to issue transparent and appropriate directives to the platforms for the promotion and respect of human rights, based on international human rights standards.
Lest this be viewed as a hatchet job, in fairness, there are portions of the paper that are quite good, such as 72(e) and (f), which aim to limit conflicts of interest in the form of gifts or lobbying.
Paragraph 115, and its many subparagraphs, details how due process and human rights considerations should be integrated at all stages of moderation. On the surface, there's nothing wrong with this, but who will be setting the standards?
Paragraphs 116 to 118 offer suggestions for collecting user demographic data for research purposes. While it's supposed to be anonymized, there aren't enough specifics included as to its use.
Paragraph 143 gives brief guidelines about how platforms should conduct themselves during emergencies and armed conflicts. It suggests developing cooperation with trusted partners, independent media organizations, and other reliable flaggers.
These are just some of the issues that are raised. This UNESCO paper seems harmless enough on the surface, but it's vague precisely where clarity is needed.
Note: While UNESCO claims to want to prevent misinformation from spreading, it has hardly been neutral or objective. Only recently, it was telling people to only trust official sources for information on the “pandemic”.
(1) https://www.unesco.org/en/articles/online-disinformation-unesco-unveils-action-plan-regulate-social-media-platforms
(2) https://unesdoc.unesco.org/ark:/48223/pf0000387339
(3) UNESCO Guidelines To Govern Digital Platforms
(4) https://www.youtube.com/@UNESCO
(5) https://www.youtube.com/watch?v=90cIg4lv-3M