The “Content Authenticity Initiative” claims to be setting the standard for digital content attribution. A few groups appear to be working together to promote it. The CAI… sounds a bit like CIA, doesn’t it?
The major goals involve being able to instantly and accurately trace a piece of media to its source. Photographs, images, videos, words, and other elements are to be encoded and made trackable. Welcome to Project Origin.
While this is sold as building trust in media, there is another, more disturbing way to look at it. Will this not also directly connect people to the things they share online? Won’t it mean the end of anonymous sharing of important information? Will it now become easier to track people for their thoughtcrimes?
1.2 Background
At Adobe MAX 2019, the Content Authenticity Initiative (CAI) was announced by Adobe in partnership with The New York Times Company and Twitter. Since that time, this group has collaborated with a wide set of representatives from commercial organizations (software tools, publishers, social media), human rights organizations and academic research to produce this paper and the approach it describes.
5.1.3 Establishing Trust
One key component in establishing trust in the CAI system comes from the entities whose certificates are used for signing the claim. To ensure that only assets signed by trusted actors can be considered properly attributed, it is necessary to create a list of trusted certificates or their certification authorities (CAs). Similar to the EU Trust List, the Adobe Approved Trust List, and similar lists used by web browsers and operating systems, the members of the CAI will establish their own Trust List of certificates that can be used to sign claims. Details on the governance of the Trust List are outside the scope of this paper. In many cases, the holder of the certificate will not be the individual who created (or edited) an asset, but instead will be the entity responsible for the hardware or software that they used. The signing certificate belongs to the actor (e.g. Truepic Camera, Adobe Photoshop, BBC, etc.) that performed the actions on behalf of someone else. This model allows CAI to provide anonymity (and/or pseudonymity) where desired. For scenarios where the certificate holder is able to reliably establish the identity of the individual, and the individual wishes their identity associated with an asset, an identity assertion is used.
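To make the quoted trust-list mechanism concrete, here is a minimal sketch of how a verifier might check a claim's signing certificate against such a list. Everything here is an assumption for illustration (the fingerprints, the set-based list, the function names); it is not the CAI's actual data format or governance model.

```python
import hashlib

# Hypothetical Trust List: a set of SHA-256 fingerprints of
# trusted signing certificates (or of their issuing CAs).
# The certificate bytes below are placeholders, not real certs.
TRUST_LIST = {
    hashlib.sha256(b"Truepic Camera signing cert").hexdigest(),
    hashlib.sha256(b"Adobe Photoshop signing cert").hexdigest(),
}

def is_trusted(cert_der: bytes) -> bool:
    """Accept a claim only if its signing certificate's
    fingerprint appears on the Trust List."""
    fingerprint = hashlib.sha256(cert_der).hexdigest()
    return fingerprint in TRUST_LIST

# The tool (e.g. "Adobe Photoshop"), not the individual user,
# holds the certificate -- so this check identifies the actor,
# leaving the user anonymous unless an identity assertion is
# added separately.
print(is_trusted(b"Adobe Photoshop signing cert"))  # True
print(is_trusted(b"Unknown tool signing cert"))     # False
```

Note how this models the paper's point about anonymity: the check only ever identifies the hardware or software actor, never the person operating it.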
5.1.4 Identity
One of the assertion types that can be present in a claim is Identity. This digital identity (also sometimes referred to as a Subject or an Entity) is present when an individual (or organization) is making a clear statement about their association with this claim. Digital identity fundamentally requires digital identifiers — strings or tokens that are unique within a given scope (globally or locally within a specific domain, community, directory, application, etc.). In order to support a variety of use cases, including those where identity might be anonymous or pseudonymous, it is important that various schemes for the identifiers are available for use. Fortunately, most common identity formats such as Decentralized Identifiers (DIDs), WebIDs, OpenID, ORCiD and others are all based on URIs. This enables an identity assertion to be expressed in the standard format described in RFC 3986.
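Since all of the identifier formats the paper names are URIs, a single RFC 3986 parse is enough to tell a verifier which identity system an assertion uses. A rough sketch (the example identifiers below are samples for illustration, including the published ORCID sandbox-style example ID):

```python
from urllib.parse import urlparse

def identity_scheme(identifier: str) -> str:
    """Return the URI scheme, which indicates the identity
    system (DID, ORCiD-over-HTTPS, WebID, etc.)."""
    parsed = urlparse(identifier)
    if not parsed.scheme:
        raise ValueError(f"not a valid URI: {identifier!r}")
    return parsed.scheme

examples = [
    "did:example:123456789abcdefghi",         # Decentralized Identifier
    "https://orcid.org/0000-0002-1825-0097",  # ORCiD (sample ID)
    "https://example.org/profile#me",         # WebID-style URI
]

for uri in examples:
    print(uri, "->", identity_scheme(uri))
```

The uniformity is the point: because every scheme resolves to a URI, the identity assertion itself stays format-agnostic, which is what lets the same claim structure support anonymous, pseudonymous, and fully identified use cases.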
5.1.6 Redaction of Assertions
In many workflows, there is a need for assertions to be removed by subsequent processes, either because publishing the assertion would be problematic (e.g. the identity of the person who captured a video) or because the assertion is no longer valid (e.g. an earlier thumbnail showing something that has since been cropped out). The CAI allows for the redaction of these assertions in a verifiable way that is also part of the provenance of the asset. In the process of redacting an assertion, a record that something was removed is added to the claim. Because each assertion’s reference includes the assertion type, it is clear what type of information (e.g. thumbnail, location, etc.) was removed. This enables both humans and machines to apply rules to determine if the removal is acceptable. NOTE: Assertion redaction only applies to assertions that are part of the CAI data. It does not have anything to do with removal of other metadata (XMP, EXIF, etc.).
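The redaction behavior described above can be sketched as follows. The field names ("assertions", "redacted", "type") and the "cai.*" type labels are assumptions for illustration, not the actual CAI wire format; the key idea is that the assertion's data disappears while a typed record of the removal remains in the claim.

```python
import json

# Hypothetical claim containing three assertions.
claim = {
    "assertions": [
        {"type": "cai.thumbnail", "data": "<binary thumbnail>"},
        {"type": "cai.location", "data": {"lat": 48.85, "lon": 2.35}},
        {"type": "cai.identity", "data": "did:example:capturer"},
    ],
    "redacted": [],
}

def redact(claim: dict, assertion_type: str) -> dict:
    """Remove assertions of the given type, but keep a record
    that something of that type was removed, so humans and
    machines can judge whether the removal is acceptable."""
    kept, removed = [], []
    for a in claim["assertions"]:
        (removed if a["type"] == assertion_type else kept).append(a)
    claim["assertions"] = kept
    # Only the type survives; the assertion's data is gone.
    claim["redacted"].extend({"type": a["type"]} for a in removed)
    return claim

# e.g. protect the identity of the person who captured a video:
redact(claim, "cai.identity")
print(json.dumps(claim, indent=2))
```

After redaction, the claim no longer reveals who captured the asset, yet anyone verifying it can see that an identity assertion was removed, which matches the paper's goal of making removal itself part of the provenance.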
9 Conclusion
The collaborators on this paper have explored the challenges of inauthentic media through problem definition, system design and use case research. The results of the exploration are expressed in the design of the CAI provenance system. To achieve widespread adoption we have based the design on existing standards and established techniques, and acknowledge that the system will need to include simple and intuitive user experiences. However, even an optimally designed system cannot ultimately succeed in a vacuum. We now begin the important work of deeper, more expansive collaboration with leaders in technology, media, academia, advocacy and other disciplines. With this first step towards an industry standard for digital content attribution, we look optimistically to a future with more trust and transparency in media.
The CAI white paper is certainly worth a read.
Microsoft and the BBC explain Project Origin in their own words. It all sounds so harmless, doesn’t it? It’s all about ensuring that people can trust what they observe in the media is accurate and reliable. Who could possibly disagree with that?
This “coalition” claims to be interested in authenticating media images, videos, and bits of data to identify where they came from. If one were to work in a vacuum, this sounds completely reasonable and well intentioned.
However, what all too often gets left out of the equation is the rampant corruption, collusion, and financial interests pushing certain narratives. Authenticating photos while ignoring bias and fake narratives leaves out the bigger picture. No way is this done by accident.
Have a look through some of the articles at the bottom. These are the bigger issues that so often get (unsurprisingly) ignored. Hard to have an independent media when they are all on the public dole. Even harder when political operatives work within many of them.
But hey, things like a global vaccine passport are just crazy conspiracy theories, right? Just like the Vaccine Credential Initiative, or the ID2020 Project.
(8) Content Authenticity Initiative White Paper
BOUGHT OFF AND CORRUPT CANADIAN MEDIA