Long time readers, as well as n00bs with some exposure to the right sorts of rather pointy-headed theory, will recall that straight photographs are considered "indexes" of whatever the camera was pointed at. In general, you can take an arbitrary speck of tone or color and trace it back to the original scene and say "that bit comes from that bit" for every bit. Slightly more generally, you could probably say something like this:
There is a breakdown of the index into a large number of very simple units, which breakdown has the property that each of the units can be uniquely identified as coming from a specific very simple, equivalent, aspect of the original scene.
So this covers raw files, but also things like spectral decompositions and so on.
The point of the idea of "index" is that it exactly captures how much truth there is, notionally, in a photograph. A photograph faithfully and truthfully records certain tone and optionally color aspects of a scene at a moment in time, from a certain angle. A photograph is not isomorphic to the world, but it is isomorphic to that, and that is what makes photos photographs and not paintings, not drawings, not whatever else you might imagine.
It is from this extremely limited isomorphic correspondence that the so-called Truth Claim of photography follows, which in its most straightforward expression is that a photograph is what it manifestly is -- an index.
All this shit is basically tautological. An index is more or less defined to be whatever the hell it is that a photograph is, and the "truth claim" is that a photograph is an index. Wow. Deep. But the point is to pick apart what the hell it is that makes a photo not a painting. That picking apart actually does turn over a few interesting rocks. So, attend to the rocks we just turned over, above, rather than the circle of definitions.
Enter machine learning, neural networks, and contemporary high end phone cameras.
Google and Apple at least, and the rest cannot be far behind, have built cameras with one or more physical camera modules which, depending on mode, each take one or more photos in the sense of indexes, and dump this pile of bits into a thoroughly trained neural network to produce an output JPEG (or raw) file.
Now, I ain't no expert on neural networks, but I am pretty confident of this: You shove a bunch of data in, and a bunch of data comes out. The stuff that comes out is strongly and... somewhat predictably... related to what went in. But the relationship is very holistic. There is no "this pixel came from..." in play here. Each individual pixel of output comes from all the pixels input, in ways that are often profoundly not obvious.
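To make that concrete: here's a toy sketch, and it is only a toy, nothing like any vendor's actual pipeline. It models the simplest possible "neural" mapping, a single fully-connected layer over a tiny 2x2 image. Even in this trivial case, nudging one input pixel changes every output pixel, which is exactly the property that kills indexicality.

```python
import random

random.seed(0)  # deterministic toy example

N = 4  # a tiny 2x2 "image", flattened to 4 pixels
# random weights standing in for a trained network's parameters
weights = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]

def layer(pixels):
    # each output pixel is a weighted sum of *every* input pixel
    return [sum(w * p for w, p in zip(row, pixels)) for row in weights]

image = [0.2, 0.5, 0.7, 0.9]
out = layer(image)

# Nudge a single input pixel and see how many output pixels move.
nudged = layer([image[0] + 0.01] + image[1:])
changed = sum(1 for a, b in zip(out, nudged) if a != b)
print(changed)  # -> 4: every output pixel changed
```

A real pipeline stacks many layers like this (plus nonlinearities and convolutions), which only makes the input-to-output relationship more entangled, not less. There is no speck of output you can point at and say "that came from there."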
What comes out of these modern AI-driven cameras is not an index.
It looks like a photograph, it looks like an index. But it's not. And there are tells. Girls look prettier, with fuller lips and larger eyes. The tonal range looks a little funky, because it's actually built from a shitload of pictures made in total darkness. The colors are almost psychedelic. Whatever. Occasionally there are hilarious fails where the AI gets confused and renders everything as heaps of kittens. Ok, not yet, but we all know they're coming, don't we?
Ok, so what? We're pretty used to doctored pictures these days, right?
Here's so what: In five years a device that actually produces a photographic index is going to be a special purpose device, purchased for that purpose, by retro weirdos who produce very few photographs. Virtually every photograph we see five years from now will not be an index in any meaningful fashion.
When photography was introduced, it was a bit of a surprise that the indexing properties turned out to be important. Sure, some people quickly (10-20-30 years) skipped past "it's an easy way to paint" to "this could be used to teach, or as evidence, or something." I'm not sure anyone saw the general public trust in photos coming, though.
Now we see that trust being thrown away not merely here and there as a conscious choice of the Photoshop-hero, but as a basic operating principle of the actual device we call a "camera", and it's not clear what the long-term effects are. Let's repeat that: Photographs, as cultural objects, are on the cusp of collectively ceasing to be indexes. The circular mess of definitions we started with is about to collapse.
We're already seeing people experiencing body issues because not only are all the photos they see of everyone else online weirdly beautified, but the photos they see of themselves on Snapchat or whatever are beautified by the service.
What happens when it becomes literally impossible to get a straight photograph of oneself?
What happens when journalists ("citizen" or standard-issue) are all using AI-powered phones? Is Putin really sneering, or did the AI just try to make him handsome?
Will someone start building special "forensic" cameras to produce photographs admissible as evidence? What happens in the inevitable 20 year interval while the legal system catches up, and people are getting convicted on the basis of AI glitches (or freed on the basis of postulated AI glitches)?
What other social consequences will there be?
Me? I am perfectly aware that I am a fat, old, white man. There's basically no way to worsen my body image, but what about all the young pretty people? Why won't anyone think of the kids?