Monday, December 26, 2022

Of AIs and wordwooze

There have been many novels and stories written about machines for writing things, but the one that sticks in my head is Fritz Leiber's The Silver Eggheads, and even about this one I recall very little detail. Fiction writing has been, in the future of this book, taken over by machines, wordmills, which grind out wordwooze. The latter is easily consumable, mostly the same, repetitive, "bad" writing that has pushed all the "real" writers out of work because it is so cheap to make, and good enough for the general public.

It may be that we'll see real wordmills shortly, although I confess that I am dubious. I feel as if the current generation of AI technology is about to tap out, and will prove structurally incapable of producing more than a few thousand words in a row that hang together even loosely. Be that as it may.

I want to talk a little about a pair of essays I read recently. The first is Rebecca Solnit's "The Blue of Distance" (which is also the title and lead essay of a book of the same name) and the second is Freddie deBoer's more recent "Up You Go" published on his substack. A quick google around will get you to copies of both essays.

If you squint, they're kind of similar. Pretty short, easy to read. Easy-reading language, quick to consume, pleasing sentence follows pleasing sentence and so on. They open with some fairly random observations and facts, while the second act is built around a personal anecdote, and they wrap up with a glib, summarizing, observation. They are, I maintain, different, in ways that are hinted at in my opening remarks above, and in ways that I think are important.

Solnit is a good writer, she can craft sentences and paragraphs. She can also do research, and wrote a pretty good bio of Muybridge that only contained a couple of technical fumbles. She embodies the modern era's characteristic figure: someone whose expertise is in writing rather than in any specific subject matter, but who has just enough chops to write pretty broadly. deBoer is pretty much equally skilled with language, but sticks a bit closer to home. He writes about things that he actually has pretty deep domain knowledge on, or essentially about himself and his emotional life ("Up You Go" is one of the latter.)

I don't much like Solnit. She seems to me to embody the young man described by, I think, Orwell, who told his mother he intended to write. When she asked what he intended to write about, the young man explained that in these times one doesn't write about anything, one simply writes. This is an attitude that I detest. It comes out of a desire to equate a usually overblown ability to push words around with actually knowing things. Solnit's bio of Muybridge is well written, and well researched, but it becomes clear that she is interested in certain things about Muybridge, and not at all interested in other things. Indeed, she's pretty vague on a lot of relevant material. Solnit is no anorak, and even her not-very-technical writing tends to rub right up against the edge of the knowledge she's researched-up. She's not as uninterested as the young man, but she's not really engaged, either.

I do like deBoer, who is a troubled man, and who has been far more troubled in the past. He writes with real integrity, and is nothing like the young man in Orwell's anecdote.

In Solnit's essay, she talks for a while about how painters use blue to indicate distance, salting in facts like peanuts in a candy bar, and then wanders into a story of walking across the dry lakebed of the Great Salt Lake in Utah in a drought year. She's trying to walk to Antelope Island, and isn't able to because, eventually, she reaches the edge of the much-reduced lake. The only actual idea she proposes is that, perhaps, the state of desiring (or of desiring something) is itself a condition we might aspire to. Then she just sort of drops it and moves on. The essay is what I have heard characterized as "that New Yorker shit": it has no beginning, no end, you can start anywhere, it doesn't matter. Each sentence is beautifully made to follow the previous one and flow into the next one, but that's about it.

deBoer's essay starts in with some personal observations about aging, and about how he's taller than average but not actually that big of a guy, and then goes into a personal anecdote about attending an emo concert, and hoisting a bunch of small emo kids up so they can go crowd surfing, and it ends with some glib words about feeling good. Each sentence is well made, the flow is good, just like Solnit's writing.

The difference between the two is that deBoer's essay is firmly nailed to reality. If his anecdote were revealed to be untrue, or to be someone else's story retasked, or whatever, it would destroy the essay. If Solnit's trek across the Great Salt Lake were revealed to be somehow invented, it would not matter in the slightest. It's shaped as a kind of allegorical journey anyway, although what it's an allegory for is elided.

To my eye, deBoer's essay is rooted in reality, it emerges organically from real things that really happened, which deBoer wants to tell us about, which deBoer learned some things from. He wants to tell us about these life events and what they mean to him, how they shaped him. Solnit wants to captivate the reader, and to hit a word count; her essay is 3004 words long. Worse, she wants to captivate a specific reader, someone who is themselves essentially divorced from reality. She wants to talk to people who have never painted a picture (her discussion of Facts About Painting is trivial to the point of dumb even to anyone who has merely dabbled) and who have never walked across a dry lakebed.

Solnit's writing is of a character that is, if not dominant, certainly very popular. It especially has traction with the unserious but wannabe erudite people, the kinds of people who share links to New Yorker articles about minimalism. It's not postmodern, not really, it has none of the fuck-you tics of that style, it's a kind of post-post-modern. It seeks to be purely textual, a construct made only of language. I associate the style specifically with the New Yorker in which, for structural, physical, reasons it's difficult to find the beginning of a specific piece. The editors seem to have dealt with this not by fixing the bad magazine design, but by inventing a writing style which makes it not matter. Open the thing anywhere and start reading, it doesn't matter. None of the articles actually have any structure anyway. Whenever they actually seem to say something, it's generally just remixing some essentially trivial idea that's in vogue (see: minimalism.) The remix is maybe mashed up with some other stuff, possibly an artist that's neither quite obscure nor quite mainstream. Like anything by Sontag, you finish up feeling like you've really had some shit revealed to you, but under pressure you can't quite put your finger on what. If you can summarize it, it comes out as idiotically trite: "well, in the end, I guess what she's saying is to do unto others as you would ... oh fuck, seriously?"

In a way, this is the triumph of formalism over content. People like Solnit, Chayka, and all the others, write within the mesh of inter-glyph-relationship that is language, much like GPT3 and other so-called Large Language Models. They are writing, manually, what is recognizably wordwooze. GPT3 is, at least in part, successful not because it writes well but because it writes badly in a currently popular style. Social media, of course, has popularized another variant of this, in which what passes for "discussion" or "argument" is in fact just the same well-worn series of remarks rephrased with slightly different words. It isn't that GPT3 doesn't write gibberish, it does. It's that many of the humans write the same kind of gibberish.

In the same way, I think the AI picture makers are getting a pass. So much of the imagery we see is formalism, as pure as the maker can manage. Photographs cannot escape a certain connection to reality, but this can at least be minimized and often that rather seems to be the point. Photographers are obsessed with "composition" which to them means a quasi-linguistic method of arranging forms in the frame so as to "be a good picture." Photographers are obsessed with correct exposure, and "what are the right settings for..." and the right way to light a thing, or whatever. The actual content is largely irrelevant, and when you must think about it, you should probably reach for one of a handful of tropes. An arrow pointing left and a guy walking right, perfect. Now for the composition, and what about The Light! The Light! and so on. Seriously, fuck the light. Nobody cares about the god damned light.

An AI that basically can't do anything except remix tropes fits right into this shithole, and makes what most people will see as Strong Images or whatever. It's just visual wordwooze, and the hands are all fucked up. I don't know if people are just discarding the pictures with hands, or if Midjourney's minders have hacked it, but I've noticed that AI generated pictures just don't show hands at all any more. Once you see it, it becomes kind of hilarious the lengths to which the pictures go to hide the hands.

This appears to me to be one of those confluences of culture and technology. Photography appeared at a moment when technology (chemistry) happened to arrive at a very specific moment (what chemicals are light sensitive? what dissolves in what?) that lined up perfectly with a cultural interest in perspective drawing. It's not as if these things were happening all the time, and just lined up this time. Both of those are nearly unique events, which for reasons beyond my ken, occurred at more or less the same time and place.
In this case, we have the post-post-modern cultural phenomenon, in which people who fundamentally have nothing to say because they've never read, done, or experienced anything are dominating large swathes of the media/culture. At the same time we have the technical phenomenon, of machines which can produce more or less the same post-post-modern remixes of stuff, pablum suitable for satiating the masses.

As I recall it works out ok in The Silver Eggheads. I think the writers rise up, smash the machines, and a public thirsty for original work greets them with open arms. I see signs that we might be heading there (not that the photographers will smash the machines, photographers love machines, but the thirsty public with open arms thing.)

I'm not entirely optimistic.

Thursday, December 22, 2022

Of Dogs, AIs, and Photographs

I regret, slightly, that there will be no followup to the previous teaser. Dr. Low has found a less disreputable publisher for his research into the scandalous behaviors of PhotoIreland. Alas. But, onwards!

My dog, upon hearing the word "walk," goes insane. Whatever her inner life actually looks like, it certainly appears that to her this word, in some meaningful way, means a specific activity. Rather, it means one of a sort of cloud of activities. It has synonyms: "hike," "park," "leash," and "poo-bag" at least. It appears that hearing the word triggers a set of emotions (pleasurable) and memories of previous walks, hikes, etc, and most definitely an expectation of more of the same. The word connects in some meaningful way to a set of emotions and a set of ideas about reality, about things that actually happen from time to time.

To me, the word is a shortened form of a sentence: "would you like to go for a walk?" which carries meaning in a bunch of ways. The dog and I agree on the important ones, which are the bundle of emotions and the real-world activities which occur from time to time. We agree on the way the sentence connects to the emotional and the real. What the dog misses is the linguistic content. She has no notion of the interrogative mood, she has no notion of preferences, not really. Her vocabulary in general, while very real, is limited to a handful of nouns which connect to things she likes very much, and other noises, "commands," which prompt her to do things (e.g. sit) in exchange for things she likes very much (food.)

You and I, on the other hand, have a rich linguistic structure to play with. We know about prepositions, like "of" as in "the ear of the dog" which is a meaningless idea to my dog. My dog knows about things being attached to other things, and she seems to have a notion of possession or perhaps ownership, but I cannot imagine it would even occur to her to express these things. They simply are. For you and me, words like "dog" refer to a real thing, refer to (probably) a bundle of emotional material, and also refer to a bunch of other words. Dogs are mammals, they have four feet, they like to go on walks, they bite you. Words are "defined" in terms of other words, and live in grammatical relationships to other words. "Dog" is a word that appears in some sentences in some places, and not in others.

If you pay attention in the right places, we're seeing a lot of "AI" systems appearing. Most recently a chat-bot based on GPT3, with which you can have a sort of conversation. You can ask it to write a song in the style of Aerosmith about prosciutto and, by all accounts, it will do a weirdly good job of it.

These things are, essentially, pure language. They are built by dropping half a trillion words of written English into a piece of software that builds another piece of software. This second piece of software "knows" a great deal about where and how the word "dog" appears in English text. It "knows" in some sense that "ear" is a word that can exist in an "of" relationship with a "dog," and that the reverse is rare. To GPT3 the word "dog" is connected to a great mass of material, none of which is emotional (GPT3 lacks an endocrine system) and none of which is reality (GPT3 has no model of the world, only of language.) In a real sense, GPT3 is the opposite of a dog, being composed of precisely those facets of language which a dog lacks. Or, to put it another way, if you could somehow combine GPT3 with a dog, you'd have a pretty fair representation of a human.
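For the curious, here's a deliberately tiny sketch of what purely distributional "knowledge" looks like. This is not GPT3's actual machinery (that involves billions of parameters, not bigram counts); the little corpus and everything else here are made up for illustration. The point is only that such a model sees which glyphs sit next to which glyphs, and nothing else.

```python
from collections import Counter

# A made-up ten-second corpus; the "model" will know only what's in it.
corpus = "the ear of the dog . the tail of the dog . the dog barked ."
words = corpus.split()

# Count adjacent word pairs -- the crudest possible version of
# knowing "where and how a word appears" in text.
bigrams = Counter(zip(words, words[1:]))

# "ear of" is attested; the reverse relationship "dog of" never occurs.
# That asymmetry is the entire content of this model's "knowledge."
print(bigrams[("ear", "of")])  # 1
print(bigrams[("dog", "of")])  # 0
```

No dogs, ears, or meanings are anywhere in there; swap every word for an arbitrary symbol and the counts come out exactly the same.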

It happens that a great deal of what humans do these days is a lot like GPT3. Many of us live online enough that much of our "input" comes in the form of language, and much of the language we "output" is really just pattern matching and remixing, not that different from what GPT3 does. We don't really think up a response to whatever we just read, we dredge up the fragments of an appropriate response and assemble them roughly into something or other and mash the reply button. Usually angrily.

I promised you photographs.

Consider visual art, specifically representational art.


This is a drawing of a dog. It is not a dog, nor is it the word "dog." Most likely, though, it connects to the same emotional and reality-based set of material that the word "dog" connects to, or close enough. What it lacks is the linguistic connection. Just as my dog understands the word "walk" we might understand a drawing of a "dog." A drawing of a dog generally will refer to, will be connected to, an abstraction of dog-ness. You might recognize the specific dog, or not. If you don't, you'll get "a dog" from it. Even if you do recognize the dog, the distance a drawing creates might push you a little toward the abstraction of dog-ness.

A photograph of a dog, like this one,

inevitably refers to a specific dog. Whether or not you recognize the dog, the dog in the photo is a specific dog. This one is named Julia, and she knows several words, among them "walk."

A picture, like a dog, functions as a kind of inverse of a contemporary GAN-based AI system. It is emotional and real, it is not linguistic.

It may be worth noting that AI systems in the old days went the other way around. They tried to hand-build a model of the (or a) world, and hand-code some rules for interacting with that world, or answering questions, and so on. Just like modern AI systems, these systems also produced interesting toys and almost no actual use cases, but the results were a lot less eerily "good" than the current systems.

In the modern era, the systems don't know anything, really. GPT3 does not know that the Tigers won the World Series in 1968. You can probably persuade it to produce the right answer to a properly formed question about the 1968 World Series, but GPT3 actually knows only that "Tigers" is the glyph that naturally appears in the "answer" position relative to your textual question. It's also likely to guess that the name of a baseball team appears there, and randomly shovel one in there until you rephrase your question. You can get a remarkable amount of what looks like knowledge into this kind of enormous but purely linguistic system. What follows "What is 2 times 3?" Well, it might be "6," or it might be <some numeral>, or perhaps it's just x, or some sentence about mathematics. It depends on which pseudo-neurons you tickle with the way you phrase your question.

The current systems for making pictures are, weirdly enough, based on language models as well. As far as I know they work by moving back and forth between picture and language. When you ask for a picture of a dog, it makes a picture of something, and uses an image describing AI system to describe it, and then it measures how much the textual description of the current picture matches the textual prompt you gave it. Then it... very cleverly? modifies the picture, and repeats until the computed description text is close enough to your prompt text. Somewhere in there, fragments of pictures it's been trained on show up.
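As a sketch of that loop's shape (and only its shape), here's a toy version in which a bag of words stands in for pixels, joining them stands in for the image-describing model, and crude word overlap stands in for the learned prompt-matching score. The function names, the hill-climbing acceptance rule, and everything else here are illustrative assumptions, not how Midjourney or DALL-E is actually built.

```python
import random

def similarity(a, b):
    # Crude bag-of-words overlap, standing in for the learned
    # text/image similarity score a real system would use.
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / max(len(wa | wb), 1)

def generate(prompt, vocab, steps=2000, seed=0):
    # The "picture" is just five tokens -- a stand-in for pixels --
    # so only the feedback loop itself is on display here.
    rng = random.Random(seed)
    picture = [rng.choice(vocab) for _ in range(5)]
    describe = lambda pic: " ".join(pic)  # mock captioning model
    best = similarity(describe(picture), prompt)
    for _ in range(steps):
        candidate = picture[:]                      # perturb a copy...
        candidate[rng.randrange(len(candidate))] = rng.choice(vocab)
        score = similarity(describe(candidate), prompt)
        if score >= best:                           # ...and keep it if its
            picture, best = candidate, score        # caption reads closer
    return describe(picture), best                  # to the prompt

vocab = "a dog cat ear walk red blue on mat the".split()
caption, score = generate("a dog on the mat", vocab)
```

After a couple thousand iterations the "picture" reads very close to the prompt, even though nothing in the loop ever consulted anything but text-to-text comparisons.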

Notably, there is no model of reality in there. MidJourney can't do hands, because it has no idea that hands have 4 fingers and a thumb. It doesn't know that hands are a thing. It "knows" that certain glyphs appear at certain places in certain pictures. And, to be fair, hands are hard and you learn nothing at all about how to draw hands or even hand anatomy by looking at pictures. Neither, of course, is there a model of emotion in there anywhere. Not in the text systems, not in the picture systems. These are all made by delicately, surgically, removing the complex mesh of linguistic relationships from the world and from emotion. They operate by analyzing this isolated linguistic system as a system of glyphs and relationships.

I am certainly not the first to propose that genuine intelligence, intelligence that we recognize as such, might require a body, a collection of sense organs, and perhaps an emotional apparatus, but I think we are seeing convincing evidence of that today.

What makes this terrible and terribly interesting is that we respond to pictures and to words with emotion and attempts to nail them to reality. We imagine the world of the novel, and of the photograph. We respond with joy and anger and sadness. We're attempting to reach through whatever it is to the creator, to the author, to feel perhaps what they felt, to see what they saw, to imagine what they imagined. We do this with GPT3 output as well as Jane Austen output. We do it with DALL-E output as well as Dali output. At least, we try. With the AIs, there is no creator, author, painter, not as we imagine them. There is no emotional creature there, there is no observer of reality, there is no model of reality involved at all. All we get is remixes of previously made text and pictures. Very very convincing remixes, but remixes nevertheless.

A photograph, or something that looks like a photograph, feels to us more closely nailed to reality than a drawing or a painting, we react to it as if that stuff had really, for real, been in front of the lens of a camera. When an AI is in play, the distance between reality and the picture is infinite, at the moment of creation. At the moment of consumption, the apparent gap drops to zero, with consequences we cannot really guess at. People say things like "uncanny valley" and also speculate that the system will improve until the uncanny valley goes away. The last assertion is questionable, in my mind. Some detectable essence of uncanny valley may well be irreducibly present, the trace of a complete lack of a reality, the trace of the machine without emotion, the trace of the engine that remixes convincingly but knows and feels nothing. These systems always seem to tap out right around the uncanny valley, and then the wonks produce a new toy to distract us.

Does it make any difference if the author who writes "the ear of the dog" understands that phrase? Does it matter that they know what an ear is, and what a dog is? Does it matter whether they have stroked the ear of a dog, and felt its warmth? Is it enough that they know that of the glyphs "dog", "ear" and "of" the ordering 2-3-1 is common, and all the other ones are not? We react the same either way.

Does the emptiness of the "author" somehow come through, inevitably, or could we get along with an author who has no heart? We're finding out now, and so far the answer seems to be yes, yes the emptiness does come through, albeit subtly. We shall see.

Monday, December 12, 2022

The Non-Profit Industrial Complex

This is kind of gossipy, feel free to move along.

Anyone who's spent any time dealing with non-profits knows that they're all pretty much a shitshow. They're interested in their thing, whether it's lobbying for bike lanes, feeding the poor, or putting on photography festivals. They're very not interested in, or good at, getting the paperwork done correctly. I don't think I've ever known of a non-profit that's filed its Form 990 on time. So, that's a baseline.

We also know that photography tends to attract a certain overage of mediocrities. Yes, people who want to feed the poor are often dolts as well, but photography as a whole seems to have a lot of people who rather fancy styling themselves as "into photography" without actually being into anything other than trying on an identity. Of course, there is a spectrum from "dolt" to "genius" so you find all kinds of people in the middle.

Finally, note that photography non-profits, of the sort that put on festivals and whatnot, seem to rise and fall. They're the hot thing for a few years, then they're struggling for funding, and then they're gone. Except for Aperture, I guess, but then, they seem to be actual grownups.

My acquaintance (some say my very very special best friend, but they are in error) Dr. Dennis Low, animal photographer, Londoner, occasionally sends me bits and pieces of things from his investigations. He's mildly obsessed with the sheer swampiness of British Photography as a bastion of know-nothing blow-hards, liars, cheats, and idiots. This is saying something, since he came at it from literary criticism and fine art painting, so the fact that he finds it especially venal is, I think, telling.

One of the funnier bits is the Story of PhotoIreland. They have been one of the Hotter items for a bit, but I think their star may be descending?

If I understand/recall the various tidbits properly, it appears that this thing was basically just one dude all along, maybe two people some of the time. Their approach to governance appeared to be offering highish-profile Photography People seats on the board of directors, but never actually making it official. PhotoIreland would get a bit of press about So-and-so joining the board. So-and-so would stick it on their resume and maybe get a bit of juice. Perhaps they even had a meeting now and then! But the paperwork was never filed, and they never actually joined the board.

To be fair what's-his-name seems to have actually put on festivals and whatnot with money he raised, but there's actually a reason for having a board and actual governance!

I dare say it was quite efficient. Decision-making is a breeze when it's just you, after all. Still, it's a bad look when you're selling resume material, but not actually delivering it. I guess when you're getting paid in "exposure" maybe there's not a lot of incentive to deliver anything. "Join our board" "cool, can I put it on my CV?" "of course!" and then it turns out that, so far as anyone can tell, you were never on the board at all, and everyone rather has egg on their face. It's not like you did it on purpose, and it's not like you lied about having a PhD, but it's still not great.

If I understand rightly, this has proven a little embarrassing for a few people, and as a bonus, PhotoIreland appears to be getting its paperwork in order. So, good for them, I guess!

Anyways, I disagree with Dr. Low slightly on what this all means. I suspect that basically all the equivalent organizations are just as fucked up. It follows that at some percentage of these organizations, though by no means all, the dude in charge, the dude who actually is the entire thing, is treating it as a personal piggy bank. There's so little money in play, though, that he's using it to buy an illicit pint a couple times a year and that's about it.

In some ways, I am more interested in the idea that literary criticism is somehow not this much of a mess? I should enquire!

Anyway Dr. Low has provided me with a detailed rundown of some of the shenanigans in his inimitable way, and I am laboriously formatting it for blogger's terrible software, and you'll be able to sort through what the kids call "the receipts" in the near future, perhaps tomorrow!

Saturday, December 3, 2022

So What The Hell, Huh?

This is a set of hastily written notes responding, loosely, to Jonathan Blaustein's recent column which you can read here, and which includes among other things some angst about the lack of Big Wild Photography Projects.

Let's set the stage.

In the 1970s Susan Sontag kind of makes mainstream some fairly incoherent but very quotable notes on photography in which one of the few actual ideas you can discern is that photography might be kinda problematic. It might be kind of... acquisitive in an unseemly way. At roughly the same time Mulvey invents the idea of the male gaze, wherein we're seeing a kind of Marxist approach to criticism.

Through the next few decades, we see Marxist criticism applied more broadly, but especially to media. Photography is more than Art it is also Media, so it falls under that category, and is thus a victim of Marxist analysis. See also Stuart Hall. Marxist analysis is... what?

Roughly we can think of this kind of analysis as reducing everything to power relationships, and explaining everything in terms of those relationships. Call it "Critical Theory", call it "gaze", call it "Politics of Representation", call it "de-colonizing", in broad strokes it's all the same. Everything can be reduced to power relationships, and those explain everything.

Simultaneously we see a collapse of art criticism generally. In the present day the newspaper art critics are actually reporting news and gossip from the art industry. Only rarely do they actually stand in front of a piece of art and tell you something about it, mostly they're interested in the motions of money and people through the local museums.

Into this vacuum we see a bunch of people steeped in the aforementioned Marxist theory. Anything that is actually Criticism of an Art Thing rather than just gossip or news tends strongly to do little more than uncover the power relationships involved in making the thing, especially if the thing is not merely art, but media, which is to say, photography. In the present day we are seeing almost nothing that counts as actual art criticism of photography which is not just an indictment of whoever is perceived as having the power. It's basically all been reduced to working out where "up" is and then "punching up."

Well, not all of course, but uncomfortably large amounts of it are. Enough of it is.

This makes it very risky to do big wide-ranging projects. You'll be seen as "up" merely if you can pull that kind of budget and support together, and you're practically certain to fumble something or other and be seen as an oppressing colonialist dickhead or whatever. The indictment might even be fair and correct! But once the indictment is made, that's all there will ever be. Success of any kind, even successfully completing all but the most minor work, paints a huge target on your back. Who the hell wants any part of that? You're much safer if you're a struggling artist who somehow can't get the big shows.

At the same time, photography has evolved over the last 20 years, evolved as a cultural entity. Jonathan recites, correctly, the truism that now everyone has a camera, and everyone takes pictures. It is, I think, clear to everyone that... something is different. Photos are somehow more ephemeral, more digital, less studied. As media they've become a different kind of a thing. To compare photography today to photography 50 years ago is to compare television to cinema. It's the same but... different. The uses are different, the cultural impacts are different, the way they're made is different. They're the same, except for everything.

This lands in the middle of the shittiest era of critical apparatus ever, the "everything is power" tool is the only one left in the box, and it's fucking terrible. It leads nowhere and tells us nothing. Its only function is to punch anyone who accomplishes anything, more or less for the sin of accomplishing something (the only real utility of the tool is that it has a bunch of widgets you can use to justify your punching, but in the end it's just "punching up is awesome.")

So we have a new photography, a largely useless critical apparatus, and a population of Fine Art photographers who are justifiably afraid to succeed.

I am not actually real surprised that nobody's doing big hairy-ass high risk projects any more!