Lewis Bush wrote a thing about how algorithms might change journalism, and then Jörg Colberg wrote a response to it, and now I'm writing a response to both of them.
Both of these guys need to read more science fiction, for starters. They're both at least part-time in the prognostication business, and the writers of science fiction are a substantial body of people who do the same, except full-time.
Both of them also need to pay a little more attention. Their use of the future tense is a delight to behold.
Jörg makes at least one weird statement in his, to wit, that algorithms cannot manage "the unthinkable," offering up 9/11 as his example. In the first place it was totally "thinkable," and in the second place, what kind of news algorithm would not be able to handle #planecrash as well as #buildingfire as news items? A large body of the "unthinkable" is simply combinations of the "thinkable," and if there's one thing algorithms can do, it's combine stuff.
All this, though, misses the point.
I wrote about Eliza, a computer program, several years ago. I will summarize it here, though:
Eliza was a computer program that could carry on a credible conversation without remembering a single thing about the conversation. It simply responded to the most recent thing you typed in with something assembled out of your words to form a leading question. Eliza was a crude but elegant algorithm that relied on the human in the loop (you) to produce its results.
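To make the trick concrete, here is a toy sketch of the pattern-and-reassemble mechanism, my own minimal version, not Weizenbaum's actual rules. Each rule matches the user's last line against a pattern, flips the pronouns, and hands the words back as a leading question. The rules and pronoun table are illustrative, invented for this example:

```python
import re
import random

# Crude pronoun flipping so the question reads back naturally.
PRONOUN_SWAPS = {"i": "you", "me": "you", "my": "your",
                 "am": "are", "you": "I", "your": "my"}

def reflect(fragment):
    return " ".join(PRONOUN_SWAPS.get(w, w) for w in fragment.lower().split())

# Each rule: a pattern over the most recent input, plus response
# templates; {0} is filled with the (reflected) captured words.
RULES = [
    (re.compile(r"i feel (.*)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["Why are you {0}?", "How does being {0} make you feel?"]),
    (re.compile(r"(.*)", re.I),  # catch-all: remember nothing, deflect
     ["Tell me more about that.", "Why do you say that?"]),
]

def respond(line):
    for pattern, templates in RULES:
        match = pattern.match(line.strip())
        if match:
            return random.choice(templates).format(reflect(match.group(1)))

print(respond("I feel trapped by my job"))
```

Note that `respond` keeps no state at all between calls. The human supplies all the continuity; the program just reflects it back.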
My earlier remarks on Eliza, now that I review them, still strike me as a wonderfully clever analysis of how meaning appears in visual art.
In exactly the same way as Eliza, Flickr and Instagram are algorithmically selecting photographs people like, by using people to do the work. A relatively crude algorithm is built that allows humans (a free and numerous resource) to click Like or +1 or whatever. This provides, in some sense, a measure of goodness. The more a photograph is liked, the more it is shown around. I assume they have some damping mechanisms to prevent things from going completely off the rails, preventing odd cases like a Flickr that appears to have only one extremely well-liked photo.
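A guess at what such a damped ranking might look like, to be clear, this is my speculation about the mechanism, not anything Flickr or Instagram has published. Logarithmic damping keeps one runaway hit from monopolizing the feed, and time decay lets new photos surface:

```python
import math

def score(likes, age_hours, half_life=24.0):
    damped = math.log1p(likes)              # 1000 likes is not 1000x the pull
    decay = 0.5 ** (age_hours / half_life)  # older photos fade away
    return damped * decay

# Hypothetical photos: (name, likes, hours since posting)
photos = [("sunset", 5000, 48), ("cat", 120, 2), ("portrait", 40, 1)]
ranked = sorted(photos, key=lambda p: score(p[1], p[2]), reverse=True)
print([name for name, _, _ in ranked])
```

With these numbers the two-day-old sunset, despite its 5000 likes, falls behind the fresh material, which is exactly the sort of damping that keeps the feed from freezing around one photo.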
The point is that neural networks and AI and stuff are super sexy, but what actually works is looping in people. Artificial Intelligence, while trendy, isn't as good as the real stuff. It's not even as good as the worst of the real stuff. In this way, it resembles sugar substitutes.
Imagine, if you will, a future in which there's live video feeds from all over the place. Every street corner. While online, we are shown random snippets of footage constantly. When "news" happens, people who happen to be watching that clip will abruptly engage, they will hit the "replay" button. The plane smashes into the skyscraper over and over, and in moments the system begins to show that clip to other people. It trends. In 60 seconds, the world knows about it, and the comments begin to flow in.
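The feedback loop described above can be simulated in a few lines. This is a toy model of the imagined system, with made-up replay probabilities: each replay earns a clip a bigger audience next round, so anything gripping snowballs while everything else idles:

```python
import random

random.seed(0)  # fixed seed so the simulation is repeatable

# Replay probability per viewer, per clip (invented numbers).
clips = {"street_corner": 0.01, "plane_hits_tower": 0.90}
exposure = {name: 100 for name in clips}  # viewers per round

for _ in range(5):
    for name, replay_prob in clips.items():
        replays = sum(random.random() < replay_prob
                      for _ in range(exposure[name]))
        # This round's replays buy next round's audience.
        exposure[name] += replays * 10

print(exposure)
```

After five rounds the "plane" clip has been shown to orders of magnitude more people than the quiet street corner. No editor, no analysis, just a replay button and a multiplier.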
While there is no editorial oversight, and no coherent analysis, there sure as hell are internet comments which serve, in some sense, the same role.
This is trivial to implement, and it will (manifestly) work better than neural networks.
More to the point, though, this isn't the future at all. It's right now. This is how news works right now. 2/3 of Americans get some of their news from social media, and this is precisely how social media works, with one small caveat.
The caveat is that it's not actually randomly selected snippets of street cameras. It's "user generated content," which is slightly more curated. Someone already thought it was interesting. Or someone has a point of view they wish to flog. We'd be better off if the underlying material were random security camera footage.
Which leads us around neatly to one more thing. On flickr and instagram it's well understood that you can game the system. There are visual tropes you can simply roll out to get Likes. Get good at it, and you can be, if not a star, at any rate 10x more popular than you are now. In the same way, we find people on social media creating "user generated content" that looks kind of like news.
There are tropes you can hit that produce pretty much guaranteed engagement (immigrants, blah blah blah).
These bits work exactly the same way that the photo of the pretty girl in the swimsuit on this site, and the oversaturated landscape on the other site, work. They hit certain cognitive buttons so that the humans the network is using to compute with will Like the content. The content then "trends" and becomes part of the news landscape.
Same algorithms, using the same free and infinite human labor, same ways to hack it, same results. A sort of kitschy, fake, treacly substitute for Photography in one case, News in the other. The same easily manipulated results.
Lewis remarks on the possibility of bias in algorithmic reporting. Not only does he miss the fact that these algorithms tend to exhibit biases that, while very real, in no way resemble human biases, he also misses the entire social media element. Algorithms that loop people in tend to amplify and confirm biases already present. I am convinced that this effect absolutely swamps any sort of "digital" bias. He almost touches the truth when he talks about Microsoft's chatbot being trained as a Nazi, but fails to recall that this is how everything online works.
It is precisely this amplification of bias that produces the god-awful kitsch that dominates flickr, 500px, instagram. It's populism, kind of distilled, fed back to the populi, and redistilled.
Lewis seems touchingly unaware of it, but it's already over. Yes, there's still a thing called journalism, kind of. But there ain't no percentage in it, and it's not how people are actually learning about the world.
It's just a sort of buggy whip for intellectual snobs like Lewis, Jörg, and me. And probably for you too.