This isn’t a copyright thing: I don’t care very much who takes my words since the ideas are hardly mine anyway. I’ve taken all the profit I wanted from them, which can’t now be taken from me.
No, I think this is terrible because AI is citing me when it should not be citing me. It thinks I am a trustworthy and reliable source because it knows no better.
~
Some background. A few years ago I had a brief grapple with Google. I dabbled in SEO. I quickly lost interest, seeing it for what it was and for what it was doing to me. But those small efforts paid off anyway: now some of my bits and pieces rank high enough to merit search engine attention. My stats tell me that some AI tools have picked up on this and so now when someone asks certain questions, they get my answers.
This is bad. Whoever’s asking these questions should be more discerning. I say it straight: there is nothing in my writing to indicate that it is reliable.
AI can’t see this because it only operates using measures that don’t matter. It’s quite blind to everything that really matters in philosophical writing (beyond the bare content of it): tone, irony, writing with layered intentions; something shown but not said. It doesn’t see that my writing is a childish rebellion: an attempt to throw away the rules to see if anything remains, like a toddler saying ‘no’ just because they want to test some boundaries.
~
I just read an AI summary of one of my bits. It made me sound like an intolerant fanatic. It took everything I said literally, and about everything I didn’t say it made all the wrong assumptions: it arrived at quite the opposite of my meaning.
Human beings had no difficulty seeing the meaning in this particular piece: I know because they told me, and because they responded as I’d intended to provoke; ‘a reaction in human terms’ (Rothko).
Having largely missed the point, AI concludes with a small-print confession:
‘AI responses may include mistakes.’
But isn’t it frustrating that I can’t correct it? And if I can’t correct a summary of my work, who can?
What kind of an ‘intelligence’ can’t be corrected?
~
I’m reminded of Socrates’ warning (via Plato) to those who would write their ideas, only to have them ‘tumbled about anywhere among those who may or may not understand them, and know not to whom they should reply, to whom not: and, if they are maltreated or abused, they have no parent to protect them; and they cannot protect or defend themselves’ (Phaedrus).
But at least Socrates could talk to people, and philosophers can talk to each other, and in that do their work.
AI cannot rise to the requirements of Socratic conversation; it cannot be called to seriousness; it cannot be held to account. It cannot really talk because it cannot really listen.
Is there a chance for AI to be taught by Socratic irony? Can it be led to doubt itself? Can it be led to make a fool of itself, and suffer the embarrassment of that, hopefully to return with a little more humility?
Absolutely not.
~
AI could not see what I’m really saying: that is the point of what I’m doing. If I’m writing something so flat and shallow that an AI could understand it then I’m doing very badly.
AI cannot understand what I’m doing and that’s why I know it’s wrong to cite me as a source. It chooses in ignorance, having missed the point entirely, and because of that these are not reliable choices.
~
The questioner, whoever they are, should not be asking AI: they should be asking me or someone like me, in conversation or something like it, or else they should be going to a reputable source. Were they to ask me, we would avoid all the potential pitfalls because I would naturally say ‘of course this isn’t clear’ and ‘it’s not all that simple’ and ‘this is ironic’ and ‘this is difficult to take seriously’ and ‘what’s shown here is more important than what’s said’ and the like. And I would tell them how I know what I know, and with that how uncertain I am of it, etc. These are things that happen quite naturally in conversation but never via AI, because AI is completely blind to these types of things.
AI asserts what it knows, and what it knows it knows without reason. It has no real understanding: its only skill is to present a believable illusion of understanding. Anyone familiar with Plato’s depictions of Socrates against the sophists will see this for what it is.
~
I’ve become a part of the illusion of AI’s understanding, whether I like it or not. I do not like the illusion. But I suppose (I don’t think hypocritically) I am glad at least that my writing has wormed its way into the AI databank: rather mine than something worse. Perhaps some questioner will stumble on something of mine that will make them look for something better than AI. If I can point them in the right direction, away from an illusion and towards something real, that’s good enough.
I could make efforts to block AI; I could remove the work entirely. But I won’t, because I think it’s important that some people stand on stages only to say ‘this is all a show: don’t listen to people who stand on stages’.

