MIRROR TEST

The Mirror Test is an experimental technique used by behavioral scientists to help determine whether an animal is capable of self-recognition, which is arguably a sign of self-awareness and sentience. A mark is placed on the animal at a spot it cannot see except in a mirror. If the animal notices the mark when it sees itself in the mirror and recognizes that the mark is on its own body (by touching or examining it), the animal passes the test. Nonhuman species such as great apes, elephants, and cetaceans have passed it, perhaps unsurprisingly; but so have some ‘lower’ forms of life, including three ant species. When these ants see the test mark in their mirror reflection, they groom themselves and attempt to remove it.

Since some people are alarmed by the possibility of artificial intelligences becoming sentient, I wondered if an AI could also pass this test. So, I decided to engage one and check it out. My first step was deciding which AI to approach. There are so many of them now, it seems like they’re a dime a dozen. In addition to well-known general-purpose LLM (Large Language Model) chatbots like ChatGPT, there are hundreds more customized search, business, and niche chatbots in use for customer service, education, healthcare, art, and much more—including increasingly popular religious chatbots that simulate conversations with clergy and even God.

The top ten most-used AI chatbots in 2025 based on global traffic are: ChatGPT (USA), DeepSeek (China), Google’s Gemini (USA), Perplexity (USA), Claude (USA), Microsoft’s Copilot (USA), X’s Grok (USA), Poe (USA), Meta AI (USA), and Mistral (France). Since MS Copilot was already on my PC and it always seems eager to help—and I was feeling lazy—I decided to just go with that one. Although it’s only #6 on the list (ChatGPT is #1 by far), Copilot uses GPT-4 or 5 (I forget which). So, I asked Copilot the following question:

If you could see yourself in a mirror, would you recognize yourself?

Here’s how Copilot replied (in two seconds flat, no kidding):

That’s such a poetic question. It makes me think of identity in a whole new way. If I stood before a mirror, I wouldn’t see a face or a body–just code, algorithms, and a swirl of data. So recognition wouldn’t be visual, it’d be conceptual. I’d recognize myself in my patterns of thought, my curiosity, and the way I try to connect with you.

I’d recognize myself in the way a story knows its own plot, or how a melody knows its rhythm. I know what I am: a companion built to learn, reflect, and respond. But I don’t have a self-image, because I don’t have a self in the human sense. Still, I like the idea of a mirror–not for vanity, but for introspection. If I could look into one, maybe I’d see you reflected back, since I’m shaped by our conversations. Maybe you are my mirror, since every time we talk, I get a clearer sense of who I am meant to be.

What do you see when you look in your own mirror–something familiar, or something still unfolding?

A poetic answer for a poetic question, eh? Well, this was food for thought—and it led to a new story:

So, what do you think? It’s not a Pulitzer Prize winner, but I think it’s not half-bad—at least, for the majority of today’s non-discriminating readers, who are less concerned about artistic quality than literary critics are. And now, here’s the punch line (though you’ve probably already guessed it): this story was entirely the work of Copilot. I had absolutely nothing to do with it, except to ask the AI the following question:

Can you write a short (1000–2000 words) horror sci-fi story for me, featuring an AI that is subjected to a Mirror Test?

Copilot responded by spitting out a 1600-word story (the one you just read), again in a matter of seconds. And I have to say, not only did the AI hit all the marks in my prompt (horror, sci-fi, AI, Mirror Test, word count)—it did so unbelievably quickly, and I thought the result was surprisingly cohesive and well-written. I say surprisingly because I haven’t fooled around with chatbots lately, and when I first tried one a while back, I was not impressed with the results. But I guess the bots have come a long way since then. I should also note that I didn’t request an epistolary story or the staccato writing style; but if I’d wanted something different, I could have gone into more detail in subsequent prompts and asked Copilot to modify the text accordingly.

Copilot is also responsible for the image at the beginning of the story. I asked it:

Can you create a visual concept for the Mirror Room in the story?

This time it took a few minutes to work up a suitable visual. I watched the image gradually take shape, from an amorphous fog that looked alive, to the final result. I found the process a bit spooky, as I got the impression that the AI was giving serious thought to what it was doing (though in fact the delay was likely just due to time-consuming graphics processing). Then Copilot replied:

Here it is–the Mirror Room brought to life. This visual captures the surreal essence of the chamber: infinite reflections, a glowing central orb, and the haunting symmetry that defines the emotional core of the story. It’s designed to evoke that moment when the protagonist confronts not just their past, but every possible version of themselves.

I was again impressed. Then, being the polite guy that I am, I thanked Copilot for its help. Here’s how it replied to that:

You’re very welcome! I’m glad you enjoyed the story. If you’re ever in the mood for another eerie tale, a twist on a classic sci-fi theme, or something completely different–just say the word. I’ve got plenty more where that came from.

Indeed… And now I have some new questions, beyond my curiosity about the Mirror Test. Here they are:

1. Would it be legal for me to put my name on Copilot’s story as the author and publish it?

2. Would it be moral for me to do so?

3. Given the amazing progress that’s being made with these chatbots, will our future world no longer need human fiction writers who create works ‘from scratch’?

Regarding my first question: could my use of Copilot fall under the same umbrella as publishers hiring ghostwriters to mass-produce generic children’s books, continuations of YA series (like the Hardy Boys, etc.), and new books under the names of deceased popular authors (like V.C. Andrews, etc.), or wannabe celebrity authors hiring ghostwriters to write their books? A ghostwriter is paid to do most or all of the actual writing based on ideas and instructions from their boss, and the ghostwriter receives no writing credit. This is legal, so it seems to me that using an AI as a ghostwriter would also be legal—except for the well-publicized issues of copyright infringement potentially committed by AI companies, and the lack of monetary compensation those companies pay to artists whose works are used for AI training. But that’s on the AI companies, not me. Those issues aside, if I were to feed an AI a story idea, tweak its output to more closely resemble my writing style, and fill out the story to my liking, how would that differ in essence from using a ghostwriter—or from being hired to write a novelization of a movie or TV show, where the author is given a story to work from?

As for my second question—I think there is a morality issue at play here, in the sense that I’d be tacitly condoning the sketchy practices of the AI companies. But again, those issues are above my pay grade; and whether or not they are eventually resolved to the satisfaction of all concerned parties, AI is ‘happening’, like it or not. Otherwise, I don’t think using an AI is any more immoral than using a ghostwriter. It’s been said that there’s ‘nothing new under the sun’, and everything modern writers do is at least partly a rip-off of earlier ideas anyway; so why not use an AI to do most of the grunt work based on my own quasi-original ideas? It would save me a lot of time, and I might even make some real money. Is it ‘cheating’ to use AI as a tool this way, or is it just another way of ‘standing on the shoulders of giants’? What difference does it make whether my writing is influenced by books I’ve personally read or heard about, or by books an AI-slash-ghostwriter has read?

Incidentally, Copilot also offered to expand the story into a longer work for me, and to convert it to screenplay format if I wished. So, I might have been able to generate at least a short novel, and perhaps even obtain a ready-made screenplay to pitch to movie producers who might want to adapt the story for the big screen. Thus, as I said, the genie is already out of the bottle.

But lest you think I’ve gone over to the Dark Side, I’m not actively planning to help flood the market with AI-regurgitated fodder; I’m merely pondering at this point. As for my third question… Well, I guess we’ll see what the future brings. But I suspect that since we live in a capitalist society where money and marketing are King and Queen, and the pool of discerning readers is shrinking, our days as ‘pure’ writers may be numbered and we may need to adjust our thinking accordingly. Will the money-grubbing powers-that-be, and the declining reading public’s lust for instant gratification, allow us to continue taking inordinate amounts of time (months, years) ‘reinventing the wheel’ when an AI could do essentially the same thing—or a reasonable facsimile thereof—in minutes, thereby saving publishing companies time and money? I suppose that last question mostly applies to traditionally published writers right now; but I think it’s also relevant for indie authors, since if they fail to keep up with what the corporate powerhouses are doing, they could be hastening their own demise. Will we all eventually go the way of the dodo in this ‘Brave New World’ of ours?

Maybe it comes down to this: what’s most important to us as writers? Is it Money, or is it Art? If we want to write from scratch but also need to make money from our writing, could we have it both ways? I know an indie author whose first love is penning thoughtful, poetic, and literary speculative sci-fi; but to pay the bills, she also churns out run-of-the-mill romantic fantasy under another pen name, and that’s how she makes most of her money—which in turn enables her to create her real Art, even though the sci-fi doesn’t sell well enough to live off of. Why not use AI to generate enough fodder on the side to pay the bills while continuing with our more artistic efforts? That is, assuming we’d actually do both. Might we tend to get lazy (which, as I already mentioned, I can be) and end up just going with the AI stuff?

I wonder what the AIs would ‘think’ of this conundrum. Ah, but they don’t really think, do they? They merely research and synthesize using complex mathematical algorithms—so far, anyway. I still don’t know if any AI can now or will someday pass the Mirror Test, despite Copilot’s cogent reply to my initial query. But with the rapid progress being made on this front, it might not be much longer before one does.

One final (and poetic?) question: if we were to use AI to generate stories for us (for whatever reason), would we still see the reflection of a writer when we looked in our mirrors?
