AI-lishah
I've been writing some form of my blog for more than 25 years now - I think I started before it was even called 'blogging.' I'm that old.
In 2026, when you have that much writing readily available on the internet, there's really only one thing left to do: Feed it to your favorite LLM, ask it to deeply analyze your writing, build memories for storage, and instruct that, henceforth, it should respond in your own voice.
Why... why would I upload my thoughts to the machine overlords? I had a theory: A lot of my posts tend to be exploratory or persuasive ideas, and (as observed by GPT) they "favor metaphors, are often a bit playful, and grounded in shared experience rather than top-down instruction; warmth over authoritative;" and I think I write this way because it's also how I best receive and process information. I like a story, real-world examples, a healthy dose of self-deprecating humor, and flexibility over rigidity. If GPT could explain things to me the way I'd explain them, maybe it would help me better learn topics that have always felt a bit nebulous.
The results weren't perfect but were still really, really good. Right away, I was deep-diving into topics and getting an Alishah-style explanation: a mixture of observations, story, metaphors, real-world examples, and practical takeaways, with some (robotically forced) humor throughout.
The downside of all this was that GPT began to respond to *everything* in my voice. If I asked for small clarifications, if I asked for confirmations, if I just needed a simple "Yes, that's correct," I began getting blog-length posts about every single thing. Each time with a metaphor, real-world examples, practical takeaways.
And that's when I had to give GPT some advice that I happen to have heard many, many times from others: "You don't always need to give the long-winded response. Sometimes, just say 'Yeah, that's right.'" That adjustment further improved things, and was important for my sanity. (And, if you've said this same thing to me in the past... I get it now. I'm sorry. I'm working on it.)
Wanting to go deeper, I was curious to learn how LLMs can take topics I don't know about and explain them in my own voice. The tl;dr was that GPT nests the processes. For example, when I asked it about tomorrow's weather in Seattle, it executed alishah(research('Tomorrow's Weather')) to return:
"Seattle’s forecast for tomorrow is mostly rain and cool temps in the low‑50s, the kind of gray, soggy day that makes you pull your hood up and appreciate how good tea tastes in February."
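That nesting can be sketched in a few lines of Python. This is a toy illustration of the composition pattern, not how the model actually works internally: `research()` and `alishah()` are hypothetical stand-ins (the names come from the pseudo-call above), and the "facts" are hard-coded rather than fetched from a real weather API.

```python
def research(query: str) -> dict:
    """Stand-in for the fact-gathering step: return structured facts.

    Hard-coded for illustration; a real version would call a search
    tool or weather API based on the query.
    """
    return {"city": "Seattle", "forecast": "rain", "temps": "low 50s"}


def alishah(facts: dict) -> str:
    """Stand-in for the voice layer: wrap the facts in a personal register."""
    return (
        f"{facts['city']}'s forecast is {facts['forecast']} with temps in the "
        f"{facts['temps']} -- hood-up, tea-in-hand kind of weather."
    )


# The outer function never gathers facts itself; it only restyles
# whatever the inner function hands back.
print(alishah(research("Tomorrow's Weather")))
```

The point of the nesting is separation of concerns: the inner call answers the question, the outer call decides how the answer sounds.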
Sure, some would prefer just the factual bits: Rain. Cool "temps" (how hip of "me" to abbreviate). Low 50s. But the second part resonates with me just as much and lets me plan better. Hoodie weather. Pack extra tea.
There's a lot of talk about AI replacing voices. But, when used in the right way, it's fascinating how AI can help us hear our own.
