A 2026 follow-up to "AI in Technical Writing: What Does the Future Hold?", with notes on hybrid workflows and agentic pipelines.
Writeinteractive AI Stack
Human-led, AI-accelerated
A practical publishing and documentation system: strategy, models, agents, QA, analytics, and recovery paths.
Wow, what a year.
A year ago, I wrote that AI was going to change technical writing.
That part was right.
What I underestimated was the speed.
In 2025, I was still talking about AI agents as something just over the horizon. By the end of the year, they were already part of my daily workflow. By early 2026, I had agents helping with inbox triage, blog production, YouTube optimization, site publishing, and documentation cleanup.
Some of it worked beautifully. Some of it was expensive, brittle, confusing, and occasionally ridiculous.
That is the version I want to write about here. Not the keynote version. Not the polished vendor-demo version. The working version.
This is a field report from inside an AI-assisted technical writing practice in May 2026. I am using these tools every day. I am paying for them. I am breaking them. I am rebuilding parts of my business around them. And I am still very much the human in the loop.
The big lesson so far is simple: AI is no longer just helping technical writers draft faster. It is starting to change the shape of the work itself.
That does not mean technical writers disappear.
It means the job is shifting. The best writers are moving closer to systems design, workflow architecture, editorial judgment, source-of-truth management, and quality control. The work is still human-led. It is just moving through a much faster machine.
This article is my attempt to explain where things stand now: what I am using, what it costs, what is working, what is not, and what this shift means for students, junior writers, mid-career writers, and senior technical communicators trying to make sense of the next few years.
What I'm Running Now
Here is what is actually on my desk this spring.
Godmail. This is a multi-inbox triage agent I built for my own Gmail chaos. It pulls several accounts into one destination and tags messages by intent. It already found a major audio brand inquiry that had been sitting in a secondary inbox for six weeks. It also surfaced several sponsorship offers buried in Promotions. That one paid for itself quickly.
Writeinteractive.com, rebuilt from scratch. I used Claude Code to rethink and rebuild this site as two connected surfaces: an editorial blog and a separate agency site. Both now run on lightweight hand-coded HTML and CSS. WordPress is gone, which was not a small decision. It may be the most consequential change I have made to this site in years. The site is faster, cleaner, and no longer tied to a CMS database. The editorial pipeline now runs from markdown source files, which is a much better fit for AI-assisted publishing.
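The actual build here is driven by Claude Code, but the underlying idea of a markdown-source pipeline is simple enough to sketch. The converter below is a toy, not my real build: it handles only headings and paragraphs, and every name in it is illustrative. Real pipelines use a full Markdown library; this only shows the shape of "markdown files in, HTML out."

```python
import re

def render_post(markdown_text: str) -> str:
    """Convert a tiny subset of Markdown (headings, paragraphs) to HTML.
    Real pipelines use a full Markdown library; this only shows the shape."""
    html_parts = []
    for block in markdown_text.strip().split("\n\n"):
        heading = re.match(r"(#{1,6})\s+(.*)", block)
        if heading:
            level = len(heading.group(1))
            html_parts.append(f"<h{level}>{heading.group(2)}</h{level}>")
        else:
            # Join soft-wrapped lines back into one paragraph.
            html_parts.append(f"<p>{' '.join(block.splitlines())}</p>")
    return "\n".join(html_parts)

print(render_post("# Hello\n\nFirst paragraph.\n\nSecond\nparagraph."))
```

The point of owning a pipeline like this, rather than a CMS, is that every step is inspectable plain text: an AI assistant can read, rewrite, and verify source files without touching a database.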
YouTube optimization. My TecTimmy workflow is now AI-assisted across titles, descriptions, thumbnails, timestamps, and metadata. I use AI to test angles, rewrite descriptions against current search intent, and tighten packaging. The lift is measurable. Some weeks, it is dramatic.
Claude Code as a content pipeline. This part has been messy, but it is working. My blog workflow is now roughly 70 percent automated. Claude Code can pull a live post, build the handoff, draft section blocks, run an audit, push the result, and verify the output. The old Saturday-morning writing cycle that used to consume most of a day now takes a fraction of that time. Not zero time. But a fraction.
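The stages above can be sketched as a small state machine. This is not my actual pipeline code; it is a minimal illustration of the one design decision that matters most: the audit stage always flips a flag that pauses the run for human review before anything publishes. All function and field names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class PipelineRun:
    """Carries state between stages of a blog-publishing pipeline."""
    post_id: str
    log: list = field(default_factory=list)
    needs_review: bool = False

def pull_source(run):    run.log.append("pulled live post")
def draft_sections(run): run.log.append("drafted section blocks")
def run_audit(run):
    run.log.append("audit complete")
    run.needs_review = True  # always stop for the human before publishing

STAGES = [pull_source, draft_sections, run_audit]

def execute(run: PipelineRun) -> PipelineRun:
    for stage in STAGES:
        stage(run)
        if run.needs_review:
            run.log.append("paused for human review")
            break
    return run

result = execute(PipelineRun(post_id="ai-2026"))
print(result.log)
```

The "70 percent automated" number lives in the stages; the remaining 30 percent lives in what happens after that pause.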
OpenAI Codex alongside Claude. I do not use one model for everything. Codex is better for heavier code-adjacent work, fast structural changes, and some build tasks. Claude is stronger for long-context editorial work, judgment-heavy revision, and orchestration. The useful pattern is not model loyalty. It is routing the right job to the right system.
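Routing the right job to the right system can itself be made explicit instead of ad hoc. The rules below are a deliberately crude sketch of that idea, with made-up model names standing in for whatever you actually run; the value is in writing the routing decision down at all.

```python
def route(task_type: str, context_size: int) -> str:
    """Pick a model family for a task. The categories and the
    threshold are illustrative, not real vendor limits."""
    if task_type in {"refactor", "build", "structural-change"}:
        return "code-model"        # heavier code-adjacent work
    if context_size > 50_000 or task_type in {"revision", "orchestration"}:
        return "long-context-model"  # judgment-heavy editorial work
    return "general-model"

print(route("revision", context_size=80_000))
```

Once the routing table exists as code or even as a checklist, it can be reviewed, argued about, and improved, which is harder to do with habits.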
Claude as the always-open assistant. I keep Claude open on my phone the way I used to keep email open. It briefs me before a meeting, helps triage a thread, and sanity-checks language when I am away from my desk. It is not glamorous. It is just useful, which matters more.
Wispr Flow. Voice input has become a serious part of my workflow. I can dictate faster than I can type, and Wispr Flow lets me do that in almost any window. For high-volume drafting, that changes the math.
OpenClaw and Hermes. I am also running and testing agentic orchestration tools, including OpenClaw and Hermes. These are not beginner tools yet. They still require setup, recovery habits, and a willingness to break things. That said, this is where the work is heading. The future is not one chatbot answering one prompt. It is coordinated systems doing pieces of the work, then handing the result back to a human at the right checkpoint.
Local models on a high-end gaming PC. This is the least successful part of the stack so far. In theory, my gaming PC should be able to run serious local models. In practice, driver conflicts, quantization formats, VRAM limits, and CUDA updates have all gotten in the way. I will get there. The guide I write when it happens will be the one for real hardware, not clean installs.
The pattern is clear. AI is doing more of the work, but the work is not clean yet. The systems need standards. They need review. They need recovery paths. They need a human who knows when the output is good and when it is only confident.
That is where technical writers still matter.
There is also a personal angle here. I am testing the senior job market again after 25 years, with a small number of frontier-AI companies at the top of my list. The agentic-documentation work I am doing here is not a side experiment. It is the kind of work I want to do at scale.
If that lands, this site keeps running. If not, this site remains the lab.
What It Costs
This is the part the keynote skips.
You can learn a lot with a $20-a-month AI subscription. For students, new writers, and curious professionals, that is still the right place to start. You can draft, revise, summarize, experiment, and get a real feel for what these tools can do.
But that is not the same thing as running a production AI-assisted documentation practice.
I learned that the expensive way. When I first started pushing metered API workflows, I burned through almost $300 in a few days, then another $90 in a second round of testing. Both bills were avoidable. I just did not know what I was doing yet.
The lesson was simple: AI gets expensive when you stop using it like a chat window and start using it like infrastructure.
| Monthly cost | Example setup | What it gets you | Where it breaks |
|---|---|---|---|
| $20 | ChatGPT Plus or Claude Pro | Good learning tier. Fine for experiments, drafting, and basic AI fluency. | Not enough for serious production work, heavy coding, deep research, or long agent sessions. |
| $40 | ChatGPT Plus and Claude Pro | A strong starter stack for comparing models, drafting, editing, and light workflow testing. | Still easy to hit limits when using AI all day or running multiple projects at once. |
| $100 to $200 | Claude Max, ChatGPT Pro, or one heavy-use subscription plus a lighter backup tool | A workable solo setup for one main tool and a few repeatable workflows. | You still need discipline. Long-context work, image/video generation, coding agents, and API tests can push past the flat-rate comfort zone. |
| $250 to $400 | ChatGPT Pro plus Claude Pro or Claude Max, with a few paid utilities such as Wispr Flow, SEO tools, or automation helpers | A serious solo stack for publishing, coding, documentation cleanup, and client work. | The stack starts to behave like infrastructure. You need rules for which tool does what. |
| $400 to $500+ | Multiple premium AI subscriptions, agentic coding tools, search/SEO tools, and occasional API usage | Enough capacity to work for long stretches without constantly hitting limits. | This only makes sense if it is tied to revenue, publishing velocity, lead generation, or operational leverage. |
| Metered API usage | OpenAI API, Anthropic API, hosted agents, batch jobs, embeddings, or custom automations | Useful for specific high-volume jobs and systems that need to run outside a chat interface. | Dangerous as the default unless you watch the bill carefully. A bad loop can turn a test into an invoice. |
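The "bad loop becomes an invoice" failure mode in the last row is exactly the kind of thing a few lines of guard code prevent. This is a sketch of the idea, not any vendor's billing API: the class name, the rates, and the limit are all made up, and real per-token prices vary by model and provider.

```python
class BudgetGuard:
    """Stops metered API work before an experiment becomes an invoice.
    Prices and limits here are illustrative, not real vendor rates."""
    def __init__(self, monthly_limit_usd: float):
        self.limit = monthly_limit_usd
        self.spent = 0.0

    def charge(self, tokens: int, usd_per_1k_tokens: float) -> float:
        """Record a cost, refusing any call that would cross the limit."""
        cost = tokens / 1000 * usd_per_1k_tokens
        if self.spent + cost > self.limit:
            raise RuntimeError(
                f"Would spend ${self.spent + cost:.2f}, limit is ${self.limit:.2f}"
            )
        self.spent += cost
        return cost

guard = BudgetGuard(monthly_limit_usd=50.0)
guard.charge(tokens=200_000, usd_per_1k_tokens=0.01)  # $2.00, well under the cap
print(f"${guard.spent:.2f} spent so far")
```

If every automated loop has to go through something like `charge()`, a runaway script fails loudly at a number you chose, instead of quietly at the end of the billing cycle.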
Today, my own stack is expensive on paper. I use paid tiers across OpenAI and Anthropic because I run these tools heavily, often for 15 to 20 hours a day. That is not normal usage, and it is not casual experimentation. At that level, the cost starts to look less like a software subscription and more like business infrastructure.
The question is not "Is this cheap?" It is "Does this create enough leverage to justify the bill?" For me, the answer is yes, but only because the tools are tied to real output: faster publishing, better workflows, recovered opportunities, cleaner systems, and less manual drag.
Used casually, the stack is expensive. Used well, it compounds.
What Is Not on Autopilot Yet
The pipelines work. They are not autonomous. That distinction matters.
Right now, I am still the orchestrator, reviewer, trigger, editor, QA layer, and recovery plan. When something breaks, I am the one who figures out where it broke and why. When the model drifts, I pull it back. When the output sounds plausible but wrong, I catch it.
That is the job now.
The next step is getting more of these workflows onto schedules and triggers. I want the system to know when to run, what to check, when to stop, and when to bring me in. That is harder than it sounds.
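"Know when to run, what to check, when to stop, and when to bring me in" reduces to a decision function that a scheduler calls on every tick. The version below is a sketch with invented thresholds; the real work is deciding what the inputs and thresholds should be for your pipeline, not writing the branches.

```python
def next_action(changed_files: int, checks_passed: bool, confidence: float) -> str:
    """Decide what a scheduled documentation workflow should do next.
    Thresholds are illustrative; tune them per pipeline."""
    if changed_files == 0:
        return "skip"      # nothing to do, run again next tick
    if not checks_passed:
        return "escalate"  # broken state: bring the human in
    if confidence < 0.8:
        return "escalate"  # plausible but uncertain: human review
    return "run"           # safe, well-understood update: proceed

print(next_action(changed_files=3, checks_passed=True, confidence=0.9))
```

Note that two of the four outcomes end with a human. That ratio is roughly where my workflows actually sit right now.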
The promise of agentic AI is that the writer becomes the architect. The current reality is that the writer is also the night-shift operator. For now, both roles matter.
And honestly, that is where a lot of the value is. Anyone can talk about AI workflows. Fewer people have burned the tokens, broken the scripts, rebuilt the pipeline, and learned where the handoffs actually fail.
That is the useful experience. Writeinteractive has lived inside that work. That is what I can now bring to a client team: not an AI demo, but a working understanding of how to build, test, recover, and improve an AI-assisted documentation workflow without pretending the machine is smarter than it is.
Human-led, AI-accelerated. Still the right phrase.
| The role I still play | What it means in practice |
|---|---|
| Orchestrator | I decide which tool runs, what context it gets, and what outcome is acceptable. |
| Reviewer | I check whether the output is accurate, useful, and appropriate for the audience. |
| Trigger | I still decide when many workflows should start. The schedule is not fully automatic yet. |
| Editor | I remove drift, tighten language, and keep the work in my voice. |
| QA layer | I test links, formatting, claims, structure, metadata, and publishing behavior. |
| Recovery plan | When the workflow breaks, I diagnose the failure and rebuild the handoff. |
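Parts of the QA-layer row are mechanical enough to automate today. The checker below is a minimal sketch of two of those checks, missing front-matter keys and dead internal links, using naive regexes and invented metadata requirements; a real QA layer also checks claims, structure, and live publishing behavior, which still needs a human.

```python
import re

REQUIRED_META = {"title", "description", "date"}  # illustrative requirements

def qa_report(markdown_text: str, known_pages: set) -> list:
    """Return a list of QA problems: missing metadata keys and
    internal links that point at pages that do not exist."""
    problems = []
    meta_keys = set(re.findall(r"^(\w+):", markdown_text, flags=re.MULTILINE))
    for key in sorted(REQUIRED_META - meta_keys):
        problems.append(f"missing metadata: {key}")
    for target in re.findall(r"\]\(([^)]+)\)", markdown_text):
        if not target.startswith("http") and target not in known_pages:
            problems.append(f"dead internal link: {target}")
    return problems

doc = "title: AI Stack\nSee [the old post](ai-2025.html)."
print(qa_report(doc, known_pages={"index.html"}))
```

The useful pattern is that the machine produces the problem list and the human decides what the list means, not the other way around.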
The Real Shift: From Writing to Systems
Writing to systems
The work moves up the stack
Technical writing is expanding from page production into workflow design, review gates, source control, publishing, analytics, and recovery.
The biggest change in 2026 is not that AI writes better drafts. It does. But that is no longer the interesting part.
The bigger shift is that AI is starting to operate around the document. It can help gather source material, compare versions, summarize tickets, draft updates, check terminology, flag inconsistencies, and prepare work for review. That moves AI from the writing layer into the workflow layer.
That matters because technical writing has never been only writing. It has always been source gathering, SME interviews, structure, review management, version control, publishing, and maintenance. AI is now touching more of that system.
This is where "agentic" becomes useful, as long as we do not treat it like magic. A workflow follows a predefined path. An agent has more room to decide what to do next, use tools, inspect results, and keep moving toward a goal. Most useful systems in 2026 sit somewhere between those two ideas.
For documentation, I do not want an agent silently rewriting regulated content and publishing it on its own. I do want a system that can monitor a change log, identify likely documentation impacts, draft proposed updates, check them against a style guide, and hand me a clean review package.
That is the practical version. Less science fiction. More useful.
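That change-log-to-review-package pattern is worth pinning down concretely. The sketch below uses naive keyword matching against a hand-maintained index; every name in it is hypothetical, and a production system would use richer signals than substring matches. The shape is what matters: changes in, proposed updates out, and the output is explicitly a package awaiting review, never a publish action.

```python
def build_review_package(changelog_lines: list, doc_index: dict) -> dict:
    """Map change-log entries to likely documentation impacts and
    bundle them into a human review package. Matching is a naive
    keyword lookup; a real system would use richer signals."""
    impacts = []
    for entry in changelog_lines:
        for keyword, doc_path in doc_index.items():
            if keyword.lower() in entry.lower():
                impacts.append({
                    "change": entry,
                    "doc": doc_path,
                    "proposed": f"Update {doc_path} for: {entry}",
                })
    return {"impacts": impacts, "status": "awaiting human review"}

package = build_review_package(
    ["Renamed the export endpoint", "Fixed typo in UI"],
    doc_index={"export": "docs/api/export.md"},
)
print(package["status"], "-", len(package["impacts"]), "impact(s)")
```

Everything downstream of that `status` field, deciding whether the proposed updates are right, belongs to the writer.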
| Old writing layer | New workflow layer |
|---|---|
| Draft a document from notes | Pull source material, summarize changes, and prepare a review-ready draft |
| Edit one page at a time | Check structure, terminology, links, metadata, and consistency across a content system |
| Wait for SMEs to respond | Package focused questions and identify missing source-of-truth details |
| Publish manually | Generate files, run checks, deploy, and verify the result |
| Measure later, if at all | Use analytics, counters, and search data to decide what to improve next |
What This Means for Technical Writers
AI affects every technical writer differently. It depends on where you are in your career, what kind of work you do, and how much of your value comes from drafting versus judgment.
The risk concentrates where the work is repetitive, templated, or easy to check after the fact. The opportunity sits where judgment, structure, accountability, and domain knowledge matter.
| Career stage | Most exposed work | Best move now |
|---|---|---|
| Students | Generic writing samples, basic drafting, formatting | Build technical fluency and domain knowledge |
| Junior writers | First drafts, KB cleanup, SOP revisions, routine updates | Move toward SME work, structure, and review |
| Mid-career writers | Speed-and-reliability production work | Own workflows, standards, and QA gates |
| Senior writers | Less exposed directly, but the role is shifting | Design documentation systems and AI review models |
Students and Pre-Career Writers
Students are entering a field that looks very different from the one that existed even a few years ago.
The old entry-level path relied heavily on first drafts, cleanup, formatting, and basic documentation updates. That work still exists, but AI can now do a large part of it quickly. A student who shows up with only polished writing samples is going to have a harder time standing out, because polished writing samples are now easy to produce.
That does not mean the field is closed. It means the entry point is changing.
Students need to show how they think. They need to understand structure, not just sentences. They need enough technical literacy to follow the product. They need to know what AI can do, where it fails, and how to review its output.
A few skills matter more now than they did a few years ago:
- Structured content, including Markdown, DITA, OpenAPI, or AsciiDoc
- AI review, not just AI prompting
- Domain knowledge in areas such as medical devices, biotech, fintech, cybersecurity, or developer documentation
- Basic comfort with source files, version control, tickets, and product workflows
- The ability to explain why a document should be structured a certain way
The degree still matters. The portfolio still matters. But a portfolio full of generic AI-polished samples will not be enough.
The better question is simple: can you show clear thinking in a real subject area?
Junior Writers
Junior writers are the most exposed group.
That is not meant to scare anyone. It is just the honest reading of the work. The tasks that used to define the early years of technical writing are exactly the tasks AI is getting better at: first drafts, formatting, summaries, KB cleanup, release-note drafts, and routine SOP revisions.
The answer is not panic. The answer is to grow into harder work faster.
Use AI to clear the routine work. Then use the saved time to learn the work above you. Watch how senior writers interview SMEs. Pay attention to how they decide what belongs in a procedure and what does not. Learn how they push back on vague source material without becoming difficult to work with.
The junior writer who survives is not the one who pretends AI is irrelevant. It is the one who ships clean work, learns faster, and becomes useful beyond basic drafting.
That means building skill in areas AI still handles poorly:
- Asking better questions of SMEs
- Understanding the audience and what the user needs to do
- Knowing when a warning, note, or prerequisite is required
- Spotting when an answer sounds fluent but is not grounded
- Connecting one document to the larger documentation set
- Escalating uncertainty instead of smoothing it over
The goal is not to beat AI at cleanup. The goal is to become the person who knows what the cleanup is supposed to accomplish.
Mid-Career Writers
Mid-career writers may have the hardest adjustment.
For years, the value of a strong mid-career technical writer has been speed plus reliability. Give them messy input and they produce clean documentation. That still matters. But AI is moving directly into that zone.
The wrong move is to assume experience alone provides insulation. It does not.
The better move is to own the workflow. That means defining the editorial standard, building the review checklist, deciding what AI can handle, and deciding where human review is mandatory. It also means measuring whether the output is actually better, not just faster.
This is where mid-career writers can become documentation-systems thinkers.
A strong mid-career writer already understands the messy middle of the work: the product, the SMEs, the review cycle, the publishing process, the customer pain, and the organizational politics. AI does not remove the need for that. It makes that knowledge more valuable if the writer can turn it into a repeatable system.
The move is from "I produce good documentation" to "I design the workflow that produces good documentation."
That is a much stronger position.
Senior Writers
Senior writers are not immune, but they are better positioned.
The senior role has always involved more than writing. Senior writers define standards, resolve ambiguity, shape information architecture, review difficult material, mentor other writers, and know when something is not good enough to ship.
AI pushes that role higher up the stack.
The senior writer increasingly becomes the person who decides what AI can touch, what it should never touch, where review gates belong, and what "good enough to ship" actually means.
This is also where structured content and docs-as-code become more important. AI works better when inputs are clean, standards are explicit, and the workflow is designed instead of improvised. If the documentation set is a pile of inconsistent files, the AI will amplify the mess. If the documentation set has structure, the AI has something to work with.
The machine likes structure.
So do users.
That is the career shift in one sentence: technical writers who only draft are more exposed; technical writers who define, review, and govern the system become more valuable.
What Still Belongs to Humans
For all the movement around AI, three things have not changed.
Editorial judgment still matters. Someone has to decide what belongs in the document, what should be left out, what the user actually needs, and what the organization is responsible for saying.
Source-of-truth knowledge still matters. AI does not automatically know what the customer does, what the engineer meant, what the regulator expects, or what the support team sees every week. Without that grounding, it produces fluent guesses.
Accountability still matters. When documentation is wrong, confusing, unsafe, or noncompliant, a person is still responsible. That person needs the authority and context to catch problems before they ship.
This is why "human-led, AI-accelerated" still feels like the right phrase. AI is not replacing the entire discipline. It is also not "just a tool" in the old casual sense. It is becoming part of the production system.
That means writers need to understand it, shape it, and put limits around it.
Where This Goes
The direction is clear. AI systems will keep getting better at longer tasks, richer tool use, and coordinated workflows. More routine work will move into agents and pipelines. More of the human role will move toward judgment, architecture, review, and accountability.
The technical writer of the next few years will not just write documents. The best ones will design documentation systems. They will define standards, build workflows, evaluate output, and know when to stop the machine before it ships something polished and wrong.
That is not a small change. It is also not the end of the field.
The future of technical writing is not the absence of the technical writer. It is the technical writer moving up the stack.
Agents can draft, check, format, and hand off more of the work. But someone still has to understand the product, the user, the risk, the workflow, and the standard.
That someone is still us, if we move fast enough.
Author Bio
He runs TecTimmy, a tech-focused YouTube channel, and a network of digital media properties where he tests AI-assisted publishing, content workflows, and automation systems in public.