It’s 2026 and I made myself a promise to blog more about technical web things. Nothing new, because I do this every year, but this year I will also make a YouTube video for each article. This is the first “artico-video” in the series. Yes, I just invented that word.
If you’ve been following my blog, you know I do these tools stack articles every year. But something changed. In 2025, my entire workflow was driven by AI. I started with Cursor AI, then switched to Claude Code in the last part of the year. My usage of Figma MCP spiked. Playwright tests became more like a verification tool for AI models to check their own work. The shift was gradual, then suddenly everything clicked.
With this in mind, let me give you the main spoiler from the start: 2026 will be about using AI in research, code generation, design handling, and product management. Here’s my complete list:
- Claude Desktop and Gemini for research — I don’t find myself searching on Google anymore
- Stitch for UI design
- Nano Banana for graphics and images
- Antigravity for coding, with Claude Code running in parallel to compare results between AI models
- Playwright as the AI’s tool to verify its own work on local environments
- wp-env and Playground for testing WordPress work locally
- Notes app on macOS for writing and grammar checking, replacing Grammarly
- CapCut for video editing, a new tool for me this year
Now let’s talk about each of these individually.
Research: Goodbye Google Search
Lately when I’m doing research for a new task, I rarely use Google Search anymore. This is wild to type out loud, but it’s true. After two decades of Googling every question I had, the habit just… shifted.
Nowadays I use AI apps like Gemini, Claude Desktop, or ChatGPT to explore my questions, or to research the best way to tackle a specific problem, use a certain API, or design a system architecture. I find it easier to do research with an AI model because I can have it create a plan and draft presentation files, charts, and reports. It’s like having a research assistant that doesn’t get tired and doesn’t judge you for asking basic questions.
If I doubt something, I literally ask the AI model to do a Google Search and give me the best results, which I then manually review. But that’s the key part — I still review. AI is not a replacement for critical thinking; it’s an accelerator for the boring parts.
The biggest change is mental. Instead of context-switching between browser tabs, I stay in one conversation and drill down. It feels more like a dialogue than a hunt through blue links.
Design: AI Does the Heavy Lifting
I’m not a designer. If you’ve ever seen my attempts at making graphics, you’d understand why I used to buy entire UI kits and templates for the products I worked on. In 2026, that changed.
Stitch for UI Design
Stitch is a Figma-like AI tool that generates entire UIs from prompts. You describe what you want, and it gives you a starting point that actually looks professional. I use it for wireframes, component layouts, and quick mockups when I need to communicate an idea to a client or teammate.
Does it replace a real designer? No. But for a WordPress developer who just needs to visualize an idea before coding it, Stitch is a lifesaver. I spend less time in design limbo and more time in my IDE where I actually belong.
Nano Banana for Graphics
Nano Banana is an AI image generation service that I use for blog thumbnails, social media graphics, and placeholder images during development. You’re looking at its output right now in this article’s thumbnail.
For someone who can’t draw a straight line with a ruler, having AI generate decent-looking graphics on demand is a game changer. It’s not going to win design awards, but it gets the job done, and that’s all I need.
Antigravity Wins as My IDE
My favorite IDE at this moment is Antigravity. Clear UI, modern design, and the planning mode integrates perfectly into my workflow.
For those who don’t know, Antigravity is a VS Code fork made by Google. What I love about it is that since it’s a VS Code fork, I can use extensions like the Claude Code extension inside it. So I get the best of both worlds: Google’s AI models and Anthropic’s CLI tool in the same environment.
Antigravity gives me access to Google’s AI models like Gemini 3.0 Pro and Gemini Fast. I’m a big fan of Gemini 3.0 Pro — in my testing, it’s on par with Opus 4.5. They’re both great models, and having options is always nice.
The planning mode is what sold me. Before writing any code, the AI helps me break down the task, consider edge cases, and outline the approach. This fits perfectly with how I work as a WordPress developer — I need to think about hooks, filters, database migrations, and backward compatibility before I touch the keyboard.
Claude Code: My Second Option
I keep Claude Code open in my terminal as a parallel track. When I’m working on a feature in Antigravity, I sometimes ask Claude Code to recreate the same feature independently. Then I compare the results.
Why? Because different models have different strengths. Sometimes Gemini nails the architecture but fumbles on edge cases. Sometimes Claude produces more defensive code. By comparing outputs, I get a more complete picture of what “good” looks like for that particular problem.
It’s like pair programming, but my pair is two different AI models arguing with each other while I take notes.
Playwright: The AI’s Way of Checking Its Work
Here’s something interesting I noticed. I have both a Playwright MCP and a Google Chrome MCP installed. When I ask Claude Opus 4.5 to review the result of my work on a specific URL, it almost always reaches for Playwright instead of Chrome MCP.
My guess is that Playwright is more descriptive in its outputs. Even though Chrome MCP has access to the network and the DOM, Playwright’s testing framework gives structured, assertion-style feedback that AI models can reason about more easily.
So now Playwright isn’t just a testing tool for me — it’s the AI’s preferred way to verify that what we built actually works. It visits the local environment, checks the elements, validates interactions, and reports back. It’s like having a QA engineer built into the AI loop.
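To make that concrete, here is a minimal sketch of the kind of check the AI reaches for. The URL assumes wp-env’s default port (8888), and the selectors are illustrative, not from my actual project:

```typescript
// verify-local.spec.ts — a minimal sketch of an AI-driven verification pass.
// Assumes a local site at http://localhost:8888 (wp-env's default port);
// selectors here are hypothetical placeholders.
import { test, expect } from '@playwright/test';

test('homepage renders and the main navigation works', async ({ page }) => {
  await page.goto('http://localhost:8888/');

  // Structured, assertion-style results like these are exactly the
  // feedback an AI model can reason about.
  await expect(page).toHaveTitle(/./);              // the page has a title
  await expect(page.locator('body')).toBeVisible(); // the page actually rendered

  // Check an interaction, not just static markup.
  const firstLink = page.locator('nav a').first();
  await firstLink.click();
  await expect(page).not.toHaveURL('about:blank');
});
```

Run it with `npx playwright test verify-local.spec.ts`, and the pass/fail report is what the model reads back to decide whether its change actually worked.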
WordPress Local Development: wp-env and Playground
For testing WordPress work locally, I still rely on wp-env and Playground. These aren’t AI tools, but they’re essential to my stack because they give me a reproducible environment where I can test plugin changes without risking anything.
wp-env spins up a Docker-based WordPress environment with a single command. It’s perfect for plugin development because I can test against different WordPress versions and PHP configurations.
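For reference, this is roughly what a minimal `.wp-env.json` in a plugin root looks like (the core and PHP pins here are just examples):

```json
{
	"core": "WordPress/WordPress#6.7",
	"phpVersion": "8.2",
	"plugins": [ "." ]
}
```

With that file in place, `npx wp-env start` brings up the site at http://localhost:8888, and changing `phpVersion` or `core` lets you re-test the same plugin against a different configuration.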
Playground is great for quick experiments and sharing demos. When I need to show a client or colleague how something works, I can give them a link and they can see it running in their browser without installing anything.
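The simplest form of that sharing is Playground’s query string, for example a link that preloads a wordpress.org plugin (the slug here is just an example):

```
https://playground.wordpress.net/?plugin=hello-dolly&php=8.2
```

For anything more involved, Playground also accepts Blueprint JSON files that script the setup steps; check the Playground docs for the current schema, since it has been evolving.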
The AI tools I use don’t know how to deploy to production, but they do know how to write code that works in these local environments — and that’s where Playwright comes in to verify everything.
Notes App Replaces Grammarly
This one surprised me. The Notes app on macOS has become my go-to for writing and grammar checking. Apple Intelligence features now handle grammar suggestions, and I find them good enough for my needs.
As a non-native English speaker, I used to rely on Grammarly for everything. But with Apple Intelligence built into the OS, I don’t need a separate app anymore. I write in Notes, get grammar suggestions inline, and then move the text to wherever it needs to go.
It’s simple, it’s fast, and it doesn’t nag me with premium upsells. Sometimes the boring solution is the right one.
CapCut for Video Editing
Since I promised myself that every blog post will have a YouTube video companion this year, I needed a video editor. After looking at options, I landed on CapCut.
Why CapCut? A few reasons. First, it’s free and the free tier is actually usable — no aggressive watermarks or crippled features. Second, it has solid AI integration: auto-captions, background removal, and smart editing features that speed up the boring parts. Third, it’s made by ByteDance (TikTok’s parent company), so it’s optimized for the kind of short-form and social content that performs well on YouTube these days.
I’m not a video editing expert. I don’t need After Effects-level control. What I need is something that lets me cut together a talking-head video, add some captions, maybe throw in a screen recording, and publish. CapCut does exactly that without making me feel like I’m piloting a spaceship.
Will I outgrow it? Maybe. If this YouTube experiment takes off and I need more advanced features, I might look at something like Descript, which lets you edit video by editing the transcript — basically treating video like a text document. But for now, CapCut is the right balance of power and simplicity for someone who’s just starting out.
Final Thoughts
Looking at this list, the pattern is obvious: AI is everywhere in my 2026 workflow. Research, design, coding, testing, even writing — every step has an AI component.
But here’s what I want to emphasize: I’m not outsourcing my thinking. I’m still the one making decisions, reviewing outputs, and taking responsibility for the final result. The AI tools accelerate the boring parts so I can focus on architecture, user experience, and the problems that actually require human judgment.
If you’re a developer who hasn’t integrated AI into your workflow yet, I’d encourage you to start small. Pick one area — maybe research or testing — and see how it changes your process. You don’t have to go all-in immediately. But ignoring it entirely? That’s leaving productivity on the table.
I can also understand people who go fully anti-AI, and I respect that decision. But even if you take that route, I think it’s a missed opportunity not to use AI models at least for learning, rather than for coding or design. Simply ask with your free tokens, “What is the best design pattern for this algorithm?” Terabytes of data, courses, documentation, and Stack Overflow threads are already trained into these models, and you can access all of it with a few questions.
See you in the YouTube video. Or maybe I should say, hear you? I don’t know how this works yet.
Note: I’m not a native English speaker nor a designer, so the thumbnail and the grammar check for this article were made with AI. Props to Claude!