AI is here to stay. No doubt about it. We'll see more and more code that has been AI-generated. The question is, how do we keep that code high quality? Will AI know better than we do what good even means? Does it matter?
Let me show you how I work with AI in a way that's genuinely smart, not just fast.
# Which AI agent should I use?
The one I've tested most extensively is Claude Code by Anthropic. At some point I'll probably move to a tool that abstracts the underlying model, like Opencode.
For now, since Claude is blocking access to third parties, I prefer to maximize inference until the ecosystem settles down.
As always, use the best tool for you, and try new ones from time to time: the tooling keeps improving and the models keep changing.
Feeling AI anxiety?
It's normal; I had it too! There are so many changes happening that it's hard to keep up.
Wait, do you need to keep up?
Probably not; that's something I have to remind myself of from time to time. But the number of opportunities AI is opening up right now is incredible, so as always, I keep thinking about the positives.
# Skills for Frontend Architecture
I've created a set of 10 high-quality skills that represent how I code. I've spent more time fine-tuning them than I'd care to admit. Here they are:
- architecture-guardrails
- bugfix
- create-delivery
- create-domain-model
- create-infrastructure
- create-repository-contract
- create-use-case
- refactor
- tdd-bdd
- validate
Each one encodes a specific decision I've made about how Frontend Architecture should be built: where the domain lives, how use cases are wired, how delivery code stays thin, how tests drive the design.
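For reference, a skill is just a directory containing a SKILL.md file whose YAML frontmatter tells the agent when to load it. Here's a minimal sketch (the name, description, and steps are illustrative, not the contents of my actual skill):

```markdown
---
name: create-use-case
description: >-
  Scaffold an application-layer use case. Use when the user asks for new
  business behaviour that should live outside delivery and infrastructure.
---

# Create use case

1. Define the use case as a factory that receives its dependencies
   (repository contracts) as arguments.
2. Keep framework code out: no React or Next.js imports in this layer.
3. Write the failing test first, then the minimal implementation.
```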
To create them, I use the skill-creator.
If you want to find useful skills, check out skills.sh.
# Making AI tools remember
There's another skill I've found extremely useful:
This skill is for when I want my AI agent to remember an important piece of information. The description:
```yaml
---
description: >-
  Capture and persist learnings to improve the agentic system over time.
  Use when (1) a pattern, solution, or pitfall is discovered during a
  session that should be remembered, (2) the user explicitly says
  "remember this", "learn this", or "add this to the guidelines", (3) a
  workaround or best practice emerges that would benefit future sessions,
  (4) project-specific conventions are established. Routes learnings to
  CLAUDE.md, existing skills, or creates new skills based on the learning
  type.
---
```
You can find the complete skill on my website.
# One source of truth for conventions
Each AI tool expects its convention file in a different place, with a different name. I prefer to abstract that away and generate the needed files in a postinstall hook:
```json
"postinstall": "node scripts/postinstall.ts"
```
Then I have a Node.js script that runs the individual postinstall steps (note that running a .ts file directly with node requires a recent Node version with type stripping, or a runner like tsx):
```typescript
import { spawnSync } from 'node:child_process'

if (process.env['CI'] === 'true') {
  console.log('[postinstall] CI detected — skipping local setup steps.')
  process.exit(0)
}

function run(cmd: string, args: string[]) {
  console.log(`[postinstall] $ ${cmd} ${args.join(' ')}`)
  const result = spawnSync(cmd, args, {
    stdio: 'inherit',
    cwd: process.cwd(),
    env: process.env,
    shell: process.platform === 'win32',
  })
  if (result.status !== 0) {
    const code = result.status == null ? 1 : result.status
    console.error(`[postinstall] Command failed: ${cmd} ${args.join(' ')}`)
    process.exit(code)
  }
}

// 1) Install git hooks via lefthook
run('lefthook', ['install'])

// 2) Install Playwright browsers
run('npx', ['playwright', 'install'])

// 3) Sync AI guidelines
run('node', ['scripts/sync-ai-guidelines.ts'])

console.log('[postinstall] All steps completed.')
```
And this is the sync-ai-guidelines.ts script:
```typescript
import { constants } from 'node:fs'
import { access, copyFile, mkdir } from 'node:fs/promises'
import { dirname, resolve } from 'node:path'

const src = resolve(process.cwd(), 'ai/GUIDELINES.md')
const targets = [
  '.github/copilot-instructions.md',
  '.cursor/rules/Core.md',
  '.junie/guidelines.md',
  '.windsurfrules',
  'CLAUDE.md',
].map((p) => resolve(process.cwd(), p))

async function ensureDirFor(file: string) {
  await mkdir(dirname(file), { recursive: true })
}

async function safeCopy(from: string, to: string) {
  await ensureDirFor(to)
  await copyFile(from, to)
}

async function main() {
  try {
    await access(src, constants.R_OK)
  } catch {
    console.warn(`[sync-ai-guidelines] Source not found: ${src}. Skipping.`)
    return
  }
  await Promise.all(targets.map((t) => safeCopy(src, t)))
}

main()
```
And then I keep one ai/GUIDELINES.md as the single source of truth:
```markdown
# Development Guidelines

## Code Style and Structure
- Write concise, best-practice code with accurate examples. Follow Code Craftsmanship patterns.
- This is a React-based project.
- This is a Next.js-based project.
- Favor object-oriented programming (OOP) and declarative programming patterns.
- Prefer iteration and modularization over code duplication.
- Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).
- Prefer one export per file.
- Ensure a clear separation between UI, state management, and business logic to maintain a clean architecture.
- Use lowercase with dashes for directories (e.g., components/auth-wizard).
- Always use named exports for consistency and maintainability.
- Use npm as the package manager and lock versions using package-lock.json for consistency.
- Use ?? instead of || for nullish coalescing.
- Use conventional commit messages (feat:, fix:, chore:, etc.).
- Ensure all code changes include relevant test cases.
- Use declarative JSX.

## TypeScript Usage
- Use TypeScript for all code.
- Prefer interfaces over types, except for utility types or mapped types.
- Avoid enums due to runtime overhead; prefer object maps or union types instead.
- Use strict mode in TypeScript for better type safety; avoid usages of `any`.

## UI and Styling
- Use Styled Components for styling.
- Ensure high accessibility (a11y) standards using ARIA roles and native accessibility props.
- Avoid hardcoding padding or margins.
- Implement proper keyboard handling.
- Use CSS variables for theming when necessary.

## Performance Optimization
- Minimize the use of useState and useEffect.
- Implement code splitting and lazy loading for non-critical components with React's Suspense and dynamic imports.
- Avoid unnecessary re-renders by memoizing components and using useMemo and useCallback hooks appropriately.

## State Management
- Use React Context sparingly to avoid unnecessary re-renders.

## Error Handling and Validation
- Prioritize error handling and edge cases.
- Handle errors at the beginning of functions.
- Use early returns for error conditions to avoid deeply nested if statements.
- Avoid unnecessary else statements; use the if-return pattern instead.
- Use domain errors to handle errors in the domain layer.
- Use Next.js's ErrorBoundary components for error handling at the route level.

## Testing
- Write unit tests using Vitest.
- Implement integration tests for critical user flows using Playwright.
- Write test cases for both success and failure scenarios.

## React Components
- Use a variable (const) for the components.
- Use FC to type the variable.
- If a component has children, use PropsWithChildren to type the component.
- Props should be typed within the component's type definition.
```
One file. One place to edit. That way, we stay DRY (Don't Repeat Yourself).
# AI + TDD
AIs (and humans) work best when there's an automated way to validate the work they've done. When practicing TDD, AIs generate significantly better code. This reminds me of a thought experiment by Martin Fowler:
> Given the choice to delete your whole codebase or your entire test suite, you should delete your codebase. It's easier to recreate the codebase than the test suite.
With the rise of AI, coding has become a commodity; it's no longer the bottleneck. The bottleneck has shifted to the parts that are harder to automate: describing what the system needs to do and how it should behave. Which, coincidentally, is exactly what testing is.
It's funny to see how everyone is suddenly striving toward best practices, given that AI behaves so much better and is more efficient when those practices are in place.
# PRDs: Product Requirements Documents for AI
Generating PRDs for complex migrations or features is an interesting way to document the changes you intend to make.
A PRD is nothing more than a Markdown file in your project. You can use a skill to generate it. The best way I've found to go about it is to let the AI interview you about the changes. It gathers context, then persists its findings in a PRD.
Then I generate detailed tasks from the PRD and let the AI implement those tasks using the already defined skills.
People go as far as running the task implementation in a Ralph Loop, where the idea is to run an AI agent in an infinite loop:
```shell
while :; do cat PROMPT.md | claude-code ; done
```
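If the bare infinite loop makes you nervous (it should), a bounded variant is straightforward. In this sketch the agent call and the success check are placeholder shell functions; in real use you'd substitute `cat PROMPT.md | claude-code` and your test command:

```shell
#!/bin/sh
# Bounded agent loop: retry until the check passes or MAX_RUNS is hit.
MAX_RUNS=3
run_agent() { echo "[loop] agent run $i"; }  # placeholder: cat PROMPT.md | claude-code
check_done() { [ -e done.marker ]; }         # placeholder: npm test

i=1
while [ "$i" -le "$MAX_RUNS" ]; do
  run_agent
  if check_done; then
    echo "[loop] green after $i runs"
    break
  fi
  i=$((i + 1))
done
```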
Crazy things these days.
I like when PRDs follow this structure:
- Executive Summary
- Problem Statement
- Goals and Success Metrics
- User Stories
- Technical Architecture (Overview, Data Models, API Specifications)
- UI/UX Specifications
- Security Considerations
- Testing Strategy
- Risks and Mitigations
- Open Questions
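In practice the PRD ends up as a Markdown skeleton that the AI fills in during the interview. A trimmed sketch (the section prompts are my own phrasing, not the skill's exact output):

```markdown
# PRD: <feature name>

## Executive Summary
One paragraph: what we're building and why now.

## Problem Statement
Who hurts, how badly, and what happens if we do nothing.

## Goals and Success Metrics
- Goal, paired with a measurable metric.

## User Stories
- As a <role>, I want <capability>, so that <outcome>.

## Technical Architecture
Overview, data models, API specifications.

## UI/UX Specifications
## Security Considerations
## Testing Strategy
## Risks and Mitigations
## Open Questions
```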
For this I created another skill. You can find the create-prd skill here. And once the PRD is written, plan-tasks decomposes it into TDD-first task files, and code-task implements them one at a time, gating every step on failing tests.
The trick is not in the AI. The trick is in the architecture. Once your domain, application, infrastructure and delivery layers are well-defined and your skills know about them, the AI just follows the chain. Better architecture = better AI output. Always.
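Here's what "well-defined layers" means in code, in a deliberately tiny sketch (the names, like `UserRepository`, are illustrative, not from my actual codebase). The domain and the use case never mention infrastructure; delivery only injects dependencies and calls the use case:

```typescript
// Domain layer: a plain entity, no framework imports.
interface User {
  readonly id: string
  readonly name: string
}

// Repository contract: declared alongside the domain, implemented elsewhere.
interface UserRepository {
  findById(id: string): Promise<User | undefined>
}

// Application layer: a use case wired against the contract, not a concrete DB.
const createGetUserName =
  (repo: UserRepository) =>
  async (id: string): Promise<string> => {
    const user = await repo.findById(id)
    if (user === undefined) throw new Error(`User not found: ${id}`)
    return user.name
  }

// Infrastructure layer: an in-memory implementation of the contract.
const inMemoryUserRepository = (users: User[]): UserRepository => ({
  findById: async (id) => users.find((u) => u.id === id),
})

// Delivery layer: the only place where concrete wiring happens.
const getUserName = createGetUserName(
  inMemoryUserRepository([{ id: '1', name: 'Ada' }]),
)
```

Because each skill knows which layer it's generating for, the agent can't "accidentally" import a database client into the domain.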
# I actually want your help
Here's where I try to make a compelling argument for why you might want to let me help you.
If you're a CTO, VP of Engineering, or tech leader with a frontend that's slowing your team down — and you're trying to figure out how to integrate AI without sacrificing code quality — I'd love to talk to you.
This is exactly what I do for a living: helping teams design Frontend Architecture that's pleasant to work with today, and that AI agents can extend safely tomorrow. Skills, PRDs, TDD enforcement, layered design, DI containers — the whole stack of practices in this newsletter, applied to your codebase.
If you've ever felt that:
- Your team is shipping AI-generated code that nobody quite understands
- Your frontend is a tangle of components, hooks and side effects with no clear boundaries
- You want to use AI more aggressively but can't trust the output
- Onboarding new developers (human or agent) takes weeks instead of days
Then we should talk.
See how I help teams or book a call directly.
I take on a small number of engagements at a time, so the calendar fills up fast. If this resonates, reach out now rather than later.
If you're not a CTO, VP of Engineering, or tech leader but still want to help, please refer me to the ones you know. I'll be immensely grateful.
# Wrapping up
AI is not going to replace good architecture. It's going to expose the lack of it.
The teams that win in the next couple of years won't be the ones that adopt the most AI tools. They'll be the ones that have the cleanest abstractions, the strictest tests, and the clearest conventions, because those are the things AI can leverage.
Skills, PRDs, conventions, TDD. Boring stuff. Career-defining stuff.
P.S.: I'm giving talks at React Alicante, React Summit, Commit Conf and JSNation. Perhaps see you there?