
TIL

Learning new things every day. Writing them down to remember.

· #AI Tools ·

CLI-ify Everything in the Agent Era

Agents don’t operate GUIs. They operate text. Every daily activity without a CLI is an activity your agents can’t automate for you.

This inverts the traditional DX priority. The CLI used to be the optional hacker mode, with the GUI as the product. Now the CLI is the product, and the GUI is the optional wrapper for humans.

Three principles for CLI-ifying well for agents:

  1. --json is non-negotiable. Agents need parseable output, not pretty ASCII tables. Every command needs --json, and errors must be structured JSON with codes. {"code":"AUTH_EXPIRED"} is actionable; "Error: something went wrong" is not.

  2. Schema introspection beats documentation. Expose cli <command> --schema to return a JSON schema of flags, types, and defaults. Agents discover capabilities without reading textual --help, and stop hallucinating flag names.

  3. Encode the invariants agents can’t intuit. If adding a model requires editing 3 files in a specific order, that’s tribal knowledge that breaks agents. The CLI should expose cli add-model <name> that does all 3 atomically.
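The three principles can be sketched in a few lines. This is a minimal, hypothetical `add-model` command (the flag names, error codes, and schema shape are illustrative, not a real CLI):

```javascript
#!/usr/bin/env node
// Hypothetical agent-friendly CLI sketch; "add-model" and the error
// codes below are illustrative assumptions, not an existing tool.

// Principle 2: a machine-readable schema instead of prose --help.
const SCHEMA = {
  command: 'add-model',
  flags: {
    name: { type: 'string', required: true },
    json: { type: 'boolean', default: false },
  },
};

function run(argv) {
  if (argv.includes('--schema')) {
    return { ok: true, output: JSON.stringify(SCHEMA) };
  }
  const name = argv.find((a) => !a.startsWith('--'));
  if (!name) {
    // Principle 1: errors are structured JSON with stable codes.
    return { ok: false, output: JSON.stringify({ code: 'MISSING_MODEL_NAME' }) };
  }
  // Principle 3: one command performs every required edit atomically,
  // instead of leaving the agent to patch 3 files in the right order.
  return { ok: true, output: JSON.stringify({ code: 'OK', added: name }) };
}

console.log(run(process.argv.slice(2)).output);
```

Everything the command can emit, including failure, round-trips through JSON.parse, which is the whole point.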

CLI + Skill is the combo. The CLI provides the capabilities. The skill provides the when and why (gotchas, correction layers for model hallucinations). Neither works alone. A skill without a CLI is dead documentation. A CLI without a skill leaves the agent guessing when to invoke it.

Practical corollary: every time you catch yourself clicking through a GUI repeatedly, write the CLI. Every repeated manual workflow is automation debt for your future agent.

· #AI Tools ·

Skills Are Correction Layers, Not Tutorials

After building and evaluating 23+ agent skills across GPT-5, Sonnet 4.5, and Gemini 3, one meta-principle became obvious. Skills patch the knowledge gaps of SOTA models. They don’t teach the basics.

Three things that surfaced from the evals:

  1. The model already knows the basics. A skill explaining “how to install Clerk” doesn’t move the needle. A skill that says “ClerkProvider goes in layout.tsx, NOT page.tsx” moves eval scores from 0% to 100%. The value is catching what the model hallucinates, not teaching what it already knows.

  2. Every model has different gaps. GPT-5, Sonnet 4.5, and Gemini 3 fail at different things because their training cutoffs and datasets differ. A skill that’s optimal for Sonnet can lower GPT-5’s score if it’s too directive about something GPT-5 already handles correctly. There is no “perfect skill”. There’s only “the perfect skill for model X’s knowledge gap at moment Y”. Evaluate per model, patch per model.

  3. The description is 80% of the work. Phil Schmid took his Gemini Interactions API skill from a 66.7% to a 100% pass rate, and rewriting the description field alone fixed 5 of the 7 failures. The description is what the router uses to decide whether the skill loads at all. If it doesn't match user intent, the rest of the content never runs. Optimize the description before the instructions.
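To make the description-first point concrete, here is a hypothetical SKILL.md for the Clerk example above (the name and wording are illustrative; agent skills conventionally use a SKILL.md with YAML frontmatter, where the description field drives routing):

```markdown
---
name: clerk-nextjs
description: Use when installing or debugging Clerk auth in a Next.js App
  Router project, especially ClerkProvider placement, middleware, and env
  setup. Trigger on mentions of Clerk, sign-in pages, or auth middleware.
---

ClerkProvider goes in app/layout.tsx, NOT page.tsx.
(...correction-layer instructions follow, loaded only when triggered...)
```

The frontmatter is the ~100-token metadata level the router sees; the body below it is the correction layer that only loads on a match.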

Corollary: skills are a moving target. Every new model release means re-evaluating which patches are still needed and which are obsolete because the model learned them in its new cutoff. Maintaining a skill library is closer to maintaining a browser compatibility shim than writing documentation.

· #DOM ·

nativeInputValueSetter Bypasses React Input Masks

React (like Angular and Vue) overrides the native value setter on inputs to intercept writes and keep internal state in sync. As a result, setting input.value = 'foo' from outside (a browser extension, Playwright, or an agent-browser driver) silently fails: the DOM updates, but the framework doesn't notice and reverts it on the next render.

The fix is to call the original native setter directly:

// Grab the browser's original value setter, before the framework wrapped it
const setter = Object.getOwnPropertyDescriptor(
  HTMLInputElement.prototype,
  'value'
).set;

// Write through the native setter, then fire a real 'input' event
setter.call(input, 'foo');
input.dispatchEvent(new Event('input', { bubbles: true }));

This bypasses the framework’s wrapper and triggers the native input event, which React then picks up as a legitimate user input. Works identically on HTMLTextAreaElement.prototype.

Framework-independent, version-independent, and the only reliable way to drive controlled inputs from outside the framework’s world. Essential for browser automation, testing tools, and extensions that inject values into third-party apps.

· #CSS ·

color-mix() Replaces the --fg-20, --fg-40 Dance

color-mix(in srgb, var(--fg) 20%, transparent) is the native CSS way to do theme-aware dynamic opacity. No hex+alpha juggling, no rgba split, no need to generate a whole ladder of --fg-10, --fg-20, --fg-40 tokens.

/* Before: define every opacity variant per theme */
--fg-20: rgba(255, 255, 255, 0.2);
--fg-40: rgba(255, 255, 255, 0.4);

/* After: one variable, arbitrary opacity at use-site */
border: 1px solid color-mix(in srgb, var(--fg) 20%, transparent);
background: color-mix(in srgb, var(--accent) 40%, transparent);

Works with any color syntax (hex, rgb, hsl, oklch, CSS vars), respects the active theme automatically because var(--fg) resolves at paint time, and lets you pick the exact blend ratio inline instead of committing to a fixed set of tokens.

Also useful for hover states. color-mix(in srgb, var(--accent) 85%, white) mixes in 15% white on hover, theme-aware and cheap.

· #Infrastructure ·

Upstash getRemaining() vs limit()

limit() consumes a token. getRemaining() doesn't. Use getRemaining() for status endpoints and UI counters. Built this for ray.tinte.dev.

// read-only, no token consumed
const { remaining } = await rl.getRemaining(id);

// consumes a token
const { success } = await rl.limit(id);
· #Agents ·

Figma + Claude Code: The Bidirectional Design-Code Loop

Figma’s “Code to Canvas” plugin enables a full round-trip between design and code via MCP.

/implement-design extracts layout, typography, and colors from any Figma frame and generates code for your stack. generate_figma_design serializes your running page’s DOM into real editable Figma layers (not a screenshot).

Figma frame → /implement-design → component
    ↑                                  ↓
edit in Figma              generate_figma_design
    ↑                                  ↓
get_design_context ← editable Figma layers

Gotcha: Code to Figma still needs a browser render with a capture script injected. Not purely automatic from code.

· #Infrastructure ·

Vercel Deploy Hooks Solve SSG Scheduled Publishing

Static sites can’t publish posts at a future date because pages only exist if they’re generated at build time. No build = no page = 404.

The fix: Vercel Deploy Hooks. You create a hook URL in your project settings, and hitting it with a POST triggers a fresh production deploy. Combine that with a vercel.json cron job that calls your hook daily, and your SSG site rebuilds itself automatically.

{
  "crons": [
    {
      "path": "/api/cron/rebuild",
      "schedule": "0 12 * * *"
    }
  ]
}

The API route just fetches the deploy hook URL and authenticates with CRON_SECRET. I use this on my Astro blog to auto-publish “premiere” posts at their pubDate — the countdown disappears and the article goes live, zero manual intervention.
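The route itself is tiny. This is a hypothetical sketch of it (the handler shape and the DEPLOY_HOOK_URL env var name are assumptions; Vercel does send `Authorization: Bearer <CRON_SECRET>` to cron-invoked routes):

```javascript
// Hypothetical /api/cron/rebuild handler; env var names are assumptions.

// Vercel cron calls arrive with "Authorization: Bearer <CRON_SECRET>".
function isAuthorized(authHeader, cronSecret) {
  return Boolean(cronSecret) && authHeader === `Bearer ${cronSecret}`;
}

async function handleRebuild(request) {
  if (!isAuthorized(request.headers.get('authorization'), process.env.CRON_SECRET)) {
    return new Response('Unauthorized', { status: 401 });
  }
  // POSTing to the Deploy Hook URL queues a fresh production build,
  // which regenerates every page, including newly due "premiere" posts.
  const res = await fetch(process.env.DEPLOY_HOOK_URL, { method: 'POST' });
  return Response.json({ triggered: res.ok });
}
```

The auth check matters: without it, anyone who finds the route can burn your build minutes.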

· #Agents ·

Multi-Agent Teams Mirror Human Org Design

Built a 3-agent dev team (Bolt codes, Sage reviews, Nova researches) and hit every classic engineering management problem: ownership, concurrency limits, routing, queues.

Conway’s Law applies to AI teams too — your agent architecture mirrors the communication structure you design. The review loop proved it: Bolt opens PR, Sage reviews, Bolt fixes, repeat. Had to cap at 3 iterations because without a kill switch, two agents argue forever.

Don’t think of multi-agent systems as “parallel workers.” Think of them as a team. Roles, boundaries, escalation paths, termination conditions.

· #AI Tools ·

Skills Use Progressive Disclosure for Token Management

Agent skills load context in three levels: metadata (~100 tokens) at startup, full instructions (under 5,000 tokens) when triggered, and resources only as needed.

This is why skills beat .cursorrules, which dumps its entire contents into every conversation. A skill with 60+ product references only loads what’s relevant: Cloudflare-skill has 60 reference files, but you can ask about Workers without paying for WAF documentation.

· #Career ·

Build in Public Maximizes Luck Surface Area

Every public commit, tweet, or project is a lottery ticket. The more you ship visibly, the more opportunities find you.

tryelements.dev caught Clerk’s attention and got me the job. X grew from 0 to 2500 followers. Ship something every week. Tweet 2-3x per week minimum.

· #Agents ·

Cache Hit Rate is the Most Important Agent Metric

Effective agent design boils down to context management. Context is a finite resource with diminishing returns - like human working memory.

A higher capacity model with caching can be cheaper than a lower cost model without it. Progressive disclosure: don’t load all tool definitions upfront. Successful agents use surprisingly few tools (~12 for Claude Code, <20 for Manus).

· #Agents ·

Ralph Wiggum Technique

An autonomous AI coding loop that ships features while you sleep. Named after the Simpsons character, it embodies persistent iteration despite setbacks.

The core insight: context pollution is inevitable. Ralph doesn’t try to clean memory - it throws it away and starts fresh. Progress persists via files + git. Failures evaporate.

while :; do cat prompt.md | agent ; done

Each iteration reads task list, picks highest priority, implements, runs tests, commits if passing, logs learnings, repeats.
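Expanded, one iteration might look like this sketch (prompt.md, tasks.md, and the `agent` CLI are assumptions from the post, not a packaged tool):

```shell
# Hypothetical expansion of the one-liner; file names and the `agent`
# command are illustrative assumptions.

pick_task() {
  # tasks.md: one task per line, priority prefix sorts first (P1 < P2)
  sort tasks.md | head -n 1
}

iterate() {
  task=$(pick_task)
  # Fresh context each run: only files and git persist between iterations
  { cat prompt.md; printf 'Task: %s\n' "$task"; } | agent \
    && npm test \
    && git commit -am "ralph: $task"
}

# The outer loop from above: run forever, let polluted context evaporate
# while :; do iterate; done
```

The `&&` chain is the quality gate: a failed test run means no commit, so the next fresh iteration starts from the last known-good state.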

· #AI Tools ·

MCP is USB-C for AI

Model Context Protocol (MCP) is an open standard for connecting AI agents to external systems. Think of it as “USB-C for AI applications” - a universal connector.

Before MCP, every agent needed custom integrations for every service (N×M problem). MCP creates a standardized interface so one MCP server for Postgres works with any MCP-compatible agent. Agents discover tools dynamically via protocol.

· #AI ·

Context Engineering is the New Competitive Advantage

In a world where everyone has access to the same AI intelligence, companies differentiate through context - the proprietary information, tribal knowledge, and organizational data that agents receive.

Knowledge workers are becoming managers of AI agents. Agents are only as good as the organizational context they receive. As Peter Drucker said: “if HP knew what HP knows, we would be three times more productive.”

· #Shipping ·

The Five Day Version

Ship the fast version first. Tight, artificial deadlines spike productivity and reveal that most things can be done faster than scarcity mentality allows.

“Do the five day version” - the constraint forces clarity. Unrealistic deadlines are a useful tool, not a stress source. Ship fast, iterate faster. Nobody cares about your v1 polish.

· #Design ·

Web Whimsy Renaissance

AI is collapsing the cost of personalization. The web will become more whimsical, expressive, and unique in 2026.

The cost of turning “wildest dreams into pixels” is now accessible. Geocities-era creativity is returning but with AI capabilities. Non-technical people can move away from templates. The template era is dying.

· #Productivity ·

Life Systems Over Willpower

Willpower fluctuates with stress, fatigue, and hunger. Instead of relying on it, build automated if-then rules that remove decision-making.

Nearly half of daily decisions are already autonomous (habits). Create explicit rules: “if 7am, work out” - no decision needed. Every choice taxes mental energy. High performers don’t have more willpower - they remove choice entirely.
