Back to Blog

Lessons in Vibecoding — A Beginner’s Journey

By Jan Schäferjohann
5 min read

My vibecoding evolution

When ChatGPT launched in November 2022, I finally got a helping hand for learning to code in a way that hadn’t clicked before. Since then I’ve used ChatGPT, Claude, Mistral, Grok, Gemini, Ollama, Cursor, Kiro, Claude Code, and a few others. Today I want to reflect on what actually helped and how it shaped my workflow.

Writing snippets in Python and building a small tool library

My first steps were small (<100 lines) scripts in Bash and Python to do things I knew were possible but couldn’t quite build from scratch: zipping and renaming files, converting formats, manipulating data. Suddenly I didn’t need random web tools to merge PDFs or batch‑rename anything—LLMs scaffolded it quickly, and I learned by reading and tweaking their code.
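A typical script from that era looked something like this: a minimal sketch (the function name and numbering scheme are my own illustration, not any specific tool I shipped) that renames every file in a folder with a consistent prefix and bundles the results into a zip archive.

```python
import zipfile
from pathlib import Path

def rename_and_zip(src_dir: str, prefix: str, archive: str) -> Path:
    """Rename every file in src_dir with a numbered prefix, then zip the results."""
    src = Path(src_dir)
    renamed = []
    # sorted() materializes the listing first, so renaming mid-loop is safe
    for i, f in enumerate(sorted(src.iterdir()), start=1):
        if f.is_file():
            target = f.with_name(f"{prefix}_{i:03d}{f.suffix}")
            f.rename(target)
            renamed.append(target)
    out = Path(archive)
    with zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in renamed:
            zf.write(f, arcname=f.name)  # store flat names, not full paths
    return out
```

Nothing fancy, but exactly the kind of thing an LLM scaffolds in seconds and that teaches you the standard library as you read and tweak it.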

Claude Desktop, MCP, and filesystem access

The next inflection point was running a local filesystem MCP server (Model Context Protocol) inside Claude Desktop. Now the assistant could read and write files directly. That expanded the scope: scripts could install dependencies, call APIs, and produce “real” outputs instead of one‑off utilities. The feedback loop tightened; iterating became much faster.
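For reference, wiring this up is a small config change. This is a sketch of the `mcpServers` entry in Claude Desktop's `claude_desktop_config.json`, using the reference filesystem server; the allowed directory path is a placeholder you'd swap for your own.

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/path/to/your/projects"
      ]
    }
  }
}
```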

Scaling with an IDE (Cursor)

That led to my first “real” piece of software: API calls to Claude, input processing, XML transforms, and Python to automate a project task. Early progress was smooth, but a few thousand lines in, the cracks showed. I switched to Cursor as my main IDE (with Sonnet in the loop) and kept Claude Desktop for architecture questions and reviews. I finished the tool, and it saved us many days of work.

Since then I’ve tried Cline, Roo Code, Windsurf, Claude Code, Kiro, and GitHub Copilot. Some are better at certain tasks, but none changed my core vibecoding loop. Cursor is my default; Claude Code is my second opinion or bug‑fix buddy when I’m stuck.

Vercel

The last big unlock was Vercel. Starting from a template, pushing to GitHub, and getting a live deployment pushed me beyond localhost. It made sharing prototypes with colleagues trivial—and this site is one of those projects.

Many tools for many problems — and lessons learned

Andrej Karpathy recently shared his current workflow. While he’s far beyond vibecoding, the “use different tools for different tasks” mindset mirrors my experience. Here’s his full take.

For me, Claude, ChatGPT, Cursor, Claude Code, VS Code, and Codex are still in regular rotation—if not daily, then several times a week. Lessons that stuck:

Prompting

I don’t keep prompt templates. I usually co‑write the initial requirements with the LLM: what inputs I’ll provide, what outputs I expect, formats, and a couple of concrete examples. When I’m wiring up API calls, the LLM drafts the prompts first. After that, my follow‑ups are short (1–3 sentences) and focused on one improvement at a time.

Models

Claude Sonnet has been king for me. Recently, GPT‑5 has become my default inside Cursor. For prototyping internal tools, I prefer readable code over defensive code. Excess logging and comments slow me down; I can add guardrails later if the tool graduates from “internal” to “shared.”

Docs and specifications

I keep tasks and short specs in a /docs directory (gitignored and Vercel‑ignored). Architecture notes and logging concepts live there too. Lightweight docs are enough—just enough for me (or a future me) to pick up the thread.

MCP and helpers

Claude Desktop runs a filesystem MCP server for direct file access. In Cursor, I use Playwright for front‑end tests and Context7 for up‑to‑date documentation lookups.

Cursor rules

I try community rules I find on X, Reddit, or GitHub. I can’t say I feel a huge impact yet, and I’ve neglected project‑specific rules. When I do write them, it’s usually for naming, file layout, and test patterns.

Vercel deployments

I deploy often to keep the loop tight and changes small. Shipping early invites feedback and catches broken assumptions faster than any private checklist.

Localhost + hot reload

Same idea: I keep the dev server on hot reload so every small change gets immediate validation.

Markdown and JSON

Anything intended for an LLM lives in Markdown. When I need structure, I ask for JSON objects. This keeps inputs consistent and outputs easy to parse.
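In practice that means parsing the model's reply with the standard `json` module and failing fast if the model drifted from the format. A minimal sketch, with a hypothetical reply hard‑coded in place of a real API call:

```python
import json

# Hypothetical raw reply from an LLM that was asked to return a JSON object only.
raw_reply = """
{
  "title": "Q3 report",
  "tags": ["finance", "draft"],
  "word_count": 1842
}
"""

def parse_llm_json(text: str) -> dict:
    """Parse an LLM reply as JSON, raising early if it isn't a top-level object."""
    data = json.loads(text)
    if not isinstance(data, dict):
        raise ValueError("expected a JSON object at the top level")
    return data

record = parse_llm_json(raw_reply)
print(record["tags"])  # → ['finance', 'draft']
```

The early `isinstance` check matters more than it looks: it turns a silent format drift into a loud error the moment it happens, instead of three functions downstream.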


Conclusion: what vibecoding taught me

  • Start tiny. Ship a script, not a system.
  • Tight loops beat perfect plans: local files, hot reload, live deploys.
  • Treat the LLM like a collaborator: co‑write requirements, then iterate in small steps.
  • Use the right tool for the moment; don’t force one tool to do everything.
  • Prefer readable code while you explore; add safety once the path is clear.
  • Document just enough to restart momentum after a break.
  • Share early. A URL beats a screenshot for feedback.

What’s next for me: firmer project‑level rules in Cursor, more Playwright coverage, and turning my ad‑hoc scripts into a small library I can import instead of copy‑pasting.