Posted August 20, 2025 by chimpaigo
#web #html5 #ai #experiment #chimpaigo #prototype #trivia #frontend
Hi,
I did this partly out of curiosity and partly because I was lazy: “let’s see what happens if I ask GPT-5 to build a small Chimpaigo-like demo from scratch.” The goal was simple: one HTML5 page that shows a question + answers, a players list, bots that play, and the flow question → answers → results → next question. If it worked well, the plan was to later hook it to the real Chimpaigo server.
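For context, that loop is really just a tiny state machine. Here’s a minimal sketch of the flow, to make the goal concrete; all names (`PHASES`, `advance`) are mine for illustration, not what GPT-5 generated:

```js
// Minimal sketch of the round flow: question → answers → results → next round.
// Illustrative only; this is not the generated code.
const PHASES = ["question", "answers", "results"];

const game = { phase: "question", round: 1 };

function advance(game) {
  const i = PHASES.indexOf(game.phase);
  if (i < PHASES.length - 1) {
    game.phase = PHASES[i + 1]; // question → answers → results
  } else {
    game.phase = "question";    // results → back to question for the next round
    game.round += 1;
  }
}

// In the real page a timer (or the bots answering) would drive this.
for (let step = 0; step < 7; step++) {
  console.log(`round ${game.round}: ${game.phase}`);
  advance(game);
}
```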
At first it was kind of magical. GPT-5 gave me HTML, CSS and JS that looked fine and was playable in minutes. The usual “wow” moment. But when I asked it to add structure and make the thing a bit more real, the cracks started to show.
Forgets things. It stays coherent for a while, but as you add features it starts forgetting variable names, agreed-upon structures, or even functions it had already written. You end up repeating the same specs, which kills time.
Fix one, break two. You change something to fix a bug and suddenly other parts that worked are broken. Not kidding: every fix seemed to create collateral damage.
Bad at architecture. Ask it to split code into modules or files and you get pieces that work in isolation but don’t fit together: missing imports, implicit dependencies, inconsistent contracts.
No care for edge cases. It handles the happy path, but strange inputs or odd flows fail, and it won’t write meaningful tests unless pushed.
Documentation is lightweight. What it writes as “docs” often doesn’t match the actual decisions in the code, so another dev would struggle to pick it up.
After a few iterations I realised the time I spent fixing and re-aligning the generated code was approaching — and sometimes exceeding — the time it would’ve taken me to build a small, clear skeleton myself.
It’s great for quick prototypes and testing UI ideas. If you want a visual mock or something to play with fast, it’s brilliant. But for anything that needs coherence over time, my workflow now is:
Define the architecture and data contracts first (names, JSON payloads, endpoints); the first sketch after this list shows what I mean.
Build a minimal skeleton myself (state management, file structure); see the second sketch below.
Use the AI to generate specific components or helper functions.
Version early and add basic tests from the start.
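On point 1, “data contracts” just means writing the payload shapes down before any code exists. A hypothetical example of the kind of thing I pin down up front (the endpoint and every field name here are invented for illustration, not the real Chimpaigo API):

```js
// Hypothetical shape of a round payload, agreed on before writing any UI code.
// GET /api/round/current → (illustrative endpoint, not the real one)
const exampleRound = {
  round: 3,
  question: "Which planet is closest to the Sun?",
  answers: ["Venus", "Mercury", "Mars", "Earth"],
  deadline: "2025-08-20T18:30:05Z", // when the answer window closes
  players: [
    { id: "p1", name: "ana", score: 120, bot: false },
    { id: "p2", name: "bot-7", score: 90, bot: true },
  ],
};
```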
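And on point 2, the skeleton is little more than one module that owns the state and a single function that changes it, so that AI-generated components have something stable to plug into. A rough sketch under those assumptions (again, every name is my own, not generated):

```js
// store.js: the one hand-written module that owns game state.
// Generated components read via getState/subscribe and call dispatch;
// they never touch the state object directly.
let state = { phase: "question", round: 1, players: [], answers: {} };
const listeners = [];

export function getState() {
  return state;
}

export function subscribe(fn) {
  listeners.push(fn);
}

export function dispatch(action) {
  state = reduce(state, action);
  listeners.forEach((fn) => fn(state));
}

function reduce(state, action) {
  switch (action.type) {
    case "answer": // a player or bot picked an option
      return { ...state, answers: { ...state.answers, [action.playerId]: action.choice } };
    case "phase": // move the round forward
      return { ...state, phase: action.phase, round: action.round ?? state.round };
    default:
      return state;
  }
}
```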
In short: GPT-5 speeds up the start, but it doesn’t replace planning and discipline once the project grows.
Right now, vibe coding is useful and interesting, but immature for mid/large projects because of context limits and flaky coherence. That said, I’m optimistic: with better session memory, persistent context and tools that enforce contracts, this will get much more viable. It’s not sci-fi — it’s a matter of time and better tooling.
If you want to try the demo I used for this experiment:
https://www.chimpaigo.com/play/index.html
Anyone else let an AI “do it all”? What did you end up doing by hand?