Then as scope expands you're left with something that is difficult to extend because it's impossible to keep everything in the LLM's context: partly hard context limits, partly the fatigue of re-communicating all that context yourself on every prompt.
At this point you can do a critical analysis of what you have and design a more rigorous specification.
Only issue is Gemini's context window isn't consistent (I've seen my experience corroborated here on HN a couple of times). Maybe if 900k of those tokens were all unique information it would stay useful up to 1 million, but whether my prompt carries 50k or 150k tokens of context, once the total context passes about 200k, response coherence and focus go out the window.
>I've been using AI to get smaller examples and ask questions and its been great but past attempts to have it do everything for me have produced code that still needed a lot of changes.
In my experience most things that aren't trivial do require a lot of work as the scope expands. I was responding more to that than to his success in completing the whole extension satisfactorily.
After I completed the extension I did try another model, and despite my instructing it to generate a Manifest V3 extension, its second attempt didn't start with declarativeNetRequest and used the older APIs until I made a refinement. And this isn't even a big project where poor architecture would really cause debt.
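For anyone unfamiliar with the distinction: in Manifest V3, request blocking is declared up front in the manifest plus a static rules file, instead of intercepted imperatively with the older blocking webRequest API. A minimal sketch (the extension name and rule contents here are placeholders, not from my actual extension):

```json
{
  "manifest_version": 3,
  "name": "Example Blocker",
  "version": "1.0",
  "permissions": ["declarativeNetRequest"],
  "declarative_net_request": {
    "rule_resources": [
      { "id": "ruleset_1", "enabled": true, "path": "rules.json" }
    ]
  }
}
```

with a `rules.json` along these lines:

```json
[
  {
    "id": 1,
    "priority": 1,
    "action": { "type": "block" },
    "condition": {
      "urlFilter": "||example.com",
      "resourceTypes": ["main_frame"]
    }
  }
]
```

A model that "knows" V3 should reach for this shape immediately; falling back to V2-era webRequest handlers is exactly the kind of stale-training-data slip you have to catch yourself.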
Vibe coding can lead to technical debt, especially if you don't have the skills to recognize that debt in the code being generated.