The only issue is that Gemini's context window isn't consistent (I've seen my experience corroborated here on HN a few times). Maybe if all 900k tokens were unique information it would stay useful up to 1 million, but whether my prompt carries 50k or 150k tokens of context, once the total context passes 200k, response coherence and focus go out the window.
>I've been using AI to get smaller examples and ask questions and its been great but past attempts to have it do everything for me have produced code that still needed a lot of changes.
In my experience, most things that aren't trivial require a lot of work as the scope expands. I was responding more to that than to their success in completing the whole extension satisfactorily.