upvote
This whole reply, and every other "anecdote" reply, is more worthless than the pixels it's printed on without a link to your "actually did a good job" password manager.

(wow, funny how these vibe-code apps are always copies of something there are already many open source versions of)

reply
Ugh, you made me spend the 20 minutes it takes to spin up a new github account to share this (my existing one uses my real name and I don't really want to doxx myself that much. Not that it's a huge deal, my real identity and the "ninkendo" handle have been intertwined a lot in the past.)

https://github.com/ninkendo84/kenpass

I'm not saying it's perfect; there are some things I would've done differently in the code. It's also not even close to done/complete, but it has:

- A background agent that keeps the unsealed vault in-memory

- A CLI for basic CRUD

- Encryption for the on-disk layout that uses reasonably good standards (PBKDF2 with 600,000 iterations, etc.)

- Sync with any server that supports webdav+etags+mTLS auth (I just take care of this out of band, I had the LLM whip up the nginx config though)

- A very basic firefox extension that will fill passwords (I only did 2 or 3 rounds of prompting for that one, I'm going to add more later)
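
For reference, the key-derivation step described above (PBKDF2 with 600,000 iterations) boils down to something like this Python stdlib sketch. The actual project is Rust, so this is just an illustration; the function name and parameters here are mine, not the repo's:

```python
import hashlib
import os

def derive_vault_key(password: str, salt: bytes, iterations: int = 600_000) -> bytes:
    """Derive a 32-byte vault key from the master password via PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations, dklen=32)

salt = os.urandom(16)                     # random per-vault salt, stored next to the ciphertext
key = derive_vault_key("hunter2", salt)   # slow on purpose: iterations are the brute-force cost
```

The high iteration count is the point: it makes each offline guess against a stolen vault file proportionally more expensive.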

Every commit that was vibe-coded contains the prompt I gave to Codex, so you can reproduce the entire development yourself if you want... A few of the prompts were actually constructed by ChatGPT 5.2. (It started out as a conversation with ChatGPT about what the sync protocol would look like for a password manager in a way that is conflict-free, and eventually I just said "ok give me a prompt I can give to codex to get a basic repo going" and then I just kept building from there.)
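
The ETag part of that sync scheme is essentially optimistic concurrency: read the vault and its ETag, push with `If-Match`, and treat a 412 as "someone else synced first, refetch and merge." A toy in-memory Python model of that dance (no network, and all the names here are illustrative, not from the repo):

```python
class ToyWebDavServer:
    """In-memory stand-in for a WebDAV server that honors If-Match."""

    def __init__(self, blob: bytes):
        self.blob = blob
        self.etag = 0  # real servers use opaque strings; an int works for the model

    def get(self) -> tuple[bytes, int]:
        return self.blob, self.etag

    def put(self, blob: bytes, if_match: int) -> bool:
        # Analogous to HTTP 412 Precondition Failed on a stale ETag.
        if if_match != self.etag:
            return False
        self.blob = blob
        self.etag += 1
        return True

server = ToyWebDavServer(b"vault-v1")
_, etag = server.get()
server.put(b"vault-v2", etag)       # succeeds: our ETag was current
server.put(b"vault-v3", etag)       # fails: stale ETag, so the client must refetch and merge
```

Because a stale writer can never clobber a newer vault, conflicts surface as failed PUTs that the client resolves locally, which is what makes the protocol conflict-free.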

Also full disclosure, it had originally put all the code for each crate in a single lib.rs, so I had it split the crates into more modules for readability, before I published but after I made the initial comment in this thread.

I haven't decided if I want to take this all the way to something I actually use full time, yet. I just saw the 1password subscription increase and decided "wait what if I just vibe-coded my own?" (I also don't think it's even close to worthy of a "Show HN", because literally anybody could have done this.)

reply
Thank you for the time commitment based on an internet forum comment. I greatly appreciate the succinct, human-written README.

Did you investigate prior art before setting out on this endeavor? https://www.google.com/search?q=site%3Agithub.com+password+m...

I ask because engineers need to be clever and wise.

Clever means being capable of turning an idea into code, either by writing it yourself or, more recently, by having the vocabulary and eloquence to prompt an LLM.

Wisdom means knowing when and where to apply cleverness, and where not to, like being able to recognize existing sub-components.

reply
> Did you investigate prior art before setting out on this endeavor

Lol no, I had no idea there were any other password managers! Thanks for the google search link! I didn't know search engines existed either!

> Wisdom means knowing when and where to apply cleverness, and where not to. like being able to recognize existing sub-components.

It says literally in the README that part of this is an exercise in seeing what an LLM can do. I am in no way suggesting anyone use this (because there's a bazillion other password managers already) nor would I even have made this public if you hadn't baited me into doing it.

The fact that there's a literal sea of password managers out there is exactly why I'm curious enough to think "maybe one that I get to design myself, written to exactly my tastes and my tastes alone, could be feasible", and that's what this exercise is about. It literally took me less time to vibe-code what I have right now than to pore through the sea of options that already exist and decide which one to try. And having it be mine at the end means I can implement my pet features the way I want, without worrying one bit about fighting with upstream maintainers. It's also just fun. I thoroughly enjoy the process of thinking about the design and iterating on it.

reply
sooo ...

> it actually did a good job.

applies when there is a sea of "prior art" on the topic requested, and when that request (prompt) is actually framed/worded to match that prior art.

Which may be perfect if the target is reducible to prior art. Reuse, mix-and-match, from open source or Stack Overflow, into my-own-flavour-hot-water, finally!

No, this is not sarcasm. I hate to catch myself (a month later) reinventing hot water. Let something else do it.

The question that stays with me is: how do we keep the brain-bits needed for that inventing / making new stuff alive and kicking? Because they will definitely deteriorate towards zero, or even negative. Should we reinvent every tenth thing, just for the mental gymnastics?

reply
If you measure the productivity of the system that is “you, using an LLM” in terms of the rate at which you can get actually-reviewed code completed (which, based on your comment, seems to be what you were doing) that seems like a totally reasonable way of doing things. But in that case the bottleneck is probably you reviewing code, right? Which, I bet, is faster than writing code. But you probably won’t get the truly absurd superhuman speed ups.

What would you say is your multiplier, in terms of thoroughly reviewing code vs. writing it from scratch?

reply
Yeah, I guess that's kinda my point. LLM detractors on HN seem to straw-man what they think the average LLM user is doing. I'm an experienced programmer who is using an LLM as a speed boost, and the result of that is that it produces thousands of lines of code in a short time.

The impressive thing isn't merely that it produces thousands of lines of code, it's that I've reviewed the code, it's pretty good, it works, and I'm getting use out of the resulting project.

> What would you say is your multiplier, in terms of thoroughly reviewing code vs. writing it from scratch?

I'd say about 10x. More than that (and closer to 100x) if I'm only giving the code a cursory glance (sometimes I just look at the git diff, it looks pretty damned reasonable to me, and I commit it without diving that deep into the review. But I sometimes do something similar when reviewing coworkers' code!)

reply
I don't know if it is incompetence; if anything, I doubt it. Someone else pointed out that pg also used that metric, and I don't think pg is incompetent. At the same time, though, I think it is misleading at best.

My impression is that, as someone else wrote, we do not have an actual metric for things like productivity or quality. But some people want to communicate that they feel (regardless of whether that matches reality) that using an LLM is better/faster/easier, so they latch onto the (wrong) assumption that more LoC == better/faster, which non-programmers have already believed for years. Intentionally or not (they may also be deluding themselves), that is an easy path to convincing non-programmers that the new toys have value for them too. (Note that I explicitly ignore the perspective of the "toymakers", as those have further incentives to promote their products.)

Personally, I also have about two decades of professional experience (more if counting non-professional) and I've been toying with LLMs now and then. I do find them interesting, and when I use them for coding tasks I absolutely find useful cases for them. I like to have them (where possible) write all sorts of code that I could write myself but just don't feel like writing, and I find them useful for stuff I'm not particularly interested in exploring but want to have anyway (usually Python stuff); I'm sure I'll find more uses for them in the future. Depending on the case and specifics, I may even say that in very particular situations I can do things faster using LLMs, though that is not a given, nor is speed much of a requirement or high on my list when it comes to using LLMs. I'd rather have them produce better code slower than dummy/pointless/repetitive code faster.

However, one thing I never thought about was how "great" it is that they generate a lot of lines of code per whatever time interval. If anything, I'd prefer it if they generated fewer lines of code, and I'd consider an LLM (or any other AI-ish system) "smarter" if it could figure out how to do that without hand-holding from me. Because of this, I just can't see LoC as anything but a very bad metric - which is the same as when the code is written by humans.

reply
> this tech is here to stay

How can you say that when all these models are externally sourced from companies that actively make a loss per token? When they finally need to make a profit, how can we be sure these models, as well as their owners, will remain reliable and not enshittified? Anthropic has been blacklisted in the last 24 hours, so it's a turbulent industry, to say the least.

reply