Show HN: Haystack – Review pull requests like you wrote them yourself

(haystackeditor.com)

As I work more with AI, I've come to the conclusion that I have no patience to read AI-generated content, whether it's right or wrong. It just feels like wasted time. Countless examples: meeting summaries (nobody reads them), auto-generated code (we usually use it for prototypes and PoCs; if it works, we ship it, no reviews. For serious work we take care of the code carefully), and a large etc.

I like AI on the producing side. Not so much on the consuming side.

reply
I tend to agree. Except if it's text generated by me for me.

I don't want you to send me an AI-generated summary of anything, but if I initiated it looking for answers, then it's much more helpful.

reply
I'm not doing this much yet, but AI-generated text might be more useful if you use AI to ask questions against it as a source.
reply
For me, AI meeting summaries are pretty useful. The only way I can see them not being useful for you is if you're disciplined enough to write down a plan based on the meeting subject yourself.
reply
That's fair! If there were a "minimal" mode where you could still access callers, data flows, and dependencies with no AI text, would it be helpful for your reviews?
reply
Not parent, but in my opinion the answer here is yes. I agree that there is a real need here and a potentially solid value proposition (which is not the case with a lot of vscode-fork+LLM-based startups), but the whole point should be to combat the verbosity and featurelessness of LLM-generated code and text. Using an LLM on the backend to discover meaningful connections in the codebase may sometimes be the right call, but the output of that analysis should be some simple visual indication of control flow or dependency, like you mention. At first look, the output in the editor looks more like an expansion than a distillation.

Unrelated, but I don't know why I expected the website and editor theme to be hay-yellow, or hay-yellow and black, instead of the classic purple on black :)

reply
Thanks for the feedback! That makes a lot of sense, and I like the idea of being an extension of the user's own analysis rather than hosing them with information.

Yeah originally I thought of using yellow/brown or yellow/black but for some reason I didn't like the color. Plenty of time to go back though!

reply
Honestly, I feel the same way and I can't quite put into words why. If I had to guess, it's because I know not all AI-generated stuff is created equal, and some people are terrible at prompting or don't even proofread the output, so I have this internal barometer that screams "you're likely wasting your time reading this," and I've learned to avoid it entirely. Which is sad, because clearly a ton of stuff is AI-generated now, so I barely read anything, _especially_ if I see any signals like "it's not just this, it's that."
reply
Products like these make me realize we're solving the wrong problems with a lot of these AI solutions. I don't mean this as a knock on you or your product; I actually think it's extremely cool and will likely find a use. But from my perspective, if you think you need this product, you likely have a bigger organizational issue, as PRs are about the last thing I would want an AI 'intern' to organize for me.
reply
> you likely have a bigger organizational issue

Could you expound on this? In my experience as a software engineer, a non-trivial pull request tends to fall into one of two buckets:

1. The PR is not organized by the author, so it's skimmed rather than fully understood because it's so hard to follow along

2. The PR author puts a lot of time into organizing the pull request (crafting each commit, trying to build a narrative, etc.) and the review is thorough, but still not easy

I think organization helps the 1st case and obviates the need for the author to spend so much time crafting the PR in the 2nd case (and eliminates messy updates that need to be carefully slotted in).

Curious to hear how y'all handle pull requests!

reply
This is where I feel like we've solved a third-order problem. If you're sorting all PRs into those two buckets, you should probably take a step back and redefine what a PR is for your organization: both 1 and 2 assume that the PR is too big to review in a single sitting, or that the author didn't put enough effort into crafting it. Both of these should just be rejected outright in favor of doing things in a smaller, more manageable way, instead of having an AI sort through something that a human should have started with. Obviously this is more of an ideal, and a lot of companies don't operate at the ideal, which is why I think your product will find good use: companies don't want to invest in slowing down, only in going faster.
reply
Interesting. At my previous company there was a debate about smaller PRs vs. bigger PRs, and the conclusion was that there are tradeoffs between dealing with 2-5 bite-sized PRs and one large PR. The biggest one is that it's hard to grasp the totality of the change and how the different PRs work together.

> companies don't want to invest in slowing down, only going faster.

I do think this is the way things are going to go moving forward, for better or for worse!

reply
My solution is to organize my PRs as a sequence of commits that explain what they do, and then a PR description that gives an overview and motivates the changes. I've gotten really positive feedback on this, and it dramatically speeds up reviews of my code. Overall less work for the team. (And it often helps me find problems before I even submit the PR.)

As for other people's PRs? If they don't give a good summary, I ask them to write one.

reply
Yeah I did this too as an engineer!

I think this is a valid part of the "crafting a PR" skill that's underappreciated, and part of the goal of Haystack is to make that part of PR craft effortless.

reply
This nails a real problem. Non-trivial PRs need two passes: first grok the entrypoints and touched files to grasp the conceptual change and review order, then dive into each block of changes with context.
reply
I would really want to use this, maybe about once a week, for major PRs. I find it absurd that we all get AI help writing large features but very little help doing approximately the same job when reviewing that code. I would actually even read my own PRs with it, as my workflow with AI is to prompt it to achieve some feature/goal, then only review the code once things work (this is an oversimplification).
reply
I think tools like this are useful, but they can never replace the quality of the narrative that someone who actually wrote the code can come up with.

There's just so much contextual data outside the code itself that you miss out on. This looks like an improvement over GitHub Copilot-generated summaries, but that's not a high bar.

reply
Did not load, sad.

Failed to load resource: net::ERR_BLOCKED_BY_CLIENT

^ I'm not exactly sure what this is about. I think it is https://static.cloudflareinsights.com/beacon.min.js/vcd15cbe... which I would imagine is probably not necessary.

Uncaught TypeError: Cannot convert undefined or null to object at Object.keys (<anonymous>) at review/?pr_identifier=xxx/xxx/1974:43:12

These urls seem to be kind of revealing.
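For what it's worth, a `Cannot convert undefined or null to object` from `Object.keys` usually means some code assumed a payload that never arrived, e.g. because a request was blocked by the client. A minimal sketch of the failure mode and a defensive guard (`summarizeFiles` and the `files` field are hypothetical, not Haystack's actual code):

```javascript
// Object.keys throws on null/undefined, which matches the error above:
// Object.keys(null) -> TypeError: Cannot convert undefined or null to object

function summarizeFiles(response) {
  // Guard: fall back to an empty object when the payload is missing,
  // e.g. when the request was blocked (ERR_BLOCKED_BY_CLIENT).
  return Object.keys(response?.files ?? {});
}

console.log(summarizeFiles(null));                                // []
console.log(summarizeFiles({ files: { "a.ts": 1, "b.ts": 2 } })); // [ 'a.ts', 'b.ts' ]
```
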

reply
Trying to find and fix this! Could you email me the repo details? Very sorry for the error here.

In terms of auth: you should get an "unauthenticated" error if you're looking at a repo without authentication (or a non-existent repo).

reply
I love this idea, trying it out now. There is QUITE a delay doing the analysis, which is reasonable, so I assume that in a productionized (non-demo) release this will be async?
reply
There's a GitHub app that you can install on your repo.

If you install and subscribe to the product, we create a link for you every time you make a pull request. We're working (literally right now!) on making it create a link every time you're assigned a review as well.

We'll also speed up the time in the future (it's pretty slow)!

reply
Any ideas what pricing will look like?

What is your privacy policy around AI?

Any plans for a locally-runnable version of this?

reply
1. Pricing would be $20 per person, and we'd spin up an analysis for every PR you create or are assigned to review.

2. We don't train on or retain anything related to your codebase. We do send all the diffs to OpenAI and/or Anthropic (and we have agentic grepping, so it can see other parts of the codebase as well).

3. Do you mean the ability to run this on your own servers (or even on your own computer with local LLMs)? We have plans for the former, but I don't know how big the lift for the latter will be, so I'm not sure!
reply
Pretty cool. I'm not from a big firm, but I really like the idea of adding metadata to PRs! One counterintuitive thing about navigation: I keep hitting the Back button hoping to go back to `Pull request summary`, which feels like the main navigation page, but nothing happens. You already implement back-and-forward navigation in history, so why not do it in the browser router too?
reply
Sorry to clarify: you hit "back" after traversing from the summary but it doesn't go back? If so, that's a bug!

Or do you mean that doing the browser navigation of "back" should bring you to the summary (initial page)?

reply
Sorry for the confusion! Missing browser navigation is the problem. The virtual back buttons you put in the top left work exactly as I'd expect browser navigation to. I keep trying to hit the browser's Back button; it would feel so natural.
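For context, syncing an in-app back button with the browser's Back button usually means pushing a history entry per virtual navigation and restoring state on `popstate`. A minimal, framework-free sketch of the underlying stack logic (the `NavStack` class and view names are illustrative assumptions, not Haystack's actual router; in a browser you'd call `history.pushState(state, "", url)` inside `navigate()` and handle `window.onpopstate` to re-render the previous view):

```javascript
// Minimal navigation stack mirroring browser history semantics.
class NavStack {
  constructor(initial) {
    this.entries = [initial]; // e.g. the "Pull request summary" page
    this.index = 0;
  }
  navigate(view) {
    // Navigating from the middle of history drops forward entries,
    // exactly like the browser does after pushState.
    this.entries = this.entries.slice(0, this.index + 1);
    this.entries.push(view);
    this.index++;
  }
  back() {
    // What a popstate handler would restore on the browser's Back button.
    if (this.index > 0) this.index--;
    return this.entries[this.index];
  }
  current() {
    return this.entries[this.index];
  }
}

const nav = new NavStack("Pull request summary");
nav.navigate("file: src/app.ts");
nav.navigate("callers of render()");
console.log(nav.back()); // "file: src/app.ts"
console.log(nav.back()); // "Pull request summary"
```
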
reply