> without the cognitive bias that many of us will carry.

It is naive and incorrect to believe LLMs do not have biases. Of course they do; they are all trained on biased content. There are plenty of articles on the subject.

> Would you jump into the ocean and start trying to talk with them, or would you look up what is already discovered?

Why resort to straw-man arguments? Of course anyone would start by looking up what has already been discovered; that doesn’t immediately mean reaching for and blindly trusting any random LLM. The first thing you should do, in fact, is figure out which prior research is important and reliable. There are too many studies out there that are obviously subpar or outright lies.

reply
I agree, LLMs have biases. That was my primary motivation for building this tool: to put the weight on the LLMs to synthesize, rather than to think about and interpret, the subjects. It's actually the main goal of this tool - maybe I don't articulate that as well as I could - I'm open to suggestions here!

I agree about first figuring out which research is most important and reliable. There is a planning stage that considers the sources and which ones hold credibility.

In addition, the user has full control over the sources the tool uses, and can even add their own (via MCP tools).
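
For anyone curious, here is a minimal, illustrative sketch of what a custom source can look like, using the MCP Python SDK's FastMCP helper - the server name, tool name, and stub body are all made up for illustration, and the exact way the tool wires MCP servers in may differ:

  from mcp.server.fastmcp import FastMCP

  # Hypothetical example: a tiny MCP server exposing one custom source.
  mcp = FastMCP("my-article-index")

  @mcp.tool()
  def search_my_articles(query: str) -> str:
      """Search a private article index and return matching excerpts."""
      # Stub: swap in a real lookup against your own corpus here.
      return f"(no local index configured; received query: {query})"

  if __name__ == "__main__":
      mcp.run()  # serves over stdio so an agent can call the tool

In principle, once the tool is pointed at a server like that, your own corpus becomes just another source for the planning stage to weigh.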

Also, since it's open source, you have full control over the flow/prompts/source methods/etc., and as a result you can optimize this yourself and even contribute improvements to ensure the tool benefits research as a whole.

I welcome your feedback, and any code amendments you propose to improve the tool. You clearly understand what makes good research and your contributions will be highly valued by all of us.

reply
By having to defend your (thesis)/work like this, the whole piece gets lifted to academic heights in a way, so you might as well keep calling its process and result research :)

What description would it come up with itself, BTW?

When you answer with "I agree, LLMs have biases.", I immediately suspect it to be an LLM placating me after being corrected, though. So the world has definitely changed, and we might need to allow for correcting the broadness of words and meanings.

After all, you did not write a thesis, scientific research, or similar, and I remember it being called researching when people went looking up sources (which took them longer than it takes an agent or LLM these days). Compressing that into a report might make it a review, but anyway. Great that you assembled a useful work tool here for those who need exactly that.

reply
I'm very curious what the result of such a question would be, since its main purpose is to search for and decide on balanced/unbiased sources rather than to hold an opinion of its own. I wonder if it will give an answer on this. I just gave it the request:

"I am struggling what to call this other than "Deep Research tool" as really it is looking online and scanning/synthesizing sources (that's you, by the way!). With that in mind, someone suggested "literature review" but it makes me think of books. I wonder if you can see what this kind of "research" is and suggest a name to describ it based on all the information you uncover on what good research looks like."

Let's see how it gets on...

Also, something I think about a lot (you sound like a deep thinker!) - when we believe something that is untrue, can that belief make it true? (purely hypothetical thought)... if 1000 people were told coffee was bad for them, would the mind-body connection take over and amplify this into reality? We are certainly in interesting times!

reply
Ha, hoc - it was quite interesting to see, and I learned a bit from this.

Apparently the suggested term is "Digital Information Synthesis".

You can see the report here: https://docs.google.com/document/d/1Vg_UWUPelWohzVGduaKY7Czd...

This was quite an interesting use case, thanks!

reply
I've updated the Readme now to describe it as "CleverBee: AI-Powered Online Data Information Synthesis Assistant" and to put the emphasis on synthesis.

I also put a new section in place:

What cleverb.ee is not

Cleverb.ee is not a replacement for deep domain expertise. Despite explicit instructions and low temperature settings, AI can still hallucinate. Always check the sources.

reply
Did hoc intentionally pun by writing that this meta-analysis is getting “lifted”?

Reference: https://legacy.reactjs.org/docs/higher-order-components.html

reply
> I welcome your feedback, and any code amendments you propose to improve the tool. You clearly understand what makes good research and your contributions will be highly valued by all of us.

This bit is worded in a way that feels manipulative. Perhaps that’s why your comment is being downvoted. Regardless, I’ll give you the benefit of the doubt and believe you’re being honest and replying in good faith; my genuine intentions have been misinterpreted in the past too, and I don’t wish to do the same to another.

I won’t propose any code improvements, because I don’t believe projects like yours are a net positive for the world. On the contrary, this over-reliance on LLMs and taking their output as gospel will leave us all worse off. What we need is the exact opposite: for people to be actively aware of the inherent flaws in the system and to internalise the absolute need to verify.

reply
I'd like to humor you a bit on what you say (going off on a little tangent here).

- Were all the existing sources (e.g. news, podcasts, etc.) ever reliable?
- Do people lobby for certain outcomes in some research/articles?

And finally...

- Now that we know LLMs hallucinate and news can easily be faked, are people finally starting to question everything, including what they were told before?

Of course, these are mostly rhetorical, but I think about this a lot - whether it is a good or a bad thing. Now that we know we are surrounded by fakeness that can be generated in seconds, maybe people will finally gain critical thinking skills and a better ability to discern truth from falsehood. Time will tell!

For now, the way I see it, people are becoming reliant on these tools, and only a community of people collaborating to better the outcomes can ensure that alternate agendas do not lead the results.

reply
> Were all the existing sources (e.g. news, podcasts, etc) ever reliable?

No, of course not. And I would deeply appreciate it if you stopped arguing with straw men. When you do it repeatedly, you are either arguing in bad faith or unable to realise you’re doing so, neither of which is positive. Please engage with the given argument, not a weaker version designed to be attacked.

But I’ll give you the benefit of the doubt once more.

Provenance matters. I don’t trust everyone I know to be an expert on every subject, but I know who I can trust for what. I know exactly who I can ask biology, medical, or music questions. I know those people will give me the right answers and an accurate evaluation of their confidence, or tell me truthfully when they don’t know. I know they can research and identify which sources are trustworthy. I also know they will get back to me and correct any error they may have made in the past. I can also determine who I can dismiss.

The same is true for searching the web. You don’t trust every website or author; you learn which are trustworthy.

You don’t have that with LLMs. They are a single source that you cannot trust for anything. They can give you different, opposite answers to the same query, all with the same degree of confidence. And no, the added sources aren’t enough: not only are they often summarised wrongly, sometimes even stating the opposite, but most people won’t ever verify them. Not to mention they can be made to support whatever point the creators want.

> Now that we know LLMs hallucinate and news can easily be faked, are people finally starting to question everything, including what they were told before?

No, they are not. That is idyllic and naive, and betrays a lack of attention to the status quo. People are tricked and scammed every day by obviously AI-generated pictures and texts, and double down on their wrong beliefs even when something is proven to have been a lie.

reply
A more precise term for what it is doing would be a "literature review".

But I think you're right to describe it as research in the headline, because a lot of people will relate more to that term. Perhaps describe it as conducting a literature review further down, though.

reply
I agree. In all honesty, I was just following the trend popularized by OpenAI/Google so it would be more relatable, but I will mention "literature review" as you suggest; it's a good idea.

I didn't give the wording too much thought - I was just excited to share.

Where would you suggest putting the literature review text? Readme.md?

What about something like "synthesized findings from sources across the internet"?

When I see the word literature, I immediately think of books.

reply
I really have to challenge the notion of AI "distilling information without cognitive bias".

First, AI systems absolutely embody cognitive biases - they're just different from human ones. These systems inherit biases from:

  - Their training data (which reflects human biases and knowledge cutoffs)
  - Architectural decisions made by engineers  
  - Optimization criteria and reinforcement learning objectives  
  - The specific prompting and context provided by users

An AI doesn't independently evaluate source credibility or apply domain expertise - it synthesizes patterns from its training data according to its programming.

Second, you frame AI as a "power suit" for distilling information faster. While speed has its place, a core value of doing research isn't just arriving at a final summary. It's the process of engaging with a vast, often messy diversity of information, facts, opinions, and even flawed arguments. Grappling with that breadth, identifying conflicting viewpoints, and synthesizing them _yourself_ is where deep understanding and critical thinking are truly built.

Skipping straight to the "distilled information", as useful as it might be for some tasks, feels like reading an incredibly abridged version of Lord of the Rings: a small man finds a powerful ring once owned by an evil god, makes some friends, and ends up destroying the ring in a volcano. The end. You miss all the nuance, context, and struggle that creates real meaning and understanding.

Following on from that, you suggest that this AI-driven distillation then "allows for another level, experiments, surveys, etc to uncover things even further." I'd argue the opposite is more likely. These tools are bypassing the very cognitive effort that develops critical thinking in the first place. The essential practice for building those skills involves precisely the tasks these tools aim to automate: navigating contradictory information, assessing source reliability, weighing arguments, and constructing a reasoned conclusion yourself. By offloading this fundamental intellectual work, we remove the necessary exercise. We're unfortunately already seeing glimpses of this, with people resorting to shortcuts like asking "@Grok is this true???" on Twitter instead of engaging critically with the information presented to them.

Tools like this might offer helpful starting points or quick summaries, but they can't replicate the cognitive and critical thinking benefits of the research journey itself. They aren't a substitute for the human mind actively wrestling with information to achieve genuine understanding, which is the foundation required before one can effectively design meaningful experiments or surveys.

reply
Very true, and it got me thinking a lot.

As humans, we align with our experiences and values, all of which are very diverse and nuanced. It reminds me of a friend who loves any medical conspiracy theory - his dad was a bit of an ass to him and, of course, a scientist!

Without our cognitive biases, are we truly human? Our values and our desired outcomes are inherently part of what shapes us, and of course the sources we choose to trust reinforce this.

It's this that makes me think AGI, or a human-like ability for AI to think, can never be achieved, because we are all biased, like it or not. Acting collectively and challenging each other is what makes society thrive.

I feel there is no true path towards a single source of truth, but collaboratively we can at least work towards getting as close as possible.

reply