upvote
ideally users could be banned for posting LLM outputs as if they were authored by humans https://www.pangram.com/history/49335ddf-118d-43e4-9340-a58a...
reply
I think not "ideally" in any case. Practically they could be banned, but for what offense?

It doesn't claim it was authored by humans. It is clearly the work product of a human who is transparently using AI.

The work product, if it works as claimed, is rather amazing. Maybe even an inflection point in AI use, if it proves sustainable.

reply
Would Wax also be usable as a simple variant of a hybrid search solution? (i.e., not in the context of "agent memory", where knowledge added earlier is worth less than knowledge added more recently)
reply
Yes—Wax can absolutely be used as a general hybrid search layer, not just an “agent memory” feature.

  It already combines text + vector retrieval and reranking, so you can treat
  remember(...) as ingestion and recall(query:) as search for any document
  corpus.

  It does not natively do “recency decay” (newer beats older) out of the box in
  the core call signature. If you want recency weighting, add timestamps in
  metadata and apply post-retrieval re-scoring or filtering in your app logic
  (or query-time preprocessing).
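The post-retrieval re-scoring described above is framework-agnostic. A minimal sketch, assuming results arrive as `(score, metadata)` pairs with an optional Unix `timestamp` in the metadata; the function name `rescore_with_recency`, the half-life parameter, and the metadata shape are all illustrative, not part of Wax's API:

```python
import time

def rescore_with_recency(results, half_life_seconds=7 * 24 * 3600, now=None):
    """Re-rank retrieval results by multiplying each relevance score by an
    exponential recency decay: a result half_life_seconds old keeps half
    its score. Results without a 'timestamp' keep their original score."""
    now = time.time() if now is None else now
    rescored = []
    for score, metadata in results:
        ts = metadata.get("timestamp")
        if ts is None:
            decayed = score
        else:
            age = max(0.0, now - ts)
            decayed = score * 0.5 ** (age / half_life_seconds)
        rescored.append((decayed, metadata))
    rescored.sort(key=lambda pair: pair[0], reverse=True)
    return rescored
```

With a half-life of 1,000,000 seconds, an old result scored 0.9 decays to 0.45 after one half-life and falls below a fresh result scored 0.7, which is the "newer beats older" behavior the question asks about.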
I've added this to the backlog. It comes in handy when dealing with time-sensitive data. Expect a PR this week.
reply
Any plans to make it available to other languages via bindings?
reply