Thanks — this is a really good scenario to walk through, and I’m happy to extend the conversation.

First, I’m assuming your 10 ECS tasks talk to the same Postgres instance and may issue overlapping queries. Once that’s the case, the question is no longer just about querying: WAL slots and backend orchestration enter the picture too.

A few concrete facts first.

PostgreSQL caps logical replication slots via `max_replication_slots`. Each LinkedQL Live Query engine instance uses one slot.

Whether “10 instances” is a problem depends entirely on your Postgres config and workload specifics. I’d expect 10 to be fine in many setups, but not universally.
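If you want to check headroom on a given instance, that’s plain Postgres, nothing LinkedQL-specific:

```sql
-- Configured ceiling for replication slots on this instance
SHOW max_replication_slots;

-- Slots currently in existence (active or not)
SELECT slot_name, slot_type, active FROM pg_replication_slots;
```

If the second query’s row count is approaching the first number, you’re near the ceiling regardless of who owns the slots.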

---

That said, if you want strong deduplication across services, the pattern I’d recommend is centralizing queries in a separate service.

One service owns the LinkedQL engine and the replication slot. Other backend services query that service instead of Postgres directly.

Conceptually:

[API services] → [Live Query service (LinkedQL)] → Postgres
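Inside that service, cross-caller deduplication can be as simple as keying live queries by their query text. A minimal sketch of the idea (here `startLiveQuery` and `teardown` are placeholders for however you start and stop a LinkedQL live query, not its actual API):

```javascript
// In-memory registry: one live query per distinct query string.
const liveQueries = new Map(); // query text -> { result, refs }

// `startLiveQuery` is a caller-supplied function (placeholder, not LinkedQL API)
function acquire(q, startLiveQuery) {
  let entry = liveQueries.get(q);
  if (!entry) {
    // First subscriber for this query: start it exactly once
    entry = { result: startLiveQuery(q), refs: 0 };
    liveQueries.set(q, entry);
  }
  entry.refs += 1;
  return entry.result;
}

// `teardown` runs only when the last subscriber leaves
function release(q, teardown) {
  const entry = liveQueries.get(q);
  if (entry && --entry.refs === 0) {
    liveQueries.delete(q);
    teardown(entry.result);
  }
}
```

Each incoming request calls `acquire` on open and `release` on disconnect; ten API services asking for the same query share one upstream live query, and therefore one stream of change events.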

From the caller’s point of view this works like a REST API server (e.g. `GET /users?...`), but it doesn’t have to be "just" REST.

If your technology stack allows it, the orchestration can get more interesting. We built a backend framework called Webflo that’s designed specifically for long-lived request connections and cross-runtime reactivity, and it fits this use case very naturally.

In the query-hosting service, you install Webflo as your backend framework, define routes by exposing request-handling functions, and have these functions simply return LinkedQL's live result rows as-is:

  // the root "/" route
  export default async function(event, next) {
    if (next.stepname) return next();

    const q = event.url.q;

    // `client` is the LinkedQL client instance, created at service startup
    const liveResult = await client.query(q, {
      live: true,
      signal: event.signal
    });

    // Send the initial rows and keep the request open
    event.respondWith(liveResult.rows, { done: false });
  }

Here, the handler starts a live query and returns LinkedQL’s live result rows as a “live” response.

  * The client immediately receives the initial query result
  * The HTTP connection stays open
  * Mutations to the sent object are synced automatically over the wire and the client-side copy continues to behave as a live object
  * If the client disconnects, `event.signal` is aborted and the live query shuts down

On the client side, you’d do:

  const response = await fetch('db-service/users?q=...');
  const liveResponse = await LiveResponse.from(response);

  // A normal JS array — but a live one
  console.log(liveResponse.body);

  Observer.observe(liveResponse.body, mutations => {
    console.log(mutations);
  });

  // Closing the connection tears down the live query upstream
  liveResponse.background.close();

There’s no separate realtime API to plumb manually, no explicit WebSocket setup, and no subscription lifecycle to manage. The lifetime of the live query is simply the lifetime of the request connection.
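That lifetime coupling isn’t magic, by the way; it’s the standard `AbortSignal` pattern. A framework-free sketch of the same idea (names here are illustrative, not Webflo or LinkedQL API):

```javascript
// Tie a resource's teardown to a request's AbortSignal.
// `teardown` stands in for ending the upstream live query.
function bindToSignal(signal, teardown) {
  if (signal.aborted) {
    // Request already gone: clean up immediately
    teardown();
    return;
  }
  signal.addEventListener('abort', teardown, { once: true });
}

// When the client disconnects, the abort fires and the live query is released.
const controller = new AbortController();
bindToSignal(controller.signal, () => console.log('live query released'));
controller.abort(); // simulate client disconnect — prints "live query released"
```

Webflo hands the handler such a signal as `event.signal`, which is why there’s no subscription object to track or close by hand.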

---

In this setup:

  * WAL consumption stays bounded
  * Live queries are deduped centrally
  * API services remain stateless
  * Lifecycle is automatic, not manually managed

I haven’t personally run this exact topology at scale yet, but it fits the model cleanly, and it’s very much the direction the architecture is designed to support.

Once you use Webflo, this stops feeling like “realtime plumbing” and starts feeling like normal request/response — just with live mode.
