Nats scheduling fix #297

Merged
sonus21 merged 70 commits into master from nats-scheduling-fix on May 10, 2026

Conversation

@sonus21 (Owner) commented May 9, 2026

Description

Please include a summary of the changes and the related issue. Please also include relevant motivation and context. List any dependencies that are required for this change.

Fixes # (issue)

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

How Has This Been Tested?

Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration.

  • Unit Test
  • Integration Test
  • Reactive Integration Test

Test Configuration:

  • Spring Version
  • Spring Boot Version
  • Redis Driver Version

Checklist:

  • My code follows the style guidelines of this project
  • I have performed a self-review of my code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes

sonus21 and others added 30 commits May 3, 2026 01:08
The coverage_report job was producing an effectively empty
jacocoTestReport.xml (3.4KB vs ~1.1MB locally) because no .class files
existed when coverageReportOnly ran — the job checked out source code
and downloaded .exec artifacts, but never compiled. JaCoCo's report
generator skips packages/classes it cannot resolve, so the merged XML
ended up with only <sessioninfo> entries and no <package> elements.

That made coverallsJacoco silently no-op via the
"source file set empty, skipping" branch in CoverallsReporter, so
"Push coverage to Coveralls" reported success without uploading.

Verified by downloading the coverage-report artifact from a recent run
and comparing its XML structure against a local build's report.

Assisted-By: Claude Code
…e Q-detail

Replace the all-stub `NatsRqueueUtilityService` with real impls for the operations
JetStream can model: `pauseUnpauseQueue` persists the `paused` flag on `QueueConfig`
in the queue-config KV bucket and notifies the local listener container so the poller
stops dispatching; `deleteMessage` is a soft delete via `MessageMetadataService`
(stream message persists, dashboard hides via the metadata flag); `getDataType`
reports `STREAM`. `moveMessage`, `enqueueMessage`, and `makeEmpty` deliberately
remain "not supported" — there is no JetStream primitive for those.

Update `RqueueQDetailServiceImpl.getRunningTasks` / `getScheduledTasks` to return
header-only tables when the broker capabilities suppress those sections, instead of
emitting zero rows or 501s on NATS.

20 new unit tests cover the pause/delete paths and lock in the still-unsupported
operations. Updates `nats-task.md` / `nats-task-v2.md` to reflect what landed.

Assisted-By: Claude Code
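
A minimal sketch of the pause path described above, assuming a jnats KeyValue
handle for the queue-config bucket; the bucket name, (de)serialization
helpers, and the container notification method are all assumptions, not the
actual implementation:

    import io.nats.client.Connection;
    import io.nats.client.KeyValue;

    // Persist the paused flag, then nudge the local poller (sketch only;
    // null-check on the KV entry elided).
    KeyValue kv = connection.keyValue("queue-config");              // bucket name assumed
    QueueConfig config = deserialize(kv.get(queueName).getValue()); // helper assumed
    config.setPaused(paused);
    kv.put(queueName, serialize(config));                           // helper assumed
    listenerContainer.pauseUnpauseQueue(queueName, paused);         // method name assumed
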
End-to-end browser-tested the NATS dashboard and shipped the templates +
broker fixes uncovered by it:

- `RqueueViewControllerServiceImpl.addBasicDetails` now propagates the active
  broker's `Capabilities` to every template via `hideRunningPanel`,
  `hideScheduledPanel`, `hideCronJobs`, and `hideCharts`. Templates default
  to "show" when these are absent so the legacy Redis path is unchanged.

- `base.html` hides the Running tab when `hideRunningPanel` is set; Scheduled
  was already gated.
- `index.html` and `queue_detail.html` skip the stats / latency chart panels
  (and their JS bootstrap) when `hideCharts` is set, replacing the home
  charts with a friendly backend-aware blurb.
- `queues.html` swaps the hard-coded "backing Redis structures" copy for the
  broker-supplied `storageKicker`.

- `JetStreamMessageBroker.peek` rewritten to read messages directly from the
  stream via `JetStreamManagement.getMessage(streamName, seq)` instead of
  creating an ephemeral pull consumer. NATS 2.12+ rejects `AckPolicy.None`
  on WorkQueue streams (10084) and rejects mixing filtered + non-filtered
  consumers (10100), so the consumer-based approach can't coexist with the
  durable poller. Sequence-based reads sidestep both.

- `NatsRqueueMessageMetadataService.deleteMessage` now creates a tombstone
  metadata entry when no record exists (NATS skips the storeMessageMetadata
  path at enqueue time), so dashboard-driven deletes always succeed and the
  next peek renders the row as deleted.

- `rqueue.js`'s `deleteMessage` / `enqueueMessage` button handlers now use
  `closest('tr')` instead of two `.parent()` hops. The recent
  `explorer-action-group` div wrapper added an extra level of nesting; the
  old walk landed on the action cell and read "Delete" as the message id.

Assisted-By: Claude Code
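
A minimal sketch of the sequence-based peek described in the third bullet,
assuming a jnats JetStreamManagement handle (deleted-sequence handling and
offset bookkeeping simplified):

    import io.nats.client.JetStreamManagement;
    import io.nats.client.api.MessageInfo;
    import java.util.ArrayList;
    import java.util.List;

    // Read messages straight from the stream by sequence; no consumer is
    // created, so the AckPolicy.None (10084) and filtered/non-filtered
    // mixing (10100) restrictions never apply.
    List<MessageInfo> peek(JetStreamManagement jsm, String stream,
            long startSeq, int count) throws Exception {
        List<MessageInfo> out = new ArrayList<>();
        for (long seq = startSeq; seq < startSeq + count; seq++) {
            out.add(jsm.getMessage(stream, seq)); // throws for deleted sequences
        }
        return out;
    }
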
Replace the hard-coded "LIST" / "ZSET" tokens on the queue-detail page with a
broker-supplied human label so NATS shows "Queue (Stream)", "Completed (KV)",
and "Dead Letter (Stream)" instead of Redis-shaped data structure names.

- New `MessageBroker.dataTypeLabel(NavTab, DataType)` SPI hook, default
  returns null (legacy Redis path keeps `DataType.name()`).
- `JetStreamMessageBroker` overrides for the NATS-mapped tabs.
- `RedisDataDetail` carries an optional `typeLabel` field; templates render
  via `{{ typeLabel | default(type) }}` so older callers stay correct.
- `queue_detail.html` plus the `rqueue.js` modal title use the label and
  surface the broker-friendly token in the explorer header.

Also fixes `JetStreamMessageBroker.size(QueueDetail)` for streams created
with `RetentionPolicy.Limits`. WorkQueue retention drops messages on ack so
`streamState.msgCount` already equals the pending count, but Limits keeps
all messages and `msgCount` over-reports. The new path detects retention
from `streamInfo.config` and walks the stream's durable consumers to surface
the maximum `numPending` for Limits-mode queues, falling back to msgCount
on consumer-enumeration errors.

Assisted-By: Claude Code
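
The SPI hook, sketched from the commit text; the NavTab constants and the
exact tab-to-label mapping are assumptions:

    // Default keeps the legacy Redis rendering.
    default String dataTypeLabel(NavTab tab, DataType type) {
        return null; // null => template falls back to DataType.name()
    }

    // JetStreamMessageBroker override (labels from the commit message):
    @Override
    public String dataTypeLabel(NavTab tab, DataType type) {
        switch (tab) {
            case PENDING:   return "Queue (Stream)";      // enum constants assumed
            case COMPLETED: return "Completed (KV)";
            case DEAD:      return "Dead Letter (Stream)";
            default:        return null;
        }
    }
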
…~ prefix

Replace the consumer-walking max(numPending) computation with stream position
math so the dashboard surfaces the worst-case backlog using a single
pass over consumers:

  pending ≈ lastSeq - min(consumer.delivered.streamSeq)

Mathematically equivalent to the previous max(numPending), but expressed
in terms of stream offsets (which is what the dashboard now signals as
approximate to the operator).

Also adds the user-facing approximation indicator:

- New `MessageBroker.isSizeApproximate(QueueDetail)` SPI hook, default
  false (Redis returns exact list / sorted-set sizes).
- `JetStreamMessageBroker.isSizeApproximate` returns true for streams
  with `RetentionPolicy.Limits` and false for WorkQueue (the standard
  rqueue queue mode where msgCount is exact after acks remove messages).
- `RedisDataDetail.approximate` carries the flag through to the view.
- `queue_detail.html` renders `~ N` when approximate, `N` otherwise; the
  `Queue-backed` short-circuit for size<0 stays.
- `RqueueQDetailServiceImpl` sets the flag on the pending row when a
  broker is wired.

Assisted-By: Claude Code
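
A minimal sketch of the position math above as of this commit (a later
commit in this PR rebases it on ackFloor), assuming jnats:

    import io.nats.client.JetStreamManagement;
    import io.nats.client.api.ConsumerInfo;

    // Worst-case backlog: distance from the stream head to the slowest
    // consumer's delivered position.
    long approximatePending(JetStreamManagement jsm, String stream) throws Exception {
        long lastSeq = jsm.getStreamInfo(stream).getStreamState().getLastSequence();
        long minDelivered = lastSeq;
        for (String consumer : jsm.getConsumerNames(stream)) {
            ConsumerInfo ci = jsm.getConsumerInfo(stream, consumer);
            minDelivered = Math.min(minDelivered, ci.getDelivered().getStreamSequence());
        }
        return lastSeq - minDelivered;
    }
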
A single aggregated "~ N" pending number hides the per-consumer lag that
matters on a fan-out stream — Consumer A might be at seq 100 while Consumer
B is at seq 1100, and the dashboard previously surfaced only the worst case.

This commit replaces that aggregate with a per-row breakdown when the broker
exposes one, leaving the WorkQueue / Redis path unchanged:

- New `MessageBroker.consumerPendingSizes(QueueDetail)` SPI hook returning
  an ordered map of `consumerName -> pending`. Default returns null (no
  breakdown available).
- `JetStreamMessageBroker` overrides for Limits-retention streams: walks
  the durable consumers, prefers `numPending` (server-computed), falls back
  to position math (`lastSeq - delivered.streamSeq`) when numPending is 0.
  Returns null on WorkQueue retention (single shared pool).
- `RedisDataDetail` carries an optional `consumerName`.
- `RqueueQDetailServiceImpl` emits one PENDING row per consumer when the
  broker provides a breakdown, with exact (non-approximate) counts.
- `queue_detail.html` renders the consumer name as muted text next to the
  data-structure name.

Result on a NATS Limits stream:
  PENDING | Stream | rqueue-js-feed / consumer-a | 100
  PENDING | Stream | rqueue-js-feed / consumer-b | 500
  PENDING | Stream | rqueue-js-feed / consumer-c |   0

Assisted-By: Claude Code
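
A sketch of the per-consumer breakdown described above, assuming jnats
(fallback rule and ordering as stated in the commit):

    import io.nats.client.JetStreamManagement;
    import io.nats.client.api.ConsumerInfo;
    import java.util.LinkedHashMap;
    import java.util.Map;

    // Ordered consumerName -> pending map for a Limits-retention stream.
    Map<String, Long> consumerPendingSizes(JetStreamManagement jsm, String stream)
            throws Exception {
        long lastSeq = jsm.getStreamInfo(stream).getStreamState().getLastSequence();
        Map<String, Long> pending = new LinkedHashMap<>();
        for (String name : jsm.getConsumerNames(stream)) {
            ConsumerInfo ci = jsm.getConsumerInfo(stream, name);
            long n = ci.getNumPending(); // server-computed, preferred
            if (n == 0) { // fall back to position math
                n = Math.max(0, lastSeq - ci.getDelivered().getStreamSequence());
            }
            pending.put(name, n);
        }
        return pending;
    }
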
Replace the queue-detail page's data-structure-centric "Job Type / Data Type /
Name / Size" table with a consumer-level table that works across all backends.
The standalone "Queue Pollers" section is folded in via a worker-registry join
on consumer name, so the queue-detail page now has a single integrated view.

UI shape:

  Subscribers
    Consumer | Type | Storage | Pending | In-Flight | Status | Host/PID | Last Poll

  Terminal Storage (only when present)
    Bucket | Type | Storage | Size

Backends:

- Redis — every @RqueueListener registered for the queue is a row, with shared
  Pending (LIST size, marked "(shared)") and shared In-Flight (processing-ZSET
  size). EndpointRegistry.getAllForQueue() enumerates the handlers.
- NATS WorkQueue — every durable consumer is a row; Pending = shared msgCount
  marked "(shared)", In-Flight = consumer's exclusive numAckPending.
- NATS Limits — every durable consumer is a row; Pending = exact per-consumer
  numPending, In-Flight = numAckPending.

Architecture:

- New `MessageBroker.subscribers(QueueDetail)` SPI hook returning a list of
  `SubscriberView` records (consumerName, pending, inFlight, pendingShared).
  Default returns a single anonymous row backed by `size()` so brokers that
  don't track named consumers still render a working table.
- `RedisMessageBroker` overrides via `EndpointRegistry.getAllForQueue()` +
  shared list/ZSET sizes.
- `JetStreamMessageBroker` overrides via `jsm.getConsumerNames` +
  `getConsumerInfo` per consumer, branching on retention policy.
- `RqueueQDetailService` exposes `getSubscriberRows` / `getTerminalRows`,
  joining broker SPI data with the worker registry for status / last-poll.
- `RedisDataDetail` is unchanged (kept for existing API consumers).
- `queue_detail.html` rewritten: "Subscribers" + "Terminal Storage" sections
  replace the Job Type / Data Type / Name / Size and Queue Pollers blocks.

Side fix per user: charts (stats / latency) are no longer hidden when a
broker reports `!supportsScheduledIntrospection`. NATS has the chart endpoints
working; the panels just render empty until counters accumulate, which is
the natural "no data yet" state. `index.html` and `queue_detail.html` always
include the chart partials now; the friendly "not available" placeholder
is removed.

Assisted-By: Claude Code
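
A sketch of the SPI default described above; the record's field names come
from the commit, while the in-flight default of 0 is an assumption:

    // One anonymous row backed by size(), so brokers without named
    // consumers still render a working Subscribers table.
    record SubscriberView(String consumerName, long pending,
            long inFlight, boolean pendingShared) {}

    default List<SubscriberView> subscribers(QueueDetail queueDetail) {
        return List.of(new SubscriberView(null, size(queueDetail), 0, false));
    }
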
…inal cards

Bring the queue-detail page in line with the modern card-based design system
already used by /queues and /workers. The previous layout was Bootstrap
table-bordered tables that felt like a different app.

New shape:

  Hero
    Kicker "QUEUE" + queue name + paused/live badge inline + subtitle
    Stat chips: Subscribers / Pending / In-Flight (right-aligned panel)

  Configuration chip strip
    Concurrency · Retries · Visibility · Dead Letter (icons + values)
    Created / Updated meta below a dashed divider

  Subscribers section
    "LIVE" kicker + heading + supporting copy
    One subscriber-card per @RqueueListener consumer:
      - Consumer name (clickable into explorer modal)
      - Type pill (Queue (Stream) / Stream consumer / List)
      - Status badge (ACTIVE/STALE/PAUSED) using existing worker-status-* classes
      - Pending + In-Flight stat panels with "shared" hint where applicable
      - Storage / Host / Last Poll meta rows
    Empty state uses the queue-empty-state pattern from /queues.

  Terminal Storage section
    "SHARED" kicker + heading
    One terminal-card per shared bucket (COMPLETED / DEAD), color-coded
    via left border (green for completed, orange for dead letter).
    Big size value with "messages" label or "Queue-backed" placeholder.

  Stats & Latency
    Collapsed by default in a <details> disclosure ("TELEMETRY" kicker).
    Charts lazy-render on first open so the initial paint stays focused
    on the actionable data. The full chart partials are unchanged.

CSS additions are scoped to new component classes (queue-detail-*,
subscriber-*, terminal-*) and reuse the existing tokens — same colors,
same border-radius scale, same shadow recipe — so the page sits flush
with /queues and /workers in look and feel.

Pebble fix: switched `if config` to `if config != null` because Pebble
rejects domain objects in boolean context (PebbleException 10084 caught
in browser test).

Assisted-By: Claude Code
For workqueue streams, NATS rejects multiple non-filtered consumers
(error 10099). When multiple listeners were registered on the same
workqueue queue without custom consumer names, each listener tried to
create its own consumer, causing the provisioning to fail.

Fix: For workqueue streams with no custom consumer name, use a
consistent `queueName-consumer` name so all listeners share a single
consumer. This matches the workqueue semantics where only one consumer
can receive each message.

- NatsStreamValidator: Resolve consumer name based on queue type,
  using `queueName-consumer` for workqueues without custom names
- JetStreamMessageBroker: Use the same resolution logic in pop() to
  ensure validator and poller use the same consumer name

Assisted-By: Claude Code
User feedback was blunt: too much wasted whitespace and no clear pause/play
control. This rewrite collapses the queue-detail page into a single header
bar plus dense table sections so the operator can see and act on everything
without scrolling.

Layout changes:

- Header: queue name + state pill (LIVE / PAUSED) + actionable pause/play
  toggle button + summary stats ("N subscribers · N pending · N in-flight")
  on the right, all on one row. Replaces the big hero block.
- Configuration chip strip: 6 inline cells (Concurrency · Retries · Visibility
  · DLQ · Created · Updated) with dashed dividers — fits everything in ~60px
  of vertical space.
- Subscribers and Terminal Storage as compact tables with light row dividers
  and pill-styled type labels. No more card-grid whitespace.
- Stats & Latency stays behind a <details> disclosure so charts don't pin
  the actionable rows off-screen.

Pause/play action:

- The state pill is paired with a button that reuses the existing
  `pause-queue-btn` JS handler (POSTs to /pause-unpause-queue and reloads).
  Click toggles the queue state and the pill / button icon swap accordingly.
  Browser-tested: pausing the queue switches to "PAUSED" + play icon;
  clicking play unpauses, queue resumes processing pending messages.

Bug caught while testing:

- `QueueDetail.resolvedConsumerName()` previously returned different names
  for system-generated vs. primary queues (`-consumer-primary` vs
  `-consumer`). NATS WorkQueue streams reject multiple non-filtered
  consumers (10099) so the poller's runtime consumer name had to match
  whatever the bootstrap validator created. Unified to a single
  `{name}-consumer` suffix.

Files touched:

- queue_detail.html — full template rewrite (compact tables, header bar,
  inline config strip, charts disclosure)
- rqueue.css — replaced the previous card-heavy queue-detail CSS block
  with a tighter `qd-*` namespaced ruleset (~250 lines, was ~430)
- QueueDetail.java — consumer-name suffix fix
- (assorted formatter cleanups across files touched in earlier commits)

Assisted-By: Claude Code
resolvedConsumerName() returned different suffixes (-consumer vs
-consumer-primary) based on systemGenerated. The bootstrap validator
and runtime poller therefore disagreed on the consumer name when
systemGenerated was false, and the second creation attempt failed
on NATS workqueue streams with error 10099 (multiple non-filtered
consumers not allowed).

Use {name}-consumer in both cases. The custom consumerName from
@RqueueListener is still honoured when set; only the generated default
loses the -primary distinction.

Assisted-By: Claude Code
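
A minimal sketch of the unified resolution (field names assumed):

    // Custom consumerName from @RqueueListener still wins; the generated
    // default is {name}-consumer in all cases, with no -primary variant,
    // so the bootstrap validator and the runtime poller always agree.
    public String resolvedConsumerName() {
        if (consumerName != null && !consumerName.isEmpty()) {
            return consumerName;
        }
        return name + "-consumer";
    }
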
The queue-detail explorer was paginating from the stream's first
sequence regardless of which subscriber the operator clicked on. For
Limits-retention streams with competing consumers each consumer has
its own delivered offset, so the explorer was showing messages the
selected consumer had already acked instead of what is still pending
for that subscriber.

Wire consumerName end-to-end:

- QueueExploreRequest: new optional consumerName field.
- MessageBroker.peek: new consumer-aware overload, default delegates
  to the existing peek so non-NATS backends keep working.
- JetStreamMessageBroker.peek: when consumerName is set on a Limits
  stream, base the start sequence on
  getConsumerInfo(stream, consumer).getDelivered().getStreamSequence()+1.
  WorkQueue and the no-consumer call site are unchanged.
- RqueueQDetailService(.Impl): propagate consumerName into the broker
  peek call.
- Rest controllers (blocking + reactive): forward consumerName from
  the request into the service.
- queue_detail.html: subscribers table emits data-consumer on each
  consumer link.
- rqueue.js: read data-consumer in exploreData() and include it in
  the queue-data POST body.

Assisted-By: Claude Code
The consumer-aware peek was starting from delivered.streamSeq + 1,
which skipped over the in-flight window (sequences delivered but not
yet acked). Operators looking at a row with pending=0, in-flight=15
clicked through and got an empty explorer because all 15 in-flight
sequences were <= delivered.streamSeq.

Use ackFloor.streamSeq + 1 instead, so the explorer includes both
in-flight and not-yet-delivered messages — i.e. everything this
consumer still has work to do on. Matches the operator's mental
model of "what is this consumer still chewing on".

Assisted-By: Claude Code
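
The one-line change at the heart of this commit, sketched with jnats:

    // Start the explorer at everything still outstanding for this consumer:
    // in-flight (delivered but un-acked) plus not-yet-delivered.
    ConsumerInfo ci = jsm.getConsumerInfo(streamName, consumerName);
    long startSeq = ci.getAckFloor().getStreamSequence() + 1; // was getDelivered()
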
The inFlight map was keyed only on RqueueMessage.id. For Limits-
retention streams with two or more durable consumers (e.g. one
@RqueueListener per handler with consumerName="linkedin-search" and
consumerName="google-search" on the same queue), each consumer
receives its own copy of every message and the second pop's
inFlight.put silently overwrote the first's NATS Message handle.

When the first worker later called broker.ack/nack, it picked up the
wrong consumer's NATS Message and acknowledged that delivery instead
of its own. The original delivery stayed in numAckPending forever,
and Outstanding Acks grew without bound (e.g. 92 -> 101 -> 112 over
30s with no new produces).

Key the inFlight map on "<consumerName>::<messageId>" so each
consumer's delivery is tracked independently. Pop uses its
consumerName arg; ack/nack/moveToDlq use q.resolvedConsumerName(),
which matches what the poller passed at fetch time.

Verified: with two @RqueueListener on a Limits stream, Outstanding
Acks now drains to 0 and Ack Floor advances to lastSeq instead of
sticking near the start.

Assisted-By: Claude Code
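
A sketch of the composite keying (map type and call sites assumed from the
commit text):

    import io.nats.client.Message;
    import java.util.concurrent.ConcurrentHashMap;

    // One entry per (consumer, message) delivery, so fan-out copies of the
    // same message never overwrite each other's NATS handle.
    private final ConcurrentHashMap<String, Message> inFlight = new ConcurrentHashMap<>();

    private static String inFlightKey(String consumerName, String messageId) {
        return consumerName + "::" + messageId;
    }

    // pop:      inFlight.put(inFlightKey(consumerName, msg.getId()), natsMessage);
    // ack/nack: inFlight.remove(inFlightKey(q.resolvedConsumerName(), msg.getId())).ack();
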
Adds JetStreamMessageBrokerMultiConsumerAckIT, which catches the bug
fixed in 524fc5c: under multi-consumer fan-out (Limits-retention
stream + two durable consumers) the broker's inFlight map was keyed
on RqueueMessage.id alone, so consumer-b's pop silently overwrote
consumer-a's NATS Message handle and consumer-a's ack later targeted
the wrong delivery.

The trigger is timing-sensitive: pops on both consumers must occur
before either acks. A sequential drain-then-drain pattern hides the
bug because the inFlight key is removed before the second pop
repopulates it. Verified that the test fails ("ack(consumer-b, m-0)
must succeed ==> expected: <true> but was: <false>") against the
reverted broker and passes once the fix is restored.

Also updates four existing ITs to set q.resolvedConsumerName() to
match the consumer name passed to pop, since the fix makes ack/nack
key on (consumerName, messageId) and the contract is that callers
keep the two in sync. mockQueue gains an overload that takes a
consumerName for tests that need this explicitly.

Assisted-By: Claude Code
Three Subscribers-table refinements after the consumer-aware peek
landed:

1. Pending column = outstanding work, not just yet-to-deliver.
   Previously the column showed numPending (yet-to-deliver), but
   clicking the consumer link opens the explorer with peek starting
   at ackFloor + 1 — which includes in-flight messages. Operators saw
   "Pending: 0 / In-flight: 15" and got 15 rows in the explorer,
   confusing the column meaning. Switch the per-row pending count to
   numPending + numAckPending so the column matches what the
   explorer renders. WorkQueue retention is unaffected — its msgCount
   already represents outstanding work.

   Same shift applied to the queue-level approximateLimitsPending:
   base on min(ackFloor.streamSeq) instead of min(delivered.streamSeq)
   so the queue-wide "size" agrees with per-consumer pending semantics.

2. Workers column on Subscribers table. The registry stores one
   heartbeat per (JVM, consumer) pair, so multi-instance deployments
   show >1; thread-level fanout from concurrency = "10-20" lives
   inside a single instance and is not reflected. Javadoc spells this
   out so operators don't expect a thread count.

3. Example listener restored to mode = QueueType.STREAM on both
   job-queue listeners. Without it, two distinct consumerNames on
   the same queue would land on a WorkQueue stream and trip NATS
   error 10099. The mode change makes the multi-listener fan-out
   demonstrable end-to-end (and matches the scenario the new ITs
   exercise).

Assisted-By: Claude Code
Two test changes:

1. New JetStreamMessageBrokerConsumerAwarePeekIT covers the SPI
   overload peek(QueueDetail, consumerName, offset, count) on a
   Limits-retention stream with two durables at different ack
   positions. Asserts:
     - per-consumer peek for the fast consumer skips the acked range
       and returns only the un-acked tail
     - per-consumer peek for the untouched consumer returns the full
       stream (its ackFloor is 0)
     - the no-consumer overload bases on the stream's first sequence
       and is unaffected by per-consumer state

2. JetStreamQueueModeIT updated for the new (consumerName, id) inFlight
   keying contract. Three tests touched, all swap mockQueue(name, type)
   for mockQueue(name, type, consumerName) so q.resolvedConsumerName()
   matches what the test passes to broker.pop(...). Without the
   matching name, ack/nack lookups miss, the consumer's
   delivery-position assertions break, and consumerReuse falsely
   reports "5 messages remaining instead of 2."

The fan-out test was also restructured to use one QueueDetail per
listener — mirrors how production builds a separate QueueDetail per
@RqueueListener with its own resolvedConsumerName.

Assisted-By: Claude Code
Earlier ec99465 changed JetStreamMessageBroker.subscribers() to report
pending = numPending + numAckPending so the Pending column would equal
the explorer's row count. That collapsed the per-row pending vs
in-flight split and made NATS disagree with Redis on what "Pending"
means.

The Subscribers table already has a separate In-Flight column, so the
operator can see both numbers at a glance. Restoring pending to
numPending keeps the columns disjoint (yet-to-deliver vs being-
processed), aligns NATS Limits behaviour with the Redis backend's
LIST-vs-processing-ZSET split, and makes the Pending column match
NATS CLI's "Unprocessed" report.

The columns and the explorer answer different questions on purpose:
columns split outstanding work into pending + in-flight; clicking the
consumer link browses everything outstanding (peek bases on
ackFloor + 1, which spans both buckets). Operators see a 0-pending /
15-in-flight row and click through to see the 15 in-flight messages,
which was the original UX motivation for consumer-aware peek.

The queue-level approximateLimitsPending stays based on
min(ackFloor.streamSeq) — that's the queue's total outstanding work,
which is the right size for the queue listing.

Assisted-By: Claude Code
When the inbound message converter throws while deserializing the raw
payload, RqueueExecutor.getUserMessage() catches it, logs at DEBUG,
and falls back to the raw String so downstream middleware and the
handler still run. The exception itself was lost — middleware had no
clean way to tell "converter failed and we're staring at a raw String"
apart from "queue legitimately carries Strings."

Capture the exception in the executor and thread it into JobImpl via
a new constructor overload (the old 10-arg constructor delegates with
null to keep existing callers compiling). Surface it on the Job
interface as Throwable getConversionException() plus a default
hasConversionException() convenience.

Middleware can now branch on conversion failure deliberately:

    if (job.hasConversionException()) {
        // route to DLQ, alert, attempt fallback decode, etc.
        return;
    }
    next.call();

Adds a unit test that swaps the handler's converter for one that
throws on fromMessage, asserts middleware still runs, that
job.getMessage() is the raw String fallback, and that
job.getConversionException() exposes the original IllegalStateException.

Assisted-By: Claude Code
The earlier consumer-aware peek work changed two SPI surfaces that the
web tests still called with the old shape, so CI failed at compile
time on rqueue-web:compileTestJava.

RqueueQDetailService.getExplorePageData gained a String consumerName
between DataType and pageNumber. Updated five call sites in
RqueueQDetailServiceTest and three in RqueueQDetailServiceBrokerRoutingTest
to pass null for consumerName.

MessageBroker.peek gained a consumer-aware 4-arg overload, and the
production explorer path now calls that overload. Two stubbings and
one verify in RqueueQDetailServiceBrokerRoutingTest were still
matching peek(QueueDetail, long, long) — strict Mockito reported a
PotentialStubbingProblem. Updated to nullable(String.class) for the
consumer arg so the stub matches both the explicit-null and any
future non-null call sites.

Assisted-By: Claude Code
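
The matcher shape described above, sketched (mock and return-value names
hypothetical):

    import static org.mockito.ArgumentMatchers.*;
    import static org.mockito.Mockito.when;

    // nullable(String.class) matches both the current explicit-null call and
    // any future non-null consumer name, satisfying strict stubbing.
    when(messageBroker.peek(any(QueueDetail.class), nullable(String.class),
            anyLong(), anyLong())).thenReturn(messages);
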
publish.sh was only pushing rqueue-core, rqueue-spring, and
rqueue-spring-boot-starter to Maven Central; rqueue-nats, rqueue-redis,
and rqueue-web were never published, so downstream consumers couldn't
pull the new backends/explorer module. Added them to the publish
script and bumped the subprojects version from 4.0.0-RC4 to 4.0.0-RC6
for the next release.
Captures the user-facing changes since RC2: the NATS JetStream backend
and pluggable MessageBroker SPI, capability-aware dashboard, consumer-
aware peek and queue-detail redesign for fan-out topologies, NATS
pause/soft-delete admin ops, the new Job.getConversionException API
for middleware, and the additional modules (rqueue-nats, rqueue-redis,
rqueue-web) now published to Maven Central. Also notes the migration
guidance: existing Redis users keep working, NATS users wire a
JetStream MessageBroker bean, and /explore gained a nullable
consumerName query parameter.
The 4.0.0 entry I added in the previous commit claimed credit for the
NATS JetStream backend, but that landed in RC4 (PR #292), not in
4.0.0. Backfill the missing RC entries based on git history and tags:

  RC3 (14-Apr-2026, never tagged) — housekeeping. Version bump +
  /docs dependabot bump for addressable. No user-facing code changes
  versus RC2.

  RC4 (14-Apr-2026, tag v4.0.rc4) — initial NATS JetStream backend
  drop, plus Coveralls CI fixes for GitHub Actions.

  RC5 — skipped entirely; build.gradle never had 4.0.0-RC5 and no
  tag exists.

Trim the 4.0.0 entry to only cover post-RC4 work: the broker SPI
extraction (PR #293), capability-aware dashboard, consumer-aware peek,
queue-detail redesign, NATS pause/soft-delete admin ops, the
fan-out ack/nack fix, the new Job.getConversionException middleware
API, and the additional modules now published to Maven Central. Add
a note about the RC5 skip so anyone tracing version numbers later
isn't confused by the gap.
… RC6

Per the release plan: RC5 is the broker SPI extraction, NATS-aware
dashboard work, consumer-aware peek, and the multi-consumer fan-out
ack/nack fix. RC6 is just the new Job.getConversionException
middleware API plus the additional Maven Central publish targets.
4.0.0 GA promotes RC6 with no functional changes.

Also fixed a forward reference in the RC4 entry that pointed to
"4.0.0" for the SPI extraction; that work lands in RC5.
RC3 wasn't housekeeping — it lowered the Java baseline from 21
(declared in RC1) back to 17 so the library can be consumed by
applications still on Java 17. The current build is still on Java 17.

Updated the RC3 entry to call this out as the headline change and
amended the 4.0.0 preamble so the Java baseline is stated correctly
(Java 17, not 21) instead of inheriting the now-superseded RC1 note.
NatsRqueueMessageMetadataService.deleteMessage was changed in 42eb61e
to write a tombstone (a deleted MessageMetadata keyed by metaId) when
called for an id that has no metadata yet. That's the intentional
contract for stream-resident NATS messages: enqueue doesn't write
metadata, so a dashboard delete request lands on a missing record but
still has to mark the row as deleted in subsequent peeks. The Redis
impl behaves the same way for parity.

The IT was still asserting the pre-tombstone behaviour (false on
missing) and started failing in CI. Update it to call out the
intentional contract: deleteMessage on a missing id returns true and
leaves a deleted MessageMetadata behind. Renamed to
deleteMessageOnMissingCreatesTombstone so the file documents the
behaviour rather than the old absence.
sonus21 and others added 20 commits May 9, 2026 14:07
- Remove stale "scheduling not supported" warning — enqueueIn/enqueueAt/
  enqueuePeriodic now work on NATS >= 2.12 (ADR-51)
- Remove duplicate-window from stream property table (removed in RC8)
- Add "How the NATS backend works" section: pull consumer model, stream/
  subject naming, KV bucket roles, ADR-51 scheduling internals, dedup key
  shape, long-running job keep-alive (+WIP), dashboard operations
- Add "Pitfalls" section: ack-wait vs handler duration, missing keep-alive,
  scheduling version requirement, periodic message dedup edge case,
  WORK_QUEUE retention, priority weighting, elastic concurrency, purge
  unsupported, replica count vs cluster size
- Update feature comparison table to reflect current capabilities
- Update index.md: fix requirements note and NATS section blurb

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…lation

Add RC7 changelog entry noting pebble-spring7 removal and the switch to
RqueueHtmlRenderer. Update RqueueViewControllerTest to assert HTTP status,
content-type, and rendered HTML content instead of the removed PebbleView.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
connection.getServerInfo() can return null (e.g. on a Mockito mock or before
the handshake completes). Guard both the version check and the log statement
so schedulingSupported defaults to false instead of NPE-ing.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
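
A minimal sketch of the guard (version comparator assumed):

    import io.nats.client.Connection;
    import io.nats.client.api.ServerInfo;

    // getServerInfo() can be null on a Mockito mock or before the handshake
    // completes; default to "unsupported" rather than NPE-ing.
    ServerInfo info = connection.getServerInfo();
    boolean schedulingSupported =
            info != null && isAtLeast(info.getVersion(), "2.12.0"); // helper assumed
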
The CI nats-server is older than 2.12.0 and does not support ADR-51
message scheduling. Add a @BeforeEach assumeTrue guard that skips both
scheduledMessageIsDeliveredAfterDelay and its reactive counterpart when
NatsProvisioner reports scheduling is unavailable, matching the intent
already documented in the class Javadoc.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
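
The guard shape, sketched with JUnit 5 (the provisioner accessor name is an
assumption):

    import static org.junit.jupiter.api.Assumptions.assumeTrue;
    import org.junit.jupiter.api.BeforeEach;

    @BeforeEach
    void requireServerSideScheduling() {
        // Skips (rather than fails) both the blocking and reactive tests when
        // the connected nats-server predates ADR-51 scheduling (< 2.12.0).
        assumeTrue(natsProvisioner.isSchedulingSupported(), // accessor assumed
                "nats-server lacks ADR-51 message scheduling");
    }
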
… exception

New @NatsUnitTest-tagged test classes covering:
- NatsKvBucketsTest: ALL_BUCKETS catalogue completeness, immutability, distinct names
- NatsKvBucketValidatorTest: autoCreate skip, missing-bucket validation, IOException handling
- NatsRqueueLockManagerTest: acquire/release happy paths, conflict, IOException, sanitize
- NatsRqueueQueueMetricsProviderTest: pending/DLQ counts, unknown-queue, stream errors
- NatsWorkerRegistryStoreTest: CRUD across worker-info and heartbeat buckets, key sanitisation, swallowed exceptions
- RqueueNatsExceptionTest: constructors and cause propagation

137 unit tests now pass under :rqueue-nats:test -DincludeTags=unit.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…rvice

New @NatsUnitTest-tagged test classes:
- NatsRqueueJobDaoTest (25 tests): save/createJob, findById, findJobsByIdIn,
findByMessageId/In (scan + interrupt), delete, key sanitisation
- NatsRqueueQStatsDaoTest (15 tests): findById (null, hit, miss, IO, deserialize
  failure, sanitize), findAll, save (null stat/id guards, IO, sanitize)
- NatsRqueueSystemConfigDaoTest (20 tests): cache hit/miss/bypass, getQConfig
  (cache, scan, interrupt, IO), getConfigByNames, findAllQConfig, saveQConfig,
  saveAllQConfig, clearCacheByName
- NatsMessageBrowsingRepositoryTest (14 tests): getDataSize routing (main queue,
  DLQ, processing/scheduled, null, stream-not-found 10059, other JsApiException,
  IOException), getDataSizes, viewData BackendCapabilityException
- NatsRqueueMessageMetadataServiceTest (26 tests): get/delete/deleteAll/findAll,
  save, getByMessageId, deleteMessage (tombstone creation), getOrCreate,
  saveReactive, readMessageMetadataForQueue, deleteQueueMessages,
  saveMessageMetadataForQueue

Total unit test count: 235 (up from 137).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…ue-nats, rqueue-spring-boot-starter

- rqueue-spring: NatsBackendConditionTest (7), RqueueBackendImportSelectorTest (7),
  RqueueRedisConfigImportSelectorTest (3), MetricsEnabledTest (2)
- rqueue-web: BaseControllerTest (4), RqueueWebExceptionAdviceTest (7),
  RqueueRestControllerTest (28), RqueueViewControllerTest (23), RqueueJobServiceImplTest (11)
- rqueue-nats: NatsProvisionerTest (15) — KV/stream/DLQ provisioning and scheduling detection
- rqueue-spring-boot-starter: RqueueNatsPropertiesTest (28), RqueueNatsAutoConfigToBrokerConfigTest (22)

All modules: 157 new unit tests; all pass.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The enqueueWithDelay path was completely broken: it used a made-up
header (Nats-Next-Deliver-Time) that NATS ignores, published to the
work subject instead of a scheduler subject, and the pull consumer
had no filter so it read scheduler entries before the scheduled time.

Correct implementation (NATS >= 2.12, ADR-51):
- Replace HDR_NEXT_DELIVER_TIME with Nats-Schedule, Nats-Schedule-Target,
  and Nats-Rollup: sub — the three headers the server actually requires
- Publish to a per-message scheduler subject (<work>.sched.<msgId>);
  NATS fires the triggered message to the work subject at the due time
- Extend stream subjects to include the <work>.sched.* wildcard pattern
- Set filterSubject on pull consumers so they only receive work-subject
  messages and never see scheduler entries early

Infrastructure fixes that were hiding the bug:
- AbstractJetStreamIT: replace @Testcontainers(disabledWithoutDocker=true)
  with assumeTrue(DockerClientFactory.isDockerAvailable()) inside @BeforeAll
  so NATS_RUNNING=1 works even without Docker (annotation was vetoing the
  class before @BeforeAll ran, silently skipping all ITs)
- CI: bump nats-server from v2.10.22 → v2.12.0 so scheduling assumeTrue
  guards stop silently skipping and actually gate the build
- NatsProvisioner: add filterSubject overload to ensureConsumer; add
  allowMessageSchedules flag to stream creation / upgrade path
- AllowMessageSchedulesEnforcementIT: new IT proving server enforces the
  allow_msg_schedules flag with Nats-Schedule (error 10189 without it)

All 306 rqueue-nats tests pass including 3 new scheduling ITs that
verify messages are held until the scheduled time then delivered.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
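
A sketch of the corrected publish path; the three header names are from the
commit, while the Nats-Schedule value syntax is defined by ADR-51 and shown
only as a placeholder:

    import io.nats.client.JetStream;
    import io.nats.client.impl.Headers;
    import io.nats.client.impl.NatsMessage;

    // Publish to the per-message scheduler subject; the server republishes
    // the triggered copy onto the work subject at the due time.
    Headers headers = new Headers();
    headers.put("Nats-Schedule", scheduleSpec);       // value syntax per ADR-51
    headers.put("Nats-Schedule-Target", workSubject); // where the trigger lands
    headers.put("Nats-Rollup", "sub");                // one live entry per sched subject
    js.publish(NatsMessage.builder()
            .subject(workSubject + ".sched." + messageId)
            .headers(headers)
            .data(payload)
            .build());
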
All conflicts were between the fixed scheduling implementation on this
branch and the old broken Nats-Next-Deliver-Time code from the merge
source. Resolved every conflict by accepting the nats-scheduling-fix
(HEAD) version in full.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
…eduling

When enqueue() creates a stream first (subjects: [work]), then
enqueueWithDelay() is called for the first time, the provisioner was
only adding the allow_msg_schedules flag but not the sched-wildcard
subject (work.sched.*). NATS would then reject the publish with "no
stream matches subject".

Fix: in the existing-stream upgrade path, compute the union of existing
and requested subjects and include both the flag update and the subject
merge in a single updateStream() call.

Also tighten the skipsCreation unit test to stub getSubjects() so the
provisioner correctly identifies no-op situations.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
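
A sketch of the single-call upgrade, assuming jnats (the
allowMessageSchedules builder method mirrors the flag named in this PR and
may differ by client version):

    import io.nats.client.api.StreamConfiguration;
    import io.nats.client.api.StreamInfo;
    import java.util.LinkedHashSet;
    import java.util.Set;

    // Union of existing + requested subjects plus the scheduling flag,
    // applied in one updateStream() round trip.
    StreamInfo info = jsm.getStreamInfo(streamName);
    Set<String> subjects = new LinkedHashSet<>(info.getConfiguration().getSubjects());
    subjects.add(workSubject + ".sched.*");
    jsm.updateStream(StreamConfiguration.builder(info.getConfiguration())
            .subjects(subjects.toArray(new String[0]))
            .allowMessageSchedules(true) // builder method name assumed
            .build());
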
Adds enqueueWithDelay_afterPlainEnqueue_streamUpgradedAndMessageHeld to
JetStreamMessageBrokerSchedulingIT. This exercises the specific path
where a plain enqueue() creates the stream (no sched subjects, no flag)
and enqueueWithDelay() is called first time afterward — the provisioner
must upgrade the stream in-place before publishing to the sched subject.

Uses a 10 s delay (vs 3 s for other tests) to absorb stream-upgrade
overhead without racing the scheduler.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
…m upgrade path

- NatsScheduledMessageE2EIT: add enqueueFirst_thenEnqueueIn_streamUpgradedAndBothDelivered
  test that calls enqueue() first (creates stream without sched flag/subjects) then
  enqueueIn() (must upgrade stream in-place), asserting immediate delivery and
  correct hold-then-deliver for the scheduled message
- Add EtdListener component (etd-e2e queue) with two latches: immediateLatch (first
  message) and allLatch (both messages), imported into TestApp
- AbstractNatsBootIT: remove @Testcontainers(disabledWithoutDocker=true) annotation
  which silently skipped the whole class in CI (NATS_RUNNING=true, Docker absent);
  replace with assumeTrue(DockerClientFactory.isDockerAvailable()) inside @BeforeAll
  so external-NATS path always runs in CI

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Restart Redis with --logfile before non-cluster tests so logs are
  captured (producer, integration, reactive jobs)
- Add logfile directive per node in Redis cluster config
- Start NATS server with log file redirect; log version in dedicated step
- Upload redis/nats server logs as artifacts after each job:
  producer-redis-server, integration-redis-server, reactive-redis-server,
  redis-server-cluster, nats-server

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
@coveralls commented May 9, 2026

Coverage Report for CI Build 25621442036

Coverage decreased (-0.06%) to 83.337%

Details

  • Coverage decreased (-0.06%) from the base build.
  • Patch coverage: 51 uncovered changes across 5 files (129 of 180 lines covered, 71.67%).
  • 11 coverage regressions across 3 files.

Uncovered Changes

File | Changed | Covered | %
rqueue-nats/src/main/java/com/github/sonus21/rqueue/nats/metrics/NatsRqueueQueueMetricsProvider.java | 32 | 2 | 6.25%
rqueue-nats/src/main/java/com/github/sonus21/rqueue/nats/js/NatsDeadLetterBridgeRegistrar.java | 36 | 25 | 69.44%
rqueue-core/src/main/java/com/github/sonus21/rqueue/metrics/RqueueMetrics.java | 13 | 9 | 69.23%
rqueue-core/src/main/java/com/github/sonus21/rqueue/metrics/RqueueMetricsCounter.java | 4 | 0 | 0.0%
rqueue-core/src/main/java/com/github/sonus21/rqueue/metrics/RqueueQueueMetricsProvider.java | 2 | 0 | 0.0%

Coverage Regressions

11 previously-covered lines in 3 files lost coverage.

File | Lines Losing Coverage | Coverage
rqueue-core/src/main/java/com/github/sonus21/rqueue/metrics/RqueueMetricsAggregatorService.java | 8 | 73.16%
rqueue-core/src/main/java/com/github/sonus21/rqueue/listener/RqueueMessageListenerContainer.java | 2 | 93.47%
rqueue-nats/src/main/java/com/github/sonus21/rqueue/nats/internal/NatsProvisioner.java | 1 | 83.39%

Coverage Stats

Relevant Lines: 9062
Covered Lines: 7833
Line Coverage: 86.44%
Relevant Branches: 3487
Covered Branches: 2625
Branch Coverage: 75.28%
Branches in Coverage %: Yes
Coverage Strength: 0.86 hits per line

💛 - Coveralls

sonus21 and others added 8 commits May 9, 2026 22:24
…m/subject prefix

CI runs all NATS-tagged Spring Boot E2E classes against a single nats-server with
a persistent JetStream dir, so streams created by one class were leaking into the
next: stale subjects, wrong consumer filters, leftover in-flight messages, and the
"no stream matches subject" 503 we saw on the sched-subject publish.

Per-class isolation:
- Each subclass of AbstractNatsBootIT now declares its own STREAM_PREFIX and
  SUBJECT_PREFIX constants and threads them into @SpringBootTest properties via
  rqueue.nats.stream-prefix / rqueue.nats.subject-prefix. Same queue name in the
  @RqueueListener annotation now resolves to a class-specific stream
  (e.g. rqueue-js-schedE2E-sched-e2e vs rqueue-js-schedAdv-recurring-e2e).
- AbstractNatsBootIT exposes deleteStreamsWithPrefix(prefix) and activeNatsUrl().
  Each subclass calls deleteStreamsWithPrefix(STREAM_PREFIX) from @BeforeAll so a
  rerun against a persistent JetStream dir starts clean — only that class's
  streams are wiped, never another class's.
- Best-effort: a failing delete logs a WARNING and the loop continues, so cleanup
  never causes a spurious test failure.

Applied uniformly across all nine subclasses (NatsBackendEndToEndIT,
NatsConcurrencyE2EIT, NatsConsumerNameOverrideE2EIT,
NatsMultipleListenersOnSameQueueE2EIT, NatsPriorityQueuesE2EIT,
NatsReactiveEnqueueE2EIT, NatsRetryAndDlqE2EIT, NatsScheduledMessageE2EIT,
NatsSchedulingAdvancedE2EIT).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
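
A sketch of the best-effort cleanup helper described above (logger assumed):

    import io.nats.client.JetStreamManagement;

    // Wipe only this class's streams; log and continue on failure so cleanup
    // never causes a spurious test failure.
    static void deleteStreamsWithPrefix(JetStreamManagement jsm, String prefix) {
        try {
            for (String name : jsm.getStreamNames()) {
                if (!name.startsWith(prefix)) continue;
                try {
                    jsm.deleteStream(name);
                } catch (Exception e) {
                    log.warning("failed to delete stream " + name + ": " + e);
                }
            }
        } catch (Exception e) {
            log.warning("could not list streams: " + e);
        }
    }
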
… startup

Introduces NatsDeadLetterBridgeRegistrar (SmartInitializingSingleton + DisposableBean)
that walks EndpointRegistry and calls
JetStreamMessageBroker.installDeadLetterBridge(queue, queue.resolvedConsumerName())
for each registered queue, so $JS.EVENT.ADVISORY.CONSUMER.MAX_DELIVERIES.* fired by
JetStream is caught and the offending payload is republished onto the queue's DLQ
stream (<streamPrefix><queue><dlqStreamSuffix>). Producer-only mode (RqueueConfig
.isProducer()=true) is detected and skipped. Bean is wired from RqueueNatsAutoConfig
so it loads only when rqueue.backend=nats.

This is the NATS-side analog of the rqueue-level DLQ path (PostProcessingHandler →
broker.moveToDlq, already covered by NatsSchedulingAdvancedE2EIT). The advisory
bridge is the defensive net for cases outside rqueue's control: a handler that
hangs past visibilityTimeout, or a process that crashes mid-handler.

Also fixes a critical test-isolation bug discovered while validating the bridge
end-to-end: nine NATS Spring Boot E2E classes were setting
"rqueue.nats.stream-prefix" / "rqueue.nats.subject-prefix" but the actual
property paths are "rqueue.nats.naming.stream-prefix" /
"rqueue.nats.naming.subject-prefix". Streams from every test class were
collapsing onto the same default prefix "rqueue-js-", and the @BeforeAll cleanup
hooks were silently no-op'ing because they targeted the wrong (never-used)
per-class prefix. Patched all nine classes to use the correct property paths;
streams now actually live under per-class prefixes and cleanup actually wipes
them between runs.

Other test changes:
- NatsConsumerNameOverrideE2EIT: ConsumerInfo lookup now derives the stream name
  from STREAM_PREFIX instead of hardcoding "rqueue-js-custom-consumer".
- NatsRetryAndDlqE2EIT: re-enabled, rewritten to publish a synthetic max-delivery
  advisory matching nats-server 2.12's payload shape and assert the bridge
  republishes onto the DLQ stream. Uses a blocking listener so the source message
  stays in flight (un-acked) — matching the production scenario the bridge is
  designed for. Going through a real handler-exhaustion path was racy because
  rqueue acks the message on numRetries exhaustion, so NATS never reaches its
  derived maxDeliver and never fires the advisory.

Local run (NATS_RUNNING=true, nats-server 2.12.8): 19 tests passing, 1 skipped
(the multi-listener fan-out test, which still requires per-queue retention-policy
work to support Limits/Interest retention — left @disabled with the existing
comment).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
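
The advisory subscription at the heart of the bridge, sketched; the subject
shape is the documented JetStream advisory, while the republish helper is
hypothetical:

    import io.nats.client.Connection;
    import io.nats.client.Dispatcher;

    // JetStream publishes this advisory when a consumer exhausts max
    // deliveries; the bridge reacts by republishing onto the DLQ stream.
    Dispatcher dispatcher = connection.createDispatcher(msg ->
            republishToDlq(msg)); // helper assumed: looks up the offending
                                  // message by stream_seq and republishes it
    dispatcher.subscribe("$JS.EVENT.ADVISORY.CONSUMER.MAX_DELIVERIES."
            + streamName + "." + consumerName);
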
…actually catch duplicates

The old assertion (`count >= 2`) was satisfied by any flow that produced 2+
deliveries — including a duplicate-per-period regression that would deliver
the same processAt twice. The test name promised dedup verification but the
shape didn't enforce it; a regression in the JetStream dedup-key
(Nats-Msg-Id = id@processAt) would have passed silently.

Listener now reads each delivery's processAt from the Job header
(@Header(RqueueMessageHeaders.JOB) Job job → job.getRqueueMessage()
.getProcessAt()) and records it in a ConcurrentHashMap-backed set. The test:

  - waits for >= 3 distinct processAt values (proves multiple periods fired);
  - quiesces briefly so a racing duplicate has time to land;
  - asserts count == distinct.size() — i.e. each period processed exactly
    once. A regression now surfaces immediately as count > distinct.size().

Verified locally: full nats suite passes (19 tests, 1 skipped) in 4m 29s
against nats-server 2.12.8.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…listener queues

When two @RqueueListener methods on the same queue declare different
consumerName overrides, EndpointRegistry stores them as separate QueueDetails
(composite key queueName##consumerName). The metrics layer wasn't honouring
that:

  - RqueueMetrics.monitor() registered each gauge tagged only `queue=<name>`,
    so Micrometer treated the second registration as a duplicate and silently
    kept only the first. The second listener's queue.size /
    processing.queue.size / dead.letter.queue.size gauges were lost entirely.

  - QueueCounter.registerQueue() used `queueDetail.getName()` as the map key,
    so the second consumer's Counter reference orphaned the first one. Calls
    to updateFailureCount/updateExecutionCount with the bare queue name then
    routed every increment to whichever consumer happened to register last.

  - RqueueExecutor passed only `queueDetail.getName()` to the counter, never
    the consumer name, so even with a fixed map there was no way to route the
    increment back to the right entry.

Fix:

  - RqueueMetrics: when QueueDetail.getConsumerName() is non-empty, add a
    `consumer=<name>` tag to every gauge for that QueueDetail, so distinct
    consumers register distinct (name, tag-set) gauges. Calls the new
    consumer-aware provider methods so backends can report per-consumer
    pending/processing depth (NATS does; Redis falls through to the queue
    level).

  - QueueCounter: keys both maps by `queueName##consumerName` (or just
    queueName when no override is set) so concurrent registrations don't
    collide. Adds (String, String) overloads on updateFailureCount /
    updateExecutionCount that route to the consumer-scoped entry, falling
    back to the bare-queue entry for backward compatibility.

  - RqueueMetricsCounter / RqueueCounter: matching consumer-aware overloads
    (default delegates to the bare-queue methods so external implementations
    keep compiling).

  - RqueueExecutor.updateCounter: now passes
    `job.getQueueDetail().getConsumerName()` so increments land on the right
    counter entry per consumer.

  - RqueueQueueMetricsProvider: new default methods
    getPendingMessageCountByConsumer / getProcessingMessageCountByConsumer.
    Default delegates to the queue-level call (no behaviour change for
    Redis / single-consumer queues).

  - NatsRqueueQueueMetricsProvider: implements both consumer-aware overloads
    using JetStreamManagement.getConsumerInfo(stream, consumer) — returns
    ConsumerInfo.numPending and numAckPending, which are the JetStream-side
    analogs of the Redis pending list and processing ZSET. Falls back
    gracefully to the stream-level count when the consumer isn't yet
    provisioned (boot-time race), so dashboards never see a missing reading.

Adds QueueCounterTest#multiConsumerOnSameQueueRoutesToCorrectCounter to pin
the regression: registers two QueueDetails on the same queue with distinct
consumerName values, verifies each one's increments land on the correct
counter entry (not all of them on the last-registered one).

Verified: full nats integration suite passes (19 tests, 1 skipped) and core
metrics tests pass against nats-server 2.12.8.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
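
A sketch of the consumer-tagged gauge registration with Micrometer (the
gauge name and provider method come from the commit; the wiring around them
is assumed):

    import io.micrometer.core.instrument.Gauge;
    import io.micrometer.core.instrument.Tags;

    // Distinct (name, tag-set) per consumer, so the second listener's gauges
    // are no longer silently dropped as Micrometer duplicates.
    Tags tags = Tags.of("queue", queueDetail.getName());
    String consumer = queueDetail.getConsumerName();
    if (consumer != null && !consumer.isEmpty()) {
        tags = tags.and("consumer", consumer);
    }
    Gauge.builder("queue.size", queueDetail,
                    qd -> metricsProvider.getPendingMessageCountByConsumer(
                            qd.getName(), qd.getConsumerName()))
            .tags(tags)
            .register(meterRegistry);
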
sonus21 merged commit c301fc4 into master on May 10, 2026
1 check failed