feat: track log line count per workflow and task to enable log truncation observability#758

Open
elookpotts-nvidia wants to merge 1 commit into main from elookpotts/log-line-count

Conversation


@elookpotts-nvidia elookpotts-nvidia commented Mar 31, 2026

Summary

Adds a log_line_count column to the workflows and tasks tables so that the total number of log lines produced by each workflow and task is persisted for the first time, enabling truncation analysis and capacity planning via SQL.

Design Changes

  • Redis INCR counter incremented alongside each XADD in the logger service tracks total lines produced, surviving WebSocket disconnects and logger restarts because the counter lives in Redis rather than in-process memory.
  • A three-state sentinel (NULL = predates change, -1 = initialized but not yet finalized, >= 0 = finalized) makes it unambiguous whether a completed workflow genuinely produced zero log lines versus whether the counter key expired before cleanup ran.
  • CleanupWorkflow is the single consumer of the Redis counter: reads it once at workflow completion, writes to PostgreSQL, and deletes the key — no Prometheus metrics needed since Grafana queries PostgreSQL directly.
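The counter lifecycle described above can be sketched end to end. This is illustrative only: a dict stands in for the Redis client and the returned count stands in for the PostgreSQL write; only the key names follow the PR.

```python
class FakeRedis:
    """Stand-in for a Redis client, supporting only INCR, GET, DELETE."""
    def __init__(self):
        self.store = {}

    def incr(self, key):
        self.store[key] = self.store.get(key, 0) + 1
        return self.store[key]

    def get(self, key):
        return self.store.get(key)

    def delete(self, key):
        self.store.pop(key, None)


def ingest_log_line(redis, workflow_id, task_name, retry_id):
    # One INCR per XADD: counters survive logger restarts and WebSocket
    # disconnects because they live in Redis, not in-process memory.
    redis.incr(f'{workflow_id}-log-count')
    redis.incr(f'{workflow_id}-{task_name}-{retry_id}-log-count')


def cleanup_workflow(redis, workflow_id):
    # Single consumer: read the counter once, persist it, delete the key.
    key = f'{workflow_id}-log-count'
    raw = redis.get(key)
    count = int(raw) if raw is not None else 0
    redis.delete(key)
    return count  # in the real code this value is written to PostgreSQL


r = FakeRedis()
for _ in range(3):
    ingest_log_line(r, 'wf1', 'train', 0)
print(cleanup_workflow(r, 'wf1'))  # 3
```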

Change Log

  • Added pgroll migration (005_v6_2_0_schema.json) adding log_line_count INTEGER to workflows and tasks tables for existing deployments.
  • Updated CREATE TABLE definitions in _init_tables so fresh deployments and test environments also have the column.
  • Updated Workflow and Task Pydantic models with the log_line_count field and three-state sentinel semantics documented inline.
  • Updated insert_to_db and batch_insert_to_db on both models to write -1 at submission time as the initialization sentinel.
  • Added update_log_line_count_to_db to both models, guarded with WHERE log_line_count = -1 to prevent overwriting a finalized count on CleanupWorkflow retry.
  • Added INCR calls in ctrl_websocket.py after each XADD to the workflow and per-task log streams, with TTL set via the existing first_run guard.
  • Updated CleanupWorkflow to read both workflow-level and per-task Redis counters, write final counts to PostgreSQL, and include counter keys in the existing bulk Redis DELETE.
  • Added 11 DB-backed tests covering sentinel initialization, correct writes, idempotency on retry, and no-op behavior for pre-change NULL rows.
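The -1 initialization sentinel and the guarded update from the list above can be demonstrated in isolation. A minimal sketch, using SQLite in place of PostgreSQL; the function name mirrors the PR's update_log_line_count_to_db, everything else is illustrative:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute(
    'CREATE TABLE workflows (workflow_id TEXT PRIMARY KEY, log_line_count INTEGER)')
# Submission writes the -1 initialization sentinel.
conn.execute("INSERT INTO workflows VALUES ('wf1', -1)")
# A pre-change row has NULL and must never be touched.
conn.execute("INSERT INTO workflows VALUES ('old', NULL)")

def update_log_line_count(workflow_id, count):
    # The WHERE guard makes CleanupWorkflow retries a no-op once finalized,
    # and NULL = -1 is never true, so legacy rows are skipped automatically.
    cur = conn.execute(
        'UPDATE workflows SET log_line_count = ? '
        'WHERE workflow_id = ? AND log_line_count = -1',
        (count, workflow_id))
    return cur.rowcount

print(update_log_line_count('wf1', 42))   # 1: finalized
print(update_log_line_count('wf1', 99))   # 0: retry is a no-op
print(update_log_line_count('old', 7))    # 0: legacy NULL row untouched
```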

Issue #None

Testing:

  • bazel test //src/utils/job/tests:test_log_line_count — all 11 tests pass
  • Deployed and tested on my personal dev instance

Checklist

  • I am familiar with the Contributing Guidelines.
  • New or existing tests cover these changes.
  • The documentation is up to date with these changes.

Summary by CodeRabbit

  • New Features

    • Added log line count tracking for workflows and tasks with persistent database storage
    • Implemented real-time log volume monitoring and counter management
    • New system independently tracks per-workflow and per-task log line counts for enhanced observability
  • Tests

    • Added comprehensive test coverage for log line count functionality across both workflows and tasks

@elookpotts-nvidia elookpotts-nvidia requested a review from a team as a code owner March 31, 2026 00:04

coderabbitai bot commented Mar 31, 2026

📝 Walkthrough

A new log_line_count feature is introduced to track log line counts for workflows and tasks. The system adds a log_line_count column (with sentinel semantics: NULL for legacy rows, -1 for new/unfinalized, ≥0 for finalized counts) to both workflows and tasks tables via database migration. During log ingestion, Redis counters are incremented; during job cleanup, these counts are persisted to the database via conditional update methods.

Changes

Cohort / File(s) — Summary

  • Database Schema & Migrations (deployments/charts/service/migrations/005_v6_2_0_schema.json, src/utils/connectors/postgres.py, src/tests/common/database/testdata/schema.sql): Add log_line_count INTEGER column to workflows and tasks tables in schema initialization and migration definition.
  • Workflow & Task Models (src/utils/job/workflow.py, src/utils/job/task.py): Add log_line_count: int | None field to both models with sentinel semantics (-1 for unfinalized, ≥0 for finalized). Add update_log_line_count_to_db() method to conditionally persist counts when the sentinel value is present. Update insert_to_db() and from_db_row() methods to handle the new column.
  • Log Ingestion & Tracking (src/service/logger/ctrl_websocket.py): Increment per-workflow and per-task Redis counters ({workflow_id}-log-count, {workflow_id}-{task_name}-{retry_id}-log-count) on each received log message, with TTL expiration set on the first message.
  • Task & Workflow Submission (src/service/core/workflow/objects.py, src/utils/job/jobs.py): Extend task insertion tuples with the -1 sentinel value. Update workflow cleanup to read Redis log count keys, persist counts via update_log_line_count_to_db(), and delete count keys alongside existing log/event cleanup.
  • Testing (src/utils/job/tests/BUILD, src/utils/job/tests/test_log_line_count.py): Add integration tests for the log_line_count field on both workflows and tasks, verifying sentinel semantics, conditional updates, backward compatibility with legacy NULL rows, and batch insert behavior.

Sequence Diagram

sequenceDiagram
    participant Client as WebSocket Client
    participant Logger as Log Ingestion Service
    participant Redis
    participant JobCleanup as Job Cleanup Service
    participant Database

    Client->>Logger: Stream log messages
    Logger->>Redis: Increment {workflow_id}-log-count
    Logger->>Redis: Increment {workflow_id}-{task}-{retry}-log-count
    Logger->>Redis: Set TTL expiration (first message)
    
    Note over Logger,Redis: Per each log message received
    
    JobCleanup->>Redis: Read {workflow_id}-log-count
    JobCleanup->>Redis: Read all {workflow_id}-{task}-{retry}-log-count keys
    JobCleanup->>Database: Call update_log_line_count_to_db() for workflows
    JobCleanup->>Database: Call update_log_line_count_to_db() for tasks
    JobCleanup->>Database: Update persisted counts (if sentinel -1)
    JobCleanup->>Redis: Delete all log count keys
    JobCleanup->>Redis: Delete workflow logs and task logs
    
    Note over JobCleanup,Database: During workflow cleanup phase

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~35 minutes

Poem

🐰 Counting whispers log by log,
Redis marks each rabbit's cog,
When tasks complete their eager flight,
The counts are saved, locked in tight,
From -1 springs the truth at last! 🌟

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 70.45%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Description Check — ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: the title accurately and concisely describes the main change: adding log line count tracking per workflow and task for observability purposes.



codecov bot commented Mar 31, 2026

Codecov Report

❌ Patch coverage is 36.36364% with 14 lines in your changes missing coverage. Please review.
✅ Project coverage is 42.73%. Comparing base (9af7c66) to head (61a9531).

Files with missing lines | Patch % | Lines
src/utils/job/jobs.py | 0.00% | 10 Missing ⚠️
src/service/logger/ctrl_websocket.py | 0.00% | 4 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #758      +/-   ##
==========================================
- Coverage   42.84%   42.73%   -0.11%     
==========================================
  Files         203      203              
  Lines       26844    26865      +21     
  Branches     7603     7607       +4     
==========================================
- Hits        11500    11480      -20     
- Misses      15233    15278      +45     
+ Partials      111      107       -4     
Flag | Coverage | Δ
backend | 45.06% <36.36%> | -0.13% ⬇️

Flags with carried forward coverage won't be shown. Click here to find out more.

Files with missing lines | Coverage | Δ
src/service/core/workflow/objects.py | 64.86% <ø> | (ø)
src/utils/connectors/postgres.py | 76.19% <ø> | -0.65% ⬇️
src/utils/job/task.py | 55.15% <100.00%> | -2.06% ⬇️
src/utils/job/workflow.py | 51.99% <100.00%> | +0.12% ⬆️
src/service/logger/ctrl_websocket.py | 20.00% <0.00%> | -0.69% ⬇️
src/utils/job/jobs.py | 26.84% <0.00%> | -0.31% ⬇️

... and 16 files with indirect coverage changes



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/service/logger/ctrl_websocket.py (1)

207-225: ⚠️ Potential issue | 🟠 Major

Don't consume first_run before the first actual log write.

If the first websocket frame is METRICS, BARRIER, or LOG_DONE, these EXPIRE calls are all no-ops because the keys do not exist yet. first_run still flips to False, so the log streams and the new count keys never get a TTL afterward.

🛠️ Suggested fix
                         else:
                             if io_type.workflow_logs() and (first_run or\
                                 datetime.datetime.now() - last_heartbeat_check > heartbeat_freq_dt):
                                 last_heartbeat_check = datetime.datetime.now()
                                 cmd = '''
@@
                             await redis_client.incr(
                                 f'{workflow_obj.workflow_id}-log-count')
                             await redis_client.incr(
                                 f'{workflow_obj.workflow_id}-{task_name}-{retry_id}-log-count')
-                        # Set expiration on first log message
-                        if first_run:
-                            first_run = False
-                            await redis_client.expire(f'{workflow_obj.workflow_id}-logs',
-                                                    connectors.MAX_LOG_TTL)
-                            await redis_client.expire(
-                                common.get_redis_task_log_name(
-                                    workflow_obj.workflow_id, task_name, retry_id),
-                                connectors.MAX_LOG_TTL)
-                            await redis_client.expire(
-                                f'{workflow_obj.workflow_id}-log-count',
-                                connectors.MAX_LOG_TTL)
-                            await redis_client.expire(
-                                f'{workflow_obj.workflow_id}-{task_name}-{retry_id}-log-count',
-                                connectors.MAX_LOG_TTL)
+                            # Set expiration on the first actual log message.
+                            if first_run:
+                                first_run = False
+                                await redis_client.expire(
+                                    f'{workflow_obj.workflow_id}-logs',
+                                    connectors.MAX_LOG_TTL)
+                                await redis_client.expire(
+                                    common.get_redis_task_log_name(
+                                        workflow_obj.workflow_id, task_name, retry_id),
+                                    connectors.MAX_LOG_TTL)
+                                await redis_client.expire(
+                                    f'{workflow_obj.workflow_id}-log-count',
+                                    connectors.MAX_LOG_TTL)
+                                await redis_client.expire(
+                                    f'{workflow_obj.workflow_id}-{task_name}-{retry_id}-log-count',
+                                    connectors.MAX_LOG_TTL)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/service/logger/ctrl_websocket.py` around lines 207 - 225, The code flips
the first_run flag before any actual log keys are created, causing the expire()
calls to be skipped for later log writes when the first websocket frames are
METRICS/BARRIER/LOG_DONE; update the logic around first_run (the first_run
boolean, the redis_client.incr calls, and the redis_client.expire calls for
f'{workflow_obj.workflow_id}-logs',
common.get_redis_task_log_name(workflow_obj.workflow_id, task_name, retry_id),
f'{workflow_obj.workflow_id}-log-count', and
f'{workflow_obj.workflow_id}-{task_name}-{retry_id}-log-count') so that
first_run is only set to False after you have actually created/updated the log
keys (e.g., after performing the incr/write for a LOG frame or after confirming
the relevant keys exist), or alternatively guard the expire block to run only
when the current frame is a log-producing frame, ensuring the TTLs are applied
on the first real log write.
🧹 Nitpick comments (1)
src/service/core/workflow/objects.py (1)

1084-1086: Replace the raw -1 with a shared sentinel constant.

This tuple is already positional and easy to misalign. Inlining the NULL / -1 / >=0 state marker here makes it harder to keep the insert path, cleanup predicate, and tests in sync.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/service/core/workflow/objects.py` around lines 1084 - 1086, Replace the
raw -1 used as a sentinel in the tuple with a shared constant to avoid
positional misalignment; define a clearly named sentinel (e.g., NO_LEAD or
UNSET_INDEX) at module scope in objects.py (or in the existing module constants)
and use that constant instead of the literal -1 where the tuple is constructed
(around task_obj.exit_actions, task_obj.lead, -1) and ensure any other code
paths—insert path, cleanup predicate, and tests—that check for -1 are updated to
reference the new constant.
ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 7e607f2e-f2ea-46bc-baa5-e0aa474ba4a3

📥 Commits

Reviewing files that changed from the base of the PR and between 9af7c66 and 61a9531.

📒 Files selected for processing (10)
  • deployments/charts/service/migrations/005_v6_2_0_schema.json
  • src/service/core/workflow/objects.py
  • src/service/logger/ctrl_websocket.py
  • src/tests/common/database/testdata/schema.sql
  • src/utils/connectors/postgres.py
  • src/utils/job/jobs.py
  • src/utils/job/task.py
  • src/utils/job/tests/BUILD
  • src/utils/job/tests/test_log_line_count.py
  • src/utils/job/workflow.py

Comment on lines 60 to +63
 CREATE TABLE IF NOT EXISTS workflows (
     workflow_id TEXT PRIMARY KEY,
-    pool TEXT NOT NULL DEFAULT ''
+    pool TEXT NOT NULL DEFAULT '',
+    log_line_count INTEGER

⚠️ Potential issue | 🟡 Minor

Match the fixture's workflows.pool definition to production.

src/utils/connectors/postgres.py still defines this column as nullable, but this fixture makes it NOT NULL DEFAULT ''. That changes NULL vs empty-string behavior and can hide bugs that only show up against the real schema.

🔧 Suggested change
 CREATE TABLE IF NOT EXISTS workflows (
     workflow_id TEXT PRIMARY KEY,
-    pool TEXT NOT NULL DEFAULT '',
+    pool TEXT,
     log_line_count INTEGER
 );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/tests/common/database/testdata/schema.sql` around lines 60 - 63, The test
fixture defines the workflows.table column "pool" as NOT NULL DEFAULT '' which
differs from production where "pool" is nullable; update the CREATE TABLE for
workflows to declare pool as nullable text (e.g., just "pool TEXT" with no NOT
NULL or DEFAULT) so the test schema matches production, and adjust any tests
that currently rely on empty-string behavior to account for NULLs instead;
target the workflows table definition (column "pool") in the schema.sql used by
tests.

Comment on lines +1509 to +1528
+        # Read and persist log line counts before deleting Redis keys
+        workflow_log_count_key = f'{self.workflow_id}-log-count'
+        workflow_log_count_raw = redis_client.get(workflow_log_count_key)
+        if workflow_log_count_raw is not None:
+            workflow_obj.update_log_line_count_to_db(int(workflow_log_count_raw))

         # Remove logs from Redis
-        redis_keys_to_delete : List[str] = [workflow_logs_redis_key, workflow_events_redis_key]
+        redis_keys_to_delete : List[str] = [
+            workflow_logs_redis_key, workflow_events_redis_key, workflow_log_count_key]
         for group in workflow_obj.groups:
             for task_obj in group.tasks:
                 task_redis_path = common.get_redis_task_log_name(
                     self.workflow_id, task_obj.name, task_obj.retry_id)
                 redis_keys_to_delete.append(task_redis_path)
+                task_log_count_key = (
+                    f'{self.workflow_id}-{task_obj.name}-{task_obj.retry_id}-log-count')
+                task_log_count_raw = redis_client.get(task_log_count_key)
+                if task_log_count_raw is not None:
+                    task_obj.update_log_line_count_to_db(int(task_log_count_raw))
+                redis_keys_to_delete.append(task_log_count_key)

⚠️ Potential issue | 🟠 Major

Finalize missing Redis counters as 0, not -1.

Lines 1512-1513 and Lines 1526-1527 only persist log_line_count when the Redis key exists. A quiet workflow/task can finish without ever creating that key, so cleanup leaves the row at -1 forever even though cleanup already completed. That breaks the sentinel contract (-1 = unfinalized, 0 = finalized with no lines) and will skew any downstream SQL/Grafana analysis. Since update_log_line_count_to_db() already ignores legacy NULL rows, this path can safely default a missing key to 0 and update unconditionally.

💡 Suggested fix
-        workflow_log_count_raw = redis_client.get(workflow_log_count_key)
-        if workflow_log_count_raw is not None:
-            workflow_obj.update_log_line_count_to_db(int(workflow_log_count_raw))
+        workflow_log_count_raw = redis_client.get(workflow_log_count_key)
+        workflow_log_count = 0 if workflow_log_count_raw is None else int(workflow_log_count_raw)
+        workflow_obj.update_log_line_count_to_db(workflow_log_count)
@@
-                task_log_count_raw = redis_client.get(task_log_count_key)
-                if task_log_count_raw is not None:
-                    task_obj.update_log_line_count_to_db(int(task_log_count_raw))
+                task_log_count_raw = redis_client.get(task_log_count_key)
+                task_log_count = 0 if task_log_count_raw is None else int(task_log_count_raw)
+                task_obj.update_log_line_count_to_db(task_log_count)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/job/jobs.py` around lines 1509 - 1528, The code only updates DB
log_line_count when Redis keys exist, leaving rows at -1 for quiet workflows;
change the logic around workflow_log_count_key and task_log_count_key so that
after reading redis_client.get(...) you treat a missing value as 0 and call
workflow_obj.update_log_line_count_to_db(int_value) /
task_obj.update_log_line_count_to_db(int_value) unconditionally (i.e., compute
int_value = int(raw) if raw is not None else 0), while still appending the keys
to redis_keys_to_delete and keeping the current calls to
common.get_redis_task_log_name and redis_keys_to_delete management.
