Conversation

Commit titles (truncated):
- …ror_wrappers, dict, json, Config)
- …, agent, and job modules
- …th and task modules
- …ataset manager and postgres connector
- …antic_settings import
- …2 and update remaining Config classes
- …ldValidationInfo for validator context
- …args and API migration
- …hema_extra, and is_required()
- … across services and operators
- … workflow and connector code
- …h in postgres connector
- …move redundant required=True from Field calls
- … validator for v2 compatibility
Actionable comments posted: 1
🧹 Nitpick comments (1)
src/utils/connectors/postgres.py (1)
1757-1759: Minor: Unused loop variable `field_name`.

The loop iterates over `cls.model_fields.items()` but only uses `field_info`. Per Python convention, prefix unused variables with an underscore.

Suggested fix

```diff
-        for field_name, field_info in cls.model_fields.items():
+        for _field_name, field_info in cls.model_fields.items():
             if field_info.alias:
                 allowed_keys.add(field_info.alias)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/connectors/postgres.py` around lines 1757 - 1759, The loop over cls.model_fields.items() in the code that builds allowed_keys uses an unused loop variable field_name; rename it to _field_name (or prefix with an underscore) to follow Python convention and silence linters where the loop is: the for loop that reads "for field_name, field_info in cls.model_fields.items()" should become "for _field_name, field_info in cls.model_fields.items()" while leaving the body that adds field_info.alias to allowed_keys unchanged.
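The underscore rename above is one option; iterating over `.values()` avoids the unused key entirely. A dependency-free sketch of both, where `FieldStub` is a hypothetical stand-in for pydantic's `FieldInfo`:

```python
# FieldStub mimics the one attribute of pydantic's FieldInfo used here (alias).
class FieldStub:
    def __init__(self, alias):
        self.alias = alias

model_fields = {
    'user_name': FieldStub('userName'),
    'age': FieldStub(None),
}

# Option 1: underscore-prefix the unused key, as the comment suggests.
allowed_keys = set()
for _field_name, field_info in model_fields.items():
    if field_info.alias:
        allowed_keys.add(field_info.alias)

# Option 2: iterate over values() directly, since the key is never used.
allowed_keys_alt = {info.alias for info in model_fields.values() if info.alias}
```

Both produce the same set; option 2 also silences linters without introducing a throwaway name.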
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/utils/static_config.py`:
- Around line 56-61: The code is reading field.default directly (in the
cls.model_fields loop) which in Pydantic v2 can be PydanticUndefined or hide
default_factory behavior; change the logic that builds help_message to only show
an explicit default (i.e. detect PydanticUndefined or use
field.get_default(call_default_factory=False)/field.has_default) and avoid
materializing factory defaults there, and change config initialization to either
omit setting defaults into the config and let cls(**config) apply them, or call
field.get_default(call_default_factory=True) when you deliberately want to
materialize factory-backed defaults; adjust uses around cls.model_fields,
field.default, and help_message accordingly.
---
Nitpick comments:
In `@src/utils/connectors/postgres.py`:
- Around line 1757-1759: The loop over cls.model_fields.items() in the code that
builds allowed_keys uses an unused loop variable field_name; rename it to
_field_name (or prefix with an underscore) to follow Python convention and
silence linters where the loop is: the for loop that reads "for field_name,
field_info in cls.model_fields.items()" should become "for _field_name,
field_info in cls.model_fields.items()" while leaving the body that adds
field_info.alias to allowed_keys unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 255bd63f-6358-44c9-a81d-3cdd7cb070b6
📒 Files selected for processing (9)
- docs/locked_requirements.txt
- src/lib/utils/common.py
- src/operator/utils/objects.py
- src/service/core/config/config_service.py
- src/service/delayed_job_monitor/delayed_job_monitor.py
- src/ui/src/mocks/generators/event-generator.ts
- src/utils/connectors/postgres.py
- src/utils/job/workflow.py
- src/utils/static_config.py
✅ Files skipped from review due to trivial changes (2)
- src/lib/utils/common.py
- docs/locked_requirements.txt
🚧 Files skipped from review as they are similar to previous changes (4)
- src/operator/utils/objects.py
- src/ui/src/mocks/generators/event-generator.ts
- src/service/delayed_job_monitor/delayed_job_monitor.py
- src/service/core/config/config_service.py
31bf351 to a91fbe7
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/utils/auth.py (1)
97-104: ⚠️ Potential issue | 🟡 Minor

Chain the exception cause when re-raising.

`OSMOUserError` inherits from `OSMOError` (which inherits from `Exception`, not `ValueError`). When re-raised at line 103, this custom exception will bypass Pydantic v2's normal `ValidationError` path and propagate as a raw application exception, losing the validation context. Use exception chaining:

```python
except ValueError as e:
    raise osmo_errors.OSMOUserError(f'Invalid max_token_duration format: {str(e)}') from e
```

Alternatively, consider whether the custom error type is necessary here—letting `ValueError` propagate naturally may be cleaner for a field validator.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/auth.py` around lines 97 - 104, The validator validate_max_token_duration calls common.to_timedelta and currently raises osmo_errors.OSMOUserError without chaining the original ValueError, which causes Pydantic v2 to lose validation context; update the except block in validate_max_token_duration to re-raise the OSMOUserError with exception chaining (i.e., raise osmo_errors.OSMOUserError(...) from e) so the original ValueError is preserved, or alternatively remove the custom raise and let the original ValueError propagate from common.to_timedelta if you prefer standard Pydantic validation behavior.
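The effect of `from e` can be shown without any project dependencies. A minimal sketch, where `OSMOUserError` and `to_timedelta_seconds` are simplified stand-ins for the real `osmo_errors.OSMOUserError` and `common.to_timedelta`:

```python
class OSMOUserError(Exception):
    """Stand-in for osmo_errors.OSMOUserError: an Exception, not a ValueError."""

def to_timedelta_seconds(raw):
    # Stand-in for common.to_timedelta; float() raises ValueError on bad input.
    return float(raw)

def validate_max_token_duration(raw):
    try:
        return to_timedelta_seconds(raw)
    except ValueError as e:
        # `from e` stores the original error in __cause__, so the parsing
        # context survives the re-raise instead of being discarded.
        raise OSMOUserError(f'Invalid max_token_duration format: {e}') from e
```

Catching the re-raised error and inspecting `err.__cause__` shows the original `ValueError` is preserved, which is exactly what an unchained `raise` would lose.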
🧹 Nitpick comments (4)
src/service/core/config/objects.py (1)
363-383: Add strict validator typing and avoid abbreviated parameter names.

At Line 365 / Line 377 / Line 426 / Line 433, validator params are still loosely typed (`v`, untyped `values`). Please switch to descriptive names and explicit input/return types.

Suggested refactor

```diff
-    def validate_config_types(cls, v):
-        if v is not None:
+    def validate_config_types(
+        cls, config_types: List[config_history.ConfigHistoryType] | None
+    ) -> List[config_history.ConfigHistoryType] | None:
+        if config_types is not None:
             valid_types = [t.value.lower() for t in config_history.ConfigHistoryType]
-            invalid_types = [t for t in v if t.value.lower() not in valid_types]
+            invalid_types = [t for t in config_types if t.value.lower() not in valid_types]
             if invalid_types:
                 raise ValueError(
                     f'Invalid config types: {invalid_types}. Valid types are: {valid_types}'
                 )
-        return v
+        return config_types
     ...
-    def validate_at_timestamp(cls, v, info: pydantic.ValidationInfo):
-        if v is not None:
+    def validate_at_timestamp(
+        cls,
+        at_timestamp: datetime.datetime | None,
+        info: pydantic.ValidationInfo,
+    ) -> datetime.datetime | None:
+        if at_timestamp is not None:
             if 'created_before' in info.data and info.data['created_before'] is not None:
                 raise ValueError('Cannot specify both at_timestamp and created_before')
             if 'created_after' in info.data and info.data['created_after'] is not None:
                 raise ValueError('Cannot specify both at_timestamp and created_after')
-        return v
+        return at_timestamp
     ...
-    def validate_tags(cls, v):
-        if v is not None and not v:
+    def validate_tags(cls, tags: List[str] | None) -> List[str] | None:
+        if tags is not None and not tags:
             raise ValueError('Tags list cannot be empty')
-        return v
+        return tags
     ...
-    def validate_at_least_one_tag_operation(cls, values):
+    def validate_at_least_one_tag_operation(
+        cls, values: dict[str, Any] | Any
+    ) -> dict[str, Any] | Any:
```

As per coding guidelines, "Do not use abbreviations in Python variable names unless they are well-understood..." and "Add type annotations in Python code where they improve clarity and catch errors — use strict typing."
Also applies to: 424-438
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/service/core/config/objects.py` around lines 363 - 383, Update the two pydantic field validators (validate_config_types and validate_at_timestamp) to use descriptive parameter names and explicit type annotations: replace the ambiguous parameter v with e.g. config_types: Optional[List[ConfigHistoryType]] (or appropriate concrete type) and at_timestamp: Optional[datetime], add return type annotations, and type the info parameter as pydantic.ValidationInfo; ensure imports/types (e.g., typing.Optional, typing.List, datetime, and config_history.ConfigHistoryType) are referenced correctly; apply the same renaming/typing pattern to the other validators mentioned around lines 424-438 so all field validator signatures use descriptive names and strict input/output typings rather than untyped/abbreviated names.
src/operator/utils/node_validation_test/test_base.py (1)
111-113: Consider adding type annotations for consistency.

The `validate_rfc3339_timestamp` method lacks type hints, while `validate_node_condition_prefix` (line 82) has full annotations.

🔧 Suggested improvement

```diff
 @pydantic.field_validator('last_heartbeat_time', 'last_transition_time')
 @classmethod
-def validate_rfc3339_timestamp(cls, v):
+def validate_rfc3339_timestamp(cls, v: Optional[str]) -> Optional[str]:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/operator/utils/node_validation_test/test_base.py` around lines 111 - 113, Add missing type annotations to the pydantic field validator method validate_rfc3339_timestamp: annotate the first arg as cls: type[Any] (or the actual test class type used elsewhere), annotate v as the expected input type (e.g., v: str | datetime | None) and add an explicit return type (e.g., -> str | datetime | None) so the signature matches the style of validate_node_condition_prefix; keep the `@pydantic.field_validator`('last_heartbeat_time', 'last_transition_time') decorator and ensure the annotated types reflect what the validator accepts/returns.
src/utils/job/task.py (1)
520-527: Minor: Missing space in return type annotation.

Line 522 has `->datetime.timedelta` instead of `-> datetime.timedelta`. This doesn't affect functionality but deviates from PEP 8 style conventions.

✏️ Suggested fix

```diff
 @pydantic.field_validator('frequency', mode='before')
 @classmethod
-def validate_frequency(cls, value) ->datetime.timedelta:
+def validate_frequency(cls, value) -> datetime.timedelta:
     if isinstance(value, (int, float)):
         return datetime.timedelta(seconds=value)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/job/task.py` around lines 520 - 527, The return type annotation on the validate_frequency classmethod should follow PEP8 spacing; update the signature of validate_frequency (decorated with `@pydantic.field_validator`('frequency', mode='before')) to include a space after the arrow so it reads "-> datetime.timedelta" rather than "->datetime.timedelta", keeping the rest of the method unchanged.
src/utils/connectors/postgres.py (1)
3256-3256: Consider standardizing `extra=` configuration syntax.

The file mixes two valid Pydantic v2 styles for `extra` configuration:

- Inline: `class Foo(BaseModel, extra='ignore')` (lines 3256, 3338, 3346)
- ConfigDict: `model_config = ConfigDict(extra='forbid')` (elsewhere)

Both work identically. For consistency and easier grep-ability of model configurations, consider standardizing on one approach.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/connectors/postgres.py` at line 3256, The class PlatformEditable currently uses the inline Pydantic v2 syntax (class PlatformEditable(PlatformBase, extra='ignore')) while other models use model_config = ConfigDict(extra='forbid'); standardize by switching PlatformEditable (and the other classes using inline extra) to the ConfigDict style: remove the inline extra argument and add model_config = ConfigDict(extra='ignore') inside the class body so all models use the same ConfigDict pattern.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/utils/auth.py`:
- Around line 46-51: The model validator validate_valid_key currently only
parses the public key; update it to also parse and validate the private key (use
self.private_key.get_secret_value()) so malformed private keys fail fast; inside
validate_valid_key (class AsymmetricKeyPair) call jwk.JWK.from_json(...) on both
the public_key and the private_key secret value, catch/let any parsing exception
surface as a validation error, and then return self.
In `@src/utils/job/task.py`:
- Around line 961-970: The validator validate_tasks (decorated with
`@pydantic.field_validator`('tasks')) accesses info.data['name'] directly which
can raise KeyError if 'name' failed validation; change that access to use
info.data.get('name') (or fallback to a sensible default) and handle the missing
name by raising a ValueError with a clear message or returning early so the
validator behaves consistently with other validators in this file that use
.get().
---
Outside diff comments:
In `@src/utils/auth.py`:
- Around line 97-104: The validator validate_max_token_duration calls
common.to_timedelta and currently raises osmo_errors.OSMOUserError without
chaining the original ValueError, which causes Pydantic v2 to lose validation
context; update the except block in validate_max_token_duration to re-raise the
OSMOUserError with exception chaining (i.e., raise
osmo_errors.OSMOUserError(...) from e) so the original ValueError is preserved,
or alternatively remove the custom raise and let the original ValueError
propagate from common.to_timedelta if you prefer standard Pydantic validation
behavior.
---
Nitpick comments:
In `@src/operator/utils/node_validation_test/test_base.py`:
- Around line 111-113: Add missing type annotations to the pydantic field
validator method validate_rfc3339_timestamp: annotate the first arg as cls:
type[Any] (or the actual test class type used elsewhere), annotate v as the
expected input type (e.g., v: str | datetime | None) and add an explicit return
type (e.g., -> str | datetime | None) so the signature matches the style of
validate_node_condition_prefix; keep the
`@pydantic.field_validator`('last_heartbeat_time', 'last_transition_time')
decorator and ensure the annotated types reflect what the validator
accepts/returns.
In `@src/service/core/config/objects.py`:
- Around line 363-383: Update the two pydantic field validators
(validate_config_types and validate_at_timestamp) to use descriptive parameter
names and explicit type annotations: replace the ambiguous parameter v with e.g.
config_types: Optional[List[ConfigHistoryType]] (or appropriate concrete type)
and at_timestamp: Optional[datetime], add return type annotations, and type the
info parameter as pydantic.ValidationInfo; ensure imports/types (e.g.,
typing.Optional, typing.List, datetime, and config_history.ConfigHistoryType)
are referenced correctly; apply the same renaming/typing pattern to the other
validators mentioned around lines 424-438 so all field validator signatures use
descriptive names and strict input/output typings rather than
untyped/abbreviated names.
In `@src/utils/connectors/postgres.py`:
- Line 3256: The class PlatformEditable currently uses the inline Pydantic v2
syntax (class PlatformEditable(PlatformBase, extra='ignore')) while other models
use model_config = ConfigDict(extra='forbid'); standardize by switching
PlatformEditable (and the other classes using inline extra) to the ConfigDict
style: remove the inline extra argument and add model_config =
ConfigDict(extra='ignore') inside the class body so all models use the same
ConfigDict pattern.
In `@src/utils/job/task.py`:
- Around line 520-527: The return type annotation on the validate_frequency
classmethod should follow PEP8 spacing; update the signature of
validate_frequency (decorated with `@pydantic.field_validator`('frequency',
mode='before')) to include a space after the arrow so it reads "->
datetime.timedelta" rather than "->datetime.timedelta", keeping the rest of the
method unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 078f18a1-ba9b-417a-a8b1-d80830751601
📒 Files selected for processing (18)
- docs/locked_requirements.txt
- src/lib/utils/common.py
- src/lib/utils/login.py
- src/operator/utils/node_validation_test/lfs_validator.py
- src/operator/utils/node_validation_test/test_base.py
- src/operator/utils/objects.py
- src/service/core/config/config_service.py
- src/service/core/config/objects.py
- src/service/core/profile/profile_service.py
- src/service/core/workflow/objects.py
- src/service/delayed_job_monitor/delayed_job_monitor.py
- src/service/logger/ctrl_websocket.py
- src/ui/src/mocks/generators/event-generator.ts
- src/utils/auth.py
- src/utils/connectors/postgres.py
- src/utils/job/task.py
- src/utils/job/workflow.py
- src/utils/static_config.py
✅ Files skipped from review due to trivial changes (3)
- src/operator/utils/objects.py
- src/utils/static_config.py
- src/utils/job/workflow.py
🚧 Files skipped from review as they are similar to previous changes (4)
- src/service/logger/ctrl_websocket.py
- src/operator/utils/node_validation_test/lfs_validator.py
- src/lib/utils/common.py
- docs/locked_requirements.txt
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/utils/job/task.py (1)
522-529: ⚠️ Potential issue | 🟠 Major

Reject booleans for `frequency`.

In this `mode='before'` validator, `bool` is a subclass of `int`, so `isinstance(value, (int, float))` at line 525 matches boolean values. YAML booleans (true/false) are then silently converted to 1 and 0 second durations instead of being rejected. Add an explicit boolean guard before the int/float check:

```python
@pydantic.field_validator('frequency', mode='before')
@classmethod
def validate_frequency(cls, value: Any) -> datetime.timedelta:
    if isinstance(value, bool):
        raise ValueError('Checkpoint frequency must be a duration, not a boolean.')
    if isinstance(value, (int, float)):
        return datetime.timedelta(seconds=value)
    if isinstance(value, datetime.timedelta):
        return value
    return common.to_timedelta(value)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/job/task.py` around lines 522 - 529, The validate_frequency field validator currently treats booleans as ints (since bool is a subclass of int) and converts YAML true/false to 1/0 seconds; update the validate_frequency method (the `@pydantic.field_validator` on 'frequency') to explicitly check for isinstance(value, bool) first and raise a ValueError (e.g., "Checkpoint frequency must be a duration, not a boolean") before the existing isinstance(value, (int, float)) branch so booleans are rejected; keep the existing handling for datetime.timedelta and the fallback to common.to_timedelta(value).
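The subclass pitfall is easy to verify in isolation. A standalone sketch of the guarded validator, with the pydantic decorator and the `common.to_timedelta` string fallback dropped for brevity:

```python
import datetime

def validate_frequency(value):
    # Guard booleans first: bool subclasses int, so the (int, float) check
    # below would otherwise coerce True/False into 1s/0s durations.
    if isinstance(value, bool):
        raise ValueError('Checkpoint frequency must be a duration, not a boolean.')
    if isinstance(value, (int, float)):
        return datetime.timedelta(seconds=value)
    if isinstance(value, datetime.timedelta):
        return value
    raise ValueError(f'Unsupported frequency type: {type(value).__name__}')
```

Since `isinstance(True, int)` is `True`, the order of the two checks is what makes the rejection work.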
♻️ Duplicate comments (1)
src/utils/job/task.py (1)
963-972: ⚠️ Potential issue | 🟡 Minor

Still unsafe `info.data['name']` lookup.

If `name` is missing from `info.data`, Line 972 raises `KeyError` and masks the real validation error.

🛡️ Suggested fix

```diff
-        group_name = info.data['name']
+        group_name = info.data.get('name', '<unknown>')
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/job/task.py` around lines 963 - 972, The validator validate_tasks currently does an unsafe lookup info.data['name'] which can raise KeyError and hide the real validation error; change it to safely fetch the name (e.g. use info.data.get('name') or a default like "<unknown>") and handle the missing case explicitly (raise a ValueError with a clear message or proceed with the default) inside the pydantic.field_validator('tasks') classmethod so validate_tasks never directly indexes info.data by key.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/utils/job/task.py`:
- Around line 635-650: The validator coerce_credential_values currently
stringifies any non-dict value (e.g., None, list, object) which lets malformed
credentials pass as paths; change it to only accept str or dict inputs: if value
is a str return it, if it's a dict perform the existing coercion of keys/inner
values to str, otherwise raise a ValueError (or pydantic.ValidationError) to
fail fast; update the pydantic.field_validator('credentials') method
coerce_credential_values to implement this strict-type check and error path so
only valid types reach downstream code.
---
Outside diff comments:
In `@src/utils/job/task.py`:
- Around line 522-529: The validate_frequency field validator currently treats
booleans as ints (since bool is a subclass of int) and converts YAML true/false
to 1/0 seconds; update the validate_frequency method (the
`@pydantic.field_validator` on 'frequency') to explicitly check for
isinstance(value, bool) first and raise a ValueError (e.g., "Checkpoint
frequency must be a duration, not a boolean") before the existing
isinstance(value, (int, float)) branch so booleans are rejected; keep the
existing handling for datetime.timedelta and the fallback to
common.to_timedelta(value).
---
Duplicate comments:
In `@src/utils/job/task.py`:
- Around line 963-972: The validator validate_tasks currently does an unsafe
lookup info.data['name'] which can raise KeyError and hide the real validation
error; change it to safely fetch the name (e.g. use info.data.get('name') or a
default like "<unknown>") and handle the missing case explicitly (raise a
ValueError with a clear message or proceed with the default) inside the
pydantic.field_validator('tasks') classmethod so validate_tasks never directly
indexes info.data by key.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 1f786c36-f157-4fc5-9e08-e2a80b68d492
📒 Files selected for processing (1)
src/utils/job/task.py
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/utils/connectors/postgres.py (2)
1985-2021: ⚠️ Potential issue | 🟠 Major

Replace production `assert` with explicit exception.

Line 2020 uses `assert` in runtime logic; this can be skipped with optimized Python and bypass validation. Use an explicit exception instead.

Proposed fix

```diff
-        comparison_function = self.get_comparison_function(self.operator)
-        assert comparison_function(processed_left_operand, \
-            processed_right_operand), processed_assert_msg
+        comparison_function = self.get_comparison_function(self.operator)
+        if not comparison_function(processed_left_operand, processed_right_operand):
+            raise osmo_errors.OSMOResourceError(processed_assert_msg)
```

As per coding guidelines, "Do not use `assert` statements in Python production code — only in unit tests. Use proper error handling instead by raising appropriate exceptions (ValueError, TypeError, etc.)."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/connectors/postgres.py` around lines 1985 - 2021, The runtime `assert` in the evaluate method should be replaced with an explicit exception: in the evaluate function (and using processed_left_operand/processed_right_operand, self.operator, comparison_function from get_comparison_function, and self.assert_message), call comparison_function(...) and if it returns False raise an appropriate exception (e.g. raise AssertionError(processed_assert_msg) or ValueError(processed_assert_msg)) instead of using the `assert` statement so the check cannot be skipped under optimization; ensure the raised exception uses the prepared processed_assert_msg for context.
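The explicit-raise pattern can be sketched without the project code. The `COMPARISONS` mapping below is a hypothetical stand-in for `get_comparison_function`, and `ValueError` stands in for `osmo_errors.OSMOResourceError`:

```python
import operator

# Stand-in for get_comparison_function: map operator symbols to callables.
COMPARISONS = {'==': operator.eq, '<': operator.lt, '>': operator.gt}

def evaluate(op, left, right, message):
    comparison_function = COMPARISONS[op]
    # An explicit raise runs even under `python -O`, unlike `assert`,
    # which is stripped when optimizations are enabled.
    if not comparison_function(left, right):
        raise ValueError(message)  # the real code would raise OSMOResourceError
```

With `assert`, running the interpreter with `-O` would silently skip the check; the `if not ... raise` form cannot be optimized away.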
3658-3667: ⚠️ Potential issue | 🟠 Major

Move function-scoped import to module level and break the cycle structurally.

Line 3666 imports inside `Pool.insert_into_db`, which violates the repo rule and keeps the circular dependency unresolved. Please refactor the dependency edge (shared module / inversion / type-only references) so imports stay top-level.
Pool.insert_into_db, which violates the repo rule and keeps the circular dependency unresolved. Please refactor the dependency edge (shared module / inversion / type-only references) so imports stay top-level.As per coding guidelines, "All Python imports must be at the top level of the module, placed after the module docstring. No exceptions — imports inside functions are not allowed." and "If circular dependencies exist in Python code, refactor to remove them using strategies: extract shared code into separate module, use dependency inversion, restructure module hierarchy, or use late binding/forward references for type hints."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/connectors/postgres.py` around lines 3658 - 3667, Pool.insert_into_db currently does a function-level import "from src.utils.job import kb_objects" and calls Backend.fetch_from_db, which violates the top-level-import rule and preserves a circular dependency; fix it by refactoring the dependency so imports can be at module top-level: extract the shared types/logic used by Pool.insert_into_db (e.g., any functions/classes referenced from kb_objects and Backend.fetch_from_db) into a new common module (or invert dependency) and update Pool.insert_into_db to use that new module via top-level imports; ensure you remove the in-function "from src.utils.job import kb_objects" and instead import the extracted/common module and Backend.fetch_from_db (or its relocated function) at the top of src/utils/connectors/postgres.py and update all references (self.topology_keys handling, calls inside Pool.insert_into_db) to the new symbols.
♻️ Duplicate comments (1)
src/utils/job/task.py (1)
635-658: ⚠️ Potential issue | 🟠 Major

Keep malformed credential entries failing fast.

The outer type check is fine, but Lines 650-654 still stringify `None`, lists, or arbitrary objects into mount paths. Downstream code in `TaskGroup.get_kb_specs` and `src/service/core/workflow/objects.py:914-921` treats the `str` branch as a directory path, so invalid specs can still change runtime behavior instead of being rejected up front.

🛡️ Proposed fix

```diff
 @pydantic.field_validator('credentials', mode='before')
 @classmethod
 def coerce_credential_values(cls, value: Any) -> Any:
@@
     if isinstance(value, str):
         return value
     if isinstance(value, dict):
         result: Dict[str, Union[str, Dict[str, str]]] = {}
-        for k, v in value.items():
-            if isinstance(v, dict):
-                result[str(k)] = {str(dk): str(dv) for dk, dv in v.items()}
-            else:
-                result[str(k)] = str(v)
+        for credential_name, credential_value in value.items():
+            if isinstance(credential_value, dict):
+                coerced_mapping: Dict[str, str] = {}
+                for environment_name, secret_key in credential_value.items():
+                    if not isinstance(secret_key, (str, int, float, bool)):
+                        raise ValueError(
+                            f'Credential "{credential_name}" contains an invalid secret key mapping.'
+                        )
+                    coerced_mapping[str(environment_name)] = str(secret_key)
+                result[str(credential_name)] = coerced_mapping
+            elif isinstance(credential_value, (str, int, float, bool)):
+                result[str(credential_name)] = str(credential_value)
+            else:
+                raise ValueError(
+                    f'Credential "{credential_name}" must be a string path or a key mapping.'
+                )
         return result
     raise ValueError(
         f'credentials must be a str or dict, got {type(value).__name__}'
     )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/job/task.py` around lines 635 - 658, The coerce_credential_values validator currently stringifies arbitrary inner values (e.g., None, lists, objects) which lets invalid credentials pass as mount paths; update the validator (coerce_credential_values) so that for dict inputs it only accepts either a str value or a dict whose keys and values are already strings (or can be safely coerced from primitive scalars like int/bool) and otherwise raises ValueError; specifically, for each k,v in the outer dict: if v is a dict, ensure every dk,dv are str (or simple scalars you convert to str) and raise on lists/None/complex types; if v is not a dict it must be a str (or scalar you convert) but reject None/list/object by raising ValueError rather than calling str() on them; keep the validator signature and mode unchanged and add clear error messages referencing coerce_credential_values so downstream callers (e.g., TaskGroup.get_kb_specs) receive failures early.
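Stripped of the pydantic decorator, the proposed strict coercion can be exercised as a plain function. A sketch under the same assumptions as the diff above (scalars are coerced, anything else is rejected):

```python
from typing import Any, Dict, Union

def coerce_credential_values(value: Any) -> Union[str, Dict[str, Any]]:
    # Accept a path string, or a dict of credential name -> path or
    # name -> {env_var: secret_key}; anything else fails fast instead of
    # being stringified into a bogus mount path.
    if isinstance(value, str):
        return value
    if isinstance(value, dict):
        result: Dict[str, Union[str, Dict[str, str]]] = {}
        for name, entry in value.items():
            if isinstance(entry, dict):
                coerced: Dict[str, str] = {}
                for env_name, secret_key in entry.items():
                    if not isinstance(secret_key, (str, int, float)):
                        raise ValueError(f'Credential "{name}" has an invalid secret key mapping.')
                    coerced[str(env_name)] = str(secret_key)
                result[str(name)] = coerced
            elif isinstance(entry, (str, int, float)):
                result[str(name)] = str(entry)
            else:
                raise ValueError(f'Credential "{name}" must be a string path or a key mapping.')
        return result
    raise ValueError(f'credentials must be a str or dict, got {type(value).__name__}')
```

Booleans pass the `(str, int, float)` checks because `bool` subclasses `int`, matching the scalar set in the suggested diff; `None`, lists, and arbitrary objects now raise instead of becoming paths.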
🧹 Nitpick comments (4)
src/utils/auth.py (1)
97-104: Field validator migration is correct; consider using f-string conversion flag.

The migration from `@pydantic.validator` to `@pydantic.field_validator` is correct. Per static analysis hint (RUF010), prefer `{e!s}` over `str(e)` for clarity.

Proposed fix

```diff
-    raise osmo_errors.OSMOUserError(f'Invalid max_token_duration format: {str(e)}') from e
+    raise osmo_errors.OSMOUserError(f'Invalid max_token_duration format: {e!s}') from e
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/auth.py` around lines 97 - 104, The validate_max_token_duration field validator currently raises OSMOUserError using str(e); update the exception message construction in validate_max_token_duration to use an f-string with the conversion flag (e.g., {e!s}) so the error detail is included via {e!s} instead of str(e); keep the same behavior of calling common.to_timedelta and raising osmo_errors.OSMOUserError when ValueError is caught.
src/service/core/config/objects.py (2)
198-207: Remove redundant `str()` around `model_dump_json()` outputs.

Lines 202 and 206 wrap `model_dump_json()` with `str()`, but Pydantic v2's `model_dump_json()` already returns a string, making the wrapper unnecessary.

Suggested cleanup

```diff
-            else str(self.scheduler_settings.model_dump_json())
+            else self.scheduler_settings.model_dump_json()
@@
-            None if not self.node_conditions else str(self.node_conditions.model_dump_json())
+            None if not self.node_conditions else self.node_conditions.model_dump_json()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/service/core/config/objects.py` around lines 198 - 207, The properties dict_data['scheduler_settings'] and dict_data['node_conditions'] are wrapping Pydantic v2's model_dump_json() in str(), which is redundant because model_dump_json() already returns a string; update the assignment expressions in the method that builds dict_data so they assign the raw model_dump_json() return values (e.g., use self.scheduler_settings.model_dump_json() and self.node_conditions.model_dump_json() directly) instead of wrapping them with str(), leaving the existing None checks intact.
437-439: Use `Any` for both parameter and return type in `mode='before'` validator.

At line 437, the function annotation is `Dict[str, Any]`, but line 439 returns non-dict payloads unchanged. Per Pydantic v2 documentation, `@model_validator(mode='before')` validators receive potentially any object type (not guaranteed to be a dict), so the correct annotation is `Any` for both parameter and return type to accurately reflect actual behavior and maintain strict typing.

Suggested typing fix

```diff
-    def validate_at_least_one_tag_operation(cls, values: Dict[str, Any]) -> Dict[str, Any]:
+    def validate_at_least_one_tag_operation(cls, values: Any) -> Any:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/service/core/config/objects.py` around lines 437 - 439, The validator function validate_at_least_one_tag_operation is annotated as taking and returning Dict[str, Any] but is registered with `@model_validator`(mode='before') and may receive non-dict payloads; update its signature to use Any for both the parameter and return type (e.g., values: Any -> Any) so the type reflects Pydantic v2's before-mode behavior and prevents incorrect static typing while keeping the existing isinstance(values, dict) guard and logic intact.
src/operator/utils/node_validation_test/test_base.py (1)
82-83: Use descriptive validator parameter names (`value` instead of `v`).
Please rename `v` to a descriptive name in both validators for readability and consistency with repo naming rules.
As per coding guidelines, "Do not use abbreviations in Python variable names unless they are well-understood ... Use full, descriptive names."
Also applies to: 113-117
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/operator/utils/node_validation_test/test_base.py` around lines 82 - 83: rename the short parameter name `v` to a descriptive name like `value` in the validator method validate_node_condition_prefix and the other validator method referenced (lines 113-117), so signatures, type annotations, and all usages inside each method use `value` instead of `v`. Update any `@validator`/`@classmethod` decorators or references to the parameter within the method bodies accordingly to keep behavior unchanged while improving readability and following naming guidelines.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/operator/utils/node_validation_test/test_base.py`:
- Around line 126-128: The current validate_rfc3339_timestamp flow accepts naive
datetimes because datetime.fromisoformat(v.replace('Z','+00:00')) will produce a
tz-naive dt when the input lacks an offset; modify validate_rfc3339_timestamp to
explicitly reject tz-naive inputs by checking if dt.tzinfo is None after parsing
(using dt = datetime.fromisoformat(...)), and raise a ValueError (or return an
error) when tzinfo is None; keep the existing replacement of 'Z' with '+00:00',
then call dt.astimezone(timezone.utc) only for timezone-aware datetimes to
produce the final strftime output.
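The tz-aware check described in that prompt can be sketched as a standalone function (an illustration under assumed names, not the repo's actual validator, which runs inside a Pydantic model):

```python
from datetime import datetime, timezone


def validate_rfc3339_timestamp(value: str) -> str:
    """Parse an RFC 3339 timestamp and reject tz-naive inputs."""
    parsed = datetime.fromisoformat(value.replace('Z', '+00:00'))
    if parsed.tzinfo is None:
        # A timestamp without an offset parses as tz-naive; refuse it
        # rather than silently assuming UTC.
        raise ValueError(f'timestamp must include a UTC offset: {value!r}')
    # Only timezone-aware datetimes reach the normalization step.
    return parsed.astimezone(timezone.utc).strftime('%Y-%m-%dT%H:%M:%SZ')
```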
In `@src/utils/job/task.py`:
- Around line 622-633: The validator coerce_dict_str_values currently
stringifies all dict values (including None, lists, nested dicts) which lets
malformed values flow into the pod env; change it so when value is a dict you
only stringify scalar entries (instances of str, int, float, bool) and leave
other value types as-is so they will fail later in validation. Update the method
coerce_dict_str_values (the `@pydantic.field_validator` for 'environment' and
'exitActions') to iterate dict items and call str(v) only for scalar types; keep
keys stringified as before, and do not coerce None, list, or dict values (so
downstream usage around the pod env population will get validation errors
instead of literal "None"/"[...]" strings).
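The scalar-only coercion that prompt asks for might look like this (a standalone sketch; the real validator is a Pydantic `field_validator`, omitted here):

```python
def coerce_scalar_dict_values(value):
    """Stringify only scalar dict entries; leave other types for validation to reject."""
    if not isinstance(value, dict):
        return value
    coerced = {}
    for key, item in value.items():
        if isinstance(item, (str, int, float, bool)):
            coerced[str(key)] = str(item)
        else:
            # None, lists, and nested dicts pass through unchanged so that
            # downstream Dict[str, str] validation fails loudly instead of
            # producing literal "None" or "[...]" strings in the pod env.
            coerced[str(key)] = item
    return coerced
```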
- Around line 522-529: The validator validate_frequency (decorated with
pydantic.field_validator for 'frequency') currently treats bools as ints because
bool is a subclass of int; update the function to explicitly detect and reject
bool values before the numeric branch (e.g., if isinstance(value, bool): raise
ValueError("frequency must be a duration or numeric seconds, not a boolean")) so
that true/false YAML values are not silently converted to 1s/0s; keep the
existing numeric (int/float), timedelta, and common.to_timedelta branches
unchanged.
---
Outside diff comments:
In `@src/utils/connectors/postgres.py`:
- Around line 1985-2021: The runtime `assert` in the evaluate method should be
replaced with an explicit exception: in the evaluate function (and using
processed_left_operand/processed_right_operand, self.operator,
comparison_function from get_comparison_function, and self.assert_message), call
comparison_function(...) and if it returns False raise an appropriate exception
(e.g. raise AssertionError(processed_assert_msg) or
ValueError(processed_assert_msg)) instead of using the `assert` statement so the
check cannot be skipped under optimization; ensure the raised exception uses the
prepared processed_assert_msg for context.
- Around line 3658-3667: Pool.insert_into_db currently does a function-level
import "from src.utils.job import kb_objects" and calls Backend.fetch_from_db,
which violates the top-level-import rule and preserves a circular dependency;
fix it by refactoring the dependency so imports can be at module top-level:
extract the shared types/logic used by Pool.insert_into_db (e.g., any
functions/classes referenced from kb_objects and Backend.fetch_from_db) into a
new common module (or invert dependency) and update Pool.insert_into_db to use
that new module via top-level imports; ensure you remove the in-function "from
src.utils.job import kb_objects" and instead import the extracted/common module
and Backend.fetch_from_db (or its relocated function) at the top of
src/utils/connectors/postgres.py and update all references (self.topology_keys
handling, calls inside Pool.insert_into_db) to the new symbols.
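The assert-to-raise replacement described for evaluate can be sketched as follows (hypothetical, simplified operator table; the real method obtains its comparison via get_comparison_function):

```python
import operator

COMPARISONS = {'==': operator.eq, '!=': operator.ne,
               '<': operator.lt, '<=': operator.le,
               '>': operator.gt, '>=': operator.ge}


def evaluate(left, op_symbol, right, assert_message):
    """Run the comparison and raise explicitly on failure."""
    comparison_function = COMPARISONS[op_symbol]
    if not comparison_function(left, right):
        # An explicit raise still fires under `python -O`,
        # unlike a bare `assert` statement, which is stripped.
        raise AssertionError(assert_message)
    return True
```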
---
Duplicate comments:
In `@src/utils/job/task.py`:
- Around line 635-658: The coerce_credential_values validator currently
stringifies arbitrary inner values (e.g., None, lists, objects) which lets
invalid credentials pass as mount paths; update the validator
(coerce_credential_values) so that for dict inputs it only accepts either a str
value or a dict whose keys and values are already strings (or can be safely
coerced from primitive scalars like int/bool) and otherwise raises ValueError;
specifically, for each k,v in the outer dict: if v is a dict, ensure every dk,dv
are str (or simple scalars you convert to str) and raise on lists/None/complex
types; if v is not a dict it must be a str (or scalar you convert) but reject
None/list/object by raising ValueError rather than calling str() on them; keep
the validator signature and mode unchanged and add clear error messages
referencing coerce_credential_values so downstream callers (e.g.,
TaskGroup.get_kb_specs) receive failures early.
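The strict credential coercion that prompt describes could be sketched like this (an illustration only; the actual validator keeps its Pydantic signature and mode):

```python
def coerce_credential_values(value):
    """Accept str values or flat scalar-valued dicts; reject None, lists, and objects."""
    if not isinstance(value, dict):
        return value
    coerced = {}
    for key, inner in value.items():
        if isinstance(inner, dict):
            flat = {}
            for inner_key, inner_value in inner.items():
                if not isinstance(inner_value, (str, int, bool)):
                    raise ValueError(
                        f'coerce_credential_values: invalid value for '
                        f'{key}.{inner_key}: {type(inner_value).__name__}')
                flat[str(inner_key)] = str(inner_value)
            coerced[str(key)] = flat
        elif isinstance(inner, (str, int, bool)):
            coerced[str(key)] = str(inner)
        else:
            # Raising here fails early, instead of passing str(None) or a
            # stringified list downstream as a mount path.
            raise ValueError(
                f'coerce_credential_values: invalid value for {key}: '
                f'{type(inner).__name__}')
    return coerced
```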
---
Nitpick comments:
In `@src/operator/utils/node_validation_test/test_base.py`:
- Around line 82-83: Rename the short parameter name `v` to a descriptive name
like `value` in the validator method validate_node_condition_prefix and the
other validator method referenced (lines 113-117) so signatures, type
annotations, and all usages inside each method use `value` instead of `v`;
update any `@validator`/`@classmethod` decorators or references to the parameter
within the method bodies accordingly to keep behavior unchanged but improve
readability and follow naming guidelines.
In `@src/service/core/config/objects.py`:
- Around line 198-207: The properties dict_data['scheduler_settings'] and
dict_data['node_conditions'] are wrapping Pydantic v2's model_dump_json() in
str(), which is redundant because model_dump_json() already returns a string;
update the assignment expressions in the method that builds dict_data so they
assign the raw model_dump_json() return values (e.g., use
self.scheduler_settings.model_dump_json() and
self.node_conditions.model_dump_json() directly) instead of wrapping them with
str(), leaving the existing None checks intact.
- Around line 437-439: The validator function
validate_at_least_one_tag_operation is annotated as taking and returning
Dict[str, Any] but is registered with `@model_validator(mode='before')` and may
receive non-dict payloads; update its signature to use Any for both the
parameter and return type (e.g., values: Any -> Any) so the type reflects
Pydantic v2's before-mode behavior and prevents incorrect static typing while
keeping the existing isinstance(values, dict) guard and logic intact.
In `@src/utils/auth.py`:
- Around line 97-104: The validate_max_token_duration field validator currently
raises OSMOUserError using str(e); update the exception message construction in
validate_max_token_duration to use an f-string with the conversion flag (e.g.,
{e!s}) so the error detail is included via {e!s} instead of str(e); keep the
same behavior of calling common.to_timedelta and raising
osmo_errors.OSMOUserError when ValueError is caught.
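As a small illustration of the `{e!s}` conversion flag (the OSMOUserError class below is a stand-in for osmo_errors.OSMOUserError, and the parsing is simplified from common.to_timedelta):

```python
class OSMOUserError(Exception):
    """Stand-in for osmo_errors.OSMOUserError."""


def parse_max_token_duration(raw: str) -> float:
    """Parse a duration in seconds, wrapping parse errors in a user error."""
    try:
        return float(raw)
    except ValueError as error:
        # f'{error!s}' embeds str(error) inline, avoiding a nested str() call.
        raise OSMOUserError(f'invalid max token duration: {error!s}') from error
```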
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: e75441cf-0653-46fd-a38b-c69f20ff88ae
📒 Files selected for processing (5)
src/operator/utils/node_validation_test/test_base.py
src/service/core/config/objects.py
src/utils/auth.py
src/utils/connectors/postgres.py
src/utils/job/task.py
Description
Issue - None
Checklist