feat(inference): add Azure OpenAI as inference provider #1604
g4ur4vs wants to merge 4 commits into NVIDIA:main
Conversation
Azure OpenAI uses the OpenAI-compatible API behind per-customer endpoint URLs. This adds it alongside the existing providers in:

- REMOTE_PROVIDER_CONFIG (onboard menu, credential prompting)
- getProviderSelectionConfig (inference routing)
- getSandboxInferenceConfig (Dockerfile patching)
- openclaw-sandbox.yaml (network policy for *.openai.azure.com)
- printDashboard (provider label display)

Users select "Azure OpenAI" during onboarding, supply their endpoint URL and API key, then choose a deployment model name.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
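The touchpoints above suggest a shape for the new provider entry. The sketch below is illustrative only; the field names and `RemoteProviderEntry` type are assumptions, not the repo's actual `REMOTE_PROVIDER_CONFIG` schema:

```typescript
// Hypothetical shape of a remote-provider entry; field names are assumptions.
interface RemoteProviderEntry {
  label: string;             // shown in the onboard menu and dashboard
  envKey: string;            // credential variable the wizard prompts for
  needsEndpointUrl: boolean; // Azure endpoints are per-customer URLs
  modelPrompt: string;       // what the model question asks the user
}

const azureOpenAi: RemoteProviderEntry = {
  label: "Azure OpenAI",
  envKey: "AZURE_OPENAI_API_KEY",
  needsEndpointUrl: true,    // e.g. https://<resource>.openai.azure.com
  modelPrompt: "Deployment model name",
};
```

The `needsEndpointUrl` flag is what distinguishes Azure from fixed-endpoint providers: the wizard must collect a URL before it can route inference.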
No actionable comments were generated in the recent review. 🎉

📝 Walkthrough

Added Azure OpenAI provider support across onboarding, inference routing/validation, network policies, docs, and tests.
```mermaid
sequenceDiagram
    participant User as CLI User
    participant Onboard as onboard.js
    participant Inference as inference-config
    participant Validator as Validation Probe
    participant Server as Upsert Provider / Inference Set
    participant Policy as Network Policy
    User->>Onboard: select "azure-openai" and provide endpoint & model
    Onboard->>Inference: request provider selection config
    Inference->>Validator: probe /responses (tool-calling) then fallback /chat/completions
    Validator-->>Inference: probe result (compatible / not compatible)
    Inference->>Server: upsertProvider / inference set with metadata and AZURE_OPENAI_API_KEY
    Server->>Policy: ensure azure_openai egress rules exist
    Server-->>User: confirm provider configured
```
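The probe-then-fallback step in the diagram can be sketched as a small loop. This is a simplified stand-in, not the repo's actual validation code; `doProbe` is a hypothetical hook for the real HTTP check:

```typescript
// Probe function: returns true if the endpoint answers compatibly at `path`.
type Probe = (path: string) => Promise<boolean>;

// Try /responses first (tool-calling capable), then fall back to
// /chat/completions; return the first compatible path, or null if neither works.
async function detectCompatiblePath(doProbe: Probe): Promise<string | null> {
  for (const path of ["/responses", "/chat/completions"]) {
    if (await doProbe(path)) return path; // first compatible endpoint wins
  }
  return null; // provider rejected both probes: not OpenAI-compatible
}
```

Ordering matters here: probing `/responses` first prefers the richer tool-calling surface when the provider supports it, while the fallback keeps older OpenAI-compatible servers working.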
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 3 passed
Add Azure OpenAI to the inference options table, validation table, runtime switch examples, and network policy reference. Regenerate agent skills. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
.agents/skills/nemoclaw-configure-inference/SKILL.md (1)
Lines 1-4: ⚠️ Potential issue | 🟠 Major

Add the required SPDX header to this skill file. The file currently starts without the mandated SPDX license header.

Suggested fix:

```diff
 ---
 name: "nemoclaw-configure-inference"
 description: "Lists all inference providers offered during NemoClaw onboarding. Use when explaining which providers are available, what the onboard wizard presents, or how inference routing works. Changes the active inference model without restarting the sandbox. Use when switching inference providers, changing the model runtime, or reconfiguring inference routing. Connects NemoClaw to a local inference server. Use when setting up Ollama, vLLM, TensorRT-LLM, NIM, or any OpenAI-compatible local model server with NemoClaw."
 ---
+
+<!--
+  SPDX-FileCopyrightText: Copyright (c) 2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+  SPDX-License-Identifier: Apache-2.0
+-->
```

As per coding guidelines: "**/*.{js,ts,tsx,sh,md}: Every source file must include an SPDX license header: '// SPDX-FileCopyrightText: Copyright (c) 2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.' and '// SPDX-License-Identifier: Apache-2.0' (use # for shell scripts, HTML comments for Markdown)".

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In .agents/skills/nemoclaw-configure-inference/SKILL.md around lines 1-4, insert HTML comment lines containing the SPDX copyright and license identifier so the file begins with the mandated SPDX header.

docs/inference/inference-options.md (1)
Lines 18-21: ⚠️ Potential issue | 🟠 Major

Update the SPDX header to the repository-required canonical text. The Markdown SPDX header is present, but the copyright line does not match the required 2026-only text.

Suggested fix:

```diff
 <!--
-  SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+  SPDX-FileCopyrightText: Copyright (c) 2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
   SPDX-License-Identifier: Apache-2.0
 -->
```

As per coding guidelines: "**/*.{js,ts,tsx,sh,md}: Every source file must include an SPDX license header: '// SPDX-FileCopyrightText: Copyright (c) 2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.' and '// SPDX-License-Identifier: Apache-2.0' (use # for shell scripts, HTML comments for Markdown)".

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In docs/inference/inference-options.md around lines 18-21, replace the existing multi-year SPDX HTML comment with the repository-required canonical header so it exactly matches the required 2026-only text and comment style for Markdown.

.agents/skills/nemoclaw-configure-inference/references/inference-options.md (1)
Lines 1-3: ⚠️ Potential issue | 🟠 Major

Add the required SPDX license header at the top of this Markdown file. This file is missing the repository-mandated SPDX header.

Suggested fix:

```diff
+<!--
+  SPDX-FileCopyrightText: Copyright (c) 2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+  SPDX-License-Identifier: Apache-2.0
+-->
+
 # Inference Options
```

As per coding guidelines: "**/*.{js,ts,tsx,sh,md}: Every source file must include an SPDX license header: '// SPDX-FileCopyrightText: Copyright (c) 2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.' and '// SPDX-License-Identifier: Apache-2.0' (use # for shell scripts, HTML comments for Markdown)".

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In .agents/skills/nemoclaw-configure-inference/references/inference-options.md around lines 1-3, add the repository-mandated SPDX license header as an HTML comment at the very top of the file, above the "# Inference Options" heading, with no blank lines above the comment.
🧹 Nitpick comments (1)
.agents/skills/nemoclaw-reference/references/network-policies.md (1)
Lines 44-48: Documented `azure_openai` rules are incomplete vs actual sandbox policy

Line 47 only lists two routes, but nemoclaw-blueprint/policies/openclaw-sandbox.yaml (lines 107-126) allows additional Azure OpenAI paths (completions, embeddings, deployments list/detail, and models detail). Please align this row with the full rule set so docs match enforcement.

Proposed doc update:

```diff
 * - `azure_openai`
   - `*.openai.azure.com:443`
   - `/usr/local/bin/claude`, `/usr/local/bin/openclaw`
-  - POST on `/openai/deployments/*/chat/completions`, GET on `/openai/models`
+  - POST on `/openai/deployments/*/chat/completions`, `/openai/deployments/*/completions`, `/openai/deployments/*/embeddings`; GET on `/openai/deployments`, `/openai/deployments/**`, `/openai/models`, `/openai/models/**`
```

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In .agents/skills/nemoclaw-reference/references/network-policies.md around lines 44-48, update the `azure_openai` entry to list all allowed Azure OpenAI endpoints so the doc matches openclaw-sandbox.yaml's full rule set.
ℹ️ Review info

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: d885cf46-17a7-4777-9c55-af94153900db

📒 Files selected for processing (6):
- .agents/skills/nemoclaw-configure-inference/SKILL.md
- .agents/skills/nemoclaw-configure-inference/references/inference-options.md
- .agents/skills/nemoclaw-reference/references/network-policies.md
- docs/inference/inference-options.md
- docs/inference/switch-inference-providers.md
- docs/reference/network-policies.md
✅ Files skipped from review due to trivial changes (2)
- docs/inference/switch-inference-providers.md
- docs/reference/network-policies.md
- Fix SPDX header year in inference-options.md (2025-2026 → 2026) - List all Azure OpenAI network policy rules in docs (was missing completions, embeddings, and deployment listing paths) - Regenerate agent skills Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Addresses CodeRabbit docstring coverage check (was 60%, threshold 80%). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
cv left a comment:
LGTM — security review WARNING (non-blocking).

Approved with follow-up suggestions:

- SSRF defense-in-depth: The Azure flow accepts a user-provided endpoint URL but doesn't call `validateEndpointUrl()` from `ssrf.ts`. This is a pre-existing pattern (all custom endpoint flows skip it), mitigated by network policies at runtime. Consider filing a follow-up to add SSRF validation for all user-provided URLs during onboarding.
- Test coverage: No onboard wizard test for the Azure-specific path — only config tests and menu index offsets. Consider adding a test exercising the Azure endpoint URL prompt, empty URL rejection, and back navigation.

Otherwise clean:

- Network policy properly scoped (`*.openai.azure.com`, method+path restricted, port 443, TLS, enforce)
- Credentials use existing secure store (`getCredential`/`saveCredential`)
- Inference routing correctly mediated through proxy
- Endpoint probe validation present via `validateCustomOpenAiLikeSelection()`
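The kind of check the SSRF suggestion has in mind could look like the sketch below. This is not the repo's `validateEndpointUrl()` from `ssrf.ts` (whose signature is unknown here); it is a self-contained, illustrative validator under the stated assumption that only public HTTPS endpoints should pass:

```typescript
// Parse a URL, returning null instead of throwing on malformed input.
function parseUrl(raw: string): URL | null {
  try { return new URL(raw); } catch { return null; }
}

// Hedged sketch of an onboarding-time SSRF check: require HTTPS and reject
// loopback/private hosts. The sandbox network policy remains the real
// enforcement layer; this is defense-in-depth only.
function isSafeEndpointUrl(raw: string): boolean {
  const url = parseUrl(raw);
  if (url === null) return false;                  // not a parseable URL
  if (url.protocol !== "https:") return false;     // require TLS
  const host = url.hostname;
  if (host === "localhost" || host === "127.0.0.1" || host === "[::1]") return false;
  // RFC 1918 private ranges: 10/8, 192.168/16, 172.16/12.
  if (/^(10\.|192\.168\.|172\.(1[6-9]|2\d|3[01])\.)/.test(host)) return false;
  return true;
}
```

A production check would also need to resolve DNS and re-validate the resolved address, since a public hostname can point at a private IP; the sketch only covers the literal-host cases.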
Several v0.0.10 PRs just merged, including changes to |
cv left a comment:
Withdrawing approval after maintainer discussion.
We don't want to add dedicated provider entries for individual CSP-hosted OpenAI-compatible endpoints. If we add Azure OpenAI, the next ask is AWS Bedrock, then Google Vertex, etc. — each with their own network policy preset, onboard wizard path, and maintenance burden.
Azure OpenAI is OpenAI-compatible and should work through the existing "Other OpenAI-compatible endpoint" option. Users just need to provide their Azure endpoint URL (e.g., https://<resource>.openai.azure.com) and API key.
If there's a specific gap that prevents Azure OpenAI from working through the generic flow, please open an issue describing the blocker and we can address it there. Thank you for the contribution!
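To illustrate why the generic flow can plausibly cover Azure: against Azure's OpenAI-compatible surface, a client request differs from stock OpenAI mainly in the base URL and the auth header name. The helper below is a hypothetical sketch, not repo code, and assumes the user-supplied base URL already points at an OpenAI-compatible path such as `https://<resource>.openai.azure.com/v1`:

```typescript
// Build an OpenAI-style chat request against a user-supplied base URL.
// Azure authenticates with an `api-key` header rather than
// `Authorization: Bearer`, and the "model" is the deployment name.
function buildChatRequest(baseUrl: string, apiKey: string, model: string) {
  const base = baseUrl.replace(/\/+$/, ""); // tolerate a trailing slash
  return {
    url: `${base}/chat/completions`,
    headers: {
      "api-key": apiKey,                    // Azure-style auth header
      "content-type": "application/json",
    },
    body: JSON.stringify({
      model,                                // Azure deployment name
      messages: [{ role: "user", content: "ping" }],
    }),
  };
}
```

If the generic "Other OpenAI-compatible endpoint" flow lets the user set both the base URL and the credential header, nothing Azure-specific remains on the client side, which is the maintainer's argument for not adding a dedicated provider entry.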
Summary

- Users select "Azure OpenAI" during onboarding, provide their endpoint URL (e.g. https://<resource>.openai.azure.com/v1) and AZURE_OPENAI_API_KEY, then enter a deployment model name
- Allows *.openai.azure.com in the sandbox egress rules

Files changed

- src/lib/inference-config.ts: azure-openai case in getProviderSelectionConfig()
- bin/lib/onboard.js: REMOTE_PROVIDER_CONFIG entry, menu option, endpoint URL prompt, getSandboxInferenceConfig case, setupInference allowlist, dashboard label
- nemoclaw-blueprint/policies/openclaw-sandbox.yaml: azure_openai network policy
- src/lib/inference-config.test.ts: moved azure-openai from blocked candidates to approved providers, added full-object and default-model tests
- test/onboard-selection.test.js

Test plan

- npm run build in nemoclaw/: TypeScript compiles cleanly
- vitest run --project plugin: 234 tests passed
- vitest run --project cli: 1178 tests passed (including all 29 onboard-selection tests)

🤖 Generated with Claude Code