4 changes: 3 additions & 1 deletion .agents/skills/nemoclaw-reference/SKILL.md
@@ -1,6 +1,6 @@
---
name: "nemoclaw-reference"
description: "Describes how NemoClaw combines a CLI plugin with a versioned blueprint to move OpenClaw into a controlled sandbox. Use when looking up NemoClaw architecture, plugin structure, or blueprint design. Lists all slash commands and standalone NemoClaw CLI commands. Use when looking up a command, checking command syntax, or browsing the CLI reference. Documents baseline network policy, filesystem rules, and operator approval flow. Use when reviewing default network policies, understanding egress controls, or looking up the approval flow. Diagnoses and resolves common NemoClaw installation, onboarding, and runtime issues. Use when troubleshooting errors, debugging sandbox problems, or resolving setup failures."
description: "Describes how NemoClaw combines a CLI plugin with a versioned blueprint to move OpenClaw into a controlled sandbox. Use when looking up NemoClaw architecture, plugin structure, or blueprint design. Lists all slash commands and standalone NemoClaw CLI commands. Use when looking up a command, checking command syntax, or browsing the CLI reference. Documents configuration options for NemoClaw routed inference providers. Use when configuring inference profiles, looking up provider routing settings, or reviewing available LLM providers. Documents baseline network policy, filesystem rules, and operator approval flow. Use when reviewing default network policies, understanding egress controls, or looking up the approval flow. Use when checking whether a platform, runtime, inference provider, or deployment path is currently supported by NemoClaw. Diagnoses and resolves common NemoClaw installation, onboarding, and runtime issues. Use when troubleshooting errors, debugging sandbox problems, or resolving setup failures."
---

# NemoClaw Reference
@@ -11,5 +11,7 @@ Describes how NemoClaw combines a CLI plugin with a versioned blueprint to move

- [NemoClaw Architecture: Plugin, Blueprint, and Sandbox Structure](references/architecture.md)
- [NemoClaw CLI Commands Reference](references/commands.md)
- [NemoClaw Inference Profiles](references/inference-profiles.md)
- [NemoClaw Network Policies: Baseline Rules and Operator Approval](references/network-policies.md)
- [NemoClaw Support Matrix](references/support-matrix.md)
- [NemoClaw Troubleshooting Guide](references/troubleshooting.md)
67 changes: 67 additions & 0 deletions .agents/skills/nemoclaw-reference/references/support-matrix.md
@@ -0,0 +1,67 @@
# Support Matrix

Use this page to check the current support status for host platforms, container runtimes, inference providers, and deployment paths.
This page consolidates the compatibility details that are otherwise spread across the quickstart, inference, deployment, and security docs.

## Host Platforms and Container Runtimes

The following table summarizes the current host platform and runtime combinations for the standard NemoClaw install and onboard flow.

| Host platform | Container runtime | Status | Notes |
|---|---|---|---|
| Linux | Docker | Supported | Primary supported path for local and remote installs. |
| macOS (Apple Silicon) | Colima | Supported | Install Xcode Command Line Tools and start Colima before running the installer. |
| macOS (Apple Silicon) | Docker Desktop | Supported | Start Docker Desktop before running the installer. |
| macOS (Intel) | Podman | Not supported | Depends on OpenShell support for Podman on macOS. |
| Windows WSL | Docker Desktop with WSL backend | Supported | Supported target path for WSL-based installs. |
| DGX Spark | Docker | Supported with additional setup | Follow the DGX Spark setup guide for cgroup v2 and Docker configuration. |
Comment on lines +10 to +17

⚠️ Potential issue | 🟠 Major

Add missing platform and runtime combinations.

The matrix is missing several platform/runtime entries mentioned in the PR review feedback:

  • Podman on Linux
  • Podman on macOS (Apple Silicon)
  • Colima on macOS (Intel)
  • Docker Desktop on macOS (Intel)

Currently only macOS (Intel) + Podman is listed. Compare against the current README to ensure all documented combinations are included.

Note: Fix this in the source file docs/reference/support-matrix.md first, then regenerate this skill file using python scripts/docs-to-skills.py.

As per coding guidelines: Edit documentation under docs/ directory (never .agents/skills/nemoclaw-*/*.md) and regenerate skills with python scripts/docs-to-skills.py.
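One way to act on this finding is a small shell check that the table lists every documented platform/runtime pair. The inlined table and expected pairs below are illustrative only, not the canonical matrix; a real check would read `docs/reference/support-matrix.md` instead of a here-variable:

```shell
#!/bin/sh
# Hypothetical completeness check: verify that a support-matrix table
# contains every expected platform/runtime row. The table content is
# inlined for illustration; replace it with the real docs file.
matrix='| Linux | Docker |
| Linux | Podman |
| macOS (Apple Silicon) | Colima |
| macOS (Apple Silicon) | Docker Desktop |
| macOS (Intel) | Colima |
| macOS (Intel) | Docker Desktop |'

missing=0
for pair in 'Linux | Podman' 'macOS (Intel) | Colima' 'macOS (Intel) | Docker Desktop'; do
  # grep -F treats the pair as a fixed string, so the pipes are literal.
  if ! printf '%s\n' "$matrix" | grep -qF "$pair"; then
    echo "missing row: $pair"
    missing=1
  fi
done
[ "$missing" -eq 0 ] && echo "all expected rows present"
```

Running a check like this after `python scripts/docs-to-skills.py` would catch a regenerated skill file that silently dropped a row.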

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.agents/skills/nemoclaw-reference/references/support-matrix.md around lines
10 - 17, Update the canonical docs file docs/reference/support-matrix.md (not
.agents/skills) to add the missing platform/runtime rows: add "Linux | Podman |
Supported/Notes" for Podman on Linux, "macOS (Apple Silicon) | Podman |
Supported" for Podman on Apple Silicon, "macOS (Intel) | Colima | Supported" for
Colima on Intel macOS, and "macOS (Intel) | Docker Desktop | Supported" for
Docker Desktop on Intel macOS; ensure the phrasing and Notes match existing rows
(e.g., mention starting Colima/Docker Desktop and any caveats), save the file,
then regenerate the skill file by running python scripts/docs-to-skills.py so
the .agents/skills/nemoclaw-reference/references/support-matrix.md is updated
accordingly.


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Compare DGX Spark references in README and docs

rg -n -C 3 -i 'dgx spark' README.md docs/

Repository: NVIDIA/NemoClaw

Length of output: 4232


Update DGX Spark guidance in support matrix to align with documentation.

The support-matrix.md entry references "Follow the DGX Spark setup guide for cgroup v2 and Docker configuration," but this contradicts guidance elsewhere in the codebase. The README, quickstart, commands reference, and troubleshooting docs consistently state to "use the standard installer and nemoclaw onboard" because current OpenShell releases handle the cgroup behavior. The commands.md even marks nemoclaw setup-spark as deprecated. Update the support-matrix.md entry to match this standard guidance.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.agents/skills/nemoclaw-reference/references/support-matrix.md at line 17,
Update the DGX Spark row in support-matrix.md so it matches the rest of the
docs: replace the current guidance that tells users to "Follow the DGX Spark
setup guide for cgroup v2 and Docker configuration" with the standard
recommendation to "use the standard installer and `nemoclaw onboard`" (and
remove the cgroup v2/Docker special-case wording); ensure the cell for "DGX
Spark | Docker" reads that DGX Spark is supported via the standard
installer/onboard flow and note that `nemoclaw setup-spark` is deprecated to
align with README, quickstart, commands.md, and troubleshooting guidance.


## Inference Provider Support

The following provider paths are available in the current product surface.

| Provider path | Status | Notes |
|---|---|---|
| NVIDIA Endpoints | Supported | Uses hosted models on `integrate.api.nvidia.com`. |
| OpenAI | Supported | Uses native OpenAI-compatible model IDs. |
| Other OpenAI-compatible endpoint | Supported | For compatible proxies and gateways. |
| Anthropic | Supported | Uses the `anthropic-messages` provider flow. |
| Other Anthropic-compatible endpoint | Supported | For Claude-compatible proxies and gateways. |
| Google Gemini | Supported | Uses Google's OpenAI-compatible endpoint. |
| Local Ollama | Supported | Available in the standard onboarding flow when Ollama is installed or already running on the host. |
| Local NVIDIA NIM | Experimental | Requires `NEMOCLAW_EXPERIMENTAL=1` and a NIM-capable GPU. |
| Local vLLM | Experimental | Requires `NEMOCLAW_EXPERIMENTAL=1` and an existing `localhost:8000` service. |

## Deployment Paths

The following deployment paths are documented today.

| Deployment path | Status | Notes |
|---|---|---|
| Local host install | Supported | Standard `curl \| bash` install path. |
| Remote GPU instance | Supported | Follow the remote GPU deployment guide. |
| Telegram bridge | Supported | Requires host-side bridge setup after sandbox creation. |
| Sandbox hardening profiles | Supported | Available through the documented hardening guidance and policy controls. |

## Version and Environment Requirements

The following runtime requirements apply across the supported paths above.

| Dependency | Requirement |
|---|---|
| Linux | Ubuntu 22.04 LTS or later |
| Node.js | 22.16 or later |
| npm | 10 or later |
| OpenShell | Installed before use |
| RAM | 8 GB minimum, 16 GB recommended |
| Disk | 20 GB free minimum, 40 GB recommended |

If your platform or runtime falls outside this matrix, expect partial support, experimental behavior, or onboarding failures.
If a path is marked experimental, treat it as subject to change without compatibility guarantees.

## Next Steps

- Use the Quickstart (see the `nemoclaw-get-started` skill) to install NemoClaw on a supported platform.
- Use Inference Profiles (see the `nemoclaw-reference` skill) to compare provider-specific behavior and validation.
- Use Deploy to a Remote GPU Instance (see the `nemoclaw-deploy-remote` skill) for persistent remote deployment.
- Use Troubleshooting (see the `nemoclaw-reference` skill) if your environment does not match the supported matrix.
2 changes: 2 additions & 0 deletions README.md
@@ -21,6 +21,7 @@ It installs the [NVIDIA OpenShell](https://github.com/NVIDIA/OpenShell) runtime,
> We welcome issues and discussion from the community while the project evolves.

NemoClaw adds guided onboarding, a hardened blueprint, state management, OpenShell-managed channel messaging, routed inference, and layered protection on top of the [NVIDIA OpenShell](https://github.com/NVIDIA/OpenShell) runtime. For the full feature list, refer to [Overview](https://docs.nvidia.com/nemoclaw/latest/about/overview.html). For the system diagram, component model, and blueprint lifecycle, refer to [How It Works](https://docs.nvidia.com/nemoclaw/latest/about/how-it-works.html) and [Architecture](https://docs.nvidia.com/nemoclaw/latest/reference/architecture.html).
For the current compatibility status across platforms, runtimes, providers, and deployment paths, refer to the [Support Matrix](https://docs.nvidia.com/nemoclaw/latest/reference/support-matrix.html).

## Getting Started

@@ -145,6 +146,7 @@ Refer to the following pages on the official documentation website for more info
| [Overview](https://docs.nvidia.com/nemoclaw/latest/about/overview.html) | What NemoClaw does and how it fits together. |
| [How It Works](https://docs.nvidia.com/nemoclaw/latest/about/how-it-works.html) | Plugin, blueprint, sandbox lifecycle, and protection layers. |
| [Architecture](https://docs.nvidia.com/nemoclaw/latest/reference/architecture.html) | Plugin structure, blueprint lifecycle, sandbox environment, and host-side state. |
| [Support Matrix](https://docs.nvidia.com/nemoclaw/latest/reference/support-matrix.html) | Current platform, runtime, provider, and deployment support status. |
| [Inference Options](https://docs.nvidia.com/nemoclaw/latest/inference/inference-options.html) | Supported providers, validation, and routed inference configuration. |
| [Network Policies](https://docs.nvidia.com/nemoclaw/latest/reference/network-policies.html) | Baseline rules, operator approval flow, and egress control. |
| [Customize Network Policy](https://docs.nvidia.com/nemoclaw/latest/network-policy/customize-network-policy.html) | Static and dynamic policy changes, presets. |
11 changes: 11 additions & 0 deletions docs/index.md
@@ -166,6 +166,16 @@ Plugin structure, blueprint system, and sandbox lifecycle.
{bdg-secondary}`Reference`
:::

:::{grid-item-card} Support Matrix
:link: reference/support-matrix
:link-type: doc

Current platform, runtime, provider, and deployment support status.

+++
{bdg-secondary}`Reference`
:::

:::{grid-item-card} Network Policies
:link: reference/network-policies
:link-type: doc
@@ -292,6 +302,7 @@ Back Up and Restore <workspace/backup-restore>
:hidden:

Architecture <reference/architecture>
Support Matrix <reference/support-matrix>
Commands <reference/commands>
Network Policies <reference/network-policies>
Troubleshooting <reference/troubleshooting>
89 changes: 89 additions & 0 deletions docs/reference/support-matrix.md
@@ -0,0 +1,89 @@
---
title:
  page: "NemoClaw Support Matrix"
  nav: "Support Matrix"
description:
  main: "Current platform, runtime, provider, and deployment support status for NemoClaw."
  agent: "Use when checking whether a platform, runtime, inference provider, or deployment path is currently supported by NemoClaw."
keywords: ["nemoclaw support matrix", "nemoclaw compatibility", "supported platforms", "supported providers"]
topics: ["generative_ai", "ai_agents"]
tags: ["openclaw", "openshell", "compatibility", "platforms", "inference_routing"]
content:
  type: reference
  difficulty: technical_beginner
  audience: ["developer", "engineer"]
  status: published
---

<!--
SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0
-->

# Support Matrix

Use this page to check the current support status for host platforms, container runtimes, inference providers, and deployment paths.
This page pulls together compatibility details from the quickstart, inference, deployment, and security docs.

## Host Platforms and Container Runtimes

The following table summarizes the current host platform and runtime combinations for the standard NemoClaw install and onboard flow.

| Host platform | Container runtime | Status | Notes |
|---|---|---|---|
| Linux | Docker | Supported | Primary supported path for local and remote installs. |
| macOS (Apple Silicon) | Colima | Supported | Install Xcode Command Line Tools and start Colima before running the installer. |
| macOS (Apple Silicon) | Docker Desktop | Supported | Start Docker Desktop before running the installer. |
| macOS (Intel) | Podman | Not supported | Depends on OpenShell support for Podman on macOS. |
| Windows WSL | Docker Desktop with WSL backend | Supported | Supported target path for WSL-based installs. |
| DGX Spark | Docker | Supported with additional setup | Follow the DGX Spark setup guide for cgroup v2 and Docker configuration. |
Comment on lines +32 to +39

⚠️ Potential issue | 🟠 Major

Add missing platform and runtime combinations.

The matrix is missing several platform/runtime entries mentioned in the PR review feedback:

  • Podman on Linux (should indicate support status)
  • Podman on macOS (Apple Silicon) (currently only Intel+Podman is listed)
  • Colima on macOS (Intel) (currently only Apple Silicon+Colima is listed)
  • Docker Desktop on macOS (Intel) (currently only Apple Silicon+Docker Desktop is listed)

Compare against the current README to ensure all documented combinations are included for completeness.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/reference/support-matrix.md` around lines 32 - 39, The support matrix
table is missing four platform/runtime rows; add entries for "Linux | Podman",
"macOS (Apple Silicon) | Podman", "macOS (Intel) | Colima", and "macOS (Intel) |
Docker Desktop" to the same markdown table structure used by the existing rows
(e.g., the rows for "macOS (Apple Silicon) | Colima" and "macOS (Intel) |
Podman"), and set each row's Status and Notes to match the canonical guidance
from the README (ensure wording and any prerequisites match existing note style
for entries like "Docker Desktop" and "Colima").
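In markdown terms, the four requested rows might look like the following sketch. The Status and Notes values here are placeholders modeled on the existing rows and must be confirmed against the README before merging:

```markdown
| Linux | Podman | Supported | Ensure the Podman socket is active before running the installer. |
| macOS (Apple Silicon) | Podman | Supported | Start the Podman machine before running the installer. |
| macOS (Intel) | Colima | Supported | Install Xcode Command Line Tools and start Colima before running the installer. |
| macOS (Intel) | Docker Desktop | Supported | Start Docker Desktop before running the installer. |
```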

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Fetch and compare support matrix entries with PR `#1413`

echo "=== Checking PR `#1413` for support matrix content ==="
gh pr view 1413 --json files --jq '.files[].path' | rg -i 'readme|support|matrix'

echo -e "\n=== Comparing status entries in PR `#1413` ==="
gh pr diff 1413 | rg -i -C 5 'local ollama|windows.*wsl|macos.*(apple silicon|intel)'

Repository: NVIDIA/NemoClaw

Length of output: 431


🏁 Script executed:

# First, check the current state of the support matrix file
git ls-files docs/reference/support-matrix.md

Repository: NVIDIA/NemoClaw

Length of output: 91


🏁 Script executed:

# Look for PR #1413 references in git history
git log --oneline --all | grep -i "1413\|support.*matrix" | head -20

Repository: NVIDIA/NemoClaw

Length of output: 106


🏁 Script executed:

# Read the support matrix file to see current state
wc -l docs/reference/support-matrix.md

Repository: NVIDIA/NemoClaw

Length of output: 94


🏁 Script executed:

# Read the full support matrix file
cat -n docs/reference/support-matrix.md

Repository: NVIDIA/NemoClaw

Length of output: 5177


🏁 Script executed:

# Search for PR #1413 references in commit messages or branches
git log --all --grep="1413" --oneline
git branch -a | grep -i "1413\|support"

Repository: NVIDIA/NemoClaw

Length of output: 130


🏁 Script executed:

# Search more broadly for PR 1413 context
git log --all --oneline | grep -i "support\|ollama\|wsl" | head -30

Repository: NVIDIA/NemoClaw

Length of output: 2001


🏁 Script executed:

# Check remote branches
git branch -r | head -20

Repository: NVIDIA/NemoClaw

Length of output: 101


🏁 Script executed:

# Search for any mention of "1413" anywhere in git history
git log --all --source --remotes --format="%H %s %b" | grep -i "1413" | head -5

Repository: NVIDIA/NemoClaw

Length of output: 184


🌐 Web query:

NemoClaw PR 1413 support matrix

💡 Result:

NemoClaw PR #1413 updates the Container Runtimes / platform support matrix to use explicit status labels (Supported, Caveated, Out of scope). The matrix in that PR’s README is: [1][2]

| Platform | Supported runtimes | Status | Notes |
|---|---|---|---|
| Linux | Docker | Supported | Primary supported path. |
| macOS (Apple Silicon) | Colima, Docker Desktop | Caveated | Requires Xcode Command Line Tools; start runtime before install. |
| macOS (Intel) | Docker Desktop | Caveated | Start runtime before install. |
| Windows WSL2 | Docker Desktop (WSL backend) | Out of scope | Not part of the supported NemoClaw target matrix. |
| Windows native | N/A | Out of scope | Not part of the supported NemoClaw target matrix. |
| Jetson | N/A | Out of scope | Not part of the supported NemoClaw target matrix. |
| DGX Spark | Docker | Supported | Use standard installer and nemoclaw onboard. |

Sources: PR #1413 conversation/summary and the PR branch README content. [1][2]


🏁 Script executed:

# Check if PR #1413 might be in a separate branch or tag
git tag | grep -i "1413\|release" | head -20

Repository: NVIDIA/NemoClaw

Length of output: 41


🏁 Script executed:

# Look at the commit that added support matrix to understand context
git show --stat 90d57b2

Repository: NVIDIA/NemoClaw

Length of output: 22181


Align support matrix status labels with PR #1413.

The current support matrix uses status values that differ from PR #1413's terminology. Update to match PR #1413's status classification:

  • Windows WSL (line 38): Change from "Supported" to "Out of scope"
  • macOS (Apple Silicon) (lines 35–36): Consolidate rows and change status from "Supported" to "Caveated"
  • Verify Local Ollama (line 53) status aligns with PR #1413

PR #1413 establishes "Supported", "Caveated", and "Out of scope" as the standard status labels; the current matrix must reflect these consistently.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/reference/support-matrix.md` around lines 32 - 39, Update the support
matrix table rows to use the PR `#1413` terminology: change the "Windows WSL" row
status from "Supported" to "Out of scope" (refer to the "Windows WSL" row),
consolidate the two "macOS (Apple Silicon)" rows (Colima and Docker Desktop)
into a single "macOS (Apple Silicon)" row with status "Caveated" and merge their
Notes (ensure Xcode CLI/Colima and Docker Desktop startup instructions are
preserved), and verify and update the "Local Ollama" entry so its status matches
PR `#1413`'s classification (ensure it uses "Supported", "Caveated", or "Out of
scope" as specified by PR `#1413`).


⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Compare DGX Spark setup instructions

echo "=== DGX Spark references in README.md ==="
rg -n -C 5 -i 'dgx spark' README.md

echo -e "\n=== DGX Spark references in docs/ ==="
rg -n -C 5 -i 'dgx spark' docs/

Repository: NVIDIA/NemoClaw

Length of output: 6055


Update DGX Spark setup guidance to match current documentation.

The note in support-matrix.md (line 39) says "Follow the DGX Spark setup guide for cgroup v2 and Docker configuration," but this contradicts the guidance in README.md, quickstart.md, and commands.md, which all state "Use the standard installer and nemoclaw onboard."

The troubleshooting guide confirms that the older cgroup workaround is no longer needed: "Current OpenShell releases handle that behavior themselves, so NemoClaw no longer requires a Spark-specific setup step."

Change the note to: "Use the standard installer and nemoclaw onboard."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/reference/support-matrix.md` at line 39, Update the DGX Spark table row
note so it no longer references cgroup v2 and Docker configuration; locate the
row containing the string "DGX Spark | Docker | Supported with additional setup"
and replace the trailing guidance text with "Use the standard installer and
`nemoclaw onboard`." Ensure the change aligns with README.md, quickstart.md, and
commands.md by removing the old cgroup workaround wording and using the exact
phrase requested.
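As a sketch, the wording swap requested above could be scripted. The row text is inlined here for illustration; a real fix would run `sed -i` against `docs/reference/support-matrix.md` and then regenerate the skill files:

```shell
#!/bin/sh
# Illustrative version of the suggested edit, applied to an inline copy
# of the DGX Spark row rather than the actual docs file.
row='| DGX Spark | Docker | Supported with additional setup | Follow the DGX Spark setup guide for cgroup v2 and Docker configuration. |'

# Downgrade the status to plain "Supported" and swap the note for the
# standard installer/onboard guidance.
updated=$(printf '%s\n' "$row" | sed 's/Supported with additional setup/Supported/; s/Follow the DGX Spark setup guide for cgroup v2 and Docker configuration\./Use the standard installer and `nemoclaw onboard`./')

printf '%s\n' "$updated"
```

The same two substitutions applied to the docs source, followed by `python scripts/docs-to-skills.py`, would keep both copies of the matrix in sync.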


## Inference Provider Support

The following provider paths are available in the current product surface.

| Provider path | Status | Notes |
|---|---|---|
| NVIDIA Endpoints | Supported | Uses hosted models on `integrate.api.nvidia.com`. |
| OpenAI | Supported | Uses native OpenAI-compatible model IDs. |
| Other OpenAI-compatible endpoint | Supported | For compatible proxies and gateways. |
| Anthropic | Supported | Uses the `anthropic-messages` provider flow. |
| Other Anthropic-compatible endpoint | Supported | For Claude-compatible proxies and gateways. |
| Google Gemini | Supported | Uses Google's OpenAI-compatible endpoint. |
| Local Ollama | Supported | Available in the standard onboarding flow when Ollama is installed or already running on the host. |
| Local NVIDIA NIM | Experimental | Requires `NEMOCLAW_EXPERIMENTAL=1` and a NIM-capable GPU. |
| Local vLLM | Experimental | Requires `NEMOCLAW_EXPERIMENTAL=1` and an existing `localhost:8000` service. |

## Deployment Paths

The docs cover the following deployment paths.

| Deployment path | Status | Notes |
|---|---|---|
| Local host install | Supported | Standard `curl \| bash` install path. |
| Remote GPU instance | Supported | Follow the remote GPU deployment guide. |
| Telegram bridge | Supported | Requires host-side bridge setup after sandbox creation. |
| Sandbox hardening profiles | Supported | Available through the documented hardening guidance and policy controls. |

## Version and Environment Requirements

The following runtime requirements apply across the supported paths above.

| Dependency | Requirement |
|---|---|
| Linux | Ubuntu 22.04 LTS or later |
| Node.js | 22.16 or later |
| npm | 10 or later |
| OpenShell | Installed before use |
| RAM | 8 GB minimum, 16 GB recommended |
| Disk | 20 GB free minimum, 40 GB recommended |

If your platform or runtime falls outside this matrix, expect partial support, experimental behavior, or onboarding failures.
If a path is marked experimental, treat it as subject to change without compatibility guarantees.

## Next Steps

- Use the [Quickstart](../get-started/quickstart.md) to install NemoClaw on a supported platform.
- Use [Inference Options](../inference/inference-options.md) to compare provider-specific behavior and validation.
- Use [Deploy to a Remote GPU Instance](../deployment/deploy-to-remote-gpu.md) for persistent remote deployment.
- Use [Troubleshooting](../reference/troubleshooting.md) if your environment does not match the supported matrix.