docs: add support matrix reference page #1415
@@ -0,0 +1,67 @@
# Support Matrix

Use this page to check the current support status for host platforms, container runtimes, inference providers, and deployment paths.
This page consolidates the compatibility details that are otherwise spread across the quickstart, inference, deployment, and security docs.

## Host Platforms and Container Runtimes

The following table summarizes the current host platform and runtime combinations for the standard NemoClaw install and onboard flow.

| Host platform | Container runtime | Status | Notes |
|---|---|---|---|
| Linux | Docker | Supported | Primary supported path for local and remote installs. |
| macOS (Apple Silicon) | Colima | Supported | Install Xcode Command Line Tools and start Colima before running the installer. |
| macOS (Apple Silicon) | Docker Desktop | Supported | Start Docker Desktop before running the installer. |
| macOS (Intel) | Podman | Not supported | Depends on OpenShell support for Podman on macOS. |
| Windows WSL | Docker Desktop with WSL backend | Supported | Supported target path for WSL-based installs. |
| DGX Spark | Docker | Supported with additional setup | Follow the DGX Spark setup guide for cgroup v2 and Docker configuration. |
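Before running the installer, it helps to confirm that the runtime for your row in the table is actually reachable. A minimal sketch using only the standard `docker` CLI (for Colima, check `colima status` instead; the exact NemoClaw installer behavior on a missing runtime is not shown here):

```shell
# Check whether a Docker-compatible CLI is present and its daemon responds.
# "docker info" exits non-zero when the daemon is not running.
if command -v docker >/dev/null 2>&1; then
  if docker info >/dev/null 2>&1; then
    echo "runtime check: docker daemon reachable"
  else
    echo "runtime check: docker CLI found, daemon not running"
  fi
else
  echo "runtime check: no docker CLI on PATH"
fi
```

Any of the three messages is possible; only the first indicates an install-ready host.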
|
Contributor
There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. 🧩 Analysis chain🏁 Script executed: #!/bin/bash
# Compare DGX Spark references in README and docs
rg -n -C 3 -i 'dgx spark' README.md docs/Repository: NVIDIA/NemoClaw Length of output: 4232 Update DGX Spark guidance in support matrix to align with documentation. The support-matrix.md entry references "Follow the DGX Spark setup guide for cgroup v2 and Docker configuration," but this contradicts guidance elsewhere in the codebase. The README, quickstart, commands reference, and troubleshooting docs consistently state to "use the standard installer and 🤖 Prompt for AI Agents |
## Inference Provider Support

The following provider paths are available in the current product surface.

| Provider path | Status | Notes |
|---|---|---|
| NVIDIA Endpoints | Supported | Uses hosted models on `integrate.api.nvidia.com`. |
| OpenAI | Supported | Uses native OpenAI-compatible model IDs. |
| Other OpenAI-compatible endpoint | Supported | For compatible proxies and gateways. |
| Anthropic | Supported | Uses the `anthropic-messages` provider flow. |
| Other Anthropic-compatible endpoint | Supported | For Claude-compatible proxies and gateways. |
| Google Gemini | Supported | Uses Google's OpenAI-compatible endpoint. |
| Local Ollama | Supported | Available in the standard onboarding flow when Ollama is installed or already running on the host. |
| Local NVIDIA NIM | Experimental | Requires `NEMOCLAW_EXPERIMENTAL=1` and a NIM-capable GPU. |
| Local vLLM | Experimental | Requires `NEMOCLAW_EXPERIMENTAL=1` and an existing `localhost:8000` service. |
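The two experimental rows share the same opt-in switch. A minimal sketch of the environment setup, where `NEMOCLAW_EXPERIMENTAL` comes from the table above and everything else is illustrative (the exact onboarding command is not shown here):

```shell
# Opt in to the experimental local providers (NIM, vLLM) for this shell session.
export NEMOCLAW_EXPERIMENTAL=1

# The variable must be visible to whichever process runs onboarding,
# so export it in the same shell before launching that process.
echo "NEMOCLAW_EXPERIMENTAL=${NEMOCLAW_EXPERIMENTAL}"
```

Unset the variable (or start a fresh shell) to return to the supported-only provider list.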
## Deployment Paths

The following deployment paths are documented today.

| Deployment path | Status | Notes |
|---|---|---|
| Local host install | Supported | Standard `curl \| bash` install path. |
| Remote GPU instance | Supported | Follow the remote GPU deployment guide. |
| Telegram bridge | Supported | Requires host-side bridge setup after sandbox creation. |
| Sandbox hardening profiles | Supported | Available through the documented hardening guidance and policy controls. |
## Version and Environment Requirements

The following runtime requirements apply across the supported paths above.

| Dependency | Requirement |
|---|---|
| Linux | Ubuntu 22.04 LTS or later |
| Node.js | 22.16 or later |
| npm | 10 or later |
| OpenShell | Installed before use |
| RAM | 8 GB minimum, 16 GB recommended |
| Disk | 20 GB free minimum, 40 GB recommended |
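The Node.js and npm rows can be checked with a version-order comparison. This is a generic sketch (GNU `sort` with `-V` support assumed), not a NemoClaw command; in practice you would feed in the output of `node --version` and `npm --version`:

```shell
# meets_min CURRENT MINIMUM -> succeeds when CURRENT >= MINIMUM in version order.
# "sort -V -C" exits 0 only if its input is already version-sorted.
meets_min() {
  printf '%s\n%s\n' "$2" "$1" | sort -V -C
}

# Example values standing in for detected versions.
if meets_min "22.16.0" "22.16"; then
  echo "Node 22.16.0 meets the 22.16 minimum"
fi
if ! meets_min "9.9.9" "10"; then
  echo "npm 9.9.9 is below the 10 minimum"
fi
```

Version-order comparison matters here because a plain string comparison would wrongly rank `9.9.9` above `10`.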
If your platform or runtime falls outside this matrix, expect partial support, experimental behavior, or onboarding failures.
If a path is marked experimental, treat it as subject to change without compatibility guarantees.

## Next Steps

- Use the Quickstart (see the `nemoclaw-get-started` skill) to install NemoClaw on a supported platform.
- Use Inference Profiles (see the `nemoclaw-reference` skill) to compare provider-specific behavior and validation.
- Use Deploy to a Remote GPU Instance (see the `nemoclaw-deploy-remote` skill) for persistent remote deployment.
- Use Troubleshooting (see the `nemoclaw-reference` skill) if your environment does not match the supported matrix.
@@ -0,0 +1,89 @@
---
title:
  page: "NemoClaw Support Matrix"
  nav: "Support Matrix"
description:
  main: "Current platform, runtime, provider, and deployment support status for NemoClaw."
  agent: "Use when checking whether a platform, runtime, inference provider, or deployment path is currently supported by NemoClaw."
keywords: ["nemoclaw support matrix", "nemoclaw compatibility", "supported platforms", "supported providers"]
topics: ["generative_ai", "ai_agents"]
tags: ["openclaw", "openshell", "compatibility", "platforms", "inference_routing"]
content:
  type: reference
  difficulty: technical_beginner
  audience: ["developer", "engineer"]
  status: published
---
<!--
SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0
-->

# Support Matrix

Use this page to check the current support status for host platforms, container runtimes, inference providers, and deployment paths.
This page pulls together compatibility details from the quickstart, inference, deployment, and security docs.

## Host Platforms and Container Runtimes

The following table summarizes the current host platform and runtime combinations for the standard NemoClaw install and onboard flow.

| Host platform | Container runtime | Status | Notes |
|---|---|---|---|
| Linux | Docker | Supported | Primary supported path for local and remote installs. |
| macOS (Apple Silicon) | Colima | Supported | Install Xcode Command Line Tools and start Colima before running the installer. |
| macOS (Apple Silicon) | Docker Desktop | Supported | Start Docker Desktop before running the installer. |
| macOS (Intel) | Podman | Not supported | Depends on OpenShell support for Podman on macOS. |
| Windows WSL | Docker Desktop with WSL backend | Supported | Supported target path for WSL-based installs. |
| DGX Spark | Docker | Supported with additional setup | Follow the DGX Spark setup guide for cgroup v2 and Docker configuration. |

> **Contributor review comment (on lines +32 to +39)**
>
> Add missing platform and runtime combinations. The matrix is missing several platform/runtime entries mentioned in the PR review feedback. Compare against the current README to ensure all documented combinations are included for completeness.

> **Contributor review comment**
>
> Align support matrix status labels with PR #1413. The current support matrix uses status values that differ from PR #1413 …

> **Contributor review comment**
>
> Update DGX Spark setup guidance to match current documentation. The note in support-matrix.md (line 39) says "Follow the DGX Spark setup guide for cgroup v2 and Docker configuration," but this contradicts the guidance in README.md, quickstart.md, and commands.md. The troubleshooting guide confirms that the older cgroup workaround is no longer needed: "Current OpenShell releases handle that behavior themselves, so NemoClaw no longer requires a Spark-specific setup step." Change the note to: "Use the standard installer and …"
## Inference Provider Support

The following provider paths are available in the current product surface.

| Provider path | Status | Notes |
|---|---|---|
| NVIDIA Endpoints | Supported | Uses hosted models on `integrate.api.nvidia.com`. |
| OpenAI | Supported | Uses native OpenAI-compatible model IDs. |
| Other OpenAI-compatible endpoint | Supported | For compatible proxies and gateways. |
| Anthropic | Supported | Uses the `anthropic-messages` provider flow. |
| Other Anthropic-compatible endpoint | Supported | For Claude-compatible proxies and gateways. |
| Google Gemini | Supported | Uses Google's OpenAI-compatible endpoint. |
| Local Ollama | Supported | Available in the standard onboarding flow when Ollama is installed or already running on the host. |
| Local NVIDIA NIM | Experimental | Requires `NEMOCLAW_EXPERIMENTAL=1` and a NIM-capable GPU. |
| Local vLLM | Experimental | Requires `NEMOCLAW_EXPERIMENTAL=1` and an existing `localhost:8000` service. |
## Deployment Paths

The docs cover the following deployment paths.

| Deployment path | Status | Notes |
|---|---|---|
| Local host install | Supported | Standard `curl \| bash` install path. |
| Remote GPU instance | Supported | Follow the remote GPU deployment guide. |
| Telegram bridge | Supported | Requires host-side bridge setup after sandbox creation. |
| Sandbox hardening profiles | Supported | Available through the documented hardening guidance and policy controls. |
## Version and Environment Requirements

The following runtime requirements apply across the supported paths above.

| Dependency | Requirement |
|---|---|
| Linux | Ubuntu 22.04 LTS or later |
| Node.js | 22.16 or later |
| npm | 10 or later |
| OpenShell | Installed before use |
| RAM | 8 GB minimum, 16 GB recommended |
| Disk | 20 GB free minimum, 40 GB recommended |
If your platform or runtime falls outside this matrix, expect partial support, experimental behavior, or onboarding failures.
If a path is marked experimental, treat it as subject to change without compatibility guarantees.

## Next Steps

- Use the [Quickstart](../get-started/quickstart.md) to install NemoClaw on a supported platform.
- Use [Inference Options](../inference/inference-options.md) to compare provider-specific behavior and validation.
- Use [Deploy to a Remote GPU Instance](../deployment/deploy-to-remote-gpu.md) for persistent remote deployment.
- Use [Troubleshooting](../reference/troubleshooting.md) if your environment does not match the supported matrix.

> **Contributor review comment**
>
> Add missing platform and runtime combinations. The matrix is missing several platform/runtime entries mentioned in the PR review feedback. Currently only macOS (Intel) + Podman is listed. Compare against the current README to ensure all documented combinations are included.
>
> Note: fix this in the source file `docs/reference/support-matrix.md` first, then regenerate this skill file using `python scripts/docs-to-skills.py`. As per coding guidelines: edit documentation under the `docs/` directory (never `.agents/skills/nemoclaw-*/*.md`) and regenerate skills with `python scripts/docs-to-skills.py`.