diff --git a/.agents/skills/nemoclaw-reference/SKILL.md b/.agents/skills/nemoclaw-reference/SKILL.md index 07a828cd4..00bb1e3ac 100644 --- a/.agents/skills/nemoclaw-reference/SKILL.md +++ b/.agents/skills/nemoclaw-reference/SKILL.md @@ -1,6 +1,6 @@ --- name: "nemoclaw-reference" -description: "Describes how NemoClaw combines a CLI plugin with a versioned blueprint to move OpenClaw into a controlled sandbox. Use when looking up NemoClaw architecture, plugin structure, or blueprint design. Lists all slash commands and standalone NemoClaw CLI commands. Use when looking up a command, checking command syntax, or browsing the CLI reference. Documents baseline network policy, filesystem rules, and operator approval flow. Use when reviewing default network policies, understanding egress controls, or looking up the approval flow. Diagnoses and resolves common NemoClaw installation, onboarding, and runtime issues. Use when troubleshooting errors, debugging sandbox problems, or resolving setup failures." +description: "Describes how NemoClaw combines a CLI plugin with a versioned blueprint to move OpenClaw into a controlled sandbox. Use when looking up NemoClaw architecture, plugin structure, or blueprint design. Lists all slash commands and standalone NemoClaw CLI commands. Use when looking up a command, checking command syntax, or browsing the CLI reference. Documents configuration options for NemoClaw routed inference providers. Use when configuring inference profiles, looking up provider routing settings, or reviewing available LLM providers. Documents baseline network policy, filesystem rules, and operator approval flow. Use when reviewing default network policies, understanding egress controls, or looking up the approval flow. Use when checking whether a platform, runtime, inference provider, or deployment path is currently supported by NemoClaw. Diagnoses and resolves common NemoClaw installation, onboarding, and runtime issues. 
Use when troubleshooting errors, debugging sandbox problems, or resolving setup failures." --- # NemoClaw Reference @@ -11,5 +11,7 @@ Describes how NemoClaw combines a CLI plugin with a versioned blueprint to move - [NemoClaw Architecture: Plugin, Blueprint, and Sandbox Structure](references/architecture.md) - [NemoClaw CLI Commands Reference](references/commands.md) +- [NemoClaw Inference Profiles](references/inference-profiles.md) - [NemoClaw Network Policies: Baseline Rules and Operator Approval](references/network-policies.md) +- [NemoClaw Support Matrix](references/support-matrix.md) - [NemoClaw Troubleshooting Guide](references/troubleshooting.md) diff --git a/.agents/skills/nemoclaw-reference/references/support-matrix.md b/.agents/skills/nemoclaw-reference/references/support-matrix.md new file mode 100644 index 000000000..acd7c7439 --- /dev/null +++ b/.agents/skills/nemoclaw-reference/references/support-matrix.md @@ -0,0 +1,67 @@ +# Support Matrix + +Use this page to check the current support status for host platforms, container runtimes, inference providers, and deployment paths. +This page consolidates the compatibility details that are otherwise spread across the quickstart, inference, deployment, and security docs. + +## Host Platforms and Container Runtimes + +The following table summarizes the current host platform and runtime combinations for the standard NemoClaw install and onboard flow. + +| Host platform | Container runtime | Status | Notes | +|---|---|---|---| +| Linux | Docker | Supported | Primary supported path for local and remote installs. | +| macOS (Apple Silicon) | Colima | Supported | Install Xcode Command Line Tools and start Colima before running the installer. | +| macOS (Apple Silicon) | Docker Desktop | Supported | Start Docker Desktop before running the installer. | +| macOS (Intel) | Podman | Not supported | Depends on OpenShell support for Podman on macOS. 
| Windows WSL | Docker Desktop with WSL backend | Supported | Standard target path for WSL-based installs. | +| DGX Spark | Docker | Supported with additional setup | Follow the DGX Spark setup guide for cgroup v2 and Docker configuration. | + +## Inference Provider Support + +The following provider paths are available in the current release. + +| Provider path | Status | Notes | +|---|---|---| +| NVIDIA Endpoints | Supported | Uses hosted models on `integrate.api.nvidia.com`. | +| OpenAI | Supported | Uses native OpenAI-compatible model IDs. | +| Other OpenAI-compatible endpoint | Supported | For compatible proxies and gateways. | +| Anthropic | Supported | Uses the `anthropic-messages` provider flow. | +| Other Anthropic-compatible endpoint | Supported | For Claude-compatible proxies and gateways. | +| Google Gemini | Supported | Uses Google's OpenAI-compatible endpoint. | +| Local Ollama | Supported | Available in the standard onboarding flow when Ollama is installed or already running on the host. | +| Local NVIDIA NIM | Experimental | Requires `NEMOCLAW_EXPERIMENTAL=1` and a NIM-capable GPU. | +| Local vLLM | Experimental | Requires `NEMOCLAW_EXPERIMENTAL=1` and an existing `localhost:8000` service. | + +## Deployment Paths + +The following deployment paths are documented today. + +| Deployment path | Status | Notes | +|---|---|---| +| Local host install | Supported | Standard `curl \| bash` install path. | +| Remote GPU instance | Supported | Follow the remote GPU deployment guide. | +| Telegram bridge | Supported | Requires host-side bridge setup after sandbox creation. | +| Sandbox hardening profiles | Supported | Available through the documented hardening guidance and policy controls. | + +## Version and Environment Requirements + +The following runtime requirements apply across the supported paths above.
+ +| Dependency | Requirement | +|---|---| +| Linux | Ubuntu 22.04 LTS or later | +| Node.js | 22.16 or later | +| npm | 10 or later | +| OpenShell | Installed before use | +| RAM | 8 GB minimum, 16 GB recommended | +| Disk | 20 GB free minimum, 40 GB recommended | + +If your platform or runtime falls outside this matrix, expect partial support, experimental behavior, or onboarding failures. +If a path is marked experimental, treat it as subject to change without compatibility guarantees. + +## Next Steps + +- Use the Quickstart (see the `nemoclaw-get-started` skill) to install NemoClaw on a supported platform. +- Use Inference Profiles (see the `nemoclaw-reference` skill) to compare provider-specific behavior and validation. +- Use Deploy to a Remote GPU Instance (see the `nemoclaw-deploy-remote` skill) for persistent remote deployment. +- Use Troubleshooting (see the `nemoclaw-reference` skill) if your environment does not match the supported matrix. diff --git a/README.md b/README.md index 9be810f9b..d7247a026 100644 --- a/README.md +++ b/README.md @@ -21,6 +21,7 @@ It installs the [NVIDIA OpenShell](https://github.com/NVIDIA/OpenShell) runtime, > We welcome issues and discussion from the community while the project evolves. NemoClaw adds guided onboarding, a hardened blueprint, state management, OpenShell-managed channel messaging, routed inference, and layered protection on top of the [NVIDIA OpenShell](https://github.com/NVIDIA/OpenShell) runtime. For the full feature list, refer to [Overview](https://docs.nvidia.com/nemoclaw/latest/about/overview.html). For the system diagram, component model, and blueprint lifecycle, refer to [How It Works](https://docs.nvidia.com/nemoclaw/latest/about/how-it-works.html) and [Architecture](https://docs.nvidia.com/nemoclaw/latest/reference/architecture.html). 
+For the current compatibility status across platforms, runtimes, providers, and deployment paths, refer to the [Support Matrix](https://docs.nvidia.com/nemoclaw/latest/reference/support-matrix.html). ## Getting Started @@ -145,6 +146,7 @@ Refer to the following pages on the official documentation website for more info | [Overview](https://docs.nvidia.com/nemoclaw/latest/about/overview.html) | What NemoClaw does and how it fits together. | | [How It Works](https://docs.nvidia.com/nemoclaw/latest/about/how-it-works.html) | Plugin, blueprint, sandbox lifecycle, and protection layers. | | [Architecture](https://docs.nvidia.com/nemoclaw/latest/reference/architecture.html) | Plugin structure, blueprint lifecycle, sandbox environment, and host-side state. | +| [Support Matrix](https://docs.nvidia.com/nemoclaw/latest/reference/support-matrix.html) | Current platform, runtime, provider, and deployment support status. | | [Inference Options](https://docs.nvidia.com/nemoclaw/latest/inference/inference-options.html) | Supported providers, validation, and routed inference configuration. | | [Network Policies](https://docs.nvidia.com/nemoclaw/latest/reference/network-policies.html) | Baseline rules, operator approval flow, and egress control. | | [Customize Network Policy](https://docs.nvidia.com/nemoclaw/latest/network-policy/customize-network-policy.html) | Static and dynamic policy changes, presets. | diff --git a/docs/index.md b/docs/index.md index 5d6adc91d..d98c5be8a 100644 --- a/docs/index.md +++ b/docs/index.md @@ -166,6 +166,16 @@ Plugin structure, blueprint system, and sandbox lifecycle. {bdg-secondary}`Reference` ::: +:::{grid-item-card} Support Matrix +:link: reference/support-matrix +:link-type: doc + +Current platform, runtime, provider, and deployment support status. 
+ ++++ +{bdg-secondary}`Reference` +::: + :::{grid-item-card} Network Policies :link: reference/network-policies :link-type: doc @@ -292,6 +302,7 @@ Back Up and Restore :hidden: Architecture +Support Matrix Commands Network Policies Troubleshooting diff --git a/docs/reference/support-matrix.md b/docs/reference/support-matrix.md new file mode 100644 index 000000000..4baca6383 --- /dev/null +++ b/docs/reference/support-matrix.md @@ -0,0 +1,89 @@ +--- +title: + page: "NemoClaw Support Matrix" + nav: "Support Matrix" +description: + main: "Current platform, runtime, provider, and deployment support status for NemoClaw." + agent: "Use when checking whether a platform, runtime, inference provider, or deployment path is currently supported by NemoClaw." +keywords: ["nemoclaw support matrix", "nemoclaw compatibility", "supported platforms", "supported providers"] +topics: ["generative_ai", "ai_agents"] +tags: ["openclaw", "openshell", "compatibility", "platforms", "inference_routing"] +content: + type: reference + difficulty: technical_beginner + audience: ["developer", "engineer"] +status: published +--- + + + +# Support Matrix + +Use this page to check the current support status for host platforms, container runtimes, inference providers, and deployment paths. +This page pulls together compatibility details from the quickstart, inference, deployment, and security docs. + +## Host Platforms and Container Runtimes + +The following table summarizes the current host platform and runtime combinations for the standard NemoClaw install and onboard flow. + +| Host platform | Container runtime | Status | Notes | +|---|---|---|---| +| Linux | Docker | Supported | Primary supported path for local and remote installs. | +| macOS (Apple Silicon) | Colima | Supported | Install Xcode Command Line Tools and start Colima before running the installer. | +| macOS (Apple Silicon) | Docker Desktop | Supported | Start Docker Desktop before running the installer. 
| +| macOS (Intel) | Podman | Not supported | Depends on OpenShell support for Podman on macOS. | +| Windows WSL | Docker Desktop with WSL backend | Supported | Standard target path for WSL-based installs. | +| DGX Spark | Docker | Supported with additional setup | Follow the DGX Spark setup guide for cgroup v2 and Docker configuration. | + +## Inference Provider Support + +The following provider paths are available in the current release. + +| Provider path | Status | Notes | +|---|---|---| +| NVIDIA Endpoints | Supported | Uses hosted models on `integrate.api.nvidia.com`. | +| OpenAI | Supported | Uses native OpenAI-compatible model IDs. | +| Other OpenAI-compatible endpoint | Supported | For compatible proxies and gateways. | +| Anthropic | Supported | Uses the `anthropic-messages` provider flow. | +| Other Anthropic-compatible endpoint | Supported | For Claude-compatible proxies and gateways. | +| Google Gemini | Supported | Uses Google's OpenAI-compatible endpoint. | +| Local Ollama | Supported | Available in the standard onboarding flow when Ollama is installed or already running on the host. | +| Local NVIDIA NIM | Experimental | Requires `NEMOCLAW_EXPERIMENTAL=1` and a NIM-capable GPU. | +| Local vLLM | Experimental | Requires `NEMOCLAW_EXPERIMENTAL=1` and an existing `localhost:8000` service. | + +## Deployment Paths + +The following deployment paths are documented today. + +| Deployment path | Status | Notes | +|---|---|---| +| Local host install | Supported | Standard `curl \| bash` install path. | +| Remote GPU instance | Supported | Follow the remote GPU deployment guide. | +| Telegram bridge | Supported | Requires host-side bridge setup after sandbox creation. | +| Sandbox hardening profiles | Supported | Available through the documented hardening guidance and policy controls. | + +## Version and Environment Requirements + +The following runtime requirements apply across the supported paths above.
+ +| Dependency | Requirement | +|---|---| +| Linux | Ubuntu 22.04 LTS or later | +| Node.js | 22.16 or later | +| npm | 10 or later | +| OpenShell | Installed before use | +| RAM | 8 GB minimum, 16 GB recommended | +| Disk | 20 GB free minimum, 40 GB recommended | + +If your platform or runtime falls outside this matrix, expect partial support, experimental behavior, or onboarding failures. +If a path is marked experimental, treat it as subject to change without compatibility guarantees. + +## Next Steps + +- Use the [Quickstart](../get-started/quickstart.md) to install NemoClaw on a supported platform. +- Use [Inference Options](../inference/inference-options.md) to compare provider-specific behavior and validation. +- Use [Deploy to a Remote GPU Instance](../deployment/deploy-to-remote-gpu.md) for persistent remote deployment. +- Use [Troubleshooting](../reference/troubleshooting.md) if your environment does not match the supported matrix.
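
The version floors in the requirements table (Node.js 22.16+, npm 10+, a running container runtime) can be checked before install with a short preflight sketch. This is illustrative only: the `need` helper and its output format are hypothetical and not part of NemoClaw, and the installer performs its own checks.

```shell
#!/usr/bin/env sh
# Preflight sketch against the documented version floors.
# The `need` helper is a hypothetical example, not shipped by NemoClaw.

need() {
  # usage: need <command> <min-major> [<min-minor>]
  cmd=$1; min_major=$2; min_minor=${3:-0}
  command -v "$cmd" >/dev/null 2>&1 || { echo "$cmd: not found"; return 1; }
  ver=$("$cmd" --version | tr -d 'v')   # node prints v22.16.0, npm prints 10.9.0
  major=${ver%%.*}
  rest=${ver#*.}; minor=${rest%%.*}
  if [ "$major" -gt "$min_major" ] || { [ "$major" -eq "$min_major" ] && [ "$minor" -ge "$min_minor" ]; }; then
    echo "$cmd $ver: ok"
  else
    echo "$cmd $ver: below required $min_major.$min_minor"
    return 1
  fi
}

status=0
need node 22 16 || status=1
need npm 10 || status=1
# A container runtime must also be available (Docker, Colima, or Docker Desktop).
command -v docker >/dev/null 2>&1 || { echo "docker: not found"; status=1; }
echo "preflight status: $status"
```

A nonzero status means at least one requirement is unmet; note that the experimental provider paths additionally require `NEMOCLAW_EXPERIMENTAL=1` in the environment.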