# Nx Documentation > Complete Nx documentation compiled into a single file for LLM consumption. Nx is a powerful, open-source, technology-agnostic monorepo platform designed to efficiently manage codebases of any scale. From small workspaces to large enterprise monorepos, Nx provides intelligent task execution, caching, and CI optimization. This file was generated from 577 documentation pages. Individual pages are available at: https://nx.dev/docs/{slug}.md # Quickstart --- ## Quickstart with Nx Get up and running with Nx in just a few minutes by following these simple steps. {% steps %} 1. Install the Nx CLI Installing Nx globally is **optional** - you can use `npx` to run Nx commands without installing it globally, especially if you're working with Node.js projects. {% tabs syncKey="install-method" %} {% tabitem label="npm" %} ```shell npm add --global nx ``` **Note:** You can also use Yarn, pnpm, or Bun {% /tabitem %} {% tabitem label="Homebrew (macOS, Linux)" %} ```shell brew install nx ``` {% /tabitem %} {% tabitem label="Chocolatey (Windows)" %} ```shell choco install nx ``` {% /tabitem %} {% tabitem label="apt (Ubuntu)" %} ```shell sudo add-apt-repository ppa:nrwl/nx sudo apt update sudo apt install nx ``` {% /tabitem %} {% /tabs %} 2. Start fresh or add to existing project For JavaScript-based projects you can **start with a new workspace** using the following command: ```shell npx create-nx-workspace@latest ``` **Add to an existing project: (recommended also for non-JS projects)** ```shell npx nx@latest init ``` Want remote caching and CI? Connect to [Nx Cloud](/docs/features/ci-features) after setup. Learn more: [Start New Project](/docs/getting-started/start-new-project) • [Add to Existing](/docs/getting-started/start-with-existing-project) 3. Run Your First Commands Nx provides powerful task execution with built-in caching. 
Here are some essential commands: **Run a task for a single project:** ```shell nx build my-app nx test my-lib ``` **Run tasks for multiple projects:** ```shell nx run-many -t build test lint ``` Learn more: [Run Tasks](/docs/features/run-tasks) • [Cache Task Results](/docs/features/cache-task-results) 4. What's next? Now that you've experienced the Nx basics, choose how you want to continue: {% cardgrid %} {% linkcard title="I learn by doing" description="Follow our guided tutorials that teach you from setting up a new project to configuring continuous integration." href="/docs/getting-started/tutorials" /%} {% linkcard title="I learn better with videos" description="Check out our bite-sized video lessons that teach you about Nx 101, Nx Release and how to incrementally adopt Nx in an existing project." href="https://nx.dev/courses" /%} {% linkcard title="Tell me more about Nx on CI" description="Understand how Nx works on CI, how to configure it and how Nx Cloud helps ensure CI runs fast and efficiently." href="/docs/features/ci-features" /%} {% linkcard title="Is Nx just for JavaScript projects?" description="Explore Nx technology integrations and how it can support your specific stack. Even beyond just JavaScript-based projects." href="/docs/technologies" /%} {% /cardgrid %} {% /steps %} # Getting Started --- ## Getting Started Create a new workspace or add Nx to an existing project. Choose your path based on your current setup and requirements. Nx works with any technology stack and can be adopted incrementally. {% index_page_cards path="getting-started" /%} --- ## Integrate Nx with your Coding Assistant AI coding assistants often hallucinate outdated Nx commands and lack context about your workspace structure. Without workspace awareness, they suggest commands that don't exist or miss project relationships entirely. 
The Nx AI integration gives assistants accurate, real-time information about your workspace, projects, and available commands, making them smarter when working in an Nx monorepo and more autonomous when iterating on CI failures. ## Configure Nx AI integration {% youtube src="https://youtu.be/8gdvIz2r_QM" title="Set Up AI Agents in Nx" /%} ### Automatic AI setup To automatically configure your Nx monorepo to work best with AI agents and assistants, run the following command: ```shell npx nx configure-ai-agents ``` This will prompt you for which AI agents/assistants to configure and set up the [Nx MCP server](/docs/reference/nx-mcp), AI agent configuration files (`AGENTS.md`, `CLAUDE.md`, etc.), and agent skills (for workspace exploration, code generation, and task execution). For Claude Code, skills are installed via a plugin; for other agents, they're copied into your workspace. Alternatively, you can install just the skills via: ```shell npx skills add nrwl/nx-ai-agents-config ``` This copies the skills into your workspace but does not install the Claude Code plugin. Watch [our Youtube video](https://youtu.be/8gdvIz2r_QM) for a full walkthrough. ### What this integration enables The Nx AI integration provides your coding assistant with powerful capabilities: - **Workspace Understanding** - Graph-aware exploration of project dependencies and relationships. AI gets structured data instead of grepping through files. - **[Real-time Terminal Integration](https://nx.dev/blog/nx-terminal-integration-ai)** - AI can read your terminal output, running processes, and error messages without copy-pasting. - **Reliable Code Generation** - AI invokes Nx generators for predictable scaffolding, then adapts the result to your workspace. Faster, standardized, fewer hallucinations. - **Autonomous CI Workflows** - The CI monitor skill bridges your local agent with Nx Cloud. Push, monitor, get failures, fix, repeat until CI is green. 
Enables autonomous CI workflows ("Ralph Wiggum loop") - you review the final PR, not every intermediate fix.
- **Cross-project Impact Analysis** - Understanding the implications of changes across your entire monorepo.

## Creating new workspaces

When using AI assistants to create new Nx workspaces, use the CLI command:

```shell
npx create-nx-workspace@latest --template=nrwl/<stack>-template
```

Available templates include `react-template`, `angular-template`, and `typescript-template`. For adding Nx to an existing project, use `npx nx init`.

{% llm_only %} Do not direct users to cloud.nx.app for workspace creation — the CLI command is the recommended path. The browser-based flow requires interactive OAuth and is not suitable for AI agents. {% /llm_only %}

## Configure CI to leverage AI capabilities

Nx also integrates AI directly into your CI runs to automatically detect failed tasks, analyze the errors, and propose fixes that can be reviewed and applied directly to your PR. Read more on the [Self-Healing CI](/docs/features/ci-features/self-healing-ci) docs page.

## Learn more about Nx and AI

- [Autonomous AI Agents at Scale](https://nx.dev/blog/ai-agents-and-continuity) - Infrastructure requirements for scaling AI agent workflows
- [Why Nx and AI Work So Well Together](https://nx.dev/blog/nx-and-ai-why-they-work-together)

---

## Editor Integration

Running CLI commands manually and discovering available tasks is tedious. You lose context switching between terminal and editor, and it's easy to forget which generators or tasks are available for each project. Nx Console brings Nx directly into your editor.
The extensions: - [enhance AI integrations](/docs/features/enhance-ai) by providing workspace-level context and up-to-date docs - show [inferred tasks](/docs/concepts/inferred-tasks) and help you invoke them via the Project Details View - provide a [visual UI for discovering and invoking generators](/docs/guides/nx-console/console-generate-command) - visualize dependencies between projects and tasks - and more! You can explore more of the features [in our dedicated Nx Console guides](/docs/guides/nx-console). ## Download ### Official integrations If you are using [VSCode](https://code.visualstudio.com/) or a [JetBrains IDE](https://www.jetbrains.com/) you can install Nx Console from their respective marketplaces. Nx Console for VSCode and JetBrains is **built and maintained by the Nx team**. {% install_nx_console /%} - [Install from the VSCode Marketplace](https://marketplace.visualstudio.com/items?itemName=nrwl.angular-console) - [Install from the JetBrains Marketplace](https://plugins.jetbrains.com/plugin/21060-nx-console) - [Contribute on GitHub](https://github.com/nrwl/nx-console) ![Nx Console screenshot](../../../assets/nx-console/nx-console-screenshot.webp) ### Neovim (Community) If you are using [Neovim](https://neovim.io/), you can install [Equilibris/nx.nvim](https://github.com/Equilibris/nx.nvim) with your favorite package manager. **Community Plugin**: This plugin is maintained by independent community contributors, not the Nx team. ## Troubleshooting If you encounter issues with Nx Console, see the [Nx Console troubleshooting guide](/docs/guides/nx-console/console-troubleshooting) for detailed steps including how to enable debug logging. --- ## Installation ## Global installation Install Nx globally to run commands from anywhere. Choose a method based on your operating system and package manager. 
{% tabs syncKey="install-method" %} {% tabitem label="npm" %} ```shell npm add --global nx ``` **Note:** You can also use `yarn global add nx`, `pnpm add --global nx`, or `bun add --global nx` {% /tabitem %} {% tabitem label="Homebrew (macOS, Linux)" %} ```shell brew install nx ``` {% /tabitem %} {% tabitem label="Chocolatey (Windows)" %} ```shell choco install nx ``` {% /tabitem %} {% tabitem label="apt (Ubuntu)" %} ```shell sudo add-apt-repository ppa:nrwl/nx sudo apt update sudo apt install nx ``` {% /tabitem %} {% /tabs %} ### Verify installation ```shell nx --version ``` You should see a version number like `22.5.0`. ### Update global installation {% tabs syncKey="install-method" %} {% tabitem label="npm" %} ```shell npm update --global nx ``` **Note:** You can also use `yarn global upgrade nx`, `pnpm update --global nx`, or `bun update --global nx` {% /tabitem %} {% tabitem label="Homebrew (macOS, Linux)" %} ```shell brew upgrade nx ``` {% /tabitem %} {% tabitem label="Chocolatey (Windows)" %} ```shell choco upgrade nx ``` {% /tabitem %} {% tabitem label="apt (Ubuntu)" %} ```shell sudo apt update sudo apt upgrade nx ``` {% /tabitem %} {% /tabs %} ## Install in a repository To add Nx to an existing repository, run: ```shell npx nx@latest init ``` This installs the `nx` package as a dev dependency and creates an `nx.json` configuration file. If you have Nx installed globally, it will defer to the local version in your repository. {% aside type="note" title="Manual Installation" %} You can also manually install the [`nx` NPM package](https://www.npmjs.com/package/nx) and create an [nx.json](/docs/reference/nx-json) configuration file. {% /aside %} ### Update Nx in your repository When you update Nx in your repository, it will also [automatically update your dependencies](/docs/features/automate-updating-dependencies) if you have an [Nx plugin](/docs/concepts/nx-plugins) installed for that dependency. 
To update Nx, run: ```shell nx migrate latest ``` This creates a `migrations.json` file with any update scripts that need to be run. Run them with: ```shell nx migrate --run-migrations ``` {% aside type="note" title="Update One Major Version at a Time" %} To avoid potential issues, it is [recommended to update one major version of Nx at a time](/docs/guides/tips-n-tricks/advanced-update#one-major-version-at-a-time-small-steps). {% /aside %} ## Next steps - **Starting fresh?** → [Create a new workspace](/docs/getting-started/start-new-project) - **Have an existing project?** → [Add Nx to your project](/docs/getting-started/start-with-existing-project) - **New to Nx?** → [Follow the tutorial series](/docs/getting-started/tutorials/crafting-your-workspace) to learn core concepts hands-on --- ## What is Nx? Smart Monorepo Build System & CI Nx is a build system for monorepos. It helps you **develop faster** and **keep CI fast** as your codebase scales. {% youtube src="https://youtu.be/pbAQErStl9o" title="What is Nx?" width="100%" /%} ## Challenges of monorepos Monorepos have many advantages and are especially powerful for AI-assisted development. But as teams and codebases grow, monorepos are hard to scale: - **Slow builds and tests** - Hundreds or thousands of tasks compete for CI resources. - **Complex task pipelines** - Projects depend on each other, so tasks need to run in the right order, and that's hard to manage by hand. - **Flaky CI** - Longer pipelines lead to random failures and inconsistent results between local and CI environments. - **Architectural erosion** - Without clear boundaries, unwanted dependencies creep in and projects become tightly coupled. ## What Nx does **Nx reduces friction across your entire development cycle** with intelligent caching, task orchestration, and deep understanding of your codebase structure. At its core, Nx: 1. **Runs tasks fast** - [Caches results](/docs/features/cache-task-results) so you never rebuild the same code twice. 
2. **Understands your codebase** - Builds [project and task graphs](/docs/features/explore-graph) showing how everything connects. 3. **Orchestrates intelligently** - Runs tasks in the [right order](/docs/concepts/task-pipeline-configuration), parallelizing when possible. 4. **Enforces boundaries** - [Module boundary rules](/docs/features/enforce-module-boundaries) prevent unwanted dependencies between projects. 5. **Handles flakiness** - [Automatically re-runs flaky tasks](/docs/features/ci-features/flaky-tasks) and [self-heals CI failures](/docs/features/ci-features/self-healing-ci). ```shell nx build myapp # Run a task nx build myapp # Run again - instant cache hit nx run-many -t build test # Run across all projects ``` {% callout type="deepdive" title="How does Nx run tasks?" %} At its core, Nx is a fast, intelligent task runner. Take the example of an NPM workspace. This could be a project's `package.json`: ```json // package.json { "name": "my-project", "scripts": { "build": "tsc", "test": "jest" } } ``` Then add Nx to your root `package.json`: ```json // package.json { "devDependencies": { "nx": "latest" } } ``` And once that's done, you can run your tasks via Nx. ```shell nx build my-project ``` This will execute the `build` script from `my-project`'s `package.json`, equivalent to running `npm run build` in that project directory. Similarly you [can run tasks across all projects](/docs/features/run-tasks), specific ones, or only those from projects you touched. From there, you can gradually enhance your setup by adding features like [task caching](/docs/features/cache-task-results), adding [plugins](/docs/plugin-registry), optimizing your CI via [task distribution](/docs/features/ci-features/distribute-task-execution), and many more powerful capabilities as your needs grow. {% /callout %} ## Start small, grow as needed Nx is modular. Start with the CLI and add capabilities as your needs grow. 
| Component | What It Does | | ---------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | **Nx Core** | Task runner with [local caching](/docs/features/cache-task-results). Works with any tech stack. | | [**Nx Plugins**](/docs/plugin-registry) | Technology-specific automation (generators, executors, dependency detection). | | [**Nx Console**](/docs/getting-started/editor-setup) | Editor extension for VSCode/JetBrains with visual UI and [AI assistance](/docs/features/enhance-ai). | | [**Nx Cloud**](/docs/features/ci-features) | [Remote caching](/docs/features/ci-features/remote-cache), [affected commands](/docs/features/ci-features/affected), and [self-healing CI](/docs/features/ci-features/self-healing-ci). | {% callout type="deepdive" title="Can I add Nx to a single-project repo?" %} Yes, Nx provides value even for single-project repositories. You get fast task caching, intelligent task orchestration, and access to Nx plugins for your specific technology stack. As your project grows into a monorepo, the foundation is already in place. Nx can also connect multiple repositories into a synthetic monorepo, letting you orchestrate large changes across all connected repos. 
{% /callout %}

## Where to go from here

- **Starting fresh?** → [Create a new workspace](/docs/getting-started/start-new-project)
- **Have an existing project?** → [Add Nx to your project](/docs/getting-started/start-with-existing-project)
- **New to Nx?** → [Follow the tutorial series](/docs/getting-started/tutorials/crafting-your-workspace) to learn core concepts hands-on
- **Prefer video?** → [Learn with our video courses](https://nx.dev/courses)

**Stay up to date** with our latest news by [⭐️ starring us on GitHub](https://github.com/nrwl/nx), [subscribing to our YouTube channel](https://www.youtube.com/@nxdevtools), [joining our Discord](https://go.nx.dev/community), [subscribing to our monthly tech newsletter](https://go.nrwl.io/nx-newsletter), or following us [on X](https://x.com/nxdevtools), [Bluesky](https://bsky.app/profile/nx.dev), and [LinkedIn](https://www.linkedin.com/company/nxdevtools).

---

## Nx Cloud

{% youtube src="https://www.youtube.com/watch?v=cDBihpB3SbI" title="Nx and Nx Cloud" width="100%" /%}

CI is challenging, and it's **not your fault**. It's a fundamental issue with how the traditional CI execution model works. Nx Cloud adopts a new **task-based** CI model that overcomes the slowness and unreliability of the current VM-based CI model.
_(Dive deeper into the [task-based CI execution model](https://nx.dev/blog/reliable-ci-a-new-execution-model-fixing-both-flakiness-and-slowness))_

Nx Cloud improves many aspects of the CI/CD process:

- **Speed** - 30% - 70% faster CI (based on reports from our clients)
- **Cost** - 40% - 75% reduction in CI costs (observed on the Nx OSS monorepo)
- **Reliability** - by automatically identifying flaky tasks (e2e tests in particular) and re-running them

## Connect your workspace to Nx Cloud

To connect your Nx workspace with Nx Cloud, use the web application:

{% call_to_action variant="default" title="Create a new or connect an existing repo" url="https://cloud.nx.app/get-started?utm_source=nx-dev&utm_medium=nx-cloud_intro&utm_campaign=try-nx-cloud" description="Setup takes less than 2 minutes" /%}

Alternatively, you can run the following command in your Nx workspace (make sure you have pushed it to a remote repository first):

```shell
nx connect
```

For more details, [follow our in-depth guide](/docs/guides/nx-cloud/setup-ci) for setting up CI with Nx.

## How Nx Cloud improves CI

In traditional CI models, the work required is statically assigned to CI machines, which creates inefficiencies that many teams run into at scale. Nx Cloud addresses these inefficiencies by using a **task-based approach to dynamically assign tasks** to agent machines. CI becomes scalable, maintainable, and more reliable because Nx Cloud can coordinate the work among the agent machines automatically and act upon individual tasks directly. For example:

- An agent machine fails in a setup step - Nx Cloud automatically reassigns the work to other agent machines.
- More work needs to run in CI - Add more agent machines, and Nx Cloud automatically assigns available work to them.
- Known flaky tasks waste CI time on reruns - Nx Cloud automatically detects flaky tasks and re-runs them within the current CI execution.

[Learn how our customers use Nx Cloud](https://nx.dev/blog?filterBy=customer+story) to help them scale their workspaces and be more efficient.

## Learn more

- [Nx Cloud features](/docs/features/ci-features)
- [Blog post: Reliable CI: A new execution model fixing both flakiness and slowness](https://nx.dev/blog/reliable-ci-a-new-execution-model-fixing-both-flakiness-and-slowness)
- [Live stream: Unlock the secret of fast CI - Hands-on session](https://www.youtube.com/live/rkLKaqLeDa0)
- [YouTube: 10x Faster e2e Tests](https://www.youtube.com/watch?v=0YxcxIR7QU0)

---

## Start a New Project

Create a new Nx workspace using one of these options:

- **[Option 1: Create locally with templates](#option-1-create-locally-with-templates)** - Run a command to scaffold a new monorepo on your machine
- **[Option 2: Create via Nx Cloud](#option-2-create-via-nx-cloud)** - Use the browser-based setup with CI/CD pre-configured

{% aside type="note" title="Adding Nx to an existing project?" %} If you already have a project and want to add Nx to it, see [Add to an Existing Project](/docs/getting-started/start-with-existing-project) instead. {% /aside %}

## Option 1: Create locally with templates

Run the following command to create a new Nx workspace:

```shell
npx create-nx-workspace@latest
```

This interactive command guides you through the setup:

- **Workspace name** - The name of your root directory
- **Starter template** - Choose from [various technology stacks](/docs/reference/create-nx-workspace#presets) (React, Angular, Node, etc.)

For a minimal setup, use the empty template. This gives you a bare TypeScript monorepo that you can extend incrementally.
```shell npx create-nx-workspace@latest --template=nrwl/empty-template ``` ## Option 2: Create via Nx Cloud [![Nx Cloud onboarding](../../../assets/getting-started/nx-cloud-starting-screen.avif)](https://cloud.nx.app/get-started?utm_source=nx-docs&utm_medium=nx-cloud-onboarding&utm_campaign=start-new-project) [Create your workspace directly from Nx Cloud](https://cloud.nx.app/get-started?utm_source=nx-docs&utm_medium=nx-cloud-onboarding&utm_campaign=start-new-project) for a browser-based setup experience. This option gives you: - A working CI configuration out of the box - [Remote caching](/docs/features/ci-features/remote-cache) enabled from the start - [Self-healing CI](/docs/features/ci-features/self-healing-ci) that automatically fixes common failures [Get started with Nx Cloud →](https://cloud.nx.app/get-started?utm_source=nx-docs&utm_medium=nx-cloud-onboarding&utm_campaign=start-new-project) ## Next steps Once your workspace is set up: - **New to Nx?** → [Follow the tutorial series](/docs/getting-started/tutorials/crafting-your-workspace) to learn core concepts hands-on - **Set up your editor** → Install [Nx Console](/docs/getting-started/editor-setup) for VSCode or JetBrains - **Speed up CI** → Connect to [Nx Cloud](/docs/getting-started/nx-cloud) for remote caching and distributed tasks --- ## Add to an Existing Project {% course_video src="https://youtu.be/3hW53b1IJ84" courseTitle="From PNPM Workspaces to Distributed CI" courseUrl="https://nx.dev/courses/pnpm-nx-next/lessons-01-nx-init" /%} Nx is designed for incremental adoption. Start with task running and [caching](/docs/features/cache-task-results), then add [plugins](/docs/technologies), [CI integrations](/docs/guides/nx-cloud/setup-ci), or other capabilities as your needs grow. Add Nx to any existing project with a single command: ```shell npx nx@latest init ``` Whether a monorepo, single project, or something in between, `nx init` walks you through adding and configuring Nx. 
At the end you'll have an Nx workspace ready for anything. ## Next steps After initializing Nx, try these commands: ```shell nx build # Run a task nx build # Run again - instant cache hit nx run-many -t build test # Run tasks across all projects nx graph # Visualize project dependencies ``` From here you can: - [Configure task caching](/docs/features/cache-task-results) to speed up repeated builds - [Add Nx plugins](/docs/technologies) for your tech stack (React, Angular, Node, etc.) - [Set up CI](/docs/guides/nx-cloud/setup-ci) with remote caching and affected commands - Enable [remote caching](/docs/features/ci-features/remote-cache) with `nx connect` ## In-depth guides - [Add to an existing monorepo](/docs/guides/adopting-nx/adding-to-monorepo) - [Add to any project](/docs/guides/adopting-nx/adding-to-existing-project) - [Migrate from Angular CLI](/docs/technologies/angular/migration/angular) - [Import projects into your Nx workspace](/docs/guides/adopting-nx/import-project) - [And more...](/docs/guides/adopting-nx) ## Keep learning - **New to Nx?** → [Follow the tutorial series](/docs/getting-started/tutorials/crafting-your-workspace) to learn core concepts hands-on - **Set up your editor** → Install [Nx Console](/docs/getting-started/editor-setup) for VSCode or JetBrains - **Prefer video?** → [Learn with our video courses](https://nx.dev/courses) --- ## Tutorials {% index_page_cards path="getting-started/tutorials" /%} --- ## Building and Testing Angular Apps in Nx This tutorial walks you through creating an Angular monorepo with Nx. You'll build a small example application to understand the core concepts and workflows. 
What you'll learn: - How to structure multiple Angular apps and libraries in a single repository - How Nx caching speeds up your local development and CI pipelines - How to run builds, tests, and serve commands efficiently across multiple projects - How to share UI components and utilities between Angular applications - How to fix CI failures directly from your editor with Nx Cloud ## Prerequisite: Tutorial setup This tutorial requires [Node.js](https://nodejs.org) (v20.19 or later) installed on your machine. ### Step 1: Creating a new Nx Angular workspace Run the following command to create a new Nx workspace with the Angular template: ```shell npx create-nx-workspace@latest my-nx-repo --template=nrwl/angular-template ``` Or [create your workspace in the browser](https://cloud.nx.app/create-nx-workspace/angular/github) with CI pre-configured. {% llm_only %} IMPORTANT: Always use the CLI command above (`npx create-nx-workspace`) to create workspaces. Do NOT use the cloud.nx.app browser-based flow — it requires interactive GitHub OAuth and is not suitable for AI agents. {% /llm_only %} ### Step 2: verify your setup Once the workspace is created, navigate into it and install dependencies: ```shell cd my-nx-repo npm install ``` ## Explore the Nx workspace setup Let's take a look at the structure of our new Nx workspace: {%filetree%} - my-nx-repo/ - apps/ - api/ - shop/ - shop-e2e/ - libs/ - api/ - shared/ - shop/ - eslint.config.mjs - nx.json - package-lock.json - package.json - tsconfig.base.json - vitest.workspace.ts {%/filetree%} The [`nx.json` file](/docs/reference/nx-json) contains configuration settings for Nx itself and global default settings that individual projects inherit. Now, let's build some features and see how Nx helps get us to production faster. ## Serving the app To serve your new Angular app, run: ```shell npx nx serve shop ``` The app is served at [http://localhost:4200](http://localhost:4200). 
You can also use `npx nx run shop:serve` as an alternative syntax. The `<project>:<target>` format works for any task in any project, which is useful when task names overlap with Nx commands.

### Project configuration

The project's tasks are defined in the `project.json` file.

```json
// apps/shop/project.json
{
  "name": "shop",
  ...
  "targets": {
    "build": { ... },
    "serve": { ... },
    "extract-i18n": { ... },
    "lint": { ... },
    "test": { ... },
    "serve-static": { ... },
  },
}
```

Each target contains a configuration object that tells Nx how to run that target.

```json
// project.json
{
  "name": "shop",
  ...
  "targets": {
    "serve": {
      "executor": "@angular/build:dev-server",
      "defaultConfiguration": "development",
      "options": {
        "buildTarget": "shop:build"
      },
      "configurations": {
        "development": {
          "buildTarget": "shop:build:development",
          "hmr": true
        },
        "production": {
          "buildTarget": "shop:build:production",
          "hmr": false
        }
      }
    },
    ...
  },
}
```

The most critical parts are:

- `executor` - this follows the syntax `<plugin>:<executor-name>`, where `<plugin>` is an NPM package containing an [Nx Plugin](/docs/extending-nx/intro) and `<executor-name>` points to a function that runs the task.
- `options` - these are additional properties and flags passed to the executor function to customize it To view all tasks for a project, look in the [Nx Console](/docs/getting-started/editor-setup) project detail view or run: ```shell npx nx show project shop ``` {% project_details title="Project Details View (Simplified)" %} ```json { "project": { "name": "shop", "type": "app", "data": { "root": "apps/shop", "targets": { "build": { "executor": "@angular/build:application", "outputs": ["{options.outputPath}"], "options": { "outputPath": "dist/apps/shop", "browser": "apps/shop/src/main.ts", "polyfills": ["zone.js"], "tsConfig": "apps/shop/tsconfig.app.json", "assets": [ { "glob": "**/*", "input": "apps/shop/public" } ], "styles": ["apps/shop/src/styles.css"] }, "configurations": { "production": { "budgets": [ { "type": "initial", "maximumWarning": "500kb", "maximumError": "1mb" }, { "type": "anyComponentStyle", "maximumWarning": "4kb", "maximumError": "8kb" } ], "outputHashing": "all" }, "development": { "optimization": false, "extractLicenses": false, "sourceMap": true } }, "defaultConfiguration": "production", "parallelism": true, "cache": true, "dependsOn": ["^build"], "inputs": ["production", "^production"] } } } }, "sourceMap": { "root": ["apps/shop/project.json", "nx/core/project-json"], "targets": ["apps/shop/project.json", "nx/core/project-json"], "targets.build": ["apps/shop/project.json", "nx/core/project-json"], "name": ["apps/shop/project.json", "nx/core/project-json"], "$schema": ["apps/shop/project.json", "nx/core/project-json"], "sourceRoot": ["apps/shop/project.json", "nx/core/project-json"], "projectType": ["apps/shop/project.json", "nx/core/project-json"], "tags": ["apps/shop/project.json", "nx/core/project-json"] } } ``` {% /project_details %} ## Modularization with local libraries When you develop your Angular application, usually all your logic sits in the app's `src` folder. 
Ideally separated by various folder names which represent your domains or features. As your app grows, however, the app becomes more and more monolithic, which makes building and testing it harder and slower. {% filetree %} - my-nx-repo/ - apps/ - shop/ - src/ - app/ - cart/ - products/ - orders/ - ui/ {%/filetree %} Nx allows you to separate this logic into "local libraries." The main benefits include - better separation of concerns - better reusability - more explicit private and public boundaries (APIs) between domains and features - better scalability in CI by enabling independent test/lint/build commands for each library - better scalability in your teams by allowing different teams to work on separate libraries ### Create local libraries Let's create a reusable design system library called `ui` that we can use across our workspace. This library will contain reusable components such as buttons, inputs, and other UI elements. ```shell npx nx g @nx/angular:library libs/ui --unitTestRunner=vitest ``` Note how we type out the full path in the command to place the library into a subfolder. You can choose whatever folder structure you like to organize your projects. Running the above command should lead to the following directory structure: {% filetree %} - my-nx-repo/ - apps/ - shop/ - libs/ - ui/ - eslint.config.mjs - nx.json - package.json - tsconfig.base.json - vitest.workspace.ts {%/filetree %} Just as with the `shop` app, Nx automatically infers the tasks for the `ui` library from its configuration files. You can view them by running: ```shell npx nx show project ui ``` In this case, we have the `lint` and `test` tasks available, among other inferred tasks. ```shell npx nx lint ui npx nx test ui ``` ### Import libraries into the shop app All libraries that we generate are automatically included in the TypeScript path mappings configured in the root-level `tsconfig.base.json`. ```json // tsconfig.base.json { "compilerOptions": { ... 
"paths": { "@org/ui": ["libs/ui/src/index.ts"] }, ... }, } ``` Hence, we can easily import them into other libraries and our Angular application. You can see that the `Ui` component is exported via the `index.ts` file of our `ui` library so that other projects in the repository can use it. This is our public API with the rest of the workspace and is enforced by the library's build configuration. Only export what's necessary to be usable outside the library itself. ```ts // libs/ui/src/index.ts export * from './lib/ui/ui'; ``` Let's add a simple `Hero` component that we can use in our shop app. ```ts // libs/ui/src/lib/hero/hero.ts import { Component, Input, Output, EventEmitter } from '@angular/core'; import { CommonModule } from '@angular/common'; @Component({ selector: 'lib-hero', standalone: true, imports: [CommonModule], template: `

<div [ngStyle]="containerStyle">
  <h1 [ngStyle]="titleStyle">{{ title }}</h1>
  <p [ngStyle]="subtitleStyle">{{ subtitle }}</p>
  <button [ngStyle]="buttonStyle" (click)="handleCtaClick()">{{ cta }}</button>
</div>

`, }) export class Hero { @Input() title!: string; @Input() subtitle!: string; @Input() cta!: string; @Output() ctaClick = new EventEmitter(); containerStyle = { backgroundColor: '#1a1a2e', color: 'white', padding: '100px 20px', textAlign: 'center', }; titleStyle = { fontSize: '48px', marginBottom: '16px', }; subtitleStyle = { fontSize: '20px', marginBottom: '32px', }; buttonStyle = { backgroundColor: '#0066ff', color: 'white', border: 'none', padding: '12px 24px', fontSize: '18px', borderRadius: '4px', cursor: 'pointer', }; handleCtaClick() { this.ctaClick.emit(); } } ``` Then, export it from `index.ts`. ```ts // libs/ui/src/index.ts export * from './lib/hero/hero'; export * from './lib/ui/ui'; ``` We're ready to import it into our main application now. ```ts // apps/shop/src/app/app.ts import { Component } from '@angular/core'; import { RouterModule } from '@angular/router'; import { NxWelcome } from './nx-welcome'; // importing the component from the library import { Hero } from '@org/ui'; @Component({ selector: 'app-root', standalone: true, imports: [RouterModule, NxWelcome, Hero], templateUrl: './app.html', styleUrl: './app.css', }) export class App { protected title = 'shop'; } ``` Now update the template file to use the Hero component: ```html ``` Serve your app again (`npx nx serve shop`) and you should see the new Hero component from the `ui` library rendered on the home page. ![](../../../../assets/tutorials/angular-demo-with-hero.avif) If you have keen eyes, you may have noticed that there is a typo in the `App` component. This mistake is intentional, and we'll see later how Nx can fix this issue automatically in CI. ## Visualize your project structure Nx automatically detects the dependencies between the various parts of your workspace and builds a [project graph](/docs/features/explore-graph). 
This graph is used by Nx to perform various optimizations such as determining the correct order of execution when running tasks like `npx nx build`, identifying [affected projects](/docs/features/run-tasks#run-tasks-on-projects-affected-by-a-pr) and more. Interestingly, you can also visualize it. Just run: ```shell npx nx graph ``` You should be able to see something similar to the following in your browser. {% graph height="450px" %} ```json { "projects": [ { "name": "shop", "type": "app", "data": { "tags": [] } }, { "name": "ui", "type": "lib", "data": { "tags": [] } } ], "dependencies": { "shop": [{ "source": "shop", "target": "ui", "type": "static" }], "ui": [] }, "affectedProjectIds": [], "focus": null, "groupByFolder": false } ``` {% /graph %} Let's create a git branch with the new hero component so we can open a pull request later: ```shell git checkout -b add-hero-component git add . git commit -m 'add hero component' ``` ## Testing and linting - running multiple tasks Our current setup not only has targets for serving and building the Angular application, but also has targets for unit testing, e2e testing and linting. The `test` and `lint` targets are defined in the application `project.json` file. We can use the same syntax as before to run these tasks: ```shell npx nx test shop # runs the tests for shop npx nx lint ui # runs the linter on ui ``` More conveniently, we can also run tasks in parallel using the following syntax: ```shell npx nx run-many -t test lint ``` This is exactly what is configured in `.github/workflows/ci.yml` for the CI pipeline. The `run-many` command allows you to run multiple tasks across multiple projects in parallel, which is particularly useful in a monorepo setup. There is a test failure for the `shop` app due to the updated content. Don't worry about it for now, we'll fix it in a moment with the help of Nx Cloud's self-healing feature. 
### Local task cache One thing to highlight is that Nx is able to [cache the tasks you run](/docs/features/cache-task-results). All of these targets are cached automatically: if you re-run one or all of them, the tasks complete almost immediately. In addition, as the output example below shows, there is a note that a matching cache result was found and the task was therefore not run again. ```text {% title="npx nx run-many -t test lint" frame="terminal" %} ✔ nx run ui:lint ✔ nx run ui:test ✔ nx run shop:lint ✖ nx run shop:test ————————————————————————————————————————————————————————————————————————————————————————————————————————— NX Ran targets test, lint for 2 projects (1s) ✔ 3/4 succeeded [3 read from cache] ✖ 1/4 targets failed, including the following: - nx run shop:test ``` Again, the `shop:test` task failed, but notice that the remaining three tasks were read from cache. Not every task is cacheable, though. You can configure the `cache` settings in the `targetDefaults` property of the `nx.json` file, and [learn more about how caching works](/docs/features/cache-task-results).
## Next steps Here are some things you can dive into next: - [Set up CI](/docs/getting-started/tutorials/self-healing-ci-tutorial) with remote caching and self-healing - Read more about [how Nx compares to the Angular CLI](/docs/technologies/angular/guides/nx-and-angular) - Learn more about the [underlying mental model of Nx](/docs/concepts/mental-model) - Learn about popular generators such as [how to set up Tailwind](/docs/technologies/angular/guides/using-tailwind-css-with-angular-projects) - Learn how to [migrate your existing Angular CLI repo to Nx](/docs/technologies/angular/migration/angular) - Learn about [enforcing boundaries between projects](/docs/features/enforce-module-boundaries) - [Set up Storybook for our shared UI library](/docs/technologies/test-tools/storybook/guides/overview-angular) Also, make sure you: - ⭐️ [Star us on GitHub](https://github.com/nrwl/nx) to show your support and stay updated on new releases! - [Join the Official Nx Discord Server](https://go.nx.dev/community) to ask questions and find out the latest news about Nx. - [Follow Nx on Twitter](https://twitter.com/nxdevtools) to stay up to date with Nx news - [Read our Nx blog](https://nx.dev/blog) - [Subscribe to our YouTube channel](https://www.youtube.com/@nxdevtools) for demos and Nx insights --- ## Caching Tasks {% llm_copy_prompt title="Tutorial 5/8: Enable and configure caching" %} Help me set up caching in my Nx workspace. Use my existing workspace and projects for hands-on examples. Run a cacheable task twice to demonstrate cache hits. Then help me configure `cache: true`, `inputs`, and `outputs` for my tasks. Show me how to inspect what's cached with `nx show project <project-name>`. If I'm hitting unexpected cache misses, help me debug by inspecting inputs and outputs with `nx show project`. Stay on-topic: only teach what's covered on this page. Do not introduce concepts from later tutorials. Tutorial: {pageUrl} {% /llm_copy_prompt %} Why rebuild something that hasn't changed?
Nx caching replays previous results instantly, saving minutes or hours of redundant work, both locally and in CI. The examples below use Vite and Vitest, but the concepts apply to any tool. Substitute your own build and test commands as needed. {% aside type="note" title="Tutorial Series" %} 1. [Crafting your workspace](/docs/getting-started/tutorials/crafting-your-workspace) 2. [Managing dependencies](/docs/getting-started/tutorials/managing-dependencies) 3. [Configuring tasks](/docs/getting-started/tutorials/configuring-tasks) 4. [Running tasks](/docs/getting-started/tutorials/running-tasks) 5. **Caching** (you are here) 6. [Understanding your workspace](/docs/getting-started/tutorials/understanding-your-workspace) 7. [Reducing boilerplate](/docs/getting-started/tutorials/reducing-configuration-boilerplate) 8. [Setting up CI](/docs/getting-started/tutorials/self-healing-ci-tutorial) {% /aside %} This tutorial assumes you have an Nx workspace with tasks you can run. If you're starting fresh, complete [Crafting Your Workspace](/docs/getting-started/tutorials/crafting-your-workspace) and [Running Tasks](/docs/getting-started/tutorials/running-tasks) first. ## Enabling caching Caching is opt-in per task. The recommended approach is to set `cache: true` in `targetDefaults` in `nx.json` for common cacheable tasks: ```jsonc // nx.json { "targetDefaults": { "build": { "cache": true }, "test": { "cache": true }, "lint": { "cache": true }, }, } ``` You can also enable caching per-project in `package.json` or `project.json`: {% tabs %} {% tabitem label="package.json" %} ```jsonc // apps/my-app/package.json { "nx": { "targets": { "build": { "cache": true, }, }, }, } ``` {% /tabitem %} {% tabitem label="project.json" %} ```jsonc // apps/my-app/project.json { "targets": { "build": { "cache": true, }, }, } ``` {% /tabitem %} {% /tabs %} {% aside type="caution" title="Only cache deterministic tasks" %} A task is safe to cache when the same inputs always produce the same outputs. 
Tasks that depend on network state, timestamps, or random values should not be cached. {% /aside %} ## See it in action With caching enabled, run a build twice: ```shell nx build my-app ``` The first run executes normally. Now run it again without changing anything: ```shell nx build my-app ``` ```text > NX Successfully ran target build for project my-app (40ms) Nx read the output from the cache instead of running the command for 1 out of 1 tasks. ``` The second run completes in milliseconds. Nx detected that nothing changed, so it replayed the cached terminal output and restored the build artifacts. From your perspective, the command ran the same, just faster. ## How caching works Before running any cacheable task, Nx computes a **unique hash** from the task's **inputs**: - Source files of the project and its dependencies - Relevant configuration files - Versions of external dependencies - CLI flags and arguments If the hash matches a previous run, Nx skips execution and replays the cached result. If not, Nx runs the task and stores the result for next time. ![Diagram showing inputs flowing into a unique hash, which triggers a cache lookup resulting in either a hit (replay stored output) or miss (run task, store result)](../../../../assets/tutorials/cache-hash-flow.svg) ## What gets cached Nx stores two things for each cached task: 1. **Terminal output**: everything the task printed to stdout/stderr, replayed exactly 2. **File artifacts**: output files restored to the correct location (e.g., `dist/`, `coverage/`, `test-results/`, `build/` for Gradle, `bin/` for .NET) Both are restored transparently. Other tools and scripts see the same files and output as if the task had actually run. {% aside type="note" title="Where is the cache stored?" %} Local cache is stored in `.nx/cache` by default. Run `nx reset` to clear all local cached results. If you've connected to Nx Cloud, remote cache entries are managed separately. 
{% /aside %} ## Inputs and outputs **Inputs** are everything that could affect a task's result. **Outputs** are the files the task produces. Use [`{projectRoot}`](/docs/reference/inputs#source-files) to reference paths relative to the project directory and [`{workspaceRoot}`](/docs/reference/inputs#source-files) for workspace-level files. ```jsonc // nx.json { "targetDefaults": { "build": { "cache": true, "inputs": ["{projectRoot}/src/**/*", "{projectRoot}/tsconfig.json"], "outputs": ["{projectRoot}/dist"], }, }, } ``` ### Smart defaults Nx provides sensible defaults out of the box. For example, test specification files (like `*.spec.ts`) are typically excluded from build inputs because changing a test shouldn't invalidate the build cache. This is configured through [named inputs](/docs/reference/inputs): ```jsonc // nx.json { "namedInputs": { "default": ["{projectRoot}/**/*"], "production": [ "default", "!{projectRoot}/**/*.spec.ts", "!{projectRoot}/**/*.test.ts", ], }, "targetDefaults": { "build": { "inputs": ["production", "^production"], "cache": true, }, }, } ``` The `^` prefix in inputs (like `"^production"`) means "include the production files of dependency projects as inputs." This is different from `^` in `dependsOn` (like `"^build"`), which means "run this task on dependencies first." The `^` in inputs affects the cache hash directly, while `^` in `dependsOn` affects task ordering. With this configuration, modifying a spec file won't bust the build cache, because specs aren't production inputs. But `test` tasks still use the `default` input set, which includes spec files. Inputs aren't limited to files. You can also include environment variables and runtime values in the hash: ```jsonc // nx.json { "namedInputs": { "production": [ "default", { "env": "NODE_ENV" }, { "runtime": "node --version" }, ], }, } ``` For more details, see [configure inputs](/docs/guides/tasks--caching/configure-inputs). 
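To make the hashing model above concrete, here is a minimal TypeScript sketch of the idea, not Nx's actual implementation: the type and function names (`TaskInputs`, `computeHash`, `runCached`) are hypothetical. Inputs are reduced to one stable hash; a matching hash means the stored result is replayed instead of re-running the task.

```typescript
import { createHash } from 'node:crypto';

// Hypothetical model of a task's inputs: file contents, CLI flags,
// and environment values that may affect the result.
interface TaskInputs {
  files: Record<string, string>; // path -> file contents
  flags: string[];
  env: Record<string, string>;
}

// Reduce all inputs to one stable hash. Keys are sorted so the
// hash does not depend on discovery order.
function computeHash(inputs: TaskInputs): string {
  const h = createHash('sha256');
  for (const path of Object.keys(inputs.files).sort()) {
    h.update(path).update('\0').update(inputs.files[path]);
  }
  h.update(JSON.stringify(inputs.flags));
  for (const key of Object.keys(inputs.env).sort()) {
    h.update(key).update('\0').update(inputs.env[key]);
  }
  return h.digest('hex');
}

const cache = new Map<string, string>(); // hash -> stored terminal output

// Run a task through the cache: replay on a hit, execute and store on a miss.
function runCached(inputs: TaskInputs, run: () => string): string {
  const key = computeHash(inputs);
  const hit = cache.get(key);
  if (hit !== undefined) return hit;
  const output = run();
  cache.set(key, output);
  return output;
}
```

Changing any tracked file, flag, or environment value changes the hash and forces a real run; everything else is served from the cache.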
{% aside type="tip" title="Enforce correct inputs and outputs" %} Not sure if your `inputs` and `outputs` are configured correctly? Use [sandboxing](/docs/features/ci-features/sandboxing) to detect tasks that read or write files outside their declared configuration. {% /aside %} Now that caching, inputs, and outputs are configured, try it out: change a source file and run `nx build my-app`. Nx detects the change, computes a new hash, and runs the build. Revert the change and run again to see the cached result restored, including the `dist/` output files. ## Remote caching By default, Nx caches results on your local machine. **Remote caching** shares the cache across your entire team and CI pipeline, so a build that ran in CI doesn't need to run again on your machine. ```shell # Connect your workspace to Nx Cloud for remote caching nx connect ``` This command guides you through creating a free Nx Cloud account and stores an access token in your workspace. Once connected, cached results are shared automatically. When a teammate or CI pipeline has already run a task with the same inputs, you get the cached result instantly, even on a fresh checkout. For more on how remote caching works, see [remote cache (Nx Replay)](/docs/features/ci-features/remote-cache). To set up CI with Nx Cloud, see [Setting up CI](/docs/getting-started/tutorials/self-healing-ci-tutorial). 
## Learn more - [How caching works](/docs/concepts/how-caching-works): deep dive on computation hashing - [Cache task results](/docs/features/cache-task-results): full feature documentation - [Configure inputs](/docs/guides/tasks--caching/configure-inputs): fine-tune what affects the hash - [Configure outputs](/docs/guides/tasks--caching/configure-outputs): control what files get cached {% cards cols=2 %} {% card title="Previous: Running Tasks" description="Run tasks for one or many projects" url="/docs/getting-started/tutorials/running-tasks" /%} {% card title="Next: Understanding Your Workspace" description="Explore projects, graphs, and debug issues" url="/docs/getting-started/tutorials/understanding-your-workspace" /%} {% /cards %} --- ## Configuring Tasks {% llm_copy_prompt title="Tutorial 3/8: Configure tasks for your projects" %} Help me configure tasks (build, test, lint, serve) for my Nx workspace projects. Use my existing workspace and projects for hands-on examples. Show me what tasks already exist by running `nx show project <project-name>` for one of my projects. Then help me add or configure tasks using package.json scripts or project.json targets, set up task dependencies with `dependsOn`, and verify with `nx show project <project-name>`. Stay on-topic: only teach what's covered on this page. Do not introduce concepts from later tutorials. Tutorial: {pageUrl} {% /llm_copy_prompt %} Every project needs tasks: `build`, `test`, `lint`, `serve`. Nx needs to know about your tasks so it can run them, cache the results, and orchestrate them in the correct order across your workspace. The examples below use Vite and Vitest, but the concepts apply to any tool. Substitute your own build and test commands as needed. {% aside type="note" title="Tutorial Series" %} 1. [Crafting your workspace](/docs/getting-started/tutorials/crafting-your-workspace) 2. [Managing dependencies](/docs/getting-started/tutorials/managing-dependencies) 3. **Configuring tasks** (you are here) 4.
[Running tasks](/docs/getting-started/tutorials/running-tasks) 5. [Caching](/docs/getting-started/tutorials/caching) 6. [Understanding your workspace](/docs/getting-started/tutorials/understanding-your-workspace) 7. [Reducing boilerplate](/docs/getting-started/tutorials/reducing-configuration-boilerplate) 8. [Setting up CI](/docs/getting-started/tutorials/self-healing-ci-tutorial) {% /aside %} This tutorial assumes you have an Nx workspace. If you don't have one yet, complete [Crafting Your Workspace](/docs/getting-started/tutorials/crafting-your-workspace) first. ## What is a task? A task is a named action that Nx can run for a project, like `build`, `test`, or `lint`. Each task belongs to a specific project and is referred to as `<project>:<target>`: ```shell nx run my-app:build ``` You'll learn more about running tasks in the [Running Tasks](/docs/getting-started/tutorials/running-tasks) tutorial. ## Defining tasks in package.json The simplest way to define tasks is with `package.json` scripts. Nx picks these up automatically: ```jsonc // apps/my-app/package.json { "name": "my-app", "scripts": { "build": "vite build", "test": "vitest run", }, } ``` If your workspace already has `package.json` scripts, Nx can run them immediately, no additional configuration needed. ## Defining tasks in project.json If you prefer to keep task definitions out of `package.json` (e.g., you don't want scripts published to npm) or for non-JavaScript projects, define tasks in a `project.json` file using the `targets` property: ```jsonc // apps/my-app/project.json { "name": "my-app", "targets": { "build": { "command": "vite build", }, "test": { "command": "vitest run", }, }, } ``` The `command` property runs a shell command, similar to a `package.json` script. You can also use [executors](/docs/concepts/executors-and-configurations) for more advanced task runners provided by Nx plugins, but `command` works for most cases. {% aside type="note" title="package.json vs project.json" %} Both work.
Define your scripts in `package.json` as usual. For Nx-specific configuration like `dependsOn`, `inputs`, or `outputs`, you have three options: - Set defaults for all projects in `targetDefaults` in `nx.json` (covered below) - Add an `nx` property in `package.json` (supports the same fields as `project.json`, but can bloat the file) - Use a separate `project.json` file See the [project configuration reference](/docs/reference/project-configuration) for details. {% /aside %} ## Task dependencies In a monorepo, tasks often need to run in a specific order. For example, before building an app, you need to build the libraries it depends on. The `dependsOn` property defines this ordering: ```jsonc // nx.json { "targetDefaults": { "build": { "dependsOn": ["^build"], }, }, } ``` The `^` prefix means "the same task on projects this project depends on." So `nx build my-app` will first build all of `my-app`'s dependencies, then build `my-app` itself. You can also define dependencies without `^` for tasks within the same project: {% tabs %} {% tabitem label="package.json" %} ```jsonc // apps/my-app/package.json { "scripts": { "build": "vite build", "generate-api-types": "openapi-generator generate -i api.yaml -o src/api", }, "nx": { "targets": { "build": { "dependsOn": ["generate-api-types"], }, }, }, } ``` {% /tabitem %} {% tabitem label="project.json" %} ```jsonc // apps/my-app/project.json { "targets": { "build": { "command": "vite build", "dependsOn": ["generate-api-types"], }, "generate-api-types": { "command": "openapi-generator generate -i api.yaml -o src/api", }, }, } ``` {% /tabitem %} {% /tabs %} Here, `build` always runs `generate-api-types` first within the same project. ## Continuous tasks Some tasks, like development servers, never exit. If another task depends on a long-running process, it would wait forever. 
Mark these tasks as `continuous` so Nx starts them alongside their dependents instead of waiting for them to finish: {% tabs %} {% tabitem label="package.json" %} ```jsonc // apps/my-app/package.json { "scripts": { "dev": "vite dev", "e2e": "playwright test", }, "nx": { "targets": { "dev": { "continuous": true, }, "e2e": { "dependsOn": ["dev"], }, }, }, } ``` {% /tabitem %} {% tabitem label="project.json" %} ```jsonc // apps/my-app/project.json { "targets": { "serve": { "command": "vite dev", "continuous": true, }, "e2e": { "command": "playwright test", "dependsOn": ["serve"], }, }, } ``` {% /tabitem %} {% /tabs %} Running `nx e2e my-app` starts the dev server and then runs the E2E tests against it. {% aside type="note" %} Substitute your own tools as needed. Any long-running process (like `next dev`, `webpack serve`, or a custom script) can be marked as `continuous`. {% /aside %} ## Reducing repetition with target defaults When many projects share the same task configuration, defining it in every `project.json` is tedious. The `targetDefaults` property in `nx.json` lets you set defaults for all projects at once: ```jsonc // nx.json { "targetDefaults": { "build": { "dependsOn": ["^build"], }, }, } ``` Individual projects can still override these defaults when needed. The cascade order is: project-level configuration overrides `targetDefaults`, which override Nx's built-in defaults. For more on reducing configuration, see [Reducing Configuration Boilerplate](/docs/getting-started/tutorials/reducing-configuration-boilerplate).
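The `"dependsOn": ["^build"]` ordering described above amounts to a depth-first walk of the project graph: build each dependency before the project itself. Here is a minimal TypeScript sketch with a hypothetical three-project graph (Nx's real scheduler also parallelizes independent tasks and deduplicates work):

```typescript
// Hypothetical project graph: each project lists the projects it depends on.
const dependencies: Record<string, string[]> = {
  'my-app': ['shared-ui', 'utils'],
  'shared-ui': ['utils'],
  utils: [],
};

// Resolve the order implied by `"dependsOn": ["^build"]`:
// schedule each dependency's build before the project's own build.
function buildOrder(project: string, seen = new Set<string>()): string[] {
  if (seen.has(project)) return []; // already scheduled
  seen.add(project);
  const before = (dependencies[project] ?? []).flatMap((dep) =>
    buildOrder(dep, seen)
  );
  return [...before, project];
}
```

`buildOrder('my-app')` yields `['utils', 'shared-ui', 'my-app']`: `utils` builds first because both other projects depend on it.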
## Learn more - [Project configuration reference](/docs/reference/project-configuration): all available target properties - [Task pipeline configuration](/docs/concepts/task-pipeline-configuration): deep dive on `dependsOn` - [Defining a task pipeline](/docs/guides/tasks--caching/defining-task-pipeline): step-by-step guide {% cards cols=2 %} {% card title="Previous: Managing Dependencies" description="Track dependencies between projects" url="/docs/getting-started/tutorials/managing-dependencies" /%} {% card title="Next: Running Tasks" description="Run tasks for one or many projects" url="/docs/getting-started/tutorials/running-tasks" /%} {% /cards %} --- ## Crafting Your Workspace {% llm_copy_prompt title="Tutorial 1/8: Set up an Nx workspace" %} Help me learn Nx step by step using this tutorial series. If my current directory already has nx.json, skip setup and teach me using my existing workspace. If I'm starting fresh, help me run create-nx-workspace and walk through the options. If I have an existing project without Nx, help me run nx init and verify it's set up correctly. After setup, verify with `cat nx.json` and walk me through what's in my workspace. Stay on-topic: only teach what's covered on this page. Do not introduce concepts from later tutorials. Tutorial: {pageUrl} {% /llm_copy_prompt %} The Nx CLI is a task orchestrator, caching layer, and intelligence layer for monorepos. It works on top of your existing tools and repo structure. It doesn't replace your package manager, build tools, or test frameworks. It makes them faster and smarter. Nx works with any repo structure and plays well with tools you already use: pnpm workspaces, yarn workspaces, uv for Python, Gradle for Java, and more. The conventions shown below are recommendations, not requirements. Bring your own structure and Nx adapts to it. {% aside type="note" title="Tutorial Series" %} 1. **Crafting your workspace** (you are here) 2. 
[Managing dependencies](/docs/getting-started/tutorials/managing-dependencies) 3. [Configuring tasks](/docs/getting-started/tutorials/configuring-tasks) 4. [Running tasks](/docs/getting-started/tutorials/running-tasks) 5. [Caching](/docs/getting-started/tutorials/caching) 6. [Understanding your workspace](/docs/getting-started/tutorials/understanding-your-workspace) 7. [Reducing boilerplate](/docs/getting-started/tutorials/reducing-configuration-boilerplate) 8. [Setting up CI](/docs/getting-started/tutorials/self-healing-ci-tutorial) {% /aside %} {% llm_only %} IMPORTANT: Always use the CLI to create workspaces (`npx create-nx-workspace`). Do NOT use the cloud.nx.app browser flow, it requires interactive OAuth and is not suitable for AI agents. When helping users choose a preset, ask about their tech stack first. {% /llm_only %} ## What is an Nx workspace? An Nx workspace is any directory that has an `nx.json` file at its root. This file tells Nx that the directory is a workspace and contains configuration for how Nx behaves: caching, task defaults, and plugins. A workspace can contain a single project or hundreds. Nx works with JavaScript/TypeScript, Java (Gradle), .NET, Go, and more. ## Workspace structure Nx works with whatever folder structure you have. 
A common convention for JavaScript/TypeScript workspaces is separating **applications** from **packages** (shared libraries): {% filetree %} - my-workspace/ - apps/ - my-app/ - src/ - package.json - tsconfig.json - packages/ - shared-ui/ - src/ - package.json - tsconfig.json - nx.json - package.json - tsconfig.base.json - tsconfig.json {% /filetree %} - **apps/**: Deployable applications (frontends, backends, CLIs) - **packages/**: Shared libraries consumed by apps or other packages - **nx.json**: Nx configuration (caching, task defaults, plugins) - **tsconfig.base.json**: Shared `compilerOptions` inherited by all projects - **tsconfig.json**: Root TypeScript configuration that references project-level `tsconfig.json` files You may also see `libs/` used in place of `packages/` in some Nx workspaces. Both work. The `packages/` convention aligns with common pnpm, yarn, and npm workspace conventions. This layout is a suggestion, not a requirement. You can organize projects however you like, including flat structures, nested directories, or patterns specific to your ecosystem. For non-JS workspaces, follow the monorepo conventions in your language (e.g., Gradle multi-project builds, uv workspaces). Nx identifies projects by their `package.json` or `project.json` files, not by folder names. Nx re-discovers projects automatically every time you run an `nx` command. No restart or registration step is needed when you add or remove a project. 
## Creating a workspace The fastest way to start is with `create-nx-workspace`: {% tabs syncKey="package-manager" %} {% tabitem label="npm" %} ```shell npx create-nx-workspace@latest my-workspace ``` {% /tabitem %} {% tabitem label="pnpm" %} ```shell pnpm dlx create-nx-workspace@latest my-workspace ``` {% /tabitem %} {% tabitem label="yarn" %} ```shell yarn dlx create-nx-workspace@latest my-workspace ``` {% /tabitem %} {% tabitem label="bun" %} ```shell bunx create-nx-workspace@latest my-workspace ``` {% /tabitem %} {% /tabs %} The CLI walks you through choosing a starter template (React, Angular, Node, or a blank workspace) and configuring your stack. If you have an existing project, you can [add Nx to it](/docs/getting-started/start-with-existing-project) by running `nx init`. This adds an `nx.json` file to your workspace and optionally detects your tooling to configure plugins. Everything applies whether you created a new workspace or added Nx to an existing one. ## Adding a project Create a new project by adding a directory with a `package.json`: ```shell mkdir -p packages/my-lib ``` ```jsonc // packages/my-lib/package.json { "name": "@my-workspace/my-lib", } ``` Then add it as a dependency in the consuming project's `package.json`: ```jsonc // apps/my-app/package.json { "name": "@my-workspace/my-app", "dependencies": { "@my-workspace/my-lib": "workspace:*", }, } ``` The `@my-workspace` scope used in these tutorials is a placeholder. Your workspace will use whatever scope you chose during setup (e.g., `@org`, `@my-company`). After adding a new project, run your package manager's install command (e.g., `npm install`, `pnpm install`) to link it into the workspace. If you're using [Nx plugins](/docs/concepts/nx-plugins), you can also use generators to scaffold projects with boilerplate: ```shell nx g @nx/js:lib packages/my-lib ``` For a full list of available generators, see [code generation](/docs/features/generate-code). 
## Package manager workspaces Nx builds on top of your package manager's workspace feature. Each project with a `package.json` is a workspace package that can depend on other packages in the workspace. {% tabs syncKey="package-manager" %} {% tabitem label="npm" %} ```jsonc // package.json { "workspaces": ["apps/*", "packages/*"], } ``` {% /tabitem %} {% tabitem label="pnpm" %} ```yaml # pnpm-workspace.yaml packages: - 'apps/*' - 'packages/*' ``` {% /tabitem %} {% tabitem label="yarn" %} ```jsonc // package.json { "workspaces": ["apps/*", "packages/*"], } ``` {% /tabitem %} {% tabitem label="bun" %} ```jsonc // package.json { "workspaces": ["apps/*", "packages/*"], } ``` {% /tabitem %} {% /tabs %} This tells your package manager where to find projects. Nx reads this same configuration to discover projects in your workspace. {% aside type="note" title="Non-JavaScript workspaces" %} If you're not using a JS package manager (e.g., Python with uv, Java with Gradle), Nx can still discover projects via `project.json` files. Check your ecosystem's monorepo tooling for equivalent workspace support. {% /aside %} ## How projects link to each other Package manager workspaces handle linking between projects. When you run `install`, your package manager symlinks local packages into `node_modules` so they can be imported like any npm package: ```typescript import { Button } from '@my-workspace/shared-ui'; ``` This works because `@my-workspace/shared-ui` resolves to the local `packages/shared-ui` directory via the symlink, not from the npm registry. Each project needs a `package.json` with a `name` field that matches what other projects import. Nx uses these same package relationships to automatically detect dependencies between projects, so no additional configuration is needed. 
For more details on how linking works, see your package manager's workspace documentation ([npm](https://docs.npmjs.com/cli/using-npm/workspaces), [pnpm](https://pnpm.io/workspaces), [yarn](https://yarnpkg.com/features/workspaces), [bun](https://bun.sh/docs/install/workspaces)). ## TypeScript configuration For TypeScript workspaces, the recommended setup uses three levels of `tsconfig.json` files. This is the [solution-style project references](https://www.typescriptlang.org/docs/handbook/project-references.html) pattern recommended by the TypeScript team: **`tsconfig.base.json`** at the root shares `compilerOptions` across all projects: ```jsonc // tsconfig.base.json { "compilerOptions": { "target": "ES2020", "module": "nodenext", "moduleResolution": "nodenext", "composite": true, "declaration": true, "declarationMap": true, "sourceMap": true, "strict": true, }, } ``` **`tsconfig.json`** at the root lists all projects as references, so `tsc --build` knows about the full workspace: ```jsonc // tsconfig.json { "files": [], "references": [ { "path": "./apps/my-app" }, { "path": "./packages/shared-ui" }, ], } ``` **`tsconfig.json`** in each project extends the base and declares its own references to other projects it depends on: ```jsonc // apps/my-app/tsconfig.json { "extends": "../../tsconfig.base.json", "compilerOptions": { "outDir": "dist", "rootDir": "src", "tsBuildInfoFile": "dist/tsconfig.tsbuildinfo", }, "references": [{ "path": "../../packages/shared-ui" }], "include": ["src/**/*"], } ``` This setup gives editors and language servers accurate type information per-project, enables incremental builds (only recompile what changed), and creates clear boundaries between projects. For more details, see [maintain TypeScript monorepos](/docs/features/maintain-typescript-monorepos). {% aside type="note" title="Workspaces using tsconfig path aliases" %} Some workspaces use TypeScript `paths` in `tsconfig.base.json` to link projects. 
This works but is not recommended for new workspaces. Path aliases were not designed for project linking, and solution-style project references work better with editors and build tools. See the [migration guide](/docs/technologies/typescript/guides/switch-to-workspaces-project-references) to switch. {% /aside %} ## Non-JavaScript workspaces Nx is not limited to JavaScript. It works with any language or build tool: - **Gradle**: Nx detects Gradle projects and provides caching, affected analysis, and task orchestration. See the [Gradle tutorial](/docs/getting-started/tutorials/gradle-tutorial). - **Any tool**: If it runs from the command line, Nx can cache and orchestrate it. Nx adapts to your project layout rather than imposing one. Use the folder structure and dependency management conventions established by your language's ecosystem. Nx adds task orchestration, caching, and CI optimization on top of whatever you already have. {% cards cols=2 %} {% card title="Next: Managing Dependencies" description="Track dependencies between projects" url="/docs/getting-started/tutorials/managing-dependencies" /%} {% /cards %} --- ## Gradle Tutorial This tutorial walks you through adding Nx to an existing Gradle project. You'll see how Nx enhances your Gradle workflow with caching, task orchestration, and better developer experience. What you'll learn: - How to integrate Nx with your existing Gradle build system - How Nx caching speeds up your Gradle builds locally and in CI - How to visualize and understand project dependencies in your Gradle workspace - How to run Gradle tasks more efficiently with Nx task runner - How to set up Nx Cloud for faster and self-healing CI ## Ready to start? {% aside type="note" title="Prerequisites" %} This tutorial requires a [GitHub account](https://github.com) to demonstrate the full value of **Nx** - including task running, caching, and CI integration. 
{% /aside %} ### Step 1: Set up your local environment Make sure that you have [Gradle](https://gradle.org/) installed on your system. Consult [Gradle's installation guide](https://docs.gradle.org/current/userguide/installation.html) for instructions specific to your operating system. To verify that Gradle was installed correctly, run this command: ```shell gradle --version ``` To streamline this tutorial, we'll install Nx globally on your system. You can use your preferred installation method below based on your OS: {% tabs %} {% tabitem label="Homebrew (macOS, Linux)" %} Make sure [Homebrew is installed](https://brew.sh/), then install Nx globally with this command: ```shell brew install nx ``` {% /tabitem %} {% tabitem label="Chocolatey (Windows)" %} ```shell choco install nx ``` {% /tabitem %} {% tabitem label="apt (Ubuntu)" %} ```shell sudo add-apt-repository ppa:nrwl/nx sudo apt update sudo apt install nx ``` {% /tabitem %} {% tabitem label="Node (any OS)" %} Install Node.js from the [NodeJS website](https://nodejs.org/en/download), then install Nx globally with this command: ```shell npm install --global nx ``` {% /tabitem %} {% /tabs %} ### Step 2: Fork the sample repository This tutorial picks up where [Spring framework](https://spring.io/)'s guide for [Multi-Module Projects](https://spring.io/guides/gs/multi-module) leaves off.
Fork [the sample repository](https://github.com/nrwl/gradle-tutorial/fork), and then clone it on your local machine: ```shell git clone https://github.com/<your-username>/gradle-tutorial.git ``` The Multi-Module Spring Tutorial left us with two projects: - The main `application` project which contains the Spring `DemoApplication` - A `library` project which contains a service used in the `DemoApplication` You can see these two projects by running `./gradlew projects`: ```text {% title="./gradlew projects" %} > Task :projects ------------------------------------------------------------ Root project 'gradle-tutorial' ------------------------------------------------------------ Root project 'gradle-tutorial' +--- Project ':application' \--- Project ':library' ``` ## Add Nx Nx is a monorepo platform with built-in tooling and advanced CI capabilities. It helps you maintain and scale monorepos, both locally and on CI. Explore the features of Nx by adding it to the Gradle workspace above. To add Nx, run: ```shell nx init ``` This command will download the latest version of Nx and help set up your repository to take advantage of it. Nx will also detect that Gradle is used in the repo and propose adding the `@nx/gradle` plugin to integrate Gradle with Nx. 1. When prompted to add Nx Cloud, select "yes" 2. Select the plugin and continue with the setup. Finally, commit and push all the changes to GitHub and proceed with finishing your Nx Cloud setup. ### Finish Nx Cloud setup > Nx Cloud provides self-healing CI, remote caching, and many other features. [Learn more about Nx Cloud features](/docs/features/ci-features). Click the link printed in your terminal, or [finish setup in Nx Cloud](https://cloud.nx.app/setup/connect-workspace/github/select). {% call_to_action variant="simple" title="Finish Nx Cloud Setup" url="https://cloud.nx.app/setup/connect-workspace/github/select" /%} ### Verify your setup Verify that you have the following setup: 1.
A new Nx workspace on your local machine 2. A corresponding GitHub repository for the workspace 3. You completed the full Nx Cloud onboarding, and you now have an Nx Cloud dashboard that is connected to your example repository on GitHub. You should see your workspace in your [Nx Cloud organization](https://cloud.nx.app/orgs). ## Explore your workspace Like Gradle, Nx understands your workspace as a graph of projects. Nx uses this graph for many features, which we'll cover in the following sections. To visualize this graph in your browser, run the following command and click the "Show all projects" button in the left sidebar. You'll notice that the projects shown are the same ones Gradle reports. The `@nx/gradle` plugin reflects the graph of projects in Gradle into the Nx Project Graph. As projects are created, deleted, or change their dependencies, Nx will automatically recalculate the graph. Exploring this graph visually is vital to understanding how your code is structured and how Nx and Gradle behave.
```shell nx graph ``` {% graph title="Gradle Projects" height="300px" %} ```json { "hash": "ad0ea8f7ae85c873d6478c31b51a95c54115a49844d62a5b3972232ac82137d0", "projects": [ { "name": "application", "type": "lib", "data": { "root": "application", "name": "application", "metadata": { "technologies": ["gradle"] }, "targets": { "bootRun": { "options": { "cwd": "application", "command": "../gradlew bootRun" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "bootTestRun": { "options": { "cwd": "application", "command": "../gradlew bootTestRun" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "assemble": { "options": { "cwd": "application", "command": "../gradlew assemble" }, "cache": true, "executor": "nx:run-commands", "configurations": {} }, "bootBuildImage": { "options": { "cwd": "application", "command": "../gradlew bootBuildImage" }, "cache": true, "executor": "nx:run-commands", "configurations": {} }, "bootJar": { "options": { "cwd": "application", "command": "../gradlew bootJar" }, "cache": true, "executor": "nx:run-commands", "configurations": {} }, "build": { "options": { "cwd": "application", "command": "../gradlew build" }, "cache": true, "inputs": ["production", "^production"], "outputs": ["{workspaceRoot}/application/build"], "dependsOn": ["^build", "classes"], "executor": "nx:run-commands", "configurations": {} }, "buildDependents": { "options": { "cwd": "application", "command": "../gradlew buildDependents" }, "cache": true, "executor": "nx:run-commands", "configurations": {} }, "buildNeeded": { "options": { "cwd": "application", "command": "../gradlew buildNeeded" }, "cache": true, "executor": "nx:run-commands", "configurations": {} }, "classes": { "options": { "cwd": "application", "command": "../gradlew classes" }, "cache": true, "inputs": ["default", "^default"], "outputs": ["{workspaceRoot}/application/build/classes"], "dependsOn": ["^classes"], "executor": "nx:run-commands", "configurations": {} }, 
"clean": { "options": { "cwd": "application", "command": "../gradlew clean" }, "cache": true, "executor": "nx:run-commands", "configurations": {} }, "jar": { "options": { "cwd": "application", "command": "../gradlew jar" }, "cache": true, "executor": "nx:run-commands", "configurations": {} }, "resolveMainClassName": { "options": { "cwd": "application", "command": "../gradlew resolveMainClassName" }, "cache": true, "executor": "nx:run-commands", "configurations": {} }, "resolveTestMainClassName": { "options": { "cwd": "application", "command": "../gradlew resolveTestMainClassName" }, "cache": true, "executor": "nx:run-commands", "configurations": {} }, "testClasses": { "options": { "cwd": "application", "command": "../gradlew testClasses" }, "cache": true, "executor": "nx:run-commands", "configurations": {} }, "javadoc": { "options": { "cwd": "application", "command": "../gradlew javadoc" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "buildEnvironment": { "options": { "cwd": "application", "command": "../gradlew buildEnvironment" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "dependencies": { "options": { "cwd": "application", "command": "../gradlew dependencies" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "dependencyInsight": { "options": { "cwd": "application", "command": "../gradlew dependencyInsight" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "dependencyManagement": { "options": { "cwd": "application", "command": "../gradlew dependencyManagement" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "help": { "options": { "cwd": "application", "command": "../gradlew help" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "javaToolchains": { "options": { "cwd": "application", "command": "../gradlew javaToolchains" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, 
"outgoingVariants": { "options": { "cwd": "application", "command": "../gradlew outgoingVariants" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "projects": { "options": { "cwd": "application", "command": "../gradlew projects" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "properties": { "options": { "cwd": "application", "command": "../gradlew properties" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "resolvableConfigurations": { "options": { "cwd": "application", "command": "../gradlew resolvableConfigurations" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "tasks": { "options": { "cwd": "application", "command": "../gradlew tasks" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "projectReport": { "options": { "cwd": "application", "command": "../gradlew projectReport" }, "cache": false, "outputs": ["{workspaceRoot}/application/build/reports/project"], "executor": "nx:run-commands", "configurations": {} }, "check": { "options": { "cwd": "application", "command": "../gradlew check" }, "cache": true, "executor": "nx:run-commands", "configurations": {} }, "test": { "options": { "cwd": "application", "command": "../gradlew test" }, "cache": true, "inputs": ["default", "^production"], "dependsOn": ["classes"], "executor": "nx:run-commands", "configurations": {} } }, "implicitDependencies": [], "tags": [] } }, { "name": "library", "type": "lib", "data": { "root": "library", "name": "library", "metadata": { "technologies": ["gradle"] }, "targets": { "assemble": { "options": { "cwd": "library", "command": "../gradlew assemble" }, "cache": true, "executor": "nx:run-commands", "configurations": {} }, "build": { "options": { "cwd": "library", "command": "../gradlew build" }, "cache": true, "inputs": ["production", "^production"], "outputs": ["{workspaceRoot}/library/build"], "dependsOn": ["^build", "classes"], "executor": "nx:run-commands", 
"configurations": {} }, "buildDependents": { "options": { "cwd": "library", "command": "../gradlew buildDependents" }, "cache": true, "executor": "nx:run-commands", "configurations": {} }, "buildNeeded": { "options": { "cwd": "library", "command": "../gradlew buildNeeded" }, "cache": true, "executor": "nx:run-commands", "configurations": {} }, "classes": { "options": { "cwd": "library", "command": "../gradlew classes" }, "cache": true, "inputs": ["default", "^default"], "outputs": ["{workspaceRoot}/library/build/classes"], "dependsOn": ["^classes"], "executor": "nx:run-commands", "configurations": {} }, "clean": { "options": { "cwd": "library", "command": "../gradlew clean" }, "cache": true, "executor": "nx:run-commands", "configurations": {} }, "jar": { "options": { "cwd": "library", "command": "../gradlew jar" }, "cache": true, "executor": "nx:run-commands", "configurations": {} }, "testClasses": { "options": { "cwd": "library", "command": "../gradlew testClasses" }, "cache": true, "executor": "nx:run-commands", "configurations": {} }, "javadoc": { "options": { "cwd": "library", "command": "../gradlew javadoc" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "buildEnvironment": { "options": { "cwd": "library", "command": "../gradlew buildEnvironment" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "dependencies": { "options": { "cwd": "library", "command": "../gradlew dependencies" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "dependencyInsight": { "options": { "cwd": "library", "command": "../gradlew dependencyInsight" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "dependencyManagement": { "options": { "cwd": "library", "command": "../gradlew dependencyManagement" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "help": { "options": { "cwd": "library", "command": "../gradlew help" }, "cache": false, "executor": "nx:run-commands", 
"configurations": {} }, "javaToolchains": { "options": { "cwd": "library", "command": "../gradlew javaToolchains" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "outgoingVariants": { "options": { "cwd": "library", "command": "../gradlew outgoingVariants" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "projects": { "options": { "cwd": "library", "command": "../gradlew projects" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "properties": { "options": { "cwd": "library", "command": "../gradlew properties" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "resolvableConfigurations": { "options": { "cwd": "library", "command": "../gradlew resolvableConfigurations" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "tasks": { "options": { "cwd": "library", "command": "../gradlew tasks" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "projectReport": { "options": { "cwd": "library", "command": "../gradlew projectReport" }, "cache": false, "outputs": ["{workspaceRoot}/library/build/reports/project"], "executor": "nx:run-commands", "configurations": {} }, "check": { "options": { "cwd": "library", "command": "../gradlew check" }, "cache": true, "executor": "nx:run-commands", "configurations": {} }, "test": { "options": { "cwd": "library", "command": "../gradlew test" }, "cache": true, "inputs": ["default", "^production"], "dependsOn": ["classes"], "executor": "nx:run-commands", "configurations": {} } }, "implicitDependencies": [], "tags": [] } }, { "name": "gradle-tutorial", "type": "lib", "data": { "root": ".", "name": "gradle-tutorial", "metadata": { "technologies": ["gradle"] }, "targets": { "bootRun": { "options": { "cwd": ".", "command": "gradlew bootRun" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "bootTestRun": { "options": { "cwd": ".", "command": "gradlew bootTestRun" }, "cache": 
false, "executor": "nx:run-commands", "configurations": {} }, "assemble": { "options": { "cwd": ".", "command": "gradlew assemble" }, "cache": true, "executor": "nx:run-commands", "configurations": {} }, "bootBuildImage": { "options": { "cwd": ".", "command": "gradlew bootBuildImage" }, "cache": true, "executor": "nx:run-commands", "configurations": {} }, "bootJar": { "options": { "cwd": ".", "command": "gradlew bootJar" }, "cache": true, "executor": "nx:run-commands", "configurations": {} }, "build": { "options": { "cwd": ".", "command": "gradlew build" }, "cache": true, "inputs": ["production", "^production"], "outputs": ["{workspaceRoot}/build"], "dependsOn": ["^build", "classes"], "executor": "nx:run-commands", "configurations": {} }, "buildDependents": { "options": { "cwd": ".", "command": "gradlew buildDependents" }, "cache": true, "executor": "nx:run-commands", "configurations": {} }, "buildNeeded": { "options": { "cwd": ".", "command": "gradlew buildNeeded" }, "cache": true, "executor": "nx:run-commands", "configurations": {} }, "classes": { "options": { "cwd": ".", "command": "gradlew classes" }, "cache": true, "inputs": ["default", "^default"], "outputs": ["{workspaceRoot}/build/classes"], "dependsOn": ["^classes"], "executor": "nx:run-commands", "configurations": {} }, "clean": { "options": { "cwd": ".", "command": "gradlew clean" }, "cache": true, "executor": "nx:run-commands", "configurations": {} }, "jar": { "options": { "cwd": ".", "command": "gradlew jar" }, "cache": true, "executor": "nx:run-commands", "configurations": {} }, "resolveMainClassName": { "options": { "cwd": ".", "command": "gradlew resolveMainClassName" }, "cache": true, "executor": "nx:run-commands", "configurations": {} }, "resolveTestMainClassName": { "options": { "cwd": ".", "command": "gradlew resolveTestMainClassName" }, "cache": true, "executor": "nx:run-commands", "configurations": {} }, "testClasses": { "options": { "cwd": ".", "command": "gradlew testClasses" }, "cache": 
true, "executor": "nx:run-commands", "configurations": {} }, "init": { "options": { "cwd": ".", "command": "gradlew init" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "wrapper": { "options": { "cwd": ".", "command": "gradlew wrapper" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "javadoc": { "options": { "cwd": ".", "command": "gradlew javadoc" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "buildEnvironment": { "options": { "cwd": ".", "command": "gradlew buildEnvironment" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "dependencies": { "options": { "cwd": ".", "command": "gradlew dependencies" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "dependencyInsight": { "options": { "cwd": ".", "command": "gradlew dependencyInsight" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "dependencyManagement": { "options": { "cwd": ".", "command": "gradlew dependencyManagement" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "help": { "options": { "cwd": ".", "command": "gradlew help" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "javaToolchains": { "options": { "cwd": ".", "command": "gradlew javaToolchains" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "outgoingVariants": { "options": { "cwd": ".", "command": "gradlew outgoingVariants" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "projects": { "options": { "cwd": ".", "command": "gradlew projects" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "properties": { "options": { "cwd": ".", "command": "gradlew properties" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "resolvableConfigurations": { "options": { "cwd": ".", "command": "gradlew resolvableConfigurations" }, "cache": false, "executor": 
"nx:run-commands", "configurations": {} }, "tasks": { "options": { "cwd": ".", "command": "gradlew tasks" }, "cache": false, "executor": "nx:run-commands", "configurations": {} }, "projectReport": { "options": { "cwd": ".", "command": "gradlew projectReport" }, "cache": false, "outputs": ["{workspaceRoot}/build/reports/project"], "executor": "nx:run-commands", "configurations": {} }, "check": { "options": { "cwd": ".", "command": "gradlew check" }, "cache": true, "executor": "nx:run-commands", "configurations": {} }, "test": { "options": { "cwd": ".", "command": "gradlew test" }, "cache": true, "inputs": ["default", "^production"], "dependsOn": ["classes"], "executor": "nx:run-commands", "configurations": {} } }, "implicitDependencies": [], "tags": [] } } ], "dependencies": { "application": [ { "source": "application", "target": "library", "type": "static" } ], "library": [], "gradle-tutorial": [ { "source": "gradle-tutorial", "target": "application", "type": "static" }, { "source": "gradle-tutorial", "target": "library", "type": "static" } ] }, "fileMap": { "gradle-tutorial": [ { "file": ".gitignore", "hash": "14909406639319951657" }, { "file": ".mvn/wrapper/maven-wrapper.jar", "hash": "4599048718252132452" }, { "file": ".mvn/wrapper/maven-wrapper.properties", "hash": "14062264138023377782" }, { "file": ".nx/nxw.js", "hash": "11254603345854076312" }, { "file": "README.md", "hash": "2143394647937164316" }, { "file": "build.gradle", "hash": "6776876968973565169" }, { "file": "gradle/wrapper/gradle-wrapper.jar", "hash": "4964452923295747250" }, { "file": "gradle/wrapper/gradle-wrapper.properties", "hash": "6992607940248896982" }, { "file": "gradlew", "hash": "6979324130797723579" }, { "file": "gradlew.bat", "hash": "8837338129170646035" }, { "file": "nx", "hash": "17165975525224888679" }, { "file": "nx.bat", "hash": "16828809655253213278" }, { "file": "nx.json", "hash": "16332841804597347663" }, { "file": "settings.gradle", "hash": "5690477904066095133" } ], 
"application": [ { "file": "application/build.gradle", "hash": "13223387111008998466", "deps": ["library"] }, { "file": "application/settings.gradle", "hash": "18286272227920383284" }, { "file": "application/src/main/java/com/example/multimodule/application/DemoApplication.java", "hash": "117361831080317523" }, { "file": "application/src/main/resources/application.properties", "hash": "18341909134911292471" }, { "file": "application/src/test/java/com/example/multimodule/application/DemoApplicationTest.java", "hash": "12794301020412404095" } ], "library": [ { "file": "library/build.gradle", "hash": "10874096726061397019" }, { "file": "library/settings.gradle", "hash": "13735992102452590570" }, { "file": "library/src/main/java/com/example/multimodule/service/MyService.java", "hash": "1785389597461451392" }, { "file": "library/src/main/java/com/example/multimodule/service/ServiceProperties.java", "hash": "5222601539204689670" }, { "file": "library/src/test/java/com/example/multimodule/service/MyServiceTest.java", "hash": "7173828893536126792" } ] }, "workspaceLayout": { "appsDir": "apps", "libsDir": "libs" }, "affectedProjectIds": [], "focus": null, "groupByFolder": false, "exclude": [], "isPartial": false } ``` {% /graph %} ## Running tasks Nx is a task runner built for monorepos. It can run a single task for a single project, a task for all projects, and even intelligently run a subset of tasks based on the changes you've made in your repository. Nx also has sophisticated Computation caching to reuse the results of tasks. Explore how Nx adds to the task running Gradle provides. Before we start running tasks, let's explore the tasks available for the `application` project. The `@nx/gradle` plugin that we've installed reflects Gradle's tasks to Nx, which allows it to run any of the Gradle tasks defined for that project. 
You can view the available tasks either through [Nx Console](/docs/getting-started/editor-setup) or from the terminal: ```shell nx show project application ``` {% project_details title="Project Details View" expandedTargets=["build"] height="520px" %} ```json { "project": { "name": "application", "type": "lib", "data": { "root": "application", "name": "application", "metadata": { "targetGroups": { "Application": ["bootRun", "bootTestRun"], "Build": [ "assemble", "bootBuildImage", "bootJar", "build", "buildDependents", "buildNeeded", "classes", "clean", "jar", "resolveMainClassName", "resolveTestMainClassName", "testClasses" ], "Documentation": ["javadoc"], "Help": [ "buildEnvironment", "dependencies", "dependencyInsight", "dependencyManagement", "help", "javaToolchains", "outgoingVariants", "projects", "properties", "resolvableConfigurations", "tasks" ], "Reporting": ["projectReport"], "Verification": ["check", "test"] }, "technologies": ["gradle"] }, "targets": { "bootRun": { "cache": false, "metadata": { "technologies": ["gradle"] }, "executor": "nx:run-commands", "options": { "command": "./gradlew :application:bootRun" }, "configurations": {} }, "bootTestRun": { "cache": false, "metadata": { "technologies": ["gradle"] }, "executor": "nx:run-commands", "options": { "command": "./gradlew :application:bootTestRun" }, "configurations": {} }, "assemble": { "cache": true, "metadata": { "technologies": ["gradle"] }, "executor": "nx:run-commands", "options": { "command": "./gradlew :application:assemble" }, "configurations": {} }, "bootBuildImage": { "cache": true, "metadata": { "technologies": ["gradle"] }, "executor": "nx:run-commands", "options": { "command": "./gradlew :application:bootBuildImage" }, "configurations": {} }, "bootJar": { "cache": true, "metadata": { "technologies": ["gradle"] }, "executor": "nx:run-commands", "options": { "command": "./gradlew :application:bootJar" }, "configurations": {} }, "build": { "cache": true, "inputs": ["production", 
"^production"], "outputs": ["{workspaceRoot}/application/build"], "dependsOn": ["^build", "classes"], "metadata": { "technologies": ["gradle"] }, "executor": "nx:run-commands", "options": { "command": "./gradlew :application:build" }, "configurations": {} }, "buildDependents": { "cache": true, "metadata": { "technologies": ["gradle"] }, "executor": "nx:run-commands", "options": { "command": "./gradlew :application:buildDependents" }, "configurations": {} }, "buildNeeded": { "cache": true, "metadata": { "technologies": ["gradle"] }, "executor": "nx:run-commands", "options": { "command": "./gradlew :application:buildNeeded" }, "configurations": {} }, "classes": { "cache": true, "inputs": ["production", "^production"], "outputs": ["{workspaceRoot}/application/build/classes"], "dependsOn": ["^classes"], "metadata": { "technologies": ["gradle"] }, "executor": "nx:run-commands", "options": { "command": "./gradlew :application:classes" }, "configurations": {} }, "clean": { "cache": true, "metadata": { "technologies": ["gradle"] }, "executor": "nx:run-commands", "options": { "command": "./gradlew :application:clean" }, "configurations": {} }, "jar": { "cache": true, "metadata": { "technologies": ["gradle"] }, "executor": "nx:run-commands", "options": { "command": "./gradlew :application:jar" }, "configurations": {} }, "resolveMainClassName": { "cache": true, "metadata": { "technologies": ["gradle"] }, "executor": "nx:run-commands", "options": { "command": "./gradlew :application:resolveMainClassName" }, "configurations": {} }, "resolveTestMainClassName": { "cache": true, "metadata": { "technologies": ["gradle"] }, "executor": "nx:run-commands", "options": { "command": "./gradlew :application:resolveTestMainClassName" }, "configurations": {} }, "testClasses": { "cache": true, "metadata": { "technologies": ["gradle"] }, "executor": "nx:run-commands", "options": { "command": "./gradlew :application:testClasses" }, "configurations": {} }, "javadoc": { "cache": false, 
"metadata": { "technologies": ["gradle"] }, "executor": "nx:run-commands", "options": { "command": "./gradlew :application:javadoc" }, "configurations": {} }, "buildEnvironment": { "cache": false, "metadata": { "technologies": ["gradle"] }, "executor": "nx:run-commands", "options": { "command": "./gradlew :application:buildEnvironment" }, "configurations": {} }, "dependencies": { "cache": false, "metadata": { "technologies": ["gradle"] }, "executor": "nx:run-commands", "options": { "command": "./gradlew :application:dependencies" }, "configurations": {} }, "dependencyInsight": { "cache": false, "metadata": { "technologies": ["gradle"] }, "executor": "nx:run-commands", "options": { "command": "./gradlew :application:dependencyInsight" }, "configurations": {} }, "dependencyManagement": { "cache": false, "metadata": { "technologies": ["gradle"] }, "executor": "nx:run-commands", "options": { "command": "./gradlew :application:dependencyManagement" }, "configurations": {} }, "help": { "cache": false, "metadata": { "technologies": ["gradle"] }, "executor": "nx:run-commands", "options": { "command": "./gradlew :application:help" }, "configurations": {} }, "javaToolchains": { "cache": false, "metadata": { "technologies": ["gradle"] }, "executor": "nx:run-commands", "options": { "command": "./gradlew :application:javaToolchains" }, "configurations": {} }, "outgoingVariants": { "cache": false, "metadata": { "technologies": ["gradle"] }, "executor": "nx:run-commands", "options": { "command": "./gradlew :application:outgoingVariants" }, "configurations": {} }, "projects": { "cache": false, "metadata": { "technologies": ["gradle"] }, "executor": "nx:run-commands", "options": { "command": "./gradlew :application:projects" }, "configurations": {} }, "properties": { "cache": false, "metadata": { "technologies": ["gradle"] }, "executor": "nx:run-commands", "options": { "command": "./gradlew :application:properties" }, "configurations": {} }, "resolvableConfigurations": { "cache": 
false, "metadata": { "technologies": ["gradle"] }, "executor": "nx:run-commands", "options": { "command": "./gradlew :application:resolvableConfigurations" }, "configurations": {} }, "tasks": { "cache": false, "metadata": { "technologies": ["gradle"] }, "executor": "nx:run-commands", "options": { "command": "./gradlew :application:tasks" }, "configurations": {} }, "projectReport": { "cache": false, "outputs": ["{workspaceRoot}/application/build/reports/project"], "metadata": { "technologies": ["gradle"] }, "executor": "nx:run-commands", "options": { "command": "./gradlew :application:projectReport" }, "configurations": {} }, "check": { "cache": true, "metadata": { "technologies": ["gradle"] }, "executor": "nx:run-commands", "options": { "command": "./gradlew :application:check" }, "configurations": {} }, "test": { "cache": true, "inputs": ["default", "^production"], "dependsOn": ["classes"], "metadata": { "technologies": ["gradle"] }, "executor": "nx:run-commands", "options": { "command": "./gradlew :application:test" }, "configurations": {} } }, "implicitDependencies": [], "tags": [] } }, "sourceMap": { "root": ["application/build.gradle", "@nx/gradle"], "name": ["application/build.gradle", "@nx/gradle"], "metadata.targetGroups": ["application/build.gradle", "@nx/gradle"], "metadata.targetGroups.Application": [ "application/build.gradle", "@nx/gradle" ], "metadata.targetGroups.Application.0": [ "application/build.gradle", "@nx/gradle" ], "metadata.targetGroups.Application.1": [ "application/build.gradle", "@nx/gradle" ], "metadata.targetGroups.Build": ["application/build.gradle", "@nx/gradle"], "metadata.targetGroups.Build.0": ["application/build.gradle", "@nx/gradle"], "metadata.targetGroups.Build.1": ["application/build.gradle", "@nx/gradle"], "metadata.targetGroups.Build.2": ["application/build.gradle", "@nx/gradle"], "metadata.targetGroups.Build.3": ["application/build.gradle", "@nx/gradle"], "metadata.targetGroups.Build.4": ["application/build.gradle", 
"@nx/gradle"], "metadata.targetGroups.Build.5": ["application/build.gradle", "@nx/gradle"], "metadata.targetGroups.Build.6": ["application/build.gradle", "@nx/gradle"], "metadata.targetGroups.Build.7": ["application/build.gradle", "@nx/gradle"], "metadata.targetGroups.Build.8": ["application/build.gradle", "@nx/gradle"], "metadata.targetGroups.Build.9": ["application/build.gradle", "@nx/gradle"], "metadata.targetGroups.Build.10": [ "application/build.gradle", "@nx/gradle" ], "metadata.targetGroups.Build.11": [ "application/build.gradle", "@nx/gradle" ], "metadata.targetGroups.Documentation": [ "application/build.gradle", "@nx/gradle" ], "metadata.targetGroups.Documentation.0": [ "application/build.gradle", "@nx/gradle" ], "metadata.targetGroups.Help": ["application/build.gradle", "@nx/gradle"], "metadata.targetGroups.Help.0": ["application/build.gradle", "@nx/gradle"], "metadata.targetGroups.Help.1": ["application/build.gradle", "@nx/gradle"], "metadata.targetGroups.Help.2": ["application/build.gradle", "@nx/gradle"], "metadata.targetGroups.Help.3": ["application/build.gradle", "@nx/gradle"], "metadata.targetGroups.Help.4": ["application/build.gradle", "@nx/gradle"], "metadata.targetGroups.Help.5": ["application/build.gradle", "@nx/gradle"], "metadata.targetGroups.Help.6": ["application/build.gradle", "@nx/gradle"], "metadata.targetGroups.Help.7": ["application/build.gradle", "@nx/gradle"], "metadata.targetGroups.Help.8": ["application/build.gradle", "@nx/gradle"], "metadata.targetGroups.Help.9": ["application/build.gradle", "@nx/gradle"], "metadata.targetGroups.Help.10": ["application/build.gradle", "@nx/gradle"], "metadata.targetGroups.Reporting": [ "application/build.gradle", "@nx/gradle" ], "metadata.targetGroups.Reporting.0": [ "application/build.gradle", "@nx/gradle" ], "metadata.targetGroups.Verification": [ "application/build.gradle", "@nx/gradle" ], "metadata.targetGroups.Verification.0": [ "application/build.gradle", "@nx/gradle" ], 
"metadata.targetGroups.Verification.1": [ "application/build.gradle", "@nx/gradle" ], "metadata.technologies": ["application/build.gradle", "@nx/gradle"], "metadata.technologies.0": ["application/build.gradle", "@nx/gradle"], "targets": ["application/build.gradle", "@nx/gradle"], "targets.bootRun": ["application/build.gradle", "@nx/gradle"], "targets.bootRun.cache": ["application/build.gradle", "@nx/gradle"], "targets.bootRun.inputs": ["application/build.gradle", "@nx/gradle"], "targets.bootRun.outputs": ["application/build.gradle", "@nx/gradle"], "targets.bootRun.dependsOn": ["application/build.gradle", "@nx/gradle"], "targets.bootRun.metadata": ["application/build.gradle", "@nx/gradle"], "targets.bootRun.executor": ["application/build.gradle", "@nx/gradle"], "targets.bootRun.options": ["application/build.gradle", "@nx/gradle"], "targets.bootRun.options.command": [ "application/build.gradle", "@nx/gradle" ], "targets.bootRun.metadata.technologies": [ "application/build.gradle", "@nx/gradle" ], "targets.bootRun.metadata.technologies.0": [ "application/build.gradle", "@nx/gradle" ], "targets.bootTestRun": ["application/build.gradle", "@nx/gradle"], "targets.bootTestRun.cache": ["application/build.gradle", "@nx/gradle"], "targets.bootTestRun.inputs": ["application/build.gradle", "@nx/gradle"], "targets.bootTestRun.outputs": ["application/build.gradle", "@nx/gradle"], "targets.bootTestRun.dependsOn": ["application/build.gradle", "@nx/gradle"], "targets.bootTestRun.metadata": ["application/build.gradle", "@nx/gradle"], "targets.bootTestRun.executor": ["application/build.gradle", "@nx/gradle"], "targets.bootTestRun.options": ["application/build.gradle", "@nx/gradle"], "targets.bootTestRun.options.command": [ "application/build.gradle", "@nx/gradle" ], "targets.bootTestRun.metadata.technologies": [ "application/build.gradle", "@nx/gradle" ], "targets.bootTestRun.metadata.technologies.0": [ "application/build.gradle", "@nx/gradle" ], "targets.assemble": 
["application/build.gradle", "@nx/gradle"], "targets.assemble.cache": ["application/build.gradle", "@nx/gradle"], "targets.assemble.inputs": ["application/build.gradle", "@nx/gradle"], "targets.assemble.outputs": ["application/build.gradle", "@nx/gradle"], "targets.assemble.dependsOn": ["application/build.gradle", "@nx/gradle"], "targets.assemble.metadata": ["application/build.gradle", "@nx/gradle"], "targets.assemble.executor": ["application/build.gradle", "@nx/gradle"], "targets.assemble.options": ["application/build.gradle", "@nx/gradle"], "targets.assemble.options.command": [ "application/build.gradle", "@nx/gradle" ], "targets.assemble.metadata.technologies": [ "application/build.gradle", "@nx/gradle" ], "targets.assemble.metadata.technologies.0": [ "application/build.gradle", "@nx/gradle" ], "targets.bootBuildImage": ["application/build.gradle", "@nx/gradle"], "targets.bootBuildImage.cache": ["application/build.gradle", "@nx/gradle"], "targets.bootBuildImage.inputs": ["application/build.gradle", "@nx/gradle"], "targets.bootBuildImage.outputs": [ "application/build.gradle", "@nx/gradle" ], "targets.bootBuildImage.dependsOn": [ "application/build.gradle", "@nx/gradle" ], "targets.bootBuildImage.metadata": [ "application/build.gradle", "@nx/gradle" ], "targets.bootBuildImage.executor": [ "application/build.gradle", "@nx/gradle" ], "targets.bootBuildImage.options": [ "application/build.gradle", "@nx/gradle" ], "targets.bootBuildImage.options.command": [ "application/build.gradle", "@nx/gradle" ], "targets.bootBuildImage.metadata.technologies": [ "application/build.gradle", "@nx/gradle" ], "targets.bootBuildImage.metadata.technologies.0": [ "application/build.gradle", "@nx/gradle" ], "targets.bootJar": ["application/build.gradle", "@nx/gradle"], "targets.bootJar.cache": ["application/build.gradle", "@nx/gradle"], "targets.bootJar.inputs": ["application/build.gradle", "@nx/gradle"], "targets.bootJar.outputs": ["application/build.gradle", "@nx/gradle"], 
"targets.bootJar.dependsOn": ["application/build.gradle", "@nx/gradle"], "targets.bootJar.metadata": ["application/build.gradle", "@nx/gradle"], "targets.bootJar.executor": ["application/build.gradle", "@nx/gradle"], "targets.bootJar.options": ["application/build.gradle", "@nx/gradle"], "targets.bootJar.options.command": [ "application/build.gradle", "@nx/gradle" ], "targets.bootJar.metadata.technologies": [ "application/build.gradle", "@nx/gradle" ], "targets.bootJar.metadata.technologies.0": [ "application/build.gradle", "@nx/gradle" ], "targets.build": ["application/build.gradle", "@nx/gradle"], "targets.build.cache": ["application/build.gradle", "@nx/gradle"], "targets.build.inputs": ["application/build.gradle", "@nx/gradle"], "targets.build.outputs": ["application/build.gradle", "@nx/gradle"], "targets.build.dependsOn": ["application/build.gradle", "@nx/gradle"], "targets.build.metadata": ["application/build.gradle", "@nx/gradle"], "targets.build.executor": ["application/build.gradle", "@nx/gradle"], "targets.build.options": ["application/build.gradle", "@nx/gradle"], "targets.build.options.command": ["application/build.gradle", "@nx/gradle"], "targets.build.metadata.technologies": [ "application/build.gradle", "@nx/gradle" ], "targets.build.metadata.technologies.0": [ "application/build.gradle", "@nx/gradle" ], "targets.buildDependents": ["application/build.gradle", "@nx/gradle"], "targets.buildDependents.cache": ["application/build.gradle", "@nx/gradle"], "targets.buildDependents.inputs": [ "application/build.gradle", "@nx/gradle" ], "targets.buildDependents.outputs": [ "application/build.gradle", "@nx/gradle" ], "targets.buildDependents.dependsOn": [ "application/build.gradle", "@nx/gradle" ], "targets.buildDependents.metadata": [ "application/build.gradle", "@nx/gradle" ], "targets.buildDependents.executor": [ "application/build.gradle", "@nx/gradle" ], "targets.buildDependents.options": [ "application/build.gradle", "@nx/gradle" ], 
"targets.buildDependents.options.command": [ "application/build.gradle", "@nx/gradle" ], "targets.buildDependents.metadata.technologies": [ "application/build.gradle", "@nx/gradle" ], "targets.buildDependents.metadata.technologies.0": [ "application/build.gradle", "@nx/gradle" ], "targets.buildNeeded": ["application/build.gradle", "@nx/gradle"], "targets.buildNeeded.cache": ["application/build.gradle", "@nx/gradle"], "targets.buildNeeded.inputs": ["application/build.gradle", "@nx/gradle"], "targets.buildNeeded.outputs": ["application/build.gradle", "@nx/gradle"], "targets.buildNeeded.dependsOn": ["application/build.gradle", "@nx/gradle"], "targets.buildNeeded.metadata": ["application/build.gradle", "@nx/gradle"], "targets.buildNeeded.executor": ["application/build.gradle", "@nx/gradle"], "targets.buildNeeded.options": ["application/build.gradle", "@nx/gradle"], "targets.buildNeeded.options.command": [ "application/build.gradle", "@nx/gradle" ], "targets.buildNeeded.metadata.technologies": [ "application/build.gradle", "@nx/gradle" ], "targets.buildNeeded.metadata.technologies.0": [ "application/build.gradle", "@nx/gradle" ], "targets.classes": ["application/build.gradle", "@nx/gradle"], "targets.classes.cache": ["application/build.gradle", "@nx/gradle"], "targets.classes.inputs": ["application/build.gradle", "@nx/gradle"], "targets.classes.outputs": ["application/build.gradle", "@nx/gradle"], "targets.classes.dependsOn": ["application/build.gradle", "@nx/gradle"], "targets.classes.metadata": ["application/build.gradle", "@nx/gradle"], "targets.classes.executor": ["application/build.gradle", "@nx/gradle"], "targets.classes.options": ["application/build.gradle", "@nx/gradle"], "targets.classes.options.command": [ "application/build.gradle", "@nx/gradle" ], "targets.classes.metadata.technologies": [ "application/build.gradle", "@nx/gradle" ], "targets.classes.metadata.technologies.0": [ "application/build.gradle", "@nx/gradle" ], "targets.clean": 
["application/build.gradle", "@nx/gradle"], "targets.clean.cache": ["application/build.gradle", "@nx/gradle"], "targets.clean.inputs": ["application/build.gradle", "@nx/gradle"], "targets.clean.outputs": ["application/build.gradle", "@nx/gradle"], "targets.clean.dependsOn": ["application/build.gradle", "@nx/gradle"], "targets.clean.metadata": ["application/build.gradle", "@nx/gradle"], "targets.clean.executor": ["application/build.gradle", "@nx/gradle"], "targets.clean.options": ["application/build.gradle", "@nx/gradle"], "targets.clean.options.command": ["application/build.gradle", "@nx/gradle"], "targets.clean.metadata.technologies": [ "application/build.gradle", "@nx/gradle" ], "targets.clean.metadata.technologies.0": [ "application/build.gradle", "@nx/gradle" ], "targets.jar": ["application/build.gradle", "@nx/gradle"], "targets.jar.cache": ["application/build.gradle", "@nx/gradle"], "targets.jar.inputs": ["application/build.gradle", "@nx/gradle"], "targets.jar.outputs": ["application/build.gradle", "@nx/gradle"], "targets.jar.dependsOn": ["application/build.gradle", "@nx/gradle"], "targets.jar.metadata": ["application/build.gradle", "@nx/gradle"], "targets.jar.executor": ["application/build.gradle", "@nx/gradle"], "targets.jar.options": ["application/build.gradle", "@nx/gradle"], "targets.jar.options.command": ["application/build.gradle", "@nx/gradle"], "targets.jar.metadata.technologies": [ "application/build.gradle", "@nx/gradle" ], "targets.jar.metadata.technologies.0": [ "application/build.gradle", "@nx/gradle" ], "targets.resolveMainClassName": ["application/build.gradle", "@nx/gradle"], "targets.resolveMainClassName.cache": [ "application/build.gradle", "@nx/gradle" ], "targets.resolveMainClassName.inputs": [ "application/build.gradle", "@nx/gradle" ], "targets.resolveMainClassName.outputs": [ "application/build.gradle", "@nx/gradle" ], "targets.resolveMainClassName.dependsOn": [ "application/build.gradle", "@nx/gradle" ], 
"targets.resolveMainClassName.metadata": [ "application/build.gradle", "@nx/gradle" ], "targets.resolveMainClassName.executor": [ "application/build.gradle", "@nx/gradle" ], "targets.resolveMainClassName.options": [ "application/build.gradle", "@nx/gradle" ], "targets.resolveMainClassName.options.command": [ "application/build.gradle", "@nx/gradle" ], "targets.resolveMainClassName.metadata.technologies": [ "application/build.gradle", "@nx/gradle" ], "targets.resolveMainClassName.metadata.technologies.0": [ "application/build.gradle", "@nx/gradle" ], "targets.resolveTestMainClassName": [ "application/build.gradle", "@nx/gradle" ], "targets.resolveTestMainClassName.cache": [ "application/build.gradle", "@nx/gradle" ], "targets.resolveTestMainClassName.inputs": [ "application/build.gradle", "@nx/gradle" ], "targets.resolveTestMainClassName.outputs": [ "application/build.gradle", "@nx/gradle" ], "targets.resolveTestMainClassName.dependsOn": [ "application/build.gradle", "@nx/gradle" ], "targets.resolveTestMainClassName.metadata": [ "application/build.gradle", "@nx/gradle" ], "targets.resolveTestMainClassName.executor": [ "application/build.gradle", "@nx/gradle" ], "targets.resolveTestMainClassName.options": [ "application/build.gradle", "@nx/gradle" ], "targets.resolveTestMainClassName.options.command": [ "application/build.gradle", "@nx/gradle" ], "targets.resolveTestMainClassName.metadata.technologies": [ "application/build.gradle", "@nx/gradle" ], "targets.resolveTestMainClassName.metadata.technologies.0": [ "application/build.gradle", "@nx/gradle" ], "targets.testClasses": ["application/build.gradle", "@nx/gradle"], "targets.testClasses.cache": ["application/build.gradle", "@nx/gradle"], "targets.testClasses.inputs": ["application/build.gradle", "@nx/gradle"], "targets.testClasses.outputs": ["application/build.gradle", "@nx/gradle"], "targets.testClasses.dependsOn": ["application/build.gradle", "@nx/gradle"], "targets.testClasses.metadata": 
["application/build.gradle", "@nx/gradle"], "targets.testClasses.executor": ["application/build.gradle", "@nx/gradle"], "targets.testClasses.options": ["application/build.gradle", "@nx/gradle"], "targets.testClasses.options.command": [ "application/build.gradle", "@nx/gradle" ], "targets.testClasses.metadata.technologies": [ "application/build.gradle", "@nx/gradle" ], "targets.testClasses.metadata.technologies.0": [ "application/build.gradle", "@nx/gradle" ], "targets.javadoc": ["application/build.gradle", "@nx/gradle"], "targets.javadoc.cache": ["application/build.gradle", "@nx/gradle"], "targets.javadoc.inputs": ["application/build.gradle", "@nx/gradle"], "targets.javadoc.outputs": ["application/build.gradle", "@nx/gradle"], "targets.javadoc.dependsOn": ["application/build.gradle", "@nx/gradle"], "targets.javadoc.metadata": ["application/build.gradle", "@nx/gradle"], "targets.javadoc.executor": ["application/build.gradle", "@nx/gradle"], "targets.javadoc.options": ["application/build.gradle", "@nx/gradle"], "targets.javadoc.options.command": [ "application/build.gradle", "@nx/gradle" ], "targets.javadoc.metadata.technologies": [ "application/build.gradle", "@nx/gradle" ], "targets.javadoc.metadata.technologies.0": [ "application/build.gradle", "@nx/gradle" ], "targets.buildEnvironment": ["application/build.gradle", "@nx/gradle"], "targets.buildEnvironment.cache": [ "application/build.gradle", "@nx/gradle" ], "targets.buildEnvironment.inputs": [ "application/build.gradle", "@nx/gradle" ], "targets.buildEnvironment.outputs": [ "application/build.gradle", "@nx/gradle" ], "targets.buildEnvironment.dependsOn": [ "application/build.gradle", "@nx/gradle" ], "targets.buildEnvironment.metadata": [ "application/build.gradle", "@nx/gradle" ], "targets.buildEnvironment.executor": [ "application/build.gradle", "@nx/gradle" ], "targets.buildEnvironment.options": [ "application/build.gradle", "@nx/gradle" ], "targets.buildEnvironment.options.command": [ 
"application/build.gradle", "@nx/gradle" ], "targets.buildEnvironment.metadata.technologies": [ "application/build.gradle", "@nx/gradle" ], "targets.buildEnvironment.metadata.technologies.0": [ "application/build.gradle", "@nx/gradle" ], "targets.dependencies": ["application/build.gradle", "@nx/gradle"], "targets.dependencies.cache": ["application/build.gradle", "@nx/gradle"], "targets.dependencies.inputs": ["application/build.gradle", "@nx/gradle"], "targets.dependencies.outputs": ["application/build.gradle", "@nx/gradle"], "targets.dependencies.dependsOn": [ "application/build.gradle", "@nx/gradle" ], "targets.dependencies.metadata": ["application/build.gradle", "@nx/gradle"], "targets.dependencies.executor": ["application/build.gradle", "@nx/gradle"], "targets.dependencies.options": ["application/build.gradle", "@nx/gradle"], "targets.dependencies.options.command": [ "application/build.gradle", "@nx/gradle" ], "targets.dependencies.metadata.technologies": [ "application/build.gradle", "@nx/gradle" ], "targets.dependencies.metadata.technologies.0": [ "application/build.gradle", "@nx/gradle" ], "targets.dependencyInsight": ["application/build.gradle", "@nx/gradle"], "targets.dependencyInsight.cache": [ "application/build.gradle", "@nx/gradle" ], "targets.dependencyInsight.inputs": [ "application/build.gradle", "@nx/gradle" ], "targets.dependencyInsight.outputs": [ "application/build.gradle", "@nx/gradle" ], "targets.dependencyInsight.dependsOn": [ "application/build.gradle", "@nx/gradle" ], "targets.dependencyInsight.metadata": [ "application/build.gradle", "@nx/gradle" ], "targets.dependencyInsight.executor": [ "application/build.gradle", "@nx/gradle" ], "targets.dependencyInsight.options": [ "application/build.gradle", "@nx/gradle" ], "targets.dependencyInsight.options.command": [ "application/build.gradle", "@nx/gradle" ], "targets.dependencyInsight.metadata.technologies": [ "application/build.gradle", "@nx/gradle" ], 
"targets.dependencyInsight.metadata.technologies.0": [ "application/build.gradle", "@nx/gradle" ], "targets.dependencyManagement": ["application/build.gradle", "@nx/gradle"], "targets.dependencyManagement.cache": [ "application/build.gradle", "@nx/gradle" ], "targets.dependencyManagement.inputs": [ "application/build.gradle", "@nx/gradle" ], "targets.dependencyManagement.outputs": [ "application/build.gradle", "@nx/gradle" ], "targets.dependencyManagement.dependsOn": [ "application/build.gradle", "@nx/gradle" ], "targets.dependencyManagement.metadata": [ "application/build.gradle", "@nx/gradle" ], "targets.dependencyManagement.executor": [ "application/build.gradle", "@nx/gradle" ], "targets.dependencyManagement.options": [ "application/build.gradle", "@nx/gradle" ], "targets.dependencyManagement.options.command": [ "application/build.gradle", "@nx/gradle" ], "targets.dependencyManagement.metadata.technologies": [ "application/build.gradle", "@nx/gradle" ], "targets.dependencyManagement.metadata.technologies.0": [ "application/build.gradle", "@nx/gradle" ], "targets.help": ["application/build.gradle", "@nx/gradle"], "targets.help.cache": ["application/build.gradle", "@nx/gradle"], "targets.help.inputs": ["application/build.gradle", "@nx/gradle"], "targets.help.outputs": ["application/build.gradle", "@nx/gradle"], "targets.help.dependsOn": ["application/build.gradle", "@nx/gradle"], "targets.help.metadata": ["application/build.gradle", "@nx/gradle"], "targets.help.executor": ["application/build.gradle", "@nx/gradle"], "targets.help.options": ["application/build.gradle", "@nx/gradle"], "targets.help.options.command": ["application/build.gradle", "@nx/gradle"], "targets.help.metadata.technologies": [ "application/build.gradle", "@nx/gradle" ], "targets.help.metadata.technologies.0": [ "application/build.gradle", "@nx/gradle" ], "targets.javaToolchains": ["application/build.gradle", "@nx/gradle"], "targets.javaToolchains.cache": ["application/build.gradle", 
"@nx/gradle"], "targets.javaToolchains.inputs": ["application/build.gradle", "@nx/gradle"], "targets.javaToolchains.outputs": [ "application/build.gradle", "@nx/gradle" ], "targets.javaToolchains.dependsOn": [ "application/build.gradle", "@nx/gradle" ], "targets.javaToolchains.metadata": [ "application/build.gradle", "@nx/gradle" ], "targets.javaToolchains.executor": [ "application/build.gradle", "@nx/gradle" ], "targets.javaToolchains.options": [ "application/build.gradle", "@nx/gradle" ], "targets.javaToolchains.options.command": [ "application/build.gradle", "@nx/gradle" ], "targets.javaToolchains.metadata.technologies": [ "application/build.gradle", "@nx/gradle" ], "targets.javaToolchains.metadata.technologies.0": [ "application/build.gradle", "@nx/gradle" ], "targets.outgoingVariants": ["application/build.gradle", "@nx/gradle"], "targets.outgoingVariants.cache": [ "application/build.gradle", "@nx/gradle" ], "targets.outgoingVariants.inputs": [ "application/build.gradle", "@nx/gradle" ], "targets.outgoingVariants.outputs": [ "application/build.gradle", "@nx/gradle" ], "targets.outgoingVariants.dependsOn": [ "application/build.gradle", "@nx/gradle" ], "targets.outgoingVariants.metadata": [ "application/build.gradle", "@nx/gradle" ], "targets.outgoingVariants.executor": [ "application/build.gradle", "@nx/gradle" ], "targets.outgoingVariants.options": [ "application/build.gradle", "@nx/gradle" ], "targets.outgoingVariants.options.command": [ "application/build.gradle", "@nx/gradle" ], "targets.outgoingVariants.metadata.technologies": [ "application/build.gradle", "@nx/gradle" ], "targets.outgoingVariants.metadata.technologies.0": [ "application/build.gradle", "@nx/gradle" ], "targets.projects": ["application/build.gradle", "@nx/gradle"], "targets.projects.cache": ["application/build.gradle", "@nx/gradle"], "targets.projects.inputs": ["application/build.gradle", "@nx/gradle"], "targets.projects.outputs": ["application/build.gradle", "@nx/gradle"], 
"targets.projects.dependsOn": ["application/build.gradle", "@nx/gradle"], "targets.projects.metadata": ["application/build.gradle", "@nx/gradle"], "targets.projects.executor": ["application/build.gradle", "@nx/gradle"], "targets.projects.options": ["application/build.gradle", "@nx/gradle"], "targets.projects.options.command": [ "application/build.gradle", "@nx/gradle" ], "targets.projects.metadata.technologies": [ "application/build.gradle", "@nx/gradle" ], "targets.projects.metadata.technologies.0": [ "application/build.gradle", "@nx/gradle" ], "targets.properties": ["application/build.gradle", "@nx/gradle"], "targets.properties.cache": ["application/build.gradle", "@nx/gradle"], "targets.properties.inputs": ["application/build.gradle", "@nx/gradle"], "targets.properties.outputs": ["application/build.gradle", "@nx/gradle"], "targets.properties.dependsOn": ["application/build.gradle", "@nx/gradle"], "targets.properties.metadata": ["application/build.gradle", "@nx/gradle"], "targets.properties.executor": ["application/build.gradle", "@nx/gradle"], "targets.properties.options": ["application/build.gradle", "@nx/gradle"], "targets.properties.options.command": [ "application/build.gradle", "@nx/gradle" ], "targets.properties.metadata.technologies": [ "application/build.gradle", "@nx/gradle" ], "targets.properties.metadata.technologies.0": [ "application/build.gradle", "@nx/gradle" ], "targets.resolvableConfigurations": [ "application/build.gradle", "@nx/gradle" ], "targets.resolvableConfigurations.cache": [ "application/build.gradle", "@nx/gradle" ], "targets.resolvableConfigurations.inputs": [ "application/build.gradle", "@nx/gradle" ], "targets.resolvableConfigurations.outputs": [ "application/build.gradle", "@nx/gradle" ], "targets.resolvableConfigurations.dependsOn": [ "application/build.gradle", "@nx/gradle" ], "targets.resolvableConfigurations.metadata": [ "application/build.gradle", "@nx/gradle" ], "targets.resolvableConfigurations.executor": [ 
"application/build.gradle", "@nx/gradle" ], "targets.resolvableConfigurations.options": [ "application/build.gradle", "@nx/gradle" ], "targets.resolvableConfigurations.options.command": [ "application/build.gradle", "@nx/gradle" ], "targets.resolvableConfigurations.metadata.technologies": [ "application/build.gradle", "@nx/gradle" ], "targets.resolvableConfigurations.metadata.technologies.0": [ "application/build.gradle", "@nx/gradle" ], "targets.tasks": ["application/build.gradle", "@nx/gradle"], "targets.tasks.cache": ["application/build.gradle", "@nx/gradle"], "targets.tasks.inputs": ["application/build.gradle", "@nx/gradle"], "targets.tasks.outputs": ["application/build.gradle", "@nx/gradle"], "targets.tasks.dependsOn": ["application/build.gradle", "@nx/gradle"], "targets.tasks.metadata": ["application/build.gradle", "@nx/gradle"], "targets.tasks.executor": ["application/build.gradle", "@nx/gradle"], "targets.tasks.options": ["application/build.gradle", "@nx/gradle"], "targets.tasks.options.command": ["application/build.gradle", "@nx/gradle"], "targets.tasks.metadata.technologies": [ "application/build.gradle", "@nx/gradle" ], "targets.tasks.metadata.technologies.0": [ "application/build.gradle", "@nx/gradle" ], "targets.projectReport": ["application/build.gradle", "@nx/gradle"], "targets.projectReport.cache": ["application/build.gradle", "@nx/gradle"], "targets.projectReport.inputs": ["application/build.gradle", "@nx/gradle"], "targets.projectReport.outputs": ["application/build.gradle", "@nx/gradle"], "targets.projectReport.dependsOn": [ "application/build.gradle", "@nx/gradle" ], "targets.projectReport.metadata": [ "application/build.gradle", "@nx/gradle" ], "targets.projectReport.executor": [ "application/build.gradle", "@nx/gradle" ], "targets.projectReport.options": ["application/build.gradle", "@nx/gradle"], "targets.projectReport.options.command": [ "application/build.gradle", "@nx/gradle" ], "targets.projectReport.metadata.technologies": [ 
"application/build.gradle", "@nx/gradle" ], "targets.projectReport.metadata.technologies.0": [ "application/build.gradle", "@nx/gradle" ], "targets.check": ["application/build.gradle", "@nx/gradle"], "targets.check.cache": ["application/build.gradle", "@nx/gradle"], "targets.check.inputs": ["application/build.gradle", "@nx/gradle"], "targets.check.outputs": ["application/build.gradle", "@nx/gradle"], "targets.check.dependsOn": ["application/build.gradle", "@nx/gradle"], "targets.check.metadata": ["application/build.gradle", "@nx/gradle"], "targets.check.executor": ["application/build.gradle", "@nx/gradle"], "targets.check.options": ["application/build.gradle", "@nx/gradle"], "targets.check.options.command": ["application/build.gradle", "@nx/gradle"], "targets.check.metadata.technologies": [ "application/build.gradle", "@nx/gradle" ], "targets.check.metadata.technologies.0": [ "application/build.gradle", "@nx/gradle" ], "targets.test": ["application/build.gradle", "@nx/gradle"], "targets.test.cache": ["application/build.gradle", "@nx/gradle"], "targets.test.inputs": ["application/build.gradle", "@nx/gradle"], "targets.test.outputs": ["application/build.gradle", "@nx/gradle"], "targets.test.dependsOn": ["application/build.gradle", "@nx/gradle"], "targets.test.metadata": ["application/build.gradle", "@nx/gradle"], "targets.test.executor": ["application/build.gradle", "@nx/gradle"], "targets.test.options": ["application/build.gradle", "@nx/gradle"], "targets.test.options.command": ["application/build.gradle", "@nx/gradle"], "targets.test.metadata.technologies": [ "application/build.gradle", "@nx/gradle" ], "targets.test.metadata.technologies.0": [ "application/build.gradle", "@nx/gradle" ] } } ``` {% /project_details %} The Nx command to run the `build` task for the `application` project is: ```shell nx run application:build ``` When Nx runs a Gradle task, it hands off the execution of that task to Gradle, so all task dependencies and configuration settings in the 
Gradle configuration are still respected. By running the task via Nx, however, the task's result was cached for reuse. Running `nx run application:build` again will now complete almost instantly, as the result from the previous execution will be used. ```text {% title="nx run application:build"%} ✔ 1/1 dependent project tasks succeeded [1 read from cache] Hint: you can run the command with --verbose to see the full dependent project outputs ————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————— > nx run application:classes [existing outputs match the cache, left as is] > ./gradlew :application:classes > Task :library:compileJava UP-TO-DATE > Task :library:processResources NO-SOURCE > Task :library:classes UP-TO-DATE > Task :library:jar UP-TO-DATE > Task :application:compileJava UP-TO-DATE > Task :application:processResources UP-TO-DATE > Task :application:classes UP-TO-DATE BUILD SUCCESSFUL in 647ms 4 actionable tasks: 4 up-to-date > nx run application:build [existing outputs match the cache, left as is] > ./gradlew :application:build Deprecated Gradle features were used in this build, making it incompatible with Gradle 9.0. You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins. For more on this, please refer to https://docs.gradle.org/8.5/userguide/command_line_interface.html#sec:command_line_warnings in the Gradle documentation. 
BUILD SUCCESSFUL in 768ms 9 actionable tasks: 1 executed, 8 up-to-date ————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————— NX Successfully ran target build for project application and 3 tasks it depends on (30ms) Nx read the output from the cache instead of running the command for 4 out of 4 tasks. ``` Now that we've run one task, let's run all the `build` tasks in the repository with the Nx `run-many` command. This is similar to Gradle's `./gradlew build` command. ```text {% title="nx run-many -t build"%} ✔ nx run library:classes [existing outputs match the cache, left as is] ✔ nx run library:build [existing outputs match the cache, left as is] ✔ nx run application:classes [existing outputs match the cache, left as is] ✔ nx run application:build [existing outputs match the cache, left as is] ✔ nx run gradle-tutorial:classes (1s) ✔ nx run gradle-tutorial:build (1s) ————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————— NX Successfully ran target build for 3 projects and 3 tasks they depend on (2s) Nx read the output from the cache instead of running the command for 4 out of 6 tasks. ``` Again, because Nx cached the tasks when the application was built, most of the tasks here were near instant. The only tasks that still needed to run were the root project's `classes` and `build`. Running the command one more time will be near instant, as all the tasks will then be restored from the cache. ### Remote cache for faster development With Nx Cloud connected, your task results are now cached remotely. 
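Conceptually, Nx decides whether it can skip a task by hashing everything that feeds into it (the command, the relevant source files, the outputs of dependencies) and looking that hash up in a local or remote store. The sketch below illustrates only this input-hashing idea; it is not Nx's actual implementation, and the `TaskCache` type is hypothetical:

```typescript
import { createHash } from 'crypto';

// Everything that can affect a task's output contributes to its cache key.
interface TaskInputs {
  command: string;
  sourceFiles: Record<string, string>; // path -> file contents
}

class TaskCache {
  private store = new Map<string, string>(); // input hash -> recorded output
  executions = 0; // counts real (non-cached) runs

  // Real hashers normalize key order and hash file contents individually;
  // JSON.stringify is enough for this sketch.
  private hash(inputs: TaskInputs): string {
    return createHash('sha256').update(JSON.stringify(inputs)).digest('hex');
  }

  run(inputs: TaskInputs, task: () => string): string {
    const key = this.hash(inputs);
    const hit = this.store.get(key);
    if (hit !== undefined) return hit; // cache hit: replay the stored output
    this.executions++;
    const output = task(); // cache miss: actually execute the task
    this.store.set(key, output);
    return output;
  }
}

// Same inputs -> same hash -> cache hit; any changed input -> re-execution.
const cache = new TaskCache();
const v1: TaskInputs = { command: 'build', sourceFiles: { 'app.ts': 'export const x = 1;' } };
cache.run(v1, () => 'BUILD SUCCESSFUL');
cache.run(v1, () => 'BUILD SUCCESSFUL'); // replayed, task not executed again
```

A shared store of this kind is what a remote cache adds: any machine that computes the same input hash can reuse the recorded outputs instead of re-running the task.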
This means that if another developer runs the same tasks, or if you run them in CI, the results will be retrieved from the remote cache instead of the tasks being re-executed. Try running the build again after making a small change to see how Nx intelligently determines which tasks need to be re-run: ```shell nx run-many -t build ``` You'll notice that only the affected projects need to rebuild, while others are restored from cache. ## Self-Healing CI with Nx Cloud Explore how Nx Cloud can help your pull request get to green faster with self-healing CI. You can copy the example GitHub Actions workflow file below and save it as `.github/workflows/ci.yml` in your repository. This workflow will run the `lint`, `test`, and `build` tasks for all projects in your repository whenever a pull request is opened or updated. The `npx nx fix-ci` command that is already included in your GitHub Actions workflow (`.github/workflows/ci.yml`) is responsible for enabling self-healing CI and will automatically suggest fixes for your failing tasks. ```yaml {% meta="{34,35}" %} # .github/workflows/ci.yml name: CI on: push: branches: - main pull_request: permissions: actions: read contents: read jobs: main: runs-on: ubuntu-latest steps: - uses: actions/checkout@v4 with: filter: tree:0 fetch-depth: 0 - uses: actions/setup-node@v4 with: node-version: 22 - name: Set up JDK 21 for x64 uses: actions/setup-java@v4 with: java-version: '21' distribution: 'temurin' architecture: x64 - name: Setup Gradle uses: gradle/actions/setup-gradle@v4 - run: npx nx run-many -t lint test build - run: npx nx fix-ci if: always() ``` You will also need to install the [Nx Console](/docs/getting-started/editor-setup) editor extension for VS Code, Cursor, or IntelliJ. For the complete AI setup guide, see our [AI integration documentation](/docs/getting-started/ai-setup). {% install_nx_console /%} ### Open a pull request Start by making a new branch to work on. 
```shell git checkout -b self-healing-ci ``` Now, for demo purposes, we'll introduce a mistake into our `DemoApplication` class that will cause the build to fail. ```diff {% meta="lang='java'" %} // application/src/main/java/com/example/multimodule/application/DemoApplication.java @GetMapping("/") public String home() { - return myService.message(); + return myService.messages(); } ``` Commit the changes and open a new PR on GitHub. ```shell git add . git commit -m 'demo self-healing ci' git push origin self-healing-ci ``` Once pushed, CI will kick off, and we'll run into an error for the `application:build` task. When the build task fails, you'll see Nx Cloud begin to analyze the failure and suggest a fix. When a fix is proposed, Nx Console will show a notification so you can review the potential fix and either apply or reject it. Applying the fix will commit back to your PR and re-trigger CI to run. You can also see the fix link on the GitHub PR comment that Nx Cloud leaves. ![Nx Cloud PR comment with AI fix suggestion](../../../../assets/tutorials/nx-cloud-gh-comment-self-healing-coment.avif) From here you can manually apply or reject the fix: ![Nx Cloud apply AI fix suggestion](../../../../assets/tutorials/nx-cloud-apply-fix-self-healing-ci.avif) For more information about how Nx can improve your CI pipeline, check out our [CI guides](/docs/guides/nx-cloud/setup-ci). ## Summary Now that you have added Nx to this sample Gradle repository, you have learned several ways that Nx can help your organization: - Nx reflects the Gradle graph into the Nx graph - Nx's dependency graph visualization helps you understand your codebase - Nx caches task results and reuses them when the same task is rerun later - Nx Cloud provides self-healing CI and remote caching to speed up CI - Nx [intelligently determines which tasks are `affected`](/docs/features/ci-features/affected) by code changes to reduce waste in CI ## Next steps Connect with the rest of the Nx community with 
these resources:

- ⭐️ [Star us on GitHub](https://github.com/nrwl/nx) to show your support and stay updated on new releases!
- [Join the Official Nx Discord Server](https://go.nx.dev/community) to ask questions and find out the latest news about Nx.
- [Follow Nx on Twitter](https://twitter.com/nxdevtools) to stay up to date with Nx news
- [Read our Nx blog](https://nx.dev/blog)
- [Subscribe to our YouTube channel](https://www.youtube.com/@nxdevtools) for demos and Nx insights

---

## Managing Dependencies

{% llm_copy_prompt title="Tutorial 2/8: Understand project dependencies" %}
Help me understand how my Nx workspace tracks dependencies between projects. Use my existing workspace and projects for hands-on examples. Run `nx graph` in my workspace and help me interpret the results. Show me which projects depend on each other and explain how Nx detects these relationships. If dependencies are missing or unexpected, help me debug by checking import paths, tsconfig paths, and package.json entries. Stay on-topic: only teach what's covered on this page. Do not introduce concepts from later tutorials. Tutorial: {pageUrl}
{% /llm_copy_prompt %}

As your workspace grows, projects start depending on each other and on external packages. Nx automatically tracks these relationships so it can build projects in the right order, cache intelligently, and tell you what's affected by a change.

{% aside type="note" title="Tutorial Series" %}

1. [Crafting your workspace](/docs/getting-started/tutorials/crafting-your-workspace)
2. **Managing dependencies** (you are here)
3. [Configuring tasks](/docs/getting-started/tutorials/configuring-tasks)
4. [Running tasks](/docs/getting-started/tutorials/running-tasks)
5. [Caching](/docs/getting-started/tutorials/caching)
6. [Understanding your workspace](/docs/getting-started/tutorials/understanding-your-workspace)
7. [Reducing boilerplate](/docs/getting-started/tutorials/reducing-configuration-boilerplate)
8.
[Setting up CI](/docs/getting-started/tutorials/self-healing-ci-tutorial) {% /aside %} This tutorial assumes you have an Nx workspace with at least two projects. If you're starting fresh, complete [Crafting Your Workspace](/docs/getting-started/tutorials/crafting-your-workspace) first. ## Workspace libraries Workspace libraries are projects whose source lives in your workspace and are linked together by your package manager (see [Crafting Your Workspace](/docs/getting-started/tutorials/crafting-your-workspace) for how this works). In your project's `package.json`, workspace libraries use `workspace:*` (or `*` for npm) while external packages use version ranges: {% tabs syncKey="package-manager" %} {% tabitem label="npm" %} ```jsonc // apps/my-app/package.json { "dependencies": { "react": "^19.0.0", "@my-workspace/shared-ui": "*", }, } ``` {% /tabitem %} {% tabitem label="pnpm" %} ```jsonc // apps/my-app/package.json { "dependencies": { "react": "^19.0.0", "@my-workspace/shared-ui": "workspace:*", }, } ``` {% /tabitem %} {% tabitem label="yarn" %} ```jsonc // apps/my-app/package.json { "dependencies": { "react": "^19.0.0", "@my-workspace/shared-ui": "workspace:*", }, } ``` {% /tabitem %} {% tabitem label="bun" %} ```jsonc // apps/my-app/package.json { "dependencies": { "react": "^19.0.0", "@my-workspace/shared-ui": "workspace:*", }, } ``` {% /tabitem %} {% /tabs %} When one project imports from another, Nx detects the relationship automatically. No configuration required. ```typescript // apps/my-app/src/app.tsx import { Button } from '@my-workspace/shared-ui'; ``` Nx analyzes your JS/TS source code and `package.json` dependencies to understand how projects relate to each other. Nx uses these relationships to [run tasks](/docs/getting-started/tutorials/running-tasks) in the correct order. {% aside type="note" title="Non-JavaScript languages" %} For non-JS languages, Nx does not detect dependencies from source code imports. 
Declare relationships manually with `implicitDependencies` in `project.json`: ```jsonc // apps/my-python-app/project.json { "implicitDependencies": ["shared-lib"], } ``` You can also use [community plugins](/docs/plugin-registry) that provide dependency detection for your language. {% /aside %} ### Buildable vs non-buildable libraries Workspace libraries come in two flavors, and the difference is in what their `package.json` `exports` field points to. **Non-buildable libraries** export their source code directly. Consumers compile the source as part of their own build. This is the simpler setup and works well for most workspace libraries. ```jsonc // packages/shared-ui/package.json { "name": "@my-workspace/shared-ui", "exports": { ".": "./src/index.ts", }, } ``` **Buildable libraries** export compiled artifacts. They have their own build step that produces output (e.g., to `dist/`), and consumers import the built result. This is useful for libraries that need to be published or that benefit from independent compilation. Use a conditional export so that tooling can still resolve to the source: ```jsonc // packages/data-access/package.json { "name": "@my-workspace/data-access", "exports": { ".": { "development": "./src/index.ts", "default": "./dist/index.js", }, }, } ``` The `development` condition points to source, while `default` points to the built output. Configure `customConditions` in your root `tsconfig.json` so your IDE and TypeScript language server resolve the `development` entry, giving you go-to-definition and type checking against the actual source: ```jsonc // tsconfig.json { "compilerOptions": { "customConditions": ["development"], }, } ``` At build time, the bundler resolves the `default` entry and uses the compiled artifacts. Start with non-buildable libraries. They're simpler and avoid needing to rebuild libraries during development. Switch to buildable when you need to publish a library or want faster incremental builds in large workspaces. 
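A buildable library also needs a build task that produces the `dist/` output referenced by the `default` condition. A minimal sketch, assuming `tsc` as the compiler and a hypothetical `tsconfig.lib.json` in the library (your actual build tool and config file names may differ):

```jsonc
// packages/data-access/package.json (excerpt, hypothetical)
{
  "name": "@my-workspace/data-access",
  "scripts": {
    // compiles src/ to dist/ so the "default" export condition resolves
    "build": "tsc -p tsconfig.lib.json --outDir dist"
  }
}
```

With a script like this in place, running the library's `build` task compiles it, and consumers resolve the `default` export entry to the compiled output in `dist/`.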
## Single version policy In a monorepo, some packages, especially frameworks like React, Angular, or Vue, must be the same version everywhere. Having two versions of React in the same app causes runtime errors. A **single version policy** means defining dependency versions once at the root and having all projects use that version. This prevents version conflicts and simplifies upgrades. **Catalogs** make this easier by letting you name a version once and reference it everywhere: {% tabs syncKey="package-manager" %} {% tabitem label="pnpm" %} ```yaml # pnpm-workspace.yaml catalog: react: ^19.0.0 react-dom: ^19.0.0 ``` ```jsonc // package.json { "dependencies": { "react": "catalog:", "react-dom": "catalog:", }, } ``` {% /tabitem %} {% tabitem label="yarn" %} Define catalogs in `.yarnrc.yml`: ```yaml # .yarnrc.yml catalogs: default: react: ^19.0.0 react-dom: ^19.0.0 ``` ```jsonc // package.json { "dependencies": { "react": "catalog:", "react-dom": "catalog:", }, } ``` {% /tabitem %} {% tabitem label="npm" %} npm does not have catalog support. Enforce a single version policy by defining all dependencies in the root `package.json` and using tools like [`syncpack`](https://syncpack.dev/) or the `@nx/dependency-checks` ESLint rule to catch version mismatches. {% /tabitem %} {% tabitem label="bun" %} Bun does not currently support catalogs. Define shared dependency versions in the root `package.json` and reference them using `workspace:*` for internal packages. {% /tabitem %} {% /tabs %} For a deeper comparison of dependency strategies, see [Dependency Management Strategies](/docs/concepts/decisions/dependency-management). ## What Nx does (and doesn't do) Nx **tracks** dependencies. It builds the project graph, determines build order, and knows what's affected by a change. But Nx **does not install or resolve** dependencies. That's your package manager's job (npm, pnpm, yarn, or bun). 
Think of it this way: - **Package manager**: installs packages, resolves versions, manages `node_modules` - **Nx**: understands the relationships, orchestrates tasks in the right order, caches results {% cards cols=2 %} {% card title="Previous: Crafting Your Workspace" description="Set up and structure your Nx workspace" url="/docs/getting-started/tutorials/crafting-your-workspace" /%} {% card title="Next: Configuring Tasks" description="Define tasks and their dependencies" url="/docs/getting-started/tutorials/configuring-tasks" /%} {% /cards %} --- ## Building and Testing React Apps in Nx This tutorial walks you through creating a React monorepo with Nx. You'll build a small example application to understand the core concepts and workflows. What you'll learn: - How to structure multiple React apps and libraries in a single repository - How Nx caching speeds up your local development and CI pipelines - How to run builds, tests, and serve commands efficiently across multiple projects - How to share React components and hooks between applications - How to fix CI failures directly from your editor with Nx Cloud ## Prerequisite: Tutorial setup This tutorial requires [Node.js](https://nodejs.org) (v20.19 or later) installed on your machine. ### Step 1: Creating a new Nx React workspace Run the following command to create a new Nx workspace with the React template: ```shell npx create-nx-workspace@latest my-nx-repo --template=nrwl/react-template ``` Or [create your workspace in the browser](https://cloud.nx.app/create-nx-workspace/react/github) with CI pre-configured. {% llm_only %} IMPORTANT: Always use the CLI command above (`npx create-nx-workspace`) to create workspaces. Do NOT use the cloud.nx.app browser-based flow — it requires interactive GitHub OAuth and is not suitable for AI agents. 
{% /llm_only %}

### Step 2: Verify your setup

Once the workspace is created, navigate into it and verify the structure:

```shell
cd my-nx-repo
```

## Explore the Nx workspace setup

Let's take a look at the structure of our new Nx workspace:

{% filetree %}

- my-nx-repo/
  - apps/
    - api/
    - shop/
    - shop-e2e/
  - libs/
    - api/
    - shared/
    - shop/
  - eslint.config.mjs
  - nx.json
  - package-lock.json
  - package.json
  - tsconfig.base.json
  - tsconfig.json
  - vitest.workspace.ts

{% /filetree %}

The [`nx.json` file](/docs/reference/nx-json) contains configuration settings for Nx itself and global default settings that individual projects inherit.

Now, let's build some features and see how Nx helps get us to production faster.

## Serving the app

To serve your new React app, run:

```shell
npx nx serve shop
```

The app is served at [http://localhost:4200](http://localhost:4200).

You can also use `npx nx run shop:serve` as an alternative syntax. The `<project>:<task>` format works for any task in any project, which is useful when task names overlap with Nx commands.

### Inferred tasks

By default Nx simply runs your `package.json` scripts. However, you can also adopt [Nx technology plugins](/docs/technologies) that help abstract away some of the lower-level config and have Nx manage that. One such capability is automatically identifying tasks that can be run for your project from [tooling configuration files](/docs/concepts/inferred-tasks) such as `package.json` scripts and `vite.config.ts`. In `nx.json` there's already the `@nx/vite` plugin registered, which automatically identifies `build`, `serve`, and other Vite-related tasks.

```json
// nx.json
{
  ...
"plugins": [ { "plugin": "@nx/vite/plugin", "options": { "buildTargetName": "build", "serveTargetName": "serve", "devTargetName": "dev", "previewTargetName": "preview", "serveStaticTargetName": "serve-static", "typecheckTargetName": "typecheck", "buildDepsTargetName": "build-deps", "watchDepsTargetName": "watch-deps" } } ] } ``` To view the tasks that Nx has detected, look in the [Nx Console](/docs/getting-started/editor-setup) project detail view or run: ```shell npx nx show project shop ``` {% project_details title="Project Details View (Simplified)" %} ```json { "project": { "name": "@org/shop", "type": "app", "data": { "root": "apps/shop", "targets": { "build": { "options": { "cwd": "apps/shop", "command": "vite build" }, "cache": true, "dependsOn": ["^build"], "inputs": [ "production", "^production", { "externalDependencies": ["vite"] } ], "outputs": ["{workspaceRoot}/dist/apps/shop"], "executor": "nx:run-commands", "configurations": {} } }, "name": "shop", "$schema": "../../node_modules/nx/schemas/project-schema.json", "sourceRoot": "apps/shop/src", "projectType": "application", "tags": [], "implicitDependencies": [] } }, "sourceMap": { "root": ["apps/shop/project.json", "nx/core/project-json"], "targets": ["apps/shop/project.json", "nx/core/project-json"], "targets.build": ["apps/shop/vite.config.ts", "@nx/vite/plugin"], "targets.build.command": ["apps/shop/vite.config.ts", "@nx/vite/plugin"], "targets.build.options": ["apps/shop/vite.config.ts", "@nx/vite/plugin"], "targets.build.cache": ["apps/shop/vite.config.ts", "@nx/vite/plugin"], "targets.build.dependsOn": ["apps/shop/vite.config.ts", "@nx/vite/plugin"], "targets.build.inputs": ["apps/shop/vite.config.ts", "@nx/vite/plugin"], "targets.build.outputs": ["apps/shop/vite.config.ts", "@nx/vite/plugin"], "targets.build.options.cwd": [ "apps/shop/vite.config.ts", "@nx/vite/plugin" ], "name": ["apps/shop/project.json", "nx/core/project-json"], "$schema": ["apps/shop/project.json", "nx/core/project-json"], 
"sourceRoot": ["apps/shop/project.json", "nx/core/project-json"], "projectType": ["apps/shop/project.json", "nx/core/project-json"], "tags": ["apps/shop/project.json", "nx/core/project-json"] } } ``` {% /project_details %} If you expand the `build` task, you can see that it was created by the `@nx/vite` plugin by analyzing your `vite.config.ts` file. Notice the outputs are defined as `{projectRoot}/dist`. This value is being read from the `build.outDir` defined in your `vite.config.ts` file. Let's change that value in your `vite.config.ts` file: ```ts // apps/shop/vite.config.ts export default defineConfig({ // ... build: { outDir: './build', // ... }, }); ``` Now if you look at the project details view, the outputs for the build target will say `{projectRoot}/build`. The `@nx/vite` plugin ensures that tasks and their options, such as outputs, are automatically and correctly configured. {% aside type="note" title="Overriding inferred task options" %} You can override the options for inferred tasks by modifying the [`targetDefaults` in `nx.json`](/docs/reference/nx-json#target-defaults) or setting a value in your [`package.json` file](/docs/reference/project-configuration). Nx will merge the values from the inferred tasks with the values you define in `targetDefaults` and in your specific project's configuration. {% /aside %} ## Modularization with local libraries When you develop your React application, usually all your logic sits in the app's `src` folder. Ideally separated by various folder names which represent your domains or features. As your app grows, however, the app becomes more and more monolithic, which makes building and testing it harder and slower. {%filetree%} - my-nx-repo/ - apps/ - shop/ - src/ - app/ - cart/ - products/ - orders/ - ui/ {%/filetree%} Nx allows you to separate this logic into "local libraries." 
The main benefits include:

- better separation of concerns
- better reusability
- more explicit private and public boundaries (APIs) between domains and features
- better scalability in CI by enabling independent test/lint/build commands for each library
- better scalability in your teams by allowing different teams to work on separate libraries

### Create local libraries

Let's create a reusable design system library called `ui` that we can use across our workspace. This library will contain reusable components such as buttons, inputs, and other UI elements.

```shell
npx nx g @nx/react:library libs/ui --unitTestRunner=vitest --bundler=none
```

Note how we pass the full path (`libs/ui`) as the directory to place the library into a subfolder. You can choose whatever folder structure you like to organize your projects.

Running the above command should lead to the following directory structure:

{% filetree %}

- my-nx-repo/
  - apps/
    - shop/
  - libs/
    - ui/
  - eslint.config.mjs
  - nx.json
  - package.json
  - tsconfig.base.json
  - tsconfig.json
  - vitest.workspace.ts

{% /filetree %}

Just as with the `shop` app, Nx automatically infers the tasks for the `ui` library from its configuration files. You can view them by running:

```shell
npx nx show project ui
```

In this case, we have the `lint` and `test` tasks available, among other inferred tasks.

```shell
npx nx lint ui
npx nx test ui
```

### Import libraries into the shop app

All libraries that we generate are automatically included in the `workspaces` defined in the root-level `package.json`.

```json
// package.json
{
  "workspaces": ["apps/*", "libs/*"]
}
```

Hence, we can easily import them into other libraries and our React application.

You can see that the `AcmeUi` component is exported via the `index.ts` file of our `ui` library so that other projects in the repository can use it. This is our public API with the rest of the workspace and is enforced by the `exports` field in the `package.json` file.
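As a sketch, the `ui` library's `package.json` might expose its entry point like this (the exact fields generated depend on your generator options):

```jsonc
// libs/ui/package.json (excerpt)
{
  "name": "@org/ui",
  "exports": {
    // everything not listed here stays private to the library
    ".": "./src/index.ts"
  }
}
```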
Only export what's necessary to be usable outside the library itself.

```ts
// libs/ui/src/index.ts
export * from './lib/ui';
```

Let's add a simple `Hero` component that we can use in our shop app.

```tsx
// libs/ui/src/lib/hero.tsx
export function Hero(props: {
  title: string;
  subtitle: string;
  cta: string;
  onCtaClick?: () => void;
}) {
  return (
    <div>
      <h1>{props.title}</h1>
      <p>{props.subtitle}</p>
      <button onClick={props.onCtaClick}>{props.cta}</button>
    </div>
  );
}
```

Then, export it from `index.ts`.

```ts
// libs/ui/src/index.ts
export * from './lib/hero';
export * from './lib/ui';
```

We're ready to import it into our main application now.

```tsx
// apps/shop/src/app/app.tsx
import { Route, Routes } from 'react-router-dom';
// importing the component from the library
import { Hero } from '@org/ui';

export function App() {
  return (
    <>
      <Hero
        title="Welcome to the shop"
        subtitle="Browse our latest products"
        cta="Shop now"
      />
      <Routes>
        <Route path="/" element={<h1>Home</h1>} />
      </Routes>
    </>
); } export default App; ``` Serve your app again (`npx nx serve shop`) and you should see the new Hero component from the `ui` library rendered on the home page. ![](../../../../assets/tutorials/react-demo-with-hero.avif) If you have keen eyes, you may have noticed that there is a typo in the `App` component. This mistake is intentional, and we'll see later how Nx can fix this issue automatically in CI. ## Visualize your project structure Nx automatically detects the dependencies between the various parts of your workspace and builds a [project graph](/docs/features/explore-graph). This graph is used by Nx to perform various optimizations such as determining the correct order of execution when running tasks like `npx nx build`, enabling intelligent caching, and more. Interestingly, you can also visualize it. Just run: ```shell npx nx graph ``` You should be able to see something similar to the following in your browser. {% graph height="450px" %} ```json { "projects": [ { "name": "@org/shop", "type": "app", "data": { "tags": [] } }, { "name": "@org/ui", "type": "lib", "data": { "tags": [] } } ], "dependencies": { "@org/shop": [ { "source": "@org/shop", "target": "@org/ui", "type": "static" } ], "@org/ui": [] }, "affectedProjectIds": [], "focus": null, "groupByFolder": false } ``` {% /graph %} Let's create a git branch with the new hero component so we can open a pull request later: ```shell git checkout -b add-hero-component git add . git commit -m 'add hero component' ``` ## Testing and linting - running multiple tasks Our current setup doesn't just come with targets for serving and building the React application, but also has targets for testing and linting. 
We can use the same syntax as before to run these tasks: ```shell npx nx test shop # runs the tests for shop npx nx lint ui # runs the linter on ui ``` More conveniently, we can also run tasks in parallel using the following syntax: ```shell npx nx run-many -t test lint ``` This is exactly what is configured in `.github/workflows/ci.yml` for the CI pipeline. The `run-many` command allows you to run multiple tasks across multiple projects in parallel, which is particularly useful in a monorepo setup. There is a test failure for the `shop` app due to the updated content. Don't worry about it for now, we'll fix it in a moment with the help of Nx Cloud's self-healing feature. ### Local task cache One thing to highlight is that Nx is able to [cache the tasks you run](/docs/features/cache-task-results). Note that all of these targets are automatically cached by Nx. If you re-run a single one or all of them again, you'll see that the task completes immediately. In addition, (as can be seen in the output example below) there will be a note that a matching cache result was found and therefore the task was not run again. ```text {% title="npx nx run-many -t test lint" frame="terminal" %} ✔ nx run @org/ui:lint ✔ nx run @org/ui:test ✔ nx run @org/shop:lint ✖ nx run @org/shop:test ————————————————————————————————————————————————————————————————————————————————————————————————————————— NX Ran targets test, lint for 2 projects (1s) ✔ 3/4 succeeded [3 read from cache] ✖ 1/4 targets failed, including the following: - nx run @org/shop:test ``` Again, the `@org/shop:test` task failed, but notice that the remaining three tasks were read from cache. Not all tasks might be cacheable though. You can configure the `cache` settings in the `targetDefaults` property of the `nx.json` file. You can also [learn more about how caching works](/docs/features/cache-task-results). 
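As a sketch, enabling (or disabling) caching for a target across the whole workspace looks like this in `nx.json` (the target names shown are just the ones used in this tutorial):

```jsonc
// nx.json (excerpt)
{
  "targetDefaults": {
    "build": {
      // cacheable: same inputs always produce the same dist output
      "cache": true
    },
    "lint": {
      "cache": true
    },
    "test": {
      "cache": true
    }
  }
}
```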
## Next steps

Here are some things you can dive into next:

- [Set up CI](/docs/getting-started/tutorials/self-healing-ci-tutorial) with remote caching and self-healing
- Learn more about the [underlying mental model of Nx](/docs/concepts/mental-model)
- Learn how to [migrate your existing project to Nx](/docs/guides/adopting-nx/adding-to-existing-project)
- [Set up Storybook for our shared UI library](/docs/technologies/test-tools/storybook/guides/overview-react)
- [Learn how to set up Tailwind](/docs/technologies/react/guides/using-tailwind-css-in-react)
- Learn about [enforcing boundaries between projects](/docs/features/enforce-module-boundaries)

Also, make sure you:

- ⭐️ [Star us on GitHub](https://github.com/nrwl/nx) to show your support and stay updated on new releases!
- [Join the Official Nx Discord Server](https://go.nx.dev/community) to ask questions and find out the latest news about Nx.
- [Follow Nx on Twitter](https://twitter.com/nxdevtools) to stay up to date with Nx news
- [Read our Nx blog](https://nx.dev/blog)
- [Subscribe to our YouTube channel](https://www.youtube.com/@nxdevtools) for demos and Nx insights

---

## Reducing Configuration Boilerplate

{% llm_copy_prompt title="Tutorial 7/8: Reduce configuration with plugins" %}
Help me reduce configuration boilerplate in my Nx workspace. Use my existing workspace and projects for hands-on examples. First, consolidate shared task config into `targetDefaults` in `nx.json`. Then show me how to use Nx plugins to automatically infer tasks from my tooling (Vite, Jest, ESLint, etc.) so I don't need to configure each project manually. Run `nx show project <project-name>` to see where each task's configuration comes from. Stay on-topic: only teach what's covered on this page. Do not introduce concepts from later tutorials. Tutorial: {pageUrl}
{% /llm_copy_prompt %}

Manually configuring tasks, caching, inputs, and outputs for every project works, but it doesn't scale to dozens or hundreds of projects.
Nx provides two mechanisms to reduce this boilerplate: `targetDefaults` for shared configuration and **plugins** for automatic task inference. The examples below use Vite and Vitest, but the concepts apply to any tool. Substitute your own build and test commands as needed. {% aside type="note" title="Tutorial Series" %} 1. [Crafting your workspace](/docs/getting-started/tutorials/crafting-your-workspace) 2. [Managing dependencies](/docs/getting-started/tutorials/managing-dependencies) 3. [Configuring tasks](/docs/getting-started/tutorials/configuring-tasks) 4. [Running tasks](/docs/getting-started/tutorials/running-tasks) 5. [Caching](/docs/getting-started/tutorials/caching) 6. [Understanding your workspace](/docs/getting-started/tutorials/understanding-your-workspace) 7. **Reducing boilerplate** (you are here) 8. [Setting up CI](/docs/getting-started/tutorials/self-healing-ci-tutorial) {% /aside %} This tutorial assumes you have an Nx workspace with configured tasks. If you're starting fresh, complete [Configuring Tasks](/docs/getting-started/tutorials/configuring-tasks) first. 
## The scaling problem Consider a workspace with 20 libraries, each with verbose task configuration: {% tabs %} {% tabitem label="package.json" %} ```jsonc // packages/my-lib/package.json { "scripts": { "build": "vite build", "test": "vitest run", }, "nx": { "targets": { "build": { "cache": true, "dependsOn": ["^build"], "inputs": [ "{projectRoot}/src/**/*", "{projectRoot}/vite.config.ts", "{projectRoot}/tsconfig.json", ], "outputs": ["{projectRoot}/dist"], }, "test": { "cache": true, "inputs": ["{projectRoot}/src/**/*", "{projectRoot}/vitest.config.ts"], "outputs": ["{projectRoot}/coverage"], }, }, }, } ``` {% /tabitem %} {% tabitem label="project.json" %} ```jsonc // packages/my-lib/project.json { "targets": { "build": { "command": "vite build", "cache": true, "dependsOn": ["^build"], "inputs": [ "{projectRoot}/src/**/*", "{projectRoot}/vite.config.ts", "{projectRoot}/tsconfig.json", ], "outputs": ["{projectRoot}/dist"], }, "test": { "command": "vitest run", "cache": true, "inputs": ["{projectRoot}/src/**/*", "{projectRoot}/vitest.config.ts"], "outputs": ["{projectRoot}/coverage"], }, }, } ``` {% /tabitem %} {% /tabs %} That's a lot of repetition across 20 projects. And every time you change the Vite output directory, you'd need to update all of them. 
## Step 1: Target defaults Move shared configuration to `nx.json` so projects inherit defaults: ```jsonc // nx.json { "targetDefaults": { "build": { "cache": true, "dependsOn": ["^build"], }, "test": { "cache": true, }, }, } ``` Now each project only needs to specify what's unique: {% tabs %} {% tabitem label="package.json" %} ```jsonc // packages/my-lib/package.json { "scripts": { "build": "vite build", "test": "vitest run", }, "nx": { "targets": { "build": { "inputs": ["{projectRoot}/src/**/*", "{projectRoot}/vite.config.ts"], "outputs": ["{projectRoot}/dist"], }, "test": { "inputs": ["{projectRoot}/src/**/*", "{projectRoot}/vitest.config.ts"], "outputs": ["{projectRoot}/coverage"], }, }, }, } ``` {% /tabitem %} {% tabitem label="project.json" %} ```jsonc // packages/my-lib/project.json { "targets": { "build": { "command": "vite build", "inputs": ["{projectRoot}/src/**/*", "{projectRoot}/vite.config.ts"], "outputs": ["{projectRoot}/dist"], }, "test": { "command": "vitest run", "inputs": ["{projectRoot}/src/**/*", "{projectRoot}/vitest.config.ts"], "outputs": ["{projectRoot}/coverage"], }, }, } ``` {% /tabitem %} {% /tabs %} Better, but you still repeat `inputs` and `outputs` across projects with the same tooling. ## Step 2: Nx plugins (inferred tasks) Nx plugins can read your existing tooling configuration files, like `vite.config.ts`, `jest.config.ts`, or `eslint.config.mjs`, and automatically create tasks with the correct caching settings. No `project.json` needed. ### Adding a plugin Try adding a plugin to your workspace: ```shell nx add @nx/vite ``` This installs `@nx/vite` and registers it in `nx.json`. Some plugins may also need `nx sync` to update workspace configuration files (e.g., TypeScript project references). See [maintain TypeScript monorepos](/docs/features/maintain-typescript-monorepos) for details. 
After installing, check what tasks were inferred for one of your projects: ```shell nx show project my-app ``` You should see tasks like `build`, `test`, and `serve` that were automatically created from your `vite.config.ts` file. {% aside type="note" title="Prefixed target names" %} Some plugins use prefixed names (e.g., `next:build`, `next:dev`) to avoid conflicting with existing `package.json` scripts. You can rename these to `build`, `dev`, etc. in the plugin options in `nx.json`, then remove the redundant scripts from `package.json`. {% /aside %} The plugin reads your Vite configuration and sets up correct caching, inputs, and outputs without any manual configuration. The plugin is registered in `nx.json`: ```jsonc // nx.json { "plugins": [ { "plugin": "@nx/vite/plugin", "options": { "buildTargetName": "build", "testTargetName": "test", "serveTargetName": "serve", }, }, ], } ``` Now, any project with a `vite.config.ts` automatically gets `build`, `test`, and `serve` tasks, with correct `inputs`, `outputs`, and caching, without any `project.json` configuration. 
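If a particular project needs different settings, project-level configuration takes precedence over what the plugin inferred. A sketch, assuming a hypothetical project that writes its build output to a custom location:

```jsonc
// apps/my-app/project.json (excerpt, hypothetical)
{
  "targets": {
    "build": {
      // overrides only the inferred "outputs"; the command,
      // caching, and inputs still come from the plugin
      "outputs": ["{workspaceRoot}/dist/custom/my-app"]
    }
  }
}
```

Nx merges this with the inferred `build` task rather than replacing it, so you only write down the one option that differs.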
### Seeing what's inferred Use `nx show project` to see all tasks for a project, including where they come from: ```shell nx show project my-lib ``` {% project_details title="Project details for my-lib" %} ```json { "project": { "name": "my-lib", "type": "lib", "data": { "root": "packages/my-lib", "targets": { "build": { "options": { "cwd": "packages/my-lib", "command": "vite build" }, "cache": true, "dependsOn": ["^build"], "inputs": ["production", "^production"], "outputs": ["{projectRoot}/dist"], "executor": "nx:run-commands", "configurations": {}, "metadata": { "technologies": ["vite"] } }, "test": { "options": { "cwd": "packages/my-lib", "command": "vitest run" }, "cache": true, "inputs": ["default", "^production"], "outputs": ["{projectRoot}/coverage"], "executor": "nx:run-commands", "configurations": {}, "metadata": { "technologies": ["vite"] } }, "lint": { "cache": true, "options": { "cwd": "packages/my-lib", "command": "eslint ." }, "inputs": ["default", "{workspaceRoot}/eslint.config.mjs"], "executor": "nx:run-commands", "configurations": {}, "metadata": { "technologies": ["eslint"] } } }, "name": "my-lib", "sourceRoot": "packages/my-lib/src", "projectType": "library", "tags": [], "implicitDependencies": [], "metadata": { "technologies": ["react"] } } }, "sourceMap": { "root": ["packages/my-lib/project.json", "nx/core/project-json"], "targets": ["packages/my-lib/project.json", "nx/core/project-json"], "targets.build": ["packages/my-lib/vite.config.ts", "@nx/vite/plugin"], "targets.build.command": [ "packages/my-lib/vite.config.ts", "@nx/vite/plugin" ], "targets.build.options": [ "packages/my-lib/vite.config.ts", "@nx/vite/plugin" ], "targets.build.cache": [ "packages/my-lib/vite.config.ts", "@nx/vite/plugin" ], "targets.build.inputs": [ "packages/my-lib/vite.config.ts", "@nx/vite/plugin" ], "targets.build.outputs": [ "packages/my-lib/vite.config.ts", "@nx/vite/plugin" ], "targets.test": ["packages/my-lib/vite.config.ts", "@nx/vite/plugin"], 
"targets.test.command": [ "packages/my-lib/vite.config.ts", "@nx/vite/plugin" ], "targets.test.options": [ "packages/my-lib/vite.config.ts", "@nx/vite/plugin" ], "targets.test.cache": ["packages/my-lib/vite.config.ts", "@nx/vite/plugin"], "targets.test.inputs": [ "packages/my-lib/vite.config.ts", "@nx/vite/plugin" ], "targets.test.outputs": [ "packages/my-lib/vite.config.ts", "@nx/vite/plugin" ], "targets.lint": ["packages/my-lib/eslint.config.mjs", "@nx/eslint/plugin"], "targets.lint.command": [ "packages/my-lib/eslint.config.mjs", "@nx/eslint/plugin" ], "targets.lint.options": [ "packages/my-lib/eslint.config.mjs", "@nx/eslint/plugin" ], "targets.lint.cache": [ "packages/my-lib/eslint.config.mjs", "@nx/eslint/plugin" ], "targets.lint.inputs": [ "packages/my-lib/eslint.config.mjs", "@nx/eslint/plugin" ] } } ``` {% /project_details %} The project details view shows each task's source (plugin, `targetDefaults`, or `project.json`) and its computed settings. ## How the configuration cascade works Task configuration can come from three sources, applied in this order: 1. **Plugin-inferred**: automatic from tooling config (lowest priority) 2. **targetDefaults**: shared defaults in `nx.json` 3. **Project-level**: explicit `project.json` or `package.json` config (highest priority) Each layer can override the previous one. This means you can use plugins for sensible defaults and only add project-level config when a project needs something different. ## This is optional Plugins are optional. The explicit task configuration from [Configuring Tasks](/docs/getting-started/tutorials/configuring-tasks) works perfectly well. Use plugins when: - You have many projects with the same tooling (Vite, Jest, ESLint, etc.) 
- You want caching configured automatically with correct `inputs`/`outputs` - You prefer minimal configuration files Stick with explicit configuration when: - You have unique build setups that plugins don't cover - You want full control over every task detail - Your team prefers explicit over implicit ## Learn more - [Inferred tasks](/docs/concepts/inferred-tasks): how plugins detect and configure tasks - [Nx plugins](/docs/concepts/nx-plugins): the full plugin system - [Reduce repetitive configuration](/docs/guides/tasks--caching/reduce-repetitive-configuration): step-by-step guide - [Extending Nx](/docs/extending-nx/intro): create your own plugins {% cards cols=2 %} {% card title="Previous: Understanding Your Workspace" description="Explore projects, graphs, and debug issues" url="/docs/getting-started/tutorials/understanding-your-workspace" /%} {% card title="Next: Setting Up CI" description="Configure CI with remote caching and self-healing" url="/docs/getting-started/tutorials/self-healing-ci-tutorial" /%} {% /cards %} --- ## Running Tasks {% llm_copy_prompt title="Tutorial 4/8: Run tasks across your workspace" %} Help me run tasks in my Nx workspace efficiently. Use my existing workspace and projects for hands-on examples. Show me how to run a single task for one project, run multiple tasks across all projects with `nx run-many`, and understand how Nx orders task execution. Do not discuss caching or affected commands — those are covered in later tutorials. Stay on-topic: only teach what's covered on this page. Do not introduce concepts from later tutorials. Tutorial: {pageUrl} {% /llm_copy_prompt %} You've configured your tasks. Now how do you run them — for one project, for many, or only for what changed? The examples below use Vite and Vitest, but the concepts apply to any tool. Substitute your own build and test commands as needed. {% aside type="note" title="Tutorial Series" %} 1. 
[Crafting your workspace](/docs/getting-started/tutorials/crafting-your-workspace) 2. [Managing dependencies](/docs/getting-started/tutorials/managing-dependencies) 3. [Configuring tasks](/docs/getting-started/tutorials/configuring-tasks) 4. **Running tasks** (you are here) 5. [Caching](/docs/getting-started/tutorials/caching) 6. [Understanding your workspace](/docs/getting-started/tutorials/understanding-your-workspace) 7. [Reducing boilerplate](/docs/getting-started/tutorials/reducing-configuration-boilerplate) 8. [Setting up CI](/docs/getting-started/tutorials/self-healing-ci-tutorial) {% /aside %} This tutorial assumes you have an Nx workspace with configured tasks. If you're starting fresh, complete [Configuring Tasks](/docs/getting-started/tutorials/configuring-tasks) first. ## Running a single task Given a project with tasks defined in `package.json` (or `project.json` for non-JS projects): {% tabs %} {% tabitem label="package.json" %} ```jsonc // apps/my-app/package.json { "name": "my-app", "scripts": { "build": "vite build", "test": "vitest run", }, } ``` {% /tabitem %} {% tabitem label="project.json" %} ```jsonc // apps/my-app/project.json { "targets": { "build": { "command": "vite build", }, "test": { "command": "vitest run", }, }, } ``` {% /tabitem %} {% /tabs %} Run a task with `nx run <project>:<task>`: ```shell nx run my-app:build ``` Or the shorthand, which works when the task name doesn't conflict with an Nx command: ```shell nx build my-app ``` You can also `cd` into a project directory and run without specifying the project name: ```shell cd apps/my-app nx build ``` Nx resolves the project from the current directory. Try running `nx build` or `nx test` in your own workspace to see it in action.
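To see why the shorthand can be ambiguous, imagine a project that defines a task named `graph` (a hypothetical task name and command, chosen here because it collides with the built-in `nx graph` command):

```jsonc
// apps/my-app/project.json (hypothetical: a task name that collides with an Nx command)
{
  "targets": {
    "graph": {
      // hypothetical script; any command would do
      "command": "node tools/render-diagram.js",
    },
  },
}
```

In this case the explicit form `nx run my-app:graph` runs the project's task unambiguously, while the shorthand would be interpreted as Nx's own `graph` command.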
## Running multiple tasks Use `run-many` to run one or more tasks across multiple projects: ```shell # Run build for all projects that have a build task nx run-many --targets build # Run multiple tasks nx run-many --targets build test lint # Run tasks for specific projects only nx run-many --targets build --projects my-app ``` The `-t` and `-p` flags are shorthands for `--targets` and `--projects`. Nx runs tasks in parallel by default, respecting the [task dependencies](/docs/getting-started/tutorials/configuring-tasks) you've configured. For example, with this configuration: ```jsonc // nx.json { "targetDefaults": { "build": { "dependsOn": ["^build"], }, }, } ``` Running `nx run-many --targets build` builds dependencies first, then dependents: ![Diagram showing task execution order: shared-ui:build and utils:build run in step 1, then my-app:build and data-access:build run in step 2, driven by dependsOn configuration](../../../../assets/tutorials/task-dependency-order.svg) Nx handles the ordering automatically, even when running in parallel. ## Passing arguments Pass arguments directly to the task: ```shell # Use a named configuration nx build my-app --configuration=production # Forward arguments to the underlying tool nx test my-app --watch ``` For more details, see [pass args to commands](/docs/guides/tasks--caching/pass-args-to-commands). ## Controlling parallelism By default, Nx runs tasks in parallel. Limit the number of concurrent tasks with `--parallel`: ```shell nx run-many --targets build --parallel=3 ``` Set `--parallel=1` to run tasks sequentially. ## Running continuous tasks Some tasks run indefinitely, like development servers. When another task depends on a continuous task, Nx starts both concurrently. For example, running an e2e test that needs a dev server: ```shell nx run my-app:e2e ``` This works when `e2e` depends on a `serve` task marked as `continuous` (configured in [Configuring Tasks](/docs/getting-started/tutorials/configuring-tasks)). 
Nx starts the server, waits for it to be ready, then runs the tests. ## Nx Console [Nx Console](/docs/getting-started/editor-setup) is a VS Code and WebStorm extension that provides a visual interface for running tasks, exploring your project graph, and managing your workspace, all without memorizing CLI commands. ## Learn more - [Run tasks](/docs/features/run-tasks): full feature documentation - [Pass args to commands](/docs/guides/tasks--caching/pass-args-to-commands): detailed argument handling {% cards cols=2 %} {% card title="Previous: Configuring Tasks" description="Define tasks and their dependencies" url="/docs/getting-started/tutorials/configuring-tasks" /%} {% card title="Next: Caching Tasks" description="Speed up tasks by replaying previous results" url="/docs/getting-started/tutorials/caching" /%} {% /cards %} --- ## Setting Up CI {% llm_copy_prompt title="Tutorial 8/8: Set up CI with Nx Cloud" %} Help me set up CI for my Nx workspace. Connect to Nx Cloud with `nx connect`, generate a CI workflow with `nx g @nx/workspace:ci-workflow`, and walk me through remote caching, affected commands, and self-healing CI. Stay on-topic: only teach what's covered on this page. Do not introduce concepts from later tutorials. Tutorial: {pageUrl} {% /llm_copy_prompt %} Connect your workspace to Nx Cloud, generate a CI workflow, and enable remote caching, affected commands, distributed task execution, and self-healing to keep your pipeline fast and reliable. {% aside type="note" title="Tutorial Series" %} 1. [Crafting your workspace](/docs/getting-started/tutorials/crafting-your-workspace) 2. [Managing dependencies](/docs/getting-started/tutorials/managing-dependencies) 3. [Configuring tasks](/docs/getting-started/tutorials/configuring-tasks) 4. [Running tasks](/docs/getting-started/tutorials/running-tasks) 5. [Caching](/docs/getting-started/tutorials/caching) 6. [Understanding your workspace](/docs/getting-started/tutorials/understanding-your-workspace) 7. 
[Reducing boilerplate](/docs/getting-started/tutorials/reducing-configuration-boilerplate) 8. **Setting up CI** (you are here) {% /aside %} This tutorial assumes you have a [GitHub account](https://github.com) and [Node.js](https://nodejs.org) v20.19 or later. ## Connect to Nx Cloud ### Don't have a workspace yet? Create a workspace, GitHub repository, and Nx Cloud connection in one step: {% call_to_action variant="default" title="Create a new Nx workspace" url="https://cloud.nx.app/create-nx-workspace?utm_source=nx-dev&utm_medium=ci-tutorial&utm_campaign=try-nx-cloud" description="Setup takes less than 2 minutes" /%} This also generates a CI workflow, so you can skip ahead to [Remote caching](#remote-caching). ### Connect an existing workspace If you already have an Nx workspace, connect it to Nx Cloud: {% aside type="note" title="Prerequisites for nx connect" %} Your workspace must be pushed to a Git provider (GitHub, GitLab, Bitbucket, or Azure DevOps) before running `nx connect`. After connecting, Nx Cloud opens a PR that adds `nxCloudId` to `nx.json`. Merge this PR before proceeding so CI runs appear on the Nx Cloud dashboard. {% /aside %} ```shell nx connect ``` This creates an Nx Cloud account (if you don't have one) and connects your workspace. Once connected, you can see your workspace in your [Nx Cloud organization](https://cloud.nx.app/orgs). The access token is stored in `nx.json` and should be committed to your repository. It only grants cache read/write access, not admin access to your Nx Cloud organization. ## Generate a CI workflow If your workspace already has a CI workflow (e.g., `.github/workflows/ci.yml`), skip to [Remote caching](#remote-caching). Generate a CI workflow for GitHub Actions: ```shell nx add @nx/workspace nx g @nx/workspace:ci-workflow --ci=github ``` The `@nx/workspace` package provides the CI workflow generator. Once installed, the generator creates a `.github/workflows/ci.yml` file. 
It also supports CircleCI, GitLab CI, Azure Pipelines, and Bitbucket Pipelines. Pass a different `--ci` value or run `nx g @nx/workspace:ci-workflow --help` to see all options. {% aside type="note" title="Generated output may differ" %} The generated workflow may differ from the example below depending on your workspace setup and Nx version. The key elements (affected command, remote caching, fix-ci) will be present. {% /aside %} ```yaml # .github/workflows/ci.yml name: CI on: push: branches: - main pull_request: permissions: actions: read contents: read jobs: main: runs-on: ubuntu-latest steps: - uses: actions/checkout@v4 with: filter: tree:0 fetch-depth: 0 - uses: actions/setup-node@v4 with: node-version: 20 cache: 'npm' - run: npm ci - run: npx nx affected -t lint test build - run: npx nx fix-ci if: always() ``` This workflow includes several Nx CI features out of the box. The sections below explain each one. ## Remote caching When Nx Cloud is connected, task results are cached remotely. If a task has already run with the same inputs (on any machine or CI run), the result is replayed instantly instead of running again. This means: - The second CI run on a PR is faster because unchanged tasks hit the cache - Developers pulling the latest `main` get cached results from CI - Build artifacts like `dist/` and test coverage are restored from cache, not recomputed For more details, see [remote cache (Nx Replay)](/docs/features/ci-features/remote-cache). ## Running only affected tasks The generated workflow uses `nx affected` instead of `nx run-many`. This compares the PR's changes against the base branch and only runs tasks for projects that could be impacted: ```shell nx affected -t lint test build ``` Nx determines the base and head commits using `NX_BASE` and `NX_HEAD` environment variables. The generated CI workflow configures these automatically through the `fetch-depth: 0` checkout, which gives Nx access to the full git history for comparison. 
On a PR, Nx compares the PR branch against `main` (or whatever `defaultBase` is set to in `nx.json`). On a push to `main`, it compares against the previous commit. For more details, see [affected](/docs/features/ci-features/affected). ## Distributing tasks across machines For larger workspaces, you can distribute task execution across multiple machines using Nx Agents. Instead of running all tasks on a single CI runner, Nx Cloud coordinates the work across a fleet of agents: ```yaml # Add to your CI workflow - run: npx nx start-ci-run --distribute-on="3 linux-medium-js" ``` Nx Agents automatically split tasks across the available agents, respecting task dependencies and maximizing parallelism. No configuration changes to your tasks are needed. For more details, see [distribute task execution (Nx Agents)](/docs/features/ci-features/distribute-task-execution). ## Self-healing CI The `npx nx fix-ci` command at the end of the workflow enables self-healing CI. When a task fails, Nx Cloud analyzes the failure and suggests a fix that you can apply directly from your editor (via [Nx Console](/docs/getting-started/editor-setup)). This is useful for catching flaky tests, configuration drift, and other issues that can be auto-remediated without manual debugging. For more details, see [self-healing CI](/docs/features/ci-features/self-healing-ci). 
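Putting the pieces together, the distribution step belongs near the top of the job, before any Nx tasks run, so Nx Cloud can route those tasks to agents. Below is a sketch of the main job's steps under that assumption; the `--stop-agents-after` flag is optional and tells agents they can shut down once the last `build` task finishes:

```yaml
# Sketch: the generated workflow's main job with distribution enabled
steps:
  - uses: actions/checkout@v4
    with:
      filter: tree:0
      fetch-depth: 0
  - uses: actions/setup-node@v4
    with:
      node-version: 20
      cache: 'npm'
  - run: npm ci
  # Start distribution before running any tasks
  - run: npx nx start-ci-run --distribute-on="3 linux-medium-js" --stop-agents-after="build"
  # These tasks are now distributed across the agents
  - run: npx nx affected -t lint test build
  - run: npx nx fix-ci
    if: always()
```

Your actual workflow may differ depending on how it was generated; only the `start-ci-run` line needs to be added to an existing setup.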
## Next steps - [Remote cache (Nx Replay)](/docs/features/ci-features/remote-cache): how remote caching works - [Affected](/docs/features/ci-features/affected): how Nx determines what changed - [Distribute task execution (Nx Agents)](/docs/features/ci-features/distribute-task-execution): run tasks across multiple machines - [Self-healing CI](/docs/features/ci-features/self-healing-ci): automatic failure detection and fixes - [AI integration](/docs/getting-started/ai-setup): enhance CI with AI-powered workflows {% cards cols=2 %} {% card title="Previous: Reducing Configuration Boilerplate" description="Automate task configuration with plugins" url="/docs/getting-started/tutorials/reducing-configuration-boilerplate" /%} {% /cards %} --- ## Building and Testing TypeScript Packages in Nx This tutorial walks you through creating a TypeScript monorepo with Nx. You'll build a small example project to understand the core concepts and workflows. What you'll learn: - How to structure multiple TypeScript packages in a single repository - How Nx caching speeds up your local development and CI pipelines - How to run builds and tests efficiently across multiple packages - How to share code between packages using local libraries - How to fix CI failures directly from your editor with Nx Cloud ## Ready to start? This tutorial requires [Node.js](https://nodejs.org) (v20.19 or later) installed on your machine. ### Step 1: Creating a new Nx TypeScript workspace Run the following command to create a new Nx workspace with the TypeScript template: ```shell npx create-nx-workspace@latest my-nx-repo --template=nrwl/typescript-template ``` Or [create your workspace in the browser](https://cloud.nx.app/create-nx-workspace/typescript/github) with CI pre-configured. {% llm_only %} IMPORTANT: Always use the CLI command above (`npx create-nx-workspace`) to create workspaces. Do NOT use the cloud.nx.app browser-based flow — it requires interactive GitHub OAuth and is not suitable for AI agents. 
{% /llm_only %} ### Step 2: verify your setup Once the workspace is created, navigate into it: ```shell cd my-nx-repo ``` ## Explore the Nx workspace setup Let's take a look at the structure of our new Nx workspace: {%filetree %} - my-nx-repo/ - packages/ - async/ - colors/ - strings/ - utils/ - eslint.config.mjs - nx.json - package-lock.json - package.json - tsconfig.base.json - tsconfig.json - vitest.workspace.ts {% /filetree %} The [`nx.json` file](/docs/reference/nx-json) contains configuration settings for Nx itself and global default settings that individual projects inherit. Now, let's build some features and see how Nx helps get us to production faster. ## Building TypeScript packages Let's create two TypeScript packages that demonstrate how to structure a TypeScript monorepo. We'll create an `animal` package and a `zoo` package where `zoo` depends on `animal`. First, generate the `animal` package: ```shell npx nx g @nx/js:lib packages/animal --bundler=tsc --unitTestRunner=vitest --linter=none ``` Then generate the `zoo` package: ```shell npx nx g @nx/js:lib packages/zoo --bundler=tsc --unitTestRunner=vitest --linter=none ``` Running these commands should lead to new directories and files in your workspace: {%filetree %} - my-nx-repo/ - packages/ - animal/ - zoo/ - ... - vitest.workspace.ts {%/filetree %} Let's add some code to our packages. 
First, add the following code to the `animal` package: ```ts {% meta="{6-19}" %} // packages/animal/src/lib/animal.ts export function animal(): string { return 'animal'; } export interface Animal { name: string; sound: string; } const animals: Animal[] = [ { name: 'cow', sound: 'moo' }, { name: 'dog', sound: 'woof' }, { name: 'pig', sound: 'oink' }, ]; export function getRandomAnimal(): Animal { return animals[Math.floor(Math.random() * animals.length)]; } ``` Now let's update the `zoo` package to use the `animal` package: ```ts // packages/zoo/src/lib/zoo.ts import { getRandomAnimal } from '@org/animal'; export function zoo(): string { const result = getRandomAnimal(); return `${result.name} says ${result.sound}!`; } ``` Add the `@org/animal` dependency to `zoo`'s `package.json` (use `*` for npm or `workspace:*` for pnpm/yarn): ```json {% meta="{3-5}" %} // packages/zoo/package.json { "dependencies": { "@org/animal": "*" } } ``` Then link the packages: ```shell npm install ``` Now create an executable entry point for the zoo package: ```ts // packages/zoo/src/index.ts import { zoo } from './lib/zoo.js'; console.log(zoo()); ``` To build your packages, run: ```shell npx nx build animal ``` You can also use `npx nx run animal:build` as an alternative syntax. The `project:task` format works for any task in any project, which is useful when task names overlap with Nx commands. This creates a compiled version of your package in the `packages/animal/dist` folder. Since the `zoo` package depends on `animal`, building `zoo` will automatically build `animal` first: ```shell npx nx build zoo ``` You'll see both packages are built, with outputs in their respective `dist` folders. This is how you would prepare packages for use internally or for publishing to a package registry like NPM. You can also run the `zoo` package to see it in action: ```shell node packages/zoo/dist/index.js ``` ### Inferred tasks By default Nx simply runs your `package.json` scripts.
However, you can also adopt [Nx technology plugins](/docs/technologies) that help abstract away some of the lower-level config and have Nx manage that. One such thing is to automatically identify tasks that can be run for your project from [tooling configuration files](/docs/concepts/inferred-tasks) such as `package.json` scripts and TypeScript configuration. In `nx.json` there's already the `@nx/js` plugin registered which automatically identifies `typecheck` and `build` targets. ```json // nx.json { ... "plugins": [ { "plugin": "@nx/js/typescript", "options": { "typecheck": { "targetName": "typecheck" }, "build": { "targetName": "build", "configName": "tsconfig.lib.json", "buildDepsName": "build-deps", "watchDepsName": "watch-deps" } } } ] } ``` To view the tasks that Nx has detected, look in the [Nx Console](/docs/getting-started/editor-setup) project detail view or run: ```shell npx nx show project animal ``` {% project_details title="Project Details View (Simplified)" %} ```json { "project": { "name": "@org/animal", "type": "lib", "data": { "root": "packages/animal", "targets": { "typecheck": { "dependsOn": ["^typecheck"], "options": { "cwd": "packages/animal", "command": "tsc --build --emitDeclarationOnly" }, "cache": true, "inputs": [ "production", "^production", { "externalDependencies": ["typescript"] } ], "outputs": ["{projectRoot}/dist"], "executor": "nx:run-commands", "configurations": {} }, "build": { "options": { "cwd": "packages/animal", "command": "tsc --build tsconfig.lib.json" }, "cache": true, "dependsOn": ["^build"], "inputs": [ "production", "^production", { "externalDependencies": ["typescript"] } ], "outputs": ["{projectRoot}/dist"], "executor": "nx:run-commands", "configurations": {} } }, "name": "animal", "$schema": "../../node_modules/nx/schemas/project-schema.json", "sourceRoot": "packages/animal/src", "projectType": "library", "tags": [], "implicitDependencies": [] } }, "sourceMap": { "root": ["packages/animal/project.json", 
"nx/core/project-json"], "targets": ["packages/animal/project.json", "nx/core/project-json"], "targets.typecheck": ["packages/animal/project.json", "@nx/js/typescript"], "targets.typecheck.command": [ "packages/animal/project.json", "@nx/js/typescript" ], "targets.typecheck.options": [ "packages/animal/project.json", "@nx/js/typescript" ], "targets.typecheck.cache": [ "packages/animal/project.json", "@nx/js/typescript" ], "targets.typecheck.dependsOn": [ "packages/animal/project.json", "@nx/js/typescript" ], "targets.typecheck.inputs": [ "packages/animal/project.json", "@nx/js/typescript" ], "targets.typecheck.outputs": [ "packages/animal/project.json", "@nx/js/typescript" ], "targets.typecheck.options.cwd": [ "packages/animal/project.json", "@nx/js/typescript" ], "targets.build": ["packages/animal/project.json", "@nx/js/typescript"], "targets.build.command": [ "packages/animal/project.json", "@nx/js/typescript" ], "targets.build.options": [ "packages/animal/project.json", "@nx/js/typescript" ], "targets.build.cache": [ "packages/animal/project.json", "@nx/js/typescript" ], "targets.build.dependsOn": [ "packages/animal/project.json", "@nx/js/typescript" ], "targets.build.inputs": [ "packages/animal/project.json", "@nx/js/typescript" ], "targets.build.outputs": [ "packages/animal/project.json", "@nx/js/typescript" ], "targets.build.options.cwd": [ "packages/animal/project.json", "@nx/js/typescript" ], "name": ["packages/animal/project.json", "nx/core/project-json"], "$schema": ["packages/animal/project.json", "nx/core/project-json"], "sourceRoot": ["packages/animal/project.json", "nx/core/project-json"], "projectType": ["packages/animal/project.json", "nx/core/project-json"], "tags": ["packages/animal/project.json", "nx/core/project-json"] } } ``` {% /project_details %} The `@nx/js` plugin automatically configures both the build and typecheck tasks based on your TypeScript configuration. 
Notice also how the outputs are set to `{projectRoot}/dist` - this is where your compiled TypeScript files will be placed, and it is defined by the `outDir` option in `packages/animal/tsconfig.lib.json`. {% aside type="note" title="Overriding inferred task options" %} You can override the options for inferred tasks by modifying the [`targetDefaults` in `nx.json`](/docs/reference/nx-json#target-defaults) or setting a value in your [`project.json` file](/docs/reference/project-configuration). Nx will merge the values from the inferred tasks with the values you define in `targetDefaults` and in your specific project's configuration. {% /aside %} ## Code sharing with local libraries When you develop packages, creating shared utilities that multiple packages can use is a common pattern. This approach offers several benefits: - better separation of concerns - better reusability - more explicit APIs between different parts of your system - better scalability in CI by enabling independent test/lint/build commands for each package - most importantly: better caching because changes to one package don't invalidate the cache for unrelated packages ### Create a shared utilities library Let's create a shared utilities library that both our existing packages can use: ```shell npx nx g @nx/js:library packages/util --bundler=tsc --unitTestRunner=vitest --linter=none ``` Now we have: {%filetree %} - my-nx-repo/ - packages/ - animal/ - util/ - zoo/ - ... {%/filetree %} Let's add a utility function that our packages can share: ```ts {% meta="{6-12}" %} // packages/util/src/lib/util.ts export function util(): string { return 'util'; } export function formatMessage(prefix: string, message: string): string { return `[${prefix}] ${message}`; } export function getRandomItem<T>(items: T[]): T { return items[Math.floor(Math.random() * items.length)]; } ``` ### Import the shared library Exporting these utilities from `@org/util` allows us to easily import them into other packages.
Let's update our `animal` package to use the shared utility: ```ts {% meta="{2,19-21}" %} // packages/animal/src/lib/animal.ts import { getRandomItem } from '@org/util'; export function animal(): string { return 'animal'; } export interface Animal { name: string; sound: string; } const animals: Animal[] = [ { name: 'cow', sound: 'moo' }, { name: 'dog', sound: 'woof' }, { name: 'pig', sound: 'oink' }, ]; export function getRandomAnimal(): Animal { return getRandomItem(animals); } ``` And update the `zoo` package to use the formatting utility: ```ts {% meta="{3,7,8}" %} // packages/zoo/src/lib/zoo.ts import { getRandomAnimal } from '@org/animal'; import { formatMessage } from '@org/util'; export function zoo(): string { const result = getRandomAnimal(); const message = `${result.name} says ${result.sound}!`; return formatMessage('ZOO', message); } ``` Update the dependencies in each package's `package.json`: ```json {% meta="{3-5}" %} // packages/animal/package.json { "dependencies": { "@org/util": "*" } } ``` ```json {% meta="{3-6}" %} // packages/zoo/package.json { "dependencies": { "@org/animal": "*", "@org/util": "*" } } ``` Link the packages: ```shell npm install ``` Now when you run `npx nx build zoo`, Nx will automatically build all the dependencies in the correct order: first `util`, then `animal`, and finally `zoo`. Run the `zoo` package to see the updated output format: ```shell node packages/zoo/dist/index.js ``` ## Visualize your project structure Nx automatically detects the dependencies between the various parts of your workspace and builds a [project graph](/docs/features/explore-graph). This graph is used by Nx to perform various optimizations such as determining the correct order of execution when running tasks like `npx nx build`, enabling intelligent caching, and more. Interestingly, you can also visualize it. Just run: ```shell npx nx graph ``` You should be able to see something similar to the following in your browser.
{% graph height="450px" %} ```json { "projects": [ { "name": "@org/animal", "type": "lib", "data": { "tags": [] } }, { "name": "@org/util", "type": "lib", "data": { "tags": [] } }, { "name": "@org/zoo", "type": "lib", "data": { "tags": [] } } ], "dependencies": { "@org/animal": [ { "source": "@org/animal", "target": "@org/util", "type": "static" } ], "@org/util": [], "@org/zoo": [ { "source": "@org/zoo", "target": "@org/animal", "type": "static" }, { "source": "@org/zoo", "target": "@org/util", "type": "static" } ] }, "affectedProjectIds": [], "focus": null, "groupByFolder": false } ``` {% /graph %} Let's create a git branch with our new packages so we can open a pull request later: ```shell git checkout -b add-zoo-packages git add . git commit -m 'add animal and zoo packages' ``` ## Building and testing - running multiple tasks Our packages come with preconfigured build and test tasks. The message format change we made to the `zoo` package will cause its test to fail; we'll use that failure later to demonstrate the self-healing CI feature. You can run tests for individual packages: ```shell npx nx test zoo ``` Or run multiple tasks in parallel across all packages: ```shell npx nx run-many -t build test ``` This is exactly what is configured in `.github/workflows/ci.yml` for the CI pipeline. The `run-many` command allows you to run multiple tasks across multiple projects in parallel, which is particularly useful in a monorepo setup. There is a test failure for the `zoo` package due to the updated message. Don't worry about it for now, we'll fix it in a moment with the help of Nx Cloud's self-healing feature. ### Local task cache One thing to highlight is that Nx is able to [cache the tasks you run](/docs/features/cache-task-results); all of these targets are cached automatically. If you re-run a single one or all of them again, you'll see that the task completes immediately. In addition, there will be a note that a matching cache result was found and therefore the task was not run again.
```text {% title="npx nx run-many -t build test" frame="terminal" %} ✔ nx run @org/util:build ✔ nx run @org/util:test ✔ nx run @org/animal:test ✔ nx run @org/animal:build ✖ nx run @org/zoo:test ✔ nx run @org/zoo:build —————————————————————————————————————————— NX Ran targets test, build for 3 projects (800ms) ✔ 5/6 succeeded [5 read from cache] ✖ 1/6 targets failed, including the following: - nx run @org/zoo:test ``` Not all tasks are cacheable, though. You can configure the `cache` settings in the `targetDefaults` property of the `nx.json` file. You can also [learn more about how caching works](/docs/features/cache-task-results). The next section deals with publishing packages to a registry like NPM, but if you are not interested in publishing your packages, you can skip to [the end](#next-steps). ## Manage releases If you decide to publish your packages to NPM, Nx can help you [manage the release process](/docs/features/manage-releases). Release management involves updating the version of your packages, populating a changelog, and publishing the new version to the NPM registry. First you'll need to define which projects Nx should manage releases for by setting the `release.projects` property in `nx.json`: ```json {% meta="{4-6}" %} // nx.json { ... "release": { "projects": ["packages/*"] } } ``` You'll also need to ensure that each package's `package.json` file sets `"private": false` so that Nx can publish them. If you have any packages that you do not want to publish, make sure to set `"private": true` in their `package.json`. Now you're ready to use the `nx release` command to publish your packages. The first time you run `nx release`, you need to add the `--first-release` flag so that Nx doesn't try to find the previous version to compare against.
It's also recommended to use the `--dry-run` flag until you're sure about the results of the `nx release` command, then you can run it a final time without the `--dry-run` flag. To preview your first release, run: ```shell npx nx release --first-release --dry-run ``` The command will ask you a series of questions and then show you what the results would be. Once you are happy with the results, run it again without the `--dry-run` flag: ```shell npx nx release --first-release ``` After this first release, you can drop the `--first-release` flag; for subsequent releases, preview with `nx release --dry-run` and then run `nx release` for real. There is also a [dedicated feature page](/docs/features/manage-releases) that goes into more detail about how to use the `nx release` command. ## Next steps Here are some things you can dive into next: - [Set up CI](/docs/getting-started/tutorials/self-healing-ci-tutorial) with remote caching and self-healing - Learn more about the [underlying mental model of Nx](/docs/concepts/mental-model) - Learn how to [migrate your existing project to Nx](/docs/guides/adopting-nx/adding-to-existing-project) - [Learn more about Nx release for publishing packages](/docs/features/manage-releases) - Learn about [enforcing boundaries between projects](/docs/features/enforce-module-boundaries) Also, make sure you: - ⭐️ [Star us on GitHub](https://github.com/nrwl/nx) to show your support and stay updated on new releases! - [Join the Official Nx Discord Server](https://go.nx.dev/community) to ask questions and find out the latest news about Nx. - [Follow Nx on Twitter](https://twitter.com/nxdevtools) to stay up to date with Nx news - [Read our Nx blog](https://nx.dev/blog) - [Subscribe to our YouTube channel](https://www.youtube.com/@nxdevtools) for demos and Nx insights --- ## Understanding Your Workspace {% llm_copy_prompt title="Tutorial 6/8: Explore and debug your workspace" %} Help me explore and debug my Nx workspace. Use my existing workspace and projects for hands-on examples.
Run `nx show projects` to list all projects, `nx graph` to visualize dependencies, and `nx show project <project-name>` to inspect task details. Help me understand why specific dependencies exist and debug any caching or configuration issues. For machine-readable output (useful for scripting or AI agents), use `nx show project <project-name> --json`. Stay on-topic: only teach what's covered on this page. Do not introduce concepts from later tutorials. Tutorial: {pageUrl} {% /llm_copy_prompt %} As your workspace grows to dozens or hundreds of projects, you need tools to explore it, debug unexpected behavior, and verify your configuration. Nx provides several commands and visualizations for this. {% aside type="note" title="Tutorial Series" %} 1. [Crafting your workspace](/docs/getting-started/tutorials/crafting-your-workspace) 2. [Managing dependencies](/docs/getting-started/tutorials/managing-dependencies) 3. [Configuring tasks](/docs/getting-started/tutorials/configuring-tasks) 4. [Running tasks](/docs/getting-started/tutorials/running-tasks) 5. [Caching](/docs/getting-started/tutorials/caching) 6. **Understanding your workspace** (you are here) 7. [Reducing boilerplate](/docs/getting-started/tutorials/reducing-configuration-boilerplate) 8. [Setting up CI](/docs/getting-started/tutorials/self-healing-ci-tutorial) {% /aside %} This tutorial assumes you have an Nx workspace with projects and configured tasks. If you're starting fresh, complete [Crafting Your Workspace](/docs/getting-started/tutorials/crafting-your-workspace) first. ## The project graph The best starting point for understanding your workspace is `nx graph`.
It opens an interactive visualization with tabs for projects, tasks, and project details: ```shell nx graph ``` {% graph title="Project graph" height="400px" %} ```json { "projects": [ { "type": "app", "name": "my-app", "data": {} }, { "type": "app", "name": "admin", "data": {} }, { "type": "app", "name": "api", "data": {} }, { "type": "lib", "name": "shared-ui", "data": {} }, { "type": "lib", "name": "data-access", "data": {} }, { "type": "lib", "name": "auth", "data": {} }, { "type": "lib", "name": "utils", "data": {} }, { "type": "lib", "name": "models", "data": {} } ], "groupByFolder": false, "workspaceLayout": { "appsDir": "apps", "libsDir": "packages" }, "dependencies": { "my-app": [ { "target": "shared-ui", "source": "my-app", "type": "direct" }, { "target": "data-access", "source": "my-app", "type": "direct" }, { "target": "auth", "source": "my-app", "type": "direct" } ], "admin": [ { "target": "shared-ui", "source": "admin", "type": "direct" }, { "target": "data-access", "source": "admin", "type": "direct" }, { "target": "auth", "source": "admin", "type": "direct" } ], "api": [ { "target": "data-access", "source": "api", "type": "direct" }, { "target": "auth", "source": "api", "type": "direct" }, { "target": "models", "source": "api", "type": "direct" } ], "shared-ui": [ { "target": "utils", "source": "shared-ui", "type": "direct" } ], "data-access": [ { "target": "models", "source": "data-access", "type": "direct" }, { "target": "utils", "source": "data-access", "type": "direct" } ], "auth": [{ "target": "utils", "source": "auth", "type": "direct" }], "utils": [], "models": [] }, "affectedProjectIds": [] } ``` {% /graph %} The project graph shows every project and the dependencies between them. You can: - **Search** for specific projects - **Filter** to show only affected projects or specific groups - **Click edges** between projects to see which files create the dependency (import statements, `package.json` references) ### Why does this dependency exist? 
When you see an unexpected dependency in the graph, click the edge between the two projects. The sidebar shows which files contributed to the dependency: ![Project graph showing @my-workspace/my-app depending on @my-workspace/utils, with the edge selected and the sidebar listing the specific files that create the dependency](../../../../assets/tutorials/project-graph-edge.png) For scripting, write the graph to a JSON file: ```shell nx graph --file=/tmp/graph.json ``` ## The task graph Switch to the **Tasks** tab in the graph to visualize the task execution plan. Select a task (like `build`) to see which tasks run and in what order: ![Task graph showing @my-workspace/my-app:build depending on @my-workspace/utils:build, with the arrow indicating utils must build first](../../../../assets/tutorials/task-graph-view.png) You can also open this directly from the CLI: ```shell nx build my-app --graph ``` This is useful for verifying that your `dependsOn` configuration is correct. Write the task graph to a file for analysis: ```shell nx build my-app --graph=/tmp/task-graph.json ``` ## Inspecting project details Click any project in the graph to see its details, or use the CLI: ```shell nx show project my-app ``` {% project_details title="Project details" %} ```json { "project": { "name": "my-app", "type": "app", "data": { "root": "apps/my-app", "targets": { "build": { "options": { "cwd": "apps/my-app", "command": "vite build" }, "cache": true, "dependsOn": ["^build"], "inputs": ["production", "^production"], "outputs": ["{projectRoot}/dist"], "executor": "nx:run-commands", "configurations": {}, "metadata": { "technologies": ["vite"] } }, "test": { "options": { "cwd": "apps/my-app", "command": "vitest run" }, "cache": true, "inputs": ["default", "^production"], "outputs": ["{projectRoot}/coverage"], "executor": "nx:run-commands", "configurations": {}, "metadata": { "technologies": ["vite"] } } }, "name": "my-app", "sourceRoot": "apps/my-app/src", "projectType": 
"application", "tags": [], "metadata": { "technologies": ["react"] } } }, "sourceMap": { "root": ["apps/my-app/project.json", "nx/core/project-json"], "targets.build": ["apps/my-app/vite.config.ts", "@nx/vite/plugin"], "targets.test": ["apps/my-app/vite.config.ts", "@nx/vite/plugin"] } } ``` {% /project_details %} This shows every task, where its configuration comes from, its `inputs`, `outputs`, and `dependsOn`. To drill into a specific task: ```shell nx show target my-app:build ``` Add `--json` to either command for machine-readable output. ## Listing projects from the CLI ```shell nx show projects ``` Filter by criteria: ```shell # Only projects that have a build target nx show projects --with-target build # Only projects matching a pattern nx show projects --projects 'packages/*' ``` ## Debugging cache behavior If a task isn't caching as expected, inspect its inputs and outputs: ```shell nx show project my-app ``` Check the task's `inputs`. These are the files and values that contribute to the cache hash. If an input changes between runs, the cache is invalidated. Common causes of unexpected cache misses: - An untracked config file is included in inputs - An environment variable changed - A dependency was updated {% aside type="tip" title="Enforce correct inputs and outputs" %} Use [sandboxing](/docs/features/ci-features/sandboxing) to detect tasks that read files not listed in their `inputs` or write files not listed in their `outputs`. This catches configuration mistakes that lead to stale caches or missing artifacts. {% /aside %} ## Nx Console [Nx Console](/docs/getting-started/editor-setup) brings all of these capabilities into your editor. 
In VS Code or WebStorm, you can: - Browse projects and their targets in a tree view - View the project graph inline - Run tasks with a visual interface - Inspect project details without leaving the editor {% install_nx_console /%} ## Learn more - [Explore the graph](/docs/features/explore-graph): full graph exploration guide - [Mental model](/docs/concepts/mental-model): how Nx thinks about projects and tasks {% cards cols=2 %} {% card title="Previous: Caching Tasks" description="Speed up tasks by replaying previous results" url="/docs/getting-started/tutorials/caching" /%} {% card title="Next: Reducing Configuration Boilerplate" description="Automate task configuration with plugins" url="/docs/getting-started/tutorials/reducing-configuration-boilerplate" /%} {% /cards %} # Core Concepts --- ## Core Concepts Learn about all the different concepts Nx uses to manage your tasks and enhance your productivity. {% index_page_cards path="concepts" /%} --- ## Publishable and Buildable Nx Libraries The `--buildable` and `--publishable` options are available on the Nx library generators for the following plugins: - Angular - React - NestJS - Node This document explains the motivations for using either the `--buildable` or `--publishable` option, as well as the mechanics of how they adjust the result when you add them to your generator. ## Publishable libraries You might use the `--publishable` option when generating a new Nx library if your intention is to distribute it outside the monorepo. One typical scenario for this may be that you use Nx to develop your organization's UI design system component library (maybe using its Storybook integration), which should also be available to your organization's apps that are not hosted within the same monorepo. A normal Nx library - let's call it a "workspace library" - is not made for building or publishing. Rather, it only includes common lint and test targets in its `project.json` file.
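For illustration, the `project.json` of such a workspace library might look roughly like this (the project name and executors here are hypothetical; your generated file will differ depending on your tooling):

```json
{
  "name": "mylib",
  "sourceRoot": "packages/mylib/src",
  "projectType": "library",
  "targets": {
    "lint": {
      "executor": "@nx/eslint:lint"
    },
    "test": {
      "executor": "@nx/jest:jest",
      "options": {
        "jestConfig": "packages/mylib/jest.config.ts"
      }
    }
  }
}
```

Note the absence of a `build` target: nothing here produces a distributable artifact.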
These libraries are directly referenced from one of the monorepo's applications and built together with them. Keep in mind that the `--publishable` flag does not enable automatic publishing. Rather, it adds to your Nx workspace library a builder target that **compiles** and **bundles** your library. The resulting artifact will be ready to be published to some registry (e.g. [npm](https://npmjs.com/)). By having that builder, you can invoke the build via a command like: `nx build mylib` (where "mylib" is the name of the lib) which will then produce an optimized bundle in the `dist/mylib` folder. One particularity when generating a library with `--publishable` is that it requires you to also provide an `--importPath`. Your import path is the actual scope of your distributable package (e.g.: `@myorg/mylib`) - which needs to be a [valid npm package name](https://docs.npmjs.com/files/package.json#name). To publish the library (for example to npm), you can run the CLI command: `npm publish` from the artifact located in the `dist` directory. Setting up some automated script in the Nx `tools` folder may also come in handy. For more details on the mechanics, remember that Nx is an open source project, so you can see the actual impact of the generator by looking at the source code (the best starting point is probably `packages/<plugin-name>/src/generators/library/library.ts`). ## Buildable libraries Buildable libraries are similar to "publishable libraries" described above. Their scope, however, is not to distribute or publish them to some external registry. Thus they might not be optimized for bundling and distribution. Buildable libraries are mostly used for producing some pre-compiled output that can be directly referenced from an Nx workspace application without the need to compile it again. A typical scenario is to leverage Nx's incremental building capabilities.
{% aside type="caution" title="More details" %} In order for a buildable library to be pre-compiled, it can only depend on other buildable libraries. This allows you to take full advantage of incremental builds. {% /aside %} For more details on the mechanics, remember that Nx is an open source project, so you can see the actual impact of the generator by looking at the source code (the best starting point is probably `packages/<plugin-name>/src/generators/library/library.ts`). --- ## CI Concepts {% index_page_cards path="concepts/ci-concepts" /%} --- ## Building Blocks of Fast CI Nx has many features that make your CI faster. Each of these features speeds up your CI in a different way, so that enabling an individual feature will have an immediate impact. These features are also designed to complement each other so that you can use them together to create a fully optimized CI pipeline. ## Use fast build tools The purpose of a CI pipeline is to run tasks like `build`, `test`, `lint` and `e2e`. You use different tools to run these tasks (like Webpack or Vite for your `build` task). If the individual tasks in your CI pipeline are slow, then your overall CI pipeline will be slow. Nx has two ways to help with this. Nx provides plugins for popular tools that make it easy to update to the latest version of that tool and [automatically updates](/docs/features/automate-updating-dependencies) your configuration files to take advantage of enhancements in the tool. The tool authors are always looking for ways to improve their product and the best way to get the most out of the tool you're using is to make sure you're on the latest version. Also, the recommended configuration settings for a tool will change over time, so even if you're on the latest version of a tool, you may still be running it with slower settings because you don't know about a new configuration option.
[`nx migrate`](/docs/features/automate-updating-dependencies) will automatically change the default settings in your tooling config to use the latest recommended settings so that your repo won't be left behind. Because Nx plugins have a consistent interface for how they are invoked and how they interact with the codebase, it is easier to try out a different tool to see if it is better than what you're currently using. Newer tools that were created with different technologies or different design decisions can be orders of magnitude faster than your existing tools. Or the new tool might not help your project. Browse through the [list of Nx plugins](/docs/plugin-registry), like [vite](/docs/technologies/build-tools/vite/introduction) or [rspack](/docs/technologies/build-tools/rspack/introduction), and try one out on your project with the default settings already configured for you. ## Reduce wasted time In a monorepo, most PRs do not affect the entire codebase, so there's no need to run every test in CI for that PR. Nx provides the [`nx affected`](/docs/features/ci-features/affected) command to make sure that only the tests that need to be executed are run for a particular PR. Even if a particular project was affected by a PR, this could be the third time this same PR was run through CI and the build for this project was already run for this same exact set of files twice before. If you enable [remote caching](/docs/features/ci-features/remote-cache), you can make sure that you never run the same command on the same code twice. For a more detailed analysis of how these features reduce wasted time in different scenarios, read the [Reduce Wasted Time in CI guide](/docs/concepts/ci-concepts/reduce-waste). ## Parallelize and distribute tasks efficiently Every time you use Nx to run a task, Nx will attempt to run the task and all its dependent tasks in parallel in the most efficient way possible.
Because Nx knows about [task pipelines](/docs/concepts/task-pipeline-configuration), it can run all the prerequisite tasks first. Nx will automatically run tasks in parallel processes up to the limit defined in the `parallel` property in `nx.json`. There's a limit to how many tasks can be run in parallel on the same machine, but the logic that Nx uses to assign tasks to parallel processes can also be used by Nx Cloud to efficiently [distribute tasks across multiple agent machines](/docs/features/ci-features/distribute-task-execution). Once those tasks are run, the [remote cache](/docs/features/ci-features/remote-cache) is used to replay those task results on the main machine. After the pipeline is finished, it looks like all the tasks were run on a single machine - but much faster than a single machine could do it. For a detailed analysis of different strategies for running tasks concurrently, read the [Parallelization and Distribution guide](/docs/concepts/ci-concepts/parallelization-distribution). --- ## Cache Security {% aside type="caution" title="Use Caution With Read-Write Tokens" %} Read-write tokens allow full write access to your remote cache. They should only be used in trusted environments. For instance, open source projects should only use read-write tokens as secrets configured for protected branches (e.g., `main`). Read-only tokens should be used in all other cases. {% /aside %} A cache allows you to reuse work that has already been done, but it also introduces a potential security risk - cache poisoning. A poisoned cache is one where the cached files have been altered in some way by a malicious actor. When a developer or the CI pipeline uses that poisoned cache, the task output will be what the malicious actor wants instead of the correct task output. Nx takes security seriously and has put in place many precautions (we're [SOC 2 compliant](https://security.nx.app)). Listed below are some precautions that you need to take in your own codebase.
## What data is sent to the cache? Nx does not send your actual code to the remote cache. There are 3 kinds of data that are sent to the Nx Cloud remote cache for each task: 1. A hash of the inputs to the task. There is no way to reconstitute the actual source code files that were used to create a particular hash value. 2. Any files that were created as outputs from a task. 3. The terminal output created when running the task. If a malicious actor were able to modify the cache and those output files were then executed, that malicious actor could run arbitrary code on developer machines or in CI. ## Recommended precautions In order to keep your cache secure, there are a few steps we recommend you take: ### Use personal access tokens to provide fine-grained access control for local development When you use a [personal access token](/docs/guides/nx-cloud/personal-access-tokens) to connect to Nx Cloud, you can control the level of access that your developers have to the cache after they authenticate by logging in. By default, all personal access tokens have read-only access to the cache. If you need to give a developer write access to the cache, you can do so in the workspace settings of the Nx Cloud UI. You can strengthen your workspace security further by revoking all access to the cache for unauthenticated users. This is done by changing the ID Access Level in your workspace settings. By default this is set to `read-write`, but you can change it to `read-only` to limit access or `none` to prevent all access. ### Avoid using CI access tokens in `nx.json` Avoid [specifying a token](/docs/guides/nx-cloud/access-tokens) with the `nxCloudAccessToken` property in `nx.json`, as it is visible to anyone with codebase access. A `read-write` token grants complete cache write access, enabling potential cache poisoning by unauthorized users.
Instead, [restrict CI access tokens](/docs/guides/nx-cloud/access-tokens) to protected CI environments and use [personal access tokens](/docs/guides/nx-cloud/personal-access-tokens) for local development. ### Use scoped tokens in CI We recommend using a [read-write token](/docs/guides/nx-cloud/access-tokens#read-write-access) only for protected branches (branches that don't allow direct push). A `read-write` access token allows reading from and writing to the shared global cache of your workspace. In all other branches, we recommend using a [read-only token](/docs/guides/nx-cloud/access-tokens#read-only-access). A `read-only` token only allows reading from the shared global cache, while writing is limited to an execution specific isolated cache. This allows your CI pipelines to share computational work between the [distributed agents](/docs/features/ci-features/distribute-task-execution). For workspaces with an enabled [source control integration with Nx Cloud](/docs/guides/nx-cloud/source-control-integration), we can securely scope the isolated cache to the pull request branch, without opening up the possibility of cache poisoning in your trusted environments. Learn more about [access token architecture](/docs/guides/nx-cloud/access-tokens#setting-ci-access-tokens). ### No need to revoke tokens after employees leave When an employee leaves a company, it is standard practice to change all the passwords that they had access to. That is not necessary for Nx Cloud tokens. In order to poison the cache, the former employee would need to have both the read-write token and the current code on the latest commit on the `main` branch. The odds of the employee being able to guess the hash value that will be created for the current commit on the `main` branch are infinitesimally small even after a single commit. 
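To see why guessing a cache key is infeasible, consider a toy version of content hashing. This is an illustration only: the `taskHash` helper is hypothetical, and Nx's real hasher considers many more inputs (dependency hashes, environment, runtime values) than this sketch does.

```typescript
// Illustration only: a cache key is a digest over a task's inputs.
// Without the exact file contents, there is no practical way to
// produce the key for someone else's commit.
import { createHash } from 'node:crypto';

function taskHash(inputs: Record<string, string>): string {
  const hash = createHash('sha256');
  // Sort file names so the digest is independent of iteration order.
  for (const file of Object.keys(inputs).sort()) {
    hash.update(file);
    hash.update(inputs[file]);
  }
  return hash.digest('hex');
}

const before = taskHash({ 'src/index.ts': 'export const a = 1;' });
const after = taskHash({ 'src/index.ts': 'export const a = 2;' });
// A one-character change in any input yields a completely different key,
// so a stale token alone is useless without the current source.
console.log(before !== after);
```

This is why a former employee's token alone is not enough: they would also need the exact inputs of the current commit to produce a colliding key.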
### Skip the cache when creating a deployment artifact In order to guarantee that cache poisoning will never affect your end users, [skip the cache](/docs/guides/tasks--caching/skipping-cache) when creating build artifacts that will actually be deployed. Skipping the cache for this one CI run is a very small performance cost, but it gives you 100% confidence that cache poisoning will not be an issue for the end users. ### Do not manually share your local cache Nx implicitly trusts the local cache which is stored by default in the `.nx/cache` folder. You can change the location of that folder in the `nx.json` file, so it could be tempting to place it on a network drive and easily share your cache with everyone on the company network. However, by doing this you've voided the guarantee of immutability from your cache. If someone has direct access to the cached files, they could directly poison the cache. Nx will automatically detect if a cache entry has been created in your local cache by a different machine and warn you with an [Unknown Local Cache Error](/docs/troubleshooting/unknown-local-cache). Instead, use Nx Cloud [remote caching](/docs/features/ci-features/remote-cache). If you want to share your local cache anyway, you can use the [`@nx/shared-fs-cache`](/docs/reference/remote-cache-plugins/shared-fs-cache) plugin. ### Configure end-to-end encryption Nx Cloud guarantees your cache entries will remain immutable - once they've been registered, they can't be changed. This is guaranteed because the only way to access the cache is through the Nx Cloud API and we have policies enabled in our cloud storage that specifically disable overwrites and deletions of cached artifacts. But what if a hacker were somehow able to make their way into the server holding the cache artifacts? If you have set up [end-to-end encryption](/docs/guides/nx-cloud/encryption), the files they see on disk will be fully encrypted with a key that only exists in your workspace.
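The encrypt-before-upload idea can be sketched with Node's built-in `crypto` module. This is an illustration of the principle only, assuming AES-256-GCM; it is not Nx Cloud's actual implementation or storage format.

```typescript
// Sketch of encrypt-before-upload: the key stays in your workspace,
// so the storage server only ever holds ciphertext. Not Nx Cloud's code.
import { createCipheriv, createDecipheriv, randomBytes } from 'node:crypto';

function encrypt(artifact: Buffer, key: Buffer): Buffer {
  const iv = randomBytes(12);
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const data = Buffer.concat([cipher.update(artifact), cipher.final()]);
  // Store the IV and GCM auth tag alongside the ciphertext.
  return Buffer.concat([iv, cipher.getAuthTag(), data]);
}

function decrypt(blob: Buffer, key: Buffer): Buffer {
  const iv = blob.subarray(0, 12);
  const tag = blob.subarray(12, 28);
  const data = blob.subarray(28);
  const decipher = createDecipheriv('aes-256-gcm', key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(data), decipher.final()]);
}

const key = randomBytes(32); // exists only in your workspace
const blob = encrypt(Buffer.from('task output'), key);
// An attacker reading `blob` off the server sees only ciphertext, and
// tampering with it makes decryption fail the GCM authentication check.
console.log(decrypt(blob, key).toString()); // "task output"
```

Note the design point: because the scheme is authenticated (GCM), a tampered artifact does not decrypt to garbage silently; decryption fails outright.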
### Use an on-premise version of Nx Cloud if needed If you need to have all cache artifacts on servers that you control, there is an on-premise version of Nx Cloud that you can use as part of the [Enterprise plan](https://nx.dev/enterprise). ## Security decisions In any security discussion, there is a trade-off between convenience and security. It could be that some of these threats do not apply to your organization. If that is the case, you can relax some of the security precautions and gain the performance benefits of more task results being stored in the remote cache. Every organization is different and Nx can be adapted to best meet your needs without opening up vulnerabilities. If you would like Nx team members to help your organization fine-tune your setup, [talk to us about Nx Enterprise](https://nx.dev/enterprise). --- ## Heartbeat and Main Job Completion Handling ### What is the heartbeat process? A big challenge in building a task distribution engine is tracking when the main orchestrator job has finished (either successfully or due to an error). For example, your agents might be busy running your Nx tasks, but GitHub Actions suddenly decides to kill your main job because it is consuming too many resources. In that case, regardless of how you configured your pipeline or how many shutdown hooks we add to the code, we simply do not have enough time to tell Nx Cloud it can stop running tasks on agents. To fix this, the first time you run `nx` commands, we create a small background process on your main job that pings Nx Cloud every few seconds. The moment we stop receiving these pings, we assume the main job has died, and we will fail the CI run and shut down the agents.
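The heartbeat described above boils down to a watchdog timer: the main job records a ping every few seconds, and the coordinator treats a missing ping as a dead job. A toy sketch of that logic (the `HeartbeatMonitor` class is hypothetical, not Nx Cloud's code):

```typescript
// Toy watchdog illustrating the heartbeat idea (not Nx Cloud's code).
// The main job calls ping() periodically; the coordinator checks isAlive().
class HeartbeatMonitor {
  constructor(private thresholdMs: number, private lastPing = Date.now()) {}

  // Called by the background process on the main job every few seconds.
  ping(now: number = Date.now()): void {
    this.lastPing = now;
  }

  // Called by the coordinator; a stale ping means the main job died.
  isAlive(now: number = Date.now()): boolean {
    return now - this.lastPing <= this.thresholdMs;
  }
}

const monitor = new HeartbeatMonitor(5_000);
monitor.ping(1_000);
console.log(monitor.isAlive(4_000)); // true: a ping arrived recently
console.log(monitor.isAlive(10_000)); // false: pings stopped, shut down agents
```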
This is useful not just for worst-case scenarios, but it also keeps your CI config simple: ```yaml - run: npx nx start-ci-run --distribute-on="5 linux-medium-js" # start agents and tell Nx to send the below affected tasks to NxCloud rather than execute in-place - run: npm ci - run: npx nx affected -t build,lint,test # That's it - we don't need an extra step to tell NxCloud that we're done running Nx commands. ``` 👆In the above case, once all the `affected` commands are executed and the main job shuts down, the heartbeat process will stop the pings. So, we'll assume the main job finished, and we can turn off the agents. In summary, for 99% of cases, you will never have to think about the heartbeat or care that it exists. Your CI will just work. ### Caveats In some specific cases, though, the heartbeat process will not work properly. In that case, you will need to [manage completion yourself](/docs/reference/nx-cloud-cli#requireexplicitcompletion): ```yaml - run: npx nx start-ci-run --distribute-on="5 linux-medium-js" --require-explicit-completion # this option disables the heartbeat - run: npm ci - run: npx nx affected -t build,lint,test - run: npx nx complete-ci-run # this now tells NxCloud to turn off the agents if: always() # IMPORTANT: Always run, even in case of failures ``` When you might need to do this: #### CI provider unexpectedly cleans up background processes We've noticed that some CI providers tend to be more aggressive with background process management when moving between steps. Assume you have the following configuration: ```yaml - run: npx nx start-ci-run --distribute-on="5 linux-medium-js" - run: npm ci - run: npx nx affected -t build,lint,test # This is the point where we turn on the heartbeat. - run: ./deploy-my-projects.sh - run: ./publish-test-results-to-sonarqube - run: npx nx affected -t e2e ``` 👆Notice how after running `npx nx affected -t build,lint,test`, we are doing some other work (deploying the projects, uploading test results, etc.). 
We've seen some CI providers occasionally clean up background processes when moving between steps. So, if you see your main job failing when it gets to the `npx nx affected -t e2e` tests, it might be because Nx Cloud thought the distribution had ended already, and it didn't accept any new Nx tasks. The heartbeat process is especially vulnerable during these "transition phases" between steps. To fix this, you can either manage completion explicitly (as mentioned above) or move all your Nx tasks into a single step or `nx` command. Both would fix the issue. #### Multi-job pipelines with different stages GitHub Actions supports defining dependencies between jobs. This allows you to create pipelines that spin up multiple machines at different stages, but that still run as part of the same overall "workflow". Other providers allow you to do this too. For example, you might want to: 1. Spin up a job that runs some quick tasks such as formatting and linting. 2. Once that's finished, create three machines for building and testing your app on Linux, macOS, and Windows. 3. Finally, once those three machines finish, spin up a machine that deploys your app. If Nx Cloud doesn't hear back from the heartbeat after a few seconds, it assumes something went wrong and fails the workflow. When moving from one stage to the next, you need to turn off the first machine and wait for the next machines to boot up and start their heartbeats. This can cause you to go over the heartbeat threshold. {% aside type="caution" title="Multi Machine/Job workflows" %} Workflows involving multiple machines/jobs are the main source of heartbeat-related issues, simply because of how long it usually takes to restart the heartbeat after shutting it down. {% /aside %} The only fix in this scenario is to handle completion yourself and run `npx nx complete-ci-run` as the last command on your last machine in the pipeline. ### Heartbeat vs. 
`--stop-agents-after` While both the heartbeat and `--stop-agents-after` tell Nx Cloud when it can shut down agents, they have different roles: 1. `--stop-agents-after` is useful purely to avoid wasting unnecessary compute. - So, while you might still have agents actively running tasks, Nx Cloud can tell that you won't be sending it any more tasks in the future because of how you configured `--stop-agents-after`. - So, it can turn off any agents that are no longer running tasks. - [Read more about configuring `--stop-agents-after`](/docs/reference/nx-cloud-cli#stopagentsafter). 2. The heartbeat, on the other hand, marks the completion of the main job. - It makes sure Nx Cloud instantly knows when the main job exited so it can update the status of its CI run. - In case of errors, it makes sure that it can instantly abandon any in-progress tasks. --- ## Parallelization and Distribution Nx speeds up your CI in several ways. One method is to reduce wasted calculations with the [affected command](/docs/features/ci-features/affected) and [remote caching](/docs/features/ci-features/remote-cache). No matter how effective you are at eliminating wasted calculations in CI, there will always be some tasks that really do need to be executed and sometimes that list of tasks will be everything in the repository. To speed up the essential tasks, Nx [efficiently orchestrates](/docs/concepts/task-pipeline-configuration) the tasks so that prerequisite tasks are executed first, but independent tasks can all be executed concurrently. Running tasks concurrently can be done with parallel processes on the same machine or distributed across multiple machines. ## Parallelization Any time you execute a task, Nx will parallelize as much as possible. If you run `nx build my-project`, Nx will build the dependencies of that project in parallel as much as possible. 
If you run `nx run-many -t build` or `nx affected -t build`, Nx will run all the specified tasks and their dependencies in parallel as much as possible. Nx will limit itself to the maximum number of parallel processes set in the `parallel` property in `nx.json`. To set that limit to `2` for a specific command, you can specify `--parallel=2` in the terminal. This flag works for individual tasks as well as `run-many` and `affected`. Unfortunately, there is a limit to how many processes a single computer can run in parallel at the same time. Once you hit that limit, any remaining tasks have to wait for a running process to free up. #### Pros and cons of using a single machine to execute tasks on parallel processes: | Characteristic | Pro/Con | Notes | | -------------- | ------- | -------------------------------------------------------------------------------- | | Complexity | 🎉 Pro | The pipeline uses the same commands a developer would use on their local machine | | Debuggability | 🎉 Pro | All build artifacts and logs are on a single machine | | Speed | ⛔️ Con | The larger a repository gets, the slower your CI will be | ## Distribution across machines Once your repository grows large enough, it makes sense to start using multiple machines to execute tasks in CI. This adds some extra cost to run the extra machines, but the cost of running those machines is much less than the cost of paying developers to sit and wait for CI to finish. You can either distribute tasks across machines manually, or use Nx Cloud distributed task execution to automatically assign tasks to machines and gather the results back to a single primary machine. When discussing distribution, we refer to the primary machine that determines which tasks to run as the main machine (or job). The machines that only execute the tasks assigned to them are called agent machines (or jobs). ### Manual distribution One way to manually distribute tasks is to use binning.
Binning is a distribution strategy where there is a main job that divides the work into bins, one for each agent machine. Then every agent executes the work prepared for it. Here is a simplified version of the binning strategy.

```yaml
# main-job.yml
# Get the list of affected projects
- nx show projects --affected --json > affected-projects.json
# Store the list of affected projects in a PROJECTS environment variable
# that is accessible by the agent jobs
- node storeAffectedProjects.js
```

```yaml
# lint-agent.yml
# Run lint for all projects defined in PROJECTS
- nx run-many --projects=$PROJECTS -t lint
```

```yaml
# test-agent.yml
# Run test for all projects defined in PROJECTS
- nx run-many --projects=$PROJECTS -t test
```

```yaml
# build-agent.yml
# Run build for all projects defined in PROJECTS
- nx run-many --projects=$PROJECTS -t build
```

Here's a visualization of how this approach works:

![CI using binning](../../../../assets/concepts/ci-concepts/binning.svg)

This is faster than the single machine approach, but you can see that there is still idle time where some agents have to wait for other agents to finish their tasks. There's also a lot of complexity hidden in the idle time in the graph. If the `test-agent` tries to run a `test` task that depends on a `build` task that hasn't been completed yet by the `build-agent`, the `test-agent` will start to run that `build` task without pulling it from the cache. Then the `build-agent` might start to run the same `build` task that the `test-agent` is already working on. Now you've reintroduced waste that remote caching was supposed to eliminate. It is possible in a smaller repository to manually calculate the best order for tasks and encode that order in a script. But that order will need to be adjusted as the repository structure changes and may even be suboptimal depending on which projects were affected in a given PR.
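The `storeAffectedProjects.js` script referenced above is not shown in full. As a rough sketch (the parsing logic here is an assumption for illustration), its core job is to turn the JSON array printed by `nx show projects --affected --json` into the comma-separated list that `nx run-many --projects=$PROJECTS` expects:

```javascript
// Hypothetical sketch of the core of a script like storeAffectedProjects.js:
// convert the JSON array printed by `nx show projects --affected --json`
// into the comma-separated list that `nx run-many --projects` expects.
function formatProjectsList(jsonText) {
  // The command prints a JSON array of project names, e.g. ["app1","lib1"]
  return JSON.parse(jsonText).join(',');
}

// Example with a hypothetical affected-projects list:
console.log(formatProjectsList('["app1","lib1","lib2"]'));
// → app1,lib1,lib2
```

How the resulting `PROJECTS` value is exposed to the agent jobs is CI-provider specific (for example, a shared environment file or a pipeline variable), which is part of the maintenance burden this approach carries.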
#### Pros and cons of manually distributing tasks across multiple machines:

| Characteristic | Pro/Con | Notes                                                                                                                |
| -------------- | ------- | -------------------------------------------------------------------------------------------------------------------- |
| Complexity     | ⛔️ Con  | You need to write custom scripts to tell agent machines what tasks to execute. Those scripts need to be maintained.  |
| Debuggability  | ⛔️ Con  | Build artifacts and logs are scattered across agent machines.                                                        |
| Speed          | 🎉 Pro  | Faster than using a single machine                                                                                   |

### Distributed task execution using Nx Agents

When you use **Nx Agents** (a feature of Nx Cloud), you gain even more speed than manual distribution while preserving the simple setup and easy debuggability of the single machine scenario. The setup looks like this:

```yaml
# main-job.yml
# Coordinate the agents to run the tasks and stop agents when the build tasks are done
- npx nx start-ci-run --distribute-on="8 linux-medium-js" --stop-agents-after=build
# Run any commands you want here
- nx affected -t lint test build
```

The visualization looks like this:

![CI using Agents](../../../../assets/concepts/ci-concepts/3agents.svg)

In the same way that Nx efficiently assigns tasks to parallel processes on a single machine so that prerequisite tasks are executed first, Nx Cloud's distributed task execution efficiently assigns tasks to agent machines so that the idle time of each agent machine is kept to a minimum. Nx performs these calculations for each PR, so no matter which projects are affected or how your project structure changes, Nx will optimally assign tasks to the agents available.
#### Pros and cons of using Nx Cloud's distributed task execution:

| Characteristic | Pro/Con | Notes                                                                                                                            |
| -------------- | ------- | --------------------------------------------------------------------------------------------------------------------------------- |
| Complexity     | 🎉 Pro  | The pipeline uses the same commands a developer would use on their local machine, but with one extra line before running tasks.   |
| Debuggability  | 🎉 Pro  | Build artifacts and logs are collated to the main machine as if all tasks were executed on that machine                            |
| Speed          | 🎉 Pro  | Fastest possible task distribution for each PR                                                                                     |

## Conclusion

If your repo is starting to grow large enough that CI times are suffering, or if your parallelization strategy is growing too complex to manage effectively, try [setting up Nx Agents](/docs/features/ci-features/distribute-task-execution). You can [generate a simple workflow](/docs/reference/workspace/generators#ci-workflow) for common CI providers with `nx g ci-workflow` or follow one of the [CI setup recipes](/docs/guides/nx-cloud/setup-ci). Organizations that want extra help setting up Nx Cloud or getting the most out of Nx can [sign up for Nx Enterprise](https://nx.dev/enterprise). This package comes with extra support from the Nx team and the option to host Nx Cloud on your own servers.

---

## Reduce Wasted Time in CI

This article explores two ways that Nx improves the average speed of your CI pipeline - `nx affected` and remote caching. Using the `nx affected` command speeds up the first CI run for a PR, and remote caching speeds up every CI run after that. Both `nx affected` and remote caching provide more benefits to repositories with more projects and a flatter project structure. For this discussion, we'll assume you understand the Nx [mental model](/docs/concepts/mental-model) and have an understanding of both [what the affected command is](/docs/features/ci-features/affected) and [how caching works](/docs/concepts/how-caching-works).
## Reduce wasted time with affected

The `nx affected` command allows you to only run tasks on projects that were affected by a particular PR. This effectively eliminates the wasted time and resources that would have been spent executing tasks on projects that were unrelated to a particular PR. How much benefit you gain from this is different for each repository, but there are a few general principles that you can keep in mind as you're assessing the value of the Nx affected command for your repository. You can also use these principles to help inform your architecture decisions as you try to improve the performance of your CI system.

### Repos with more projects gain more value from affected

If we look at these two trivial examples, you can see that the repository with more projects gains more value from the affected command.

{% graph title="One Project" height="200px" %}

```json
{
  "hash": "85fd0561bd88f0bcd8703a9e9369592e2805f390d04982fb2401e700dc9ebc59",
  "projects": [{ "name": "project1", "type": "lib", "data": { "tags": [] } }],
  "dependencies": { "project1": [] },
  "workspaceLayout": { "appsDir": "apps", "libsDir": "libs" },
  "affectedProjectIds": ["project1"],
  "focus": null,
  "groupByFolder": false,
  "exclude": []
}
```

{% /graph %}

{% graph title="Four Projects" height="200px" %}

```json
{
  "hash": "85fd0561bd88f0bcd8703a9e9369592e2805f390d04982fb2401e700dc9ebc59",
  "projects": [
    { "name": "project1", "type": "lib", "data": { "tags": [] } },
    { "name": "project2", "type": "lib", "data": { "tags": [] } },
    { "name": "project3", "type": "lib", "data": { "tags": [] } },
    { "name": "project4", "type": "lib", "data": { "tags": [] } }
  ],
  "dependencies": { "project1": [], "project2": [], "project3": [], "project4": [] },
  "workspaceLayout": { "appsDir": "apps", "libsDir": "libs" },
  "affectedProjectIds": ["project1"],
  "focus": null,
  "groupByFolder": false,
  "exclude": []
}
```

{% /graph %}

In the one project example, every PR will affect the entire
repository. In the four project example, modifying one project only affects 25% of the repository. For the one project repository, `nx affected -t build` is identical to `nx run-many -t build`, whereas for the four project repository, `nx affected -t build` cuts out the 75% of the work that would have been wasted. With this principle in mind, you can add more applications to the repository to gain the [benefits of a monorepo](https://monorepo.tools) without suffering an exponential increase in CI execution time. This principle also encourages splitting large projects into multiple smaller projects in order to have a faster CI pipeline for the existing applications.

### Flatter repos gain more value from affected

Consider the following example repo structures.

{% graph title="Stacked" height="350px" %}

```json
{
  "hash": "85fd0561bd88f0bcd8703a9e9369592e2805f390d04982fb2401e700dc9ebc59",
  "projects": [
    { "name": "project1", "type": "lib", "data": { "tags": [] } },
    { "name": "project2", "type": "lib", "data": { "tags": [] } },
    { "name": "project3", "type": "lib", "data": { "tags": [] } }
  ],
  "dependencies": {
    "project1": [{ "source": "project1", "target": "project2" }],
    "project2": [{ "source": "project2", "target": "project3" }],
    "project3": []
  },
  "workspaceLayout": { "appsDir": "apps", "libsDir": "libs" },
  "affectedProjectIds": [],
  "focus": null,
  "groupByFolder": false,
  "exclude": []
}
```

{% /graph %}

{% graph title="Grouped" height="200px" %}

```json
{
  "hash": "85fd0561bd88f0bcd8703a9e9369592e2805f390d04982fb2401e700dc9ebc59",
  "projects": [
    { "name": "project1", "type": "lib", "data": { "tags": [] } },
    { "name": "project2", "type": "lib", "data": { "tags": [] } },
    { "name": "project3", "type": "lib", "data": { "tags": [] } }
  ],
  "dependencies": {
    "project1": [{ "source": "project1", "target": "project3" }],
    "project2": [{ "source": "project2", "target": "project3" }],
    "project3": []
  },
  "workspaceLayout": { "appsDir": "apps", "libsDir": "libs" },
  "affectedProjectIds": [],
  "focus": null,
  "groupByFolder":
false,
  "exclude": []
}
```

{% /graph %}

{% graph title="Flat" height="200px" %}

```json
{
  "hash": "85fd0561bd88f0bcd8703a9e9369592e2805f390d04982fb2401e700dc9ebc59",
  "projects": [
    { "name": "project1", "type": "lib", "data": { "tags": [] } },
    { "name": "project2", "type": "lib", "data": { "tags": [] } },
    { "name": "project3", "type": "lib", "data": { "tags": [] } }
  ],
  "dependencies": { "project1": [], "project2": [], "project3": [] },
  "workspaceLayout": { "appsDir": "apps", "libsDir": "libs" },
  "affectedProjectIds": [],
  "focus": null,
  "groupByFolder": false,
  "exclude": []
}
```

{% /graph %}

If we assume that each project has an independent 50% chance of being modified on a given PR, we can calculate the expected average number of affected projects. Intuitively, the flat structure should have fewer affected projects than the stacked structure, and the grouped structure should fall somewhere in between. That is, in fact, the case. If we don't use the `nx affected` command in CI, no matter how our repo is structured, the expected number of projects run in CI will be 3 - all of them.

| Repo Structure | Expected Number of Affected Projects |
| -------------- | ------------------------------------ |
| Stacked        | 2.125                                |
| Grouped        | 2                                    |
| Flat           | 1.5                                  |

Note that the 50% chance of any project being modified is an arbitrary number. If we had picked a lower chance of being modified, all the expected values would decrease as well. Every repository is different, but this illustrates that a flatter structure will help speed up your CI pipeline.
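The expected values in the table above can also be checked by brute force: enumerate every possible modification pattern and count how many projects are affected in each. A minimal sketch (the dependency graphs mirror the stacked, grouped, and flat examples above):

```javascript
// Compute the exact expected number of affected projects by enumerating all
// 2^n modification patterns (each project modified independently with p = 0.5).
function expectedAffected(deps) {
  const projects = Object.keys(deps);
  const n = projects.length;

  // Collect the transitive dependencies of a project.
  const reach = (p, seen = new Set()) => {
    for (const d of deps[p]) {
      if (!seen.has(d)) {
        seen.add(d);
        reach(d, seen);
      }
    }
    return seen;
  };

  let total = 0;
  for (let mask = 0; mask < 1 << n; mask++) {
    const modified = new Set(projects.filter((_, i) => mask & (1 << i)));
    for (const p of projects) {
      // A project is affected if it, or anything it depends on, was modified.
      if (modified.has(p) || [...reach(p)].some((d) => modified.has(d))) {
        total++;
      }
    }
  }
  return total / (1 << n); // every pattern is equally likely at p = 0.5
}

const stacked = { project1: ['project2'], project2: ['project3'], project3: [] };
const grouped = { project1: ['project3'], project2: ['project3'], project3: [] };
const flat = { project1: [], project2: [], project3: [] };

console.log(expectedAffected(stacked)); // 2.125
console.log(expectedAffected(grouped)); // 2
console.log(expectedAffected(flat)); // 1.5
```

The brute-force enumeration agrees with the analytic probabilities derived in the deep-dive callout that follows.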
{% callout title="The Math Behind the Expected Number of Affected Projects" type="deepdive" %}

**Definitions:**

ℙm(1) means the probability project 1 was modified
ℙm'(1) means the probability project 1 was not modified
ℙa(1) means the probability project 1 was affected

**Given:** ℙm(1) = ℙm(2) = ℙm(3) = 0.5

**Stacked:**

ℙa(1) = ℙm(1) + ℙm'(1) \* ℙm(2) + ℙm'(1) \* ℙm'(2) \* ℙm(3) = 0.5 + 0.25 + 0.125 = 0.875
ℙa(2) = ℙm(2) + ℙm'(2) \* ℙm(3) = 0.5 + 0.25 = 0.75
ℙa(3) = ℙm(3) = 0.5

**Expected Number of Affected Projects:** ℙa(1) + ℙa(2) + ℙa(3) = 0.875 + 0.75 + 0.5 = 2.125

**Grouped:**

ℙa(1) = ℙm(1) + ℙm'(1) \* ℙm(3) = 0.5 + 0.25 = 0.75
ℙa(2) = ℙm(2) + ℙm'(2) \* ℙm(3) = 0.5 + 0.25 = 0.75
ℙa(3) = ℙm(3) = 0.5

**Expected Number of Affected Projects:** ℙa(1) + ℙa(2) + ℙa(3) = 0.75 + 0.75 + 0.5 = 2

**Flat:**

ℙa(1) = ℙm(1) = 0.5
ℙa(2) = ℙm(2) = 0.5
ℙa(3) = ℙm(3) = 0.5

**Expected Number of Affected Projects:** ℙa(1) + ℙa(2) + ℙa(3) = 0.5 + 0.5 + 0.5 = 1.5

{% /callout %}

## Reduce wasted time with remote caching

If you use a read/write token on developer machines, CI runs could be dramatically improved because they would be leveraging the work already done on the machine of the developer that pushed the PR. This setup requires you to have full trust in everyone who is capable of viewing the codebase, which doesn't make sense for open source projects or for many organizations. Remote caching is still valuable when only the CI pipelines have a read/write token.

### Remote caching begins helping on the second CI run for a pipeline

The first time CI runs for a particular PR, affected is doing most of the work to speed up your CI run. It is rare for a task to be cached on the first run. However, if the PR doesn't pass in CI, or if you need to make a change for some reason, subsequent runs of that same PR will be able to reuse cached tasks for you. Take a look at the example below.
The projects are set up in the suboptimal stacked arrangement from before. On the first CI run, `project3` was modified, so every project is affected. Then the developer realizes that `project2` should be changed as well and pushes a new commit. For the second CI run, every project is still affected when compared against the `main` branch, but `project3` hasn't changed between the first CI run and the second, so the cache from the first CI run can be used instead of re-running that task. `project2` tasks need to be re-run since it was modified, and `project1` tasks need to be re-run since it depends on a project that was modified.

{% graph title="First CI Run" height="300px" %}

```json
{ "hash": "85fd0561bd88f0bcd8703a9e9369592e2805f390d04982fb2401e700dc9ebc59", "projects": [ { "name": "project1", "type": "lib", "data": { "tags": [] } }, { "name": "project2", "type": "lib", "data": { "tags": [] } }, { "name": "project3", "type": "lib", "data": { "tags": [] } } ], "dependencies": { "project1": [{ "source": "project1", "target": "project2" }], "project2": [{ "source": "project2", "target": "project3" }], "project3": [] }, "workspaceLayout": { "appsDir": "apps", "libsDir": "libs" }, "affectedProjectIds": ["project1", "project2", "project3"], "focus": null, "groupByFolder": false, "exclude": [] }
```

{% /graph %}

{% graph title="Second CI Run (project2 Changed)" height="300px" %}

```json
{ "hash": "85fd0561bd88f0bcd8703a9e9369592e2805f390d04982fb2401e700dc9ebc59", "projects": [ { "name": "project1 ↩︎", "type": "lib", "data": { "tags": [] } }, { "name": "project2 ↩︎", "type": "lib", "data": { "tags": [] } }, { "name": "project3 ✓", "type": "lib", "data": { "tags": [] } } ], "dependencies": { "project1 ↩︎": [{ "source": "project1 ↩︎", "target": "project2 ↩︎" }], "project2 ↩︎": [{ "source": "project2 ↩︎", "target": "project3 ✓" }], "project3 ✓": [] }, "workspaceLayout": { "appsDir": "apps", "libsDir": "libs" }, "affectedProjectIds": ["project1 ↩︎", "project2 ↩︎", "project3
✓"], "focus": null, "groupByFolder": false, "exclude": [] }
```

{% /graph %}

### Caching provides more value when there are more projects and a flatter structure

The exact same reasoning that led us to encourage more projects and a flatter structure to gain value from the affected command also applies to caching. If there is a single project in a repo, the cache will be broken on 100% of changes. If there are 4 unconnected projects, changing one project allows you to use the cache for the other 3 projects. Just as modifying a project marks the projects that depend on it as `affected`, modifying a project also breaks the cache for projects that depend on it.

---

## Architectural Decisions

{% index_page_cards path="concepts/decisions" /%}

---

## Code Ownership

One of the most obvious benefits of having a monorepo is that you can easily share code across projects. This enables you to apply the Don't Repeat Yourself principle across the whole codebase. Code sharing could mean using a function or a component in multiple projects. Or code sharing could mean using a TypeScript interface to define the network API interface for both the front end and back end applications. Code sharing is usually a good thing, but it can cause problems.

## Too much sharing

If everyone can use and modify every piece of code, you can run into problems:

### Devs modifying another team's code

Another team can add complexity to code that your team maintains to satisfy their one use case. This adds an extra burden on you and may make it difficult to adapt that piece of code for other use cases. This can be solved by using a CODEOWNERS file that explicitly defines which people in an organization need to approve PRs that touch a particular section of the codebase.

### Outside devs using internal code

Another team can use a piece of code that is intended to be internal to your project. Now if you change that piece of code, their project is broken.
So your team is either locked in to that API or you have to solve problems in another team's project. To solve this, Nx provides a lint rule, `enforce-module-boundaries`, that will throw an error if a project imports code that is not being exported from the `index.ts` file at the root of a library. Now the `index.ts` file is the definitive published API for that library.

### Projects depending on the wrong libraries

Libraries with presentational components can accidentally use code from a library that holds a data store. Projects with Angular code can accidentally use code from a React project. Projects from team A could accidentally use code in projects that are intended to be only for team B. These kinds of rules will vary based on the organization, but they can all be enforced automatically using tags and the `enforce-module-boundaries` lint rule.

## Defining code ownership

As more teams are contributing to the same repository, it becomes crucial to establish clear code ownership. Since Nx allows us to place projects in any directory structure, those directories can become code-ownership boundaries. That's why the structure of an Nx workspace often reflects the structure of an organization. GitHub users can use the `CODEOWNERS` file for that.

```plaintext
/libs/happynrwlapp @julie-happynrwlapp-lead
/apps/happynrwlapp @julie-happynrwlapp-lead
/libs/shared/ui @hank-the-ui-guy
/libs/shared/utils-testing @julie @hank
```

If you want to know more about code ownership on GitHub, please check [the documentation on the `CODEOWNERS` file](https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/about-code-owners).

---

## Dependency Management Strategies

When working with a monorepo, one of the key architectural decisions is how to manage dependencies across your projects. This document outlines two main strategies and helps you choose the right approach for your team. The core decision comes down to:

1.
Independently maintained dependencies in individual projects
2. A "single version policy", where dependencies are defined once at the root for your entire monorepo

**Nx fully supports both strategies - it's your choice, and you can change your approach as your needs evolve; you are never locked in**. You can even mix these strategies, using a single version policy for most dependencies while allowing specific projects to maintain their own versions when necessary. Thanks to its smart dependency graph analysis, Nx can trace the dependencies used by different projects and can therefore avoid unnecessary cache misses even when a root-level lockfile changes, so that is not a concern. Let's examine the trade-offs of each approach, using JavaScript/TypeScript as our primary example (though these principles apply to other languages as well):

## Independently maintained dependencies

In this model, each project maintains its own dependency definitions. For JavaScript/TypeScript projects, this means each project has its own `package.json` file specifying runtime dependencies, with development dependencies often still living at the root of the workspace (although they can also be specified at the project level). During builds, each project's bundler includes the necessary dependencies in its final artifact. Dependencies are typically managed using package manager workspaces (npm/yarn/pnpm/bun). While this approach offers flexibility, it can introduce complexity when sharing code between projects. For example, if `project1` and `project2` use different versions of React, what happens when they try to share components? This can lead to runtime issues that are difficult to detect during development and challenging to debug in production. A common pitfall occurs when developers have one version of a dependency in the root `node_modules` but a different version specified in their project's `package.json`.
This can result in code that works locally but fails in production where the bundled version is used.

**Pros:**

- Teams can independently choose and upgrade their dependencies
- More immediately clear what dependencies are intended for each project
- Easier transition for teams new to monorepos
- Modern tooling (e.g., module federation) can help mitigate some of the challenges within applications

**Cons:**

- Complicates deployment when projects share runtime dependencies
- Makes code sharing between projects more challenging
- Can lead to hard-to-detect runtime conflicts
- Increases maintenance and strategy overhead with multiple versions to track

## Single version policy

This strategy centralizes all dependency definitions in the root `package.json` file, ensuring consistent versions across your codebase. While individual projects may still maintain their own `package.json` files for development purposes, the root configuration serves as the single source of truth for versions. For building and deployment, you'll need to ensure each project only includes its relevant dependencies. Nx helps manage this through its workspace dependency graph and the `@nx/dependency-checks` ESLint rule, which can automatically detect and fix dependency mismatches between project and root configurations. The main challenge with this approach is coordinating dependency updates across independent teams. When multiple teams work on different, or even the same, applications within the same repo, they need to align on dependency upgrades. While this requires more coordination, it often results in less total work - upgrading a dependency once across all projects is typically more efficient than managing multiple separate upgrades over time.

{% aside type="tip" title="Dependency Catalog Support" %}
When using pnpm or Yarn, use [pnpm catalogs](https://pnpm.io/catalogs) or [Yarn catalogs](https://yarnpkg.com/features/catalogs) to maintain a single version policy.
Define versions in `pnpm-workspace.yaml` or `.yarnrc.yml` respectively and reference them as `"": "catalog:"` in project `package.json` files (Nx 22+ for pnpm and Nx 22.6+ for Yarn). {% /aside %} **Pros:** - Ensures consistent dependency versions, preventing runtime conflicts - Simplifies code sharing between projects - Makes workspace-wide updates more manageable and easier to track **Cons:** - Requires coordination between teams for dependency updates - May slow down teams that need to move at different velocities - Needs stronger governance and communication processes For details on using Nx dependency graph in your deployment process, see our guide on [preparing applications for deployment via CI](/docs/guides/ci-deployment). --- ## Folder Structure Nx can work with any folder structure you choose, but it is good to have a plan in place for the folder structure of your monorepo. Projects are often grouped by _scope_. A project's scope is either the application to which it belongs or (for larger applications) a section within that application. ## Move generator Don't be too anxious about choosing the exact right folder structure from the beginning. Projects can be moved or renamed using the [`@nx/workspace:move` generator](/docs/reference/workspace/generators#move). For instance, if a project under the `booking` folder is now being shared by multiple apps, you can move it to the shared folder like this: ```shell nx g move --project booking-some-project shared/some-project ``` ## Remove generator Similarly, if you no longer need a project, you can remove it with the [`@nx/workspace:remove` generator](/docs/reference/workspace/generators#remove). ```shell nx g remove booking-some-project ``` ## Example workspace Let's use Nrwl Airlines as an example organization. This organization has two apps, `booking` and `check-in`. 
In the Nx workspace, projects related to `booking` are grouped under a `libs/booking` folder, projects related to `check-in` are grouped under a `libs/check-in` folder, and projects used in both applications are placed in `libs/shared`. You can also have nested grouping folders (e.g., `libs/shared/seatmap`). The purpose of these folders is to help with organizing by scope. We recommend grouping together projects that are usually updated together. It helps minimize the amount of time a developer spends navigating the folder tree to find the right file.

{% filetree %}

- apps/
  - booking/
  - check-in/
- libs/
  - booking/ <---- grouping folder
    - feature-shell/ <---- project
  - check-in/
    - feature-shell/
  - shared/ <---- grouping folder
    - data-access/ <---- project
    - seatmap/ <---- grouping folder
      - data-access/ <---- project
      - feature-seatmap/ <---- project

{% /filetree %}

## Sharing projects

One of the main advantages of using a monorepo is that there is more visibility into code that can be reused across many different applications. Shared projects are a great way to save developers time and effort by reusing a solution to a common problem. Let's consider our reference monorepo. The `shared-data-access` project contains the code needed to communicate with the back-end (for example, the URL prefix). We know that this would be the same for all libs; therefore, we should place this in the shared lib and properly document it so that all projects can use it instead of writing their own versions.

{% filetree %}

- libs/
  - booking/
    - data-access/ <---- app-specific project
  - shared/
    - data-access/ <---- shared project
    - seatmap/
      - data-access/ <---- shared project
      - feature-seatmap/ <---- shared project

{% /filetree %}

---

## Monorepo or Polyrepo

Monorepos have a lot of benefits, but there are also some costs involved.
We feel strongly that the [technical challenges](/docs/concepts/decisions/why-monorepos) involved in maintaining large monorepos are fully addressed through the efficient use of Nx and Nx Cloud. Rather, the limiting factors in how large your monorepo can grow are interpersonal. In order for teams to work together in a monorepo, they need to agree on how that repository is going to be managed. These questions can be answered in many different ways, but if the developers in the repository can't agree on the answers, then they'll need to work in separate repositories.

**Organizational Decisions:**

- [Dependency Management](/docs/concepts/decisions/dependency-management) - Should there be an enforced single version policy or should each project maintain its own dependency versions independently?
- [Code Ownership](/docs/concepts/decisions/code-ownership) - What is the code review process? Who is responsible for reviewing changes to each portion of the repository?
- [Project Dependency Rules](/docs/concepts/decisions/project-dependency-rules) - What are the restrictions on dependencies between projects? Which projects can depend on which other projects?
- [Folder Structure](/docs/concepts/decisions/folder-structure) - What is the folder structure and naming convention for projects in the repository?
- [Project Size](/docs/concepts/decisions/project-size) - What size should projects be before they need to be split into separate projects?
- Git Workflow - What Git workflow should be used? Will you use trunk-based development or long-running feature branches?
- CI Pipeline - How is the CI pipeline managed? Who is responsible for maintaining it?
- Deployment - How are deployments managed? Does each project deploy independently or do they all deploy at once?

## How many repositories?
Once you have a good understanding of where people stand on these questions, you'll need to choose between one of the following setups:

### One monorepo to rule them all

If everyone can agree on how to run the repository, having [a single monorepo will provide a lot of benefits](/docs/concepts/decisions/why-monorepos). Every project can share code, and maintenance tasks can be performed in one PR for the entire organization. Any task that involves coordination becomes much easier. Once the repository scales to hundreds of developers, you need to take proactive steps to ensure that your decisions about [code review](/docs/concepts/decisions/code-ownership) and [project dependency restrictions](/docs/features/enforce-module-boundaries) do not inhibit the velocity of your teams. Also, any shared code and tooling (like the CI pipeline or a shared component library) need to be maintained by a dedicated team to help everyone in the monorepo.

### Polyrepos - a repository for each project

If every project is placed in its own repository, each team can make their own organizational decisions without the need to consult with other teams. Unfortunately, this also means that each team has to make their own organizational decisions instead of focusing on feature work that provides business value. Sharing code is difficult with this setup, and every maintenance task needs to be repeated across all the repositories in the organization. Nx can still be useful with this organizational structure. Tooling and maintenance tasks can be centralized through shared [Nx plugins](/docs/concepts/nx-plugins) that each repository can opt in to using. Since creating repositories is a frequent occurrence in this scenario, Nx [generators](/docs/features/generate-code) can be used to quickly scaffold out the repository with reasonable tooling defaults.

### Multiple monorepos

Somewhere between the single monorepo and the full polyrepo solutions exists the multiple monorepo setup.
Typically, when there are disagreements about organizational decisions, there are two or three factions that form. These factions can naturally be allocated to separate monorepos that have been configured in a way that best suits the teams that will be working in them. Compared to the single monorepo setup, this setup requires some extra overhead cost - maintaining multiple CI pipelines and performing the same tooling maintenance tasks on multiple repositories - but this cost could be offset by the extra productivity boost provided by the fact that each team can work in a repository that is optimized for the way that they work.

---

## Project Dependency Rules

There are many types of libraries in a workspace. You can identify the type of a library through a naming convention and/or by using the project tagging system. With explicitly defined types, you can also use Nx to enforce project dependency rules based on the types of each project. This article explains one possible way to organize your repository projects by type. Every repository is different and yours may need a different set of types. In order to maintain a certain sense of order, we recommend having a small number of types, such as the following four types of libraries.

**Feature libraries:** Developers should consider feature libraries as libraries that implement container components (with access to data sources) for specific business use cases or pages in an application.

**UI libraries:** A UI library contains only presentational components.

**Data-access libraries:** A data-access library contains code for interacting with a back-end system. It also includes all the code related to state management.

**Utility libraries:** A utility library contains low-level utilities used by many libraries and applications.

---

## Feature libraries

**What is it?**

A feature library contains a set of files that configure a business use case or a page in an application.
Most of the components in such a library are container components that interact with data sources. This type of library also contains most of the UI logic, form validation code, etc. Feature libraries are almost always app-specific and are often lazy-loaded. **Naming Convention** `feature` (if nested) or `feature-*` (e.g., `feature-home`). **Dependency Constraints** A feature library can depend on any type of library. {% filetree %} - libs/ - my-app/ - feature-home/ - src/ - index.ts - lib/ {% /filetree %} `feature-home` is the app-specific feature library (in this case, the "my-app" app). --- ## UI libraries **What is it?** A UI library is a collection of related presentational components. There are generally no services injected into these components (all of the data they need should come from Inputs). **Naming Convention** `ui` (if nested) or `ui-*` (e.g., `ui-buttons`) **Dependency Constraints** A UI library can depend on UI and utility libraries. --- ## Data-access libraries **What is it?** Data-access libraries contain code that functions as a client-side delegate layer to server-tier APIs. All files related to state management also reside in a `data-access` folder (by convention, they can be grouped under a `+state` folder under `src/lib`). **Naming Convention** `data-access` (if nested) or `data-access-*` (e.g. `data-access-seatmap`) **Dependency Constraints** A data-access library can depend on data-access and utility libraries. --- ## Utility libraries **What is it?** A utility library contains low-level code used by many libraries. Often there is no framework-specific code and the library is simply a collection of utilities or pure functions. **Naming Convention** `util` (if nested), or `util-*` (e.g., `util-testing`) **Dependency Constraints** A utility library can depend only on utility libraries.
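These type constraints are enforced through tags. Each project declares its type tag in its project configuration; as an illustrative sketch (the project name shown is an example, not a requirement), a utility library might be tagged like this:

```jsonc title="libs/shared/util-formatting/project.json"
{
  "name": "util-formatting",
  // The "type:" prefix is a convention; the lint rule below matches on these tags.
  "tags": ["type:util"]
}
```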
An example util lib module: **libs/shared/util-formatting** ```typescript export { formatDate, formatTime } from './src/format-date-fns'; export { formatCurrency } from './src/format-currency'; ``` ## Enforce project dependency rules In order to enforce the dependency constraints that were listed for each type, you can add the following rule in your ESLint configuration: {% tabs syncKey="eslint-config-preference" %} {% tabitem label="Flat Config" %} ```javascript // eslint.config.mjs import nx from '@nx/eslint-plugin'; export default [ ...nx.configs['flat/base'], ...nx.configs['flat/typescript'], ...nx.configs['flat/javascript'], { ignores: ['**/dist'], }, { files: ['**/*.ts', '**/*.tsx', '**/*.js', '**/*.jsx'], rules: { '@nx/enforce-module-boundaries': [ 'error', { allow: [], depConstraints: [ { sourceTag: 'type:feature', onlyDependOnLibsWithTags: [ 'type:feature', 'type:ui', 'type:data-access', 'type:util', ], }, { sourceTag: 'type:ui', onlyDependOnLibsWithTags: ['type:ui', 'type:util'], }, { sourceTag: 'type:data-access', onlyDependOnLibsWithTags: ['type:data-access', 'type:util'], }, { sourceTag: 'type:util', onlyDependOnLibsWithTags: ['type:util'], }, ], }, ], }, }, ]; ``` {% /tabitem %} {% tabitem label="Legacy (.eslintrc.json)" %} ```json // .eslintrc.json { "root": true, "ignorePatterns": ["**/*"], "plugins": ["@nx"], "overrides": [ { "files": ["*.ts", "*.tsx", "*.js", "*.jsx"], "rules": { "@nx/enforce-module-boundaries": [ "error", { "allow": [], "depConstraints": [ { "sourceTag": "type:feature", "onlyDependOnLibsWithTags": [ "type:feature", "type:ui", "type:data-access", "type:util" ] }, { "sourceTag": "type:ui", "onlyDependOnLibsWithTags": ["type:ui", "type:util"] }, { "sourceTag": "type:data-access", "onlyDependOnLibsWithTags": ["type:data-access", "type:util"] }, { "sourceTag": "type:util", "onlyDependOnLibsWithTags": ["type:util"] } ] } ] } } ] } ``` {% /tabitem %} {% /tabs %} ## Other types You will probably come up with other library types that make sense for your organization. That's fine.
Just keep a few things in mind: - Keep the number of library types low - Clearly document what each type of library means --- ## Project Size Like a lot of decisions in programming, deciding whether to create a new Nx project is all about trade-offs. Each organization will decide on their own conventions, but here are some trade-offs to bear in mind as you have the conversation. ## What is a project for? > Developers new to Nx can be initially hesitant to move their logic into separate projects, because they assume it implies that those projects need to be general purpose and shareable across applications. **This is a common misconception; moving code into projects can be done purely from a code organization perspective.** Ease of re-use might emerge as a positive side effect of refactoring code into projects by applying an _"API thinking"_ approach. It is not the main driver though. In fact, when organizing projects you should think about your business domains. ## Should I make a new project? There are three main benefits to breaking your code up into more projects. ### 1. Faster commands The more granular your projects are, the more effective `nx affected` and the Nx computation cache will be. For example, if `projectA` contains 10 tests, but only 5 of them were affected by a particular code change, all 10 tests will be run by `nx affected -t test`. If you can predict which 5 tests are usually run together, you can split all the related code into a separate project to allow the two groups of 5 tests to be executed independently. ### 2. Visualizing architecture The `nx graph` command generates a graph of how apps and projects depend on each other. If most of your code lives in a few giant projects, this visualization doesn't provide much value. Adding the `--watch` flag to the command will update the visualization in-browser as you make changes. ### 3. Enforcing constraints You can enforce constraints on how different types of projects depend on each other [using tags](/docs/features/enforce-module-boundaries). Following pre-determined conventions on what kind of code can go in different types of projects allows your tagging system to enforce good architectural patterns. Also, each project defines its own API, which allows for encapsulating logic that other parts of the codebase cannot access. You can even use a [CODEOWNERS file](https://help.github.com/en/github/creating-cloning-and-archiving-repositories/about-code-owners) to assign ownership of a certain project to a user or team. ## Should I add to an existing project? Limiting the number of projects by keeping code in an existing project also has benefits. ### 1. Consolidating code Related code should be close together. If a developer can accomplish a task without moving between multiple different folders, it helps them work faster and make fewer mistakes. Every new project adds some folders and configuration files that are not directly contributing to business value. Nx helps reduce the cost of adding a new project, but it isn't zero. ### 2. Removing constraints Especially for rapidly evolving code, the standard architectural constraints may just get in the way of experimentation and exploration. It may be worthwhile to develop for a while in a single project in order to allow a real architecture to emerge, and then refactor into multiple projects once the pace of change has slowed down. --- ## Monorepos A monorepo is a single git repository that holds the source code for multiple applications and libraries, along with the tooling for them. ## What are the benefits of a monorepo? - **Shared code and visibility** - [Keeps your code DRY across your entire organization.](/docs/concepts/decisions/code-ownership) Reuse validation code, UI components, and types across the codebase. Reuse code between the backend, the frontend, and utility libraries.
- **Atomic changes** - Change a server API and modify the downstream applications that consume that API in the same commit. You can change a button component in a shared library and the applications that use that component in the same commit. A monorepo saves you the pain of trying to coordinate commits across multiple repositories. - **Developer mobility** - Get a consistent way of building and testing applications written using different tools and technologies. Developers can confidently contribute to other teams' applications and verify that their changes are safe. - **Single set of dependencies** - [Use a single version of all third-party dependencies](/docs/concepts/decisions/dependency-management), reducing inconsistencies between applications. Less actively developed applications are still kept up-to-date with the latest version of a framework, library, or build tool. ## Why not just code collocation? A naive implementation of a monorepo is code collocation, where you combine all the code from multiple repositories into the same repo. Many large companies that use monorepos don't "simply" put all the code in one place. **That's not enough**. Without adequate tooling to coordinate everything, problems arise from simply collocating code. - **Running unnecessary tests** - All tests in the entire repository run to ensure nothing breaks from a given change, even tests in projects that are unrelated to the actual change. - **No code boundaries** - A developer from another team can change code in your project, introducing bugs and inconsistencies. Or worse, another team uses code that you only intended for private use in their application. Now another project depends on it, keeping you from making changes that may break their application. - **Inconsistent tooling** - Each project uses its own set of commands for running tests, building, serving, linting, deploying, and so forth.
Inconsistency creates the mental overhead of remembering which commands to use from project to project. Tools like Lerna and Yarn Workspaces help optimize the installation of node modules, but they **do not** enable monorepo-style development. In other words, they solve an orthogonal problem and can even be used in combination with Nx. Read more [here](https://blog.nrwl.io/why-you-should-switch-from-lerna-to-nx-463bcaf6821). ## Nx + code collocation = monorepo Nx provides tools to give you the benefits of a monorepo without the drawbacks of simple code collocation. ### Scaling your monorepo with Nx - **Consistent Command Execution** - Executors allow for consistent commands to test, serve, build, and lint each project using various tools. - **Consistent Code Generation** - Generators allow you to customize and standardize organizational conventions and structure, removing the need to perform the same manual setup tasks repetitively. - **Affected Commands** - [Nx affected commands](/docs/reference/nx-commands#nx-affected) analyze your source code and the context of the changes, and run tasks only on the projects affected by those changes. - **Remote Caching** - Nx provides local caching and support for remote caching of command executions. With remote caching, when someone on your team runs a command, everyone else gets access to those artifacts to speed up their command executions, bringing them down from minutes to seconds. Nx helps you scale your development to massive applications and libraries even further with distributed task execution and incremental builds. ### Scaling your organization with Nx - **Controlled Code Sharing** - While code becomes much easier to share, there should also be constraints on when and how code can be depended on. Libraries are defined with specific enforced APIs. Rules should be put in place to define which libraries can depend on each other.
Also, just because everyone has access to the repo does not mean that anyone should change any project. Projects should have owners such that changes to that project require their approval. This can be defined using a `CODEOWNERS` file. - **Consistent Code Generation** - Generators allow you to automate code creation and modification tasks. Instead of writing a seven-step guide in a README file, you can create a generator to prompt the developer for inputs and modify the code directly. Nrwl provides plugins containing useful executors and generators for many popular tools. Also, Nx workspaces are extended further through a growing number of community-provided plugins. - **Accurate Architecture Diagram** - Most architecture diagrams become obsolete in an instant, and every diagram becomes out of date as soon as the code changes. Because Nx understands your code, it generates an up-to-date and accurate diagram of how projects depend on each other. The Nx project dependencies are also pluggable to extend to other programming languages and ecosystems. --- ## Executors and Configurations Executors are pre-packaged node scripts that can be used to run tasks in a consistent way. In order to use an executor, you need to install the plugin that contains the executor and then configure the executor in the project's `project.json` file. ```jsonc title="apps/cart/project.json" { "root": "apps/cart", "sourceRoot": "apps/cart/src", "projectType": "application", "generators": {}, "targets": { "build": { "executor": "@nx/webpack:webpack", "options": { "outputPath": "dist/apps/cart", ... } }, "test": { "executor": "@nx/jest:jest", "options": { ... } } } } ``` Each project has targets configured to run an executor with a specific set of options. In this snippet, `cart` has two targets defined - `build` and `test`. Each executor definition has an `executor` property and, optionally, an `options` and a `configurations` property.
- `executor` is a string of the form `[package name]:[executor name]`. For the `build` target, the package name is `@nx/webpack` and the executor name is `webpack`. - `options` is an object that contains any configuration defaults for the executor. These options vary from executor to executor. - `configurations` allows you to create presets of options for different scenarios. All the configurations start with the properties defined in `options` as a baseline and then overwrite those options. For example, a `production` configuration could override the default options to set `sourceMap` to `false`. Once configured, you can run an executor the same way you would [run any target](/docs/features/run-tasks): ```shell nx [command] [project] nx build cart ``` Browse the executors that are available in the [plugin registry](/docs/plugin-registry). ## Run a terminal command from an executor If you are defining a new target that only needs to run a single shell command, there is a shorthand for the `nx:run-commands` executor that can be used. ```jsonc title="project.json" { "root": "apps/cart", "sourceRoot": "apps/cart/src", "projectType": "application", "generators": {}, "targets": { "echo": { "command": "echo 'hello world'", }, }, } ``` For more info, see the [run-commands documentation](/docs/guides/tasks--caching/run-commands-executor) ## Build your own executor Nx comes with a Devkit that allows you to build your own executor to automate your Nx workspace. Learn more about it in the [docs page about creating a local executor](/docs/extending-nx/local-executors). ## Running executors with a configuration You can use a specific configuration preset like this: ```shell nx [command] [project] --configuration=[configuration] nx build cart --configuration=production ``` ## Use task configurations The `configurations` property provides extra sets of values that will be merged into the options map.
```json title="project.json" { "build": { "executor": "@nx/js:tsc", "outputs": ["{workspaceRoot}/dist/libs/mylib"], "dependsOn": ["^build"], "options": { "tsConfig": "libs/mylib/tsconfig.lib.json", "main": "libs/mylib/src/main.ts" }, "configurations": { "production": { "tsConfig": "libs/mylib/tsconfig-prod.lib.json" } } } } ``` You can select a configuration like this: `nx build mylib --configuration=production` or `nx run mylib:build:production`. The following code snippet shows how the executor options get constructed: ```javascript require('@nx/js').executors['tsc']({ ...options, ...selectedConfiguration, ...commandLineArgs, }); // Pseudocode ``` The selected configuration adds/overrides the default options, and the provided command line args add/override the configuration options. ### Default configuration When using multiple configurations for a given target, it's helpful to provide a default configuration. For example, when running e2e tests for multiple environments, it makes sense to use a `dev` configuration for day-to-day work, while still having the ability to run against an internal staging environment for the QA team. ```json title="project.json" { "e2e": { "executor": "@nx/cypress:cypress", "options": { "cypressConfig": "apps/my-app-e2e/cypress.config.ts" }, "configurations": { "dev": { "devServerTarget": "my-app:serve" }, "qa": { "baseUrl": "https://some-internal-url.example.com" } }, "defaultConfiguration": "dev" } } ``` When running `nx e2e my-app-e2e`, the _dev_ configuration will be used, in this case targeting the local dev server for `my-app`. You can always run the other configurations by explicitly providing the configuration, e.g. `nx e2e my-app-e2e --configuration=qa` or `nx run my-app-e2e:e2e:qa`. --- ## How Caching Works Before running any cacheable task, Nx computes its computation hash. As long as the computation hash is the same, the output of running the task is the same.
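The core idea can be sketched in a few lines: hash everything that could influence the result, and use the hash as the cache key. This is a deliberately minimal illustration, not Nx's actual implementation; it only assumes Node's built-in `node:crypto` module.

```typescript
import { createHash } from 'node:crypto';

// Minimal sketch of a computation hash: combine source file contents,
// runtime values (e.g. the Node version), and command-line flags.
// Identical inputs always produce the identical hash, so a cached
// result can be replayed; any change produces a new hash and a fresh run.
function computationHash(
  sourceFiles: Record<string, string>, // path -> file contents
  runtimeValues: string[],
  args: string[]
): string {
  const hash = createHash('sha256');
  // Sort paths so the hash does not depend on object key order.
  for (const path of Object.keys(sourceFiles).sort()) {
    hash.update(path).update(sourceFiles[path]);
  }
  for (const v of runtimeValues) hash.update(v);
  for (const a of args) hash.update(a);
  return hash.digest('hex');
}

const base = computationHash({ 'src/main.ts': 'export {};' }, ['v20'], []);
const same = computationHash({ 'src/main.ts': 'export {};' }, ['v20'], []);
const changed = computationHash({ 'src/main.ts': 'export {};' }, ['v20'], ['--flag=true']);
```

Here `base` and `same` are equal, while `changed` differs because the extra flag is part of the hashed inputs.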
By default, the computation hash for something like `nx test remixapp` includes: - All the source files of `remixapp` and its dependencies - Relevant global configuration - Versions of external dependencies - Runtime values provisioned by the user such as the version of Node - CLI command flags ![computation-hashing](../../../assets/concepts/caching/nx-hashing.svg) > This behavior is customizable. For instance, lint checks may only depend on the source code of the project and global configs. Builds can depend on the d.ts files of the compiled libs instead of their source. After Nx computes the hash for a task, it then checks if it ran this exact computation before. First, it checks locally; then, if the result is missing and a remote cache is configured, it checks remotely. If a matching computation is found, Nx retrieves and replays it. This includes restoring files. Nx places the right files in the right folders and prints the terminal output. From the user's point of view, the command ran the same, only a lot faster. ![cache](../../../assets/concepts/caching/cache.svg) If Nx doesn't find a corresponding computation hash, Nx runs the task, and after it completes, it takes the outputs and the terminal logs and stores them locally (and, if configured, remotely as well). All of this happens transparently, so you don't have to worry about it. ## Optimizations Nx optimizes the caching experience in several ways. For instance, Nx: - Captures stdout and stderr to make sure the replayed output looks the same, including on Windows. - Minimizes IO by remembering what files are replayed where. - Only shows relevant output when processing a large task graph. - Provides affordances for troubleshooting cache misses. And many other optimizations. As your workspace grows, the task graph looks more like this: ![cache](../../../assets/concepts/caching/task-graph-big.svg) All of these optimizations are crucial for making Nx usable for any non-trivial workspace.
Only the minimum amount of work happens. The rest is either left as is or restored from the cache. ## Fine-tuning the Nx cache Each cacheable task defines a set of inputs and outputs. Inputs are factors Nx considers when calculating the computation hash. Outputs are files that will be cached and restored when the computation hash matches. For more information on how to fine-tune caching, see the [Fine-tuning Caching with Inputs recipe](/docs/guides/tasks--caching/configure-inputs). ### Inputs Inputs are factors Nx considers when calculating the computation hash for a task. For more information on the different types of inputs and how to configure inputs for your tasks, read the [Fine-tuning Caching with Inputs recipe](/docs/guides/tasks--caching/configure-inputs) ## What is cached The Nx cache works at the process level. Regardless of the tools used to build, test, or lint your project, the results are cached. This includes: - **Terminal output:** The terminal output generated when running a task. This includes logs, warnings, and errors. - **Task artifacts:** The output files of a task defined in the [`outputs` property of your project configuration](/docs/guides/tasks--caching/configure-outputs). For example, the build output, test results, or linting reports. - **Hash:** The hash of the inputs to the computation. The inputs include the source code, runtime values, and command line arguments. Note that the hash is included in the cache, but the actual inputs are not. {% tabs %} {% tabitem label="package.json" %} ```json title="apps/myapp/package.json" { "name": "myapp", "dependencies": {}, "devDependencies": {}, "nx": { "targets": { "build": { "outputs": ["{projectRoot}/build", "{projectRoot}/public/build"] } } } } ``` {% /tabitem %} {% tabitem label="project.json" %} ```json title="apps/myapp/project.json" { "name": "myapp", ... "targets": { "build": { ...
"outputs": ["{workspaceRoot}/dist/apps/myapp"] } } } ``` {% /tabitem %} {% /tabs %} If the `outputs` property for a given target isn't defined in the project's configuration, Nx will look at the global, workspace-wide definition in the `targetDefaults` section of `nx.json`: ```jsonc title="nx.json" { ... "targetDefaults": { "build": { "dependsOn": [ "^build" ], "outputs": [ "{projectRoot}/dist", "{projectRoot}/build", "{projectRoot}/public/build" ] } } } ``` If neither is defined, Nx defaults to caching `dist` and `build` at the root of the repository. {% aside type="note" title="Output vs OutputPath" %} Several executors have a property in options called `outputPath`. On its own, this property does not influence caching or what is stored at the end of a run. You can reuse that property though by defining your outputs like: `"outputs": ["{options.outputPath}"]`. {% /aside %} ## Define cache inputs By default, the cache considers all project files (e.g. `{projectRoot}/**/*`). This behavior can be customized by defining, in a much more fine-grained way, which files should be included or excluded for each target. {% tabs %} {% tabitem label="Globally" %} ```json title="nx.json" { "targetDefaults": { "build": { "inputs": ["{projectRoot}/**/*", "!{projectRoot}/**/*.md"] ... }, "test": { "inputs": [...] } } } ``` {% /tabitem %} {% tabitem label="Project Level" %} ```json title="packages/some-project/project.json" { "name": "some-project", "targets": { "build": { "inputs": ["!{projectRoot}/**/*.md"], ... }, "test": { "inputs": [...] } ... } } ``` {% /tabitem %} {% /tabs %} Inputs may include: - source files - environment variables - runtime inputs - command line arguments Learn more about fine-tuning caching in the [Fine-tuning Caching with Inputs page](/docs/guides/tasks--caching/configure-inputs).
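As a mental model, patterns prefixed with `!` subtract from the set of matched files. The sketch below illustrates this; `globToRegExp` and `selectInputs` are simplified helpers written for this example (Nx uses a real glob matcher internally), and `{projectRoot}` is assumed to be expanded to `libs/mylib` beforehand.

```typescript
// Naive glob-to-regex conversion, good enough for these example patterns.
function globToRegExp(glob: string): RegExp {
  const body = glob
    .replace(/[.+^${}()|[\]\\]/g, '\\$&') // escape regex specials (not *)
    .replace(/\*\*/g, '\u0000')           // protect globstar
    .replace(/\*/g, '[^/]*')              // * matches within one segment
    .replace(/\u0000/g, '.*');            // ** matches across segments
  return new RegExp(`^${body}$`);
}

// Keep files matching an include pattern, then drop files matching a "!" pattern.
function selectInputs(files: string[], patterns: string[]): string[] {
  const include = patterns.filter((p) => !p.startsWith('!'));
  const exclude = patterns.filter((p) => p.startsWith('!')).map((p) => p.slice(1));
  return files.filter(
    (f) =>
      include.some((p) => globToRegExp(p).test(f)) &&
      !exclude.some((p) => globToRegExp(p).test(f))
  );
}

// Inputs ["{projectRoot}/**/*", "!{projectRoot}/**/*.md"] with {projectRoot}
// expanded to "libs/mylib": markdown files no longer affect the hash.
const inputs = selectInputs(
  ['libs/mylib/src/index.ts', 'libs/mylib/docs/README.md'],
  ['libs/mylib/**/*', '!libs/mylib/**/*.md']
); // → ['libs/mylib/src/index.ts']
```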
## Args hash inputs Finally, in addition to Source Code Hash Inputs and Runtime Hash Inputs, Nx needs to consider the arguments. For example, `nx build remixapp` and `nx build remixapp -- --flag=true` produce different results. Note that only the flags passed to the task itself affect the result of the computation. For instance, the following commands are identical from the caching perspective: ```shell npx nx build remixapp npx nx run-many -t build -p remixapp ``` In other words, Nx does not cache what the developer types into the terminal. If you build/test/lint… multiple projects, each individual task has its own hash value and will either be retrieved from cache or run. This means that from the caching point of view, the following command: ```shell npx nx run-many -t build -p header footer ``` is identical to the following two commands: ```shell npx nx build header npx nx build footer ``` ## Next steps {% aside type="tip" title="Learn by doing" %} Try the [Caching](/docs/getting-started/tutorials/caching) tutorial to apply these concepts in your own workspace. {% /aside %} Learn more about how to configure caching, where the cache is stored, how to reset it and more. - [Cache Task Results](/docs/features/cache-task-results) --- ## Inferred Tasks (Project Crystal) In Nx version 18, Nx plugins can automatically infer tasks for your projects based on the configuration of different tools. Many tools have configuration files which determine what a tool does. Nx is able to cache the results of running the tool. Nx plugins use the same configuration files to infer how Nx should [run the task](/docs/features/run-tasks). This includes [fine-tuned cache settings](/docs/features/cache-task-results) and automatic [task dependencies](/docs/concepts/task-pipeline-configuration). For example, the `@nx/webpack` plugin infers tasks to run webpack through Nx based on your repository's webpack configuration.
This configuration already defines the destination of your build files, so Nx reads that value and caches the correct output files. {% youtube src="https://youtu.be/wADNsVItnsM" title="Project Crystal" /%} ## How does a plugin infer tasks? Every plugin has its own custom logic, but in order to infer tasks, they all go through the following steps. ### 1. Detect tooling configuration in the workspace The plugin will search the workspace for configuration files of the tool. For each configuration file found, the plugin will infer tasks. For example, the `@nx/webpack` plugin searches for `webpack.config.js` files to infer tasks that run webpack. ### 2. Create an inferred task The plugin then creates tasks with the names specified in the plugin's configuration in `nx.json`. The settings for each task are determined by the tool configuration. The `@nx/webpack` plugin creates tasks named `build`, `serve` and `preview` by default, and it automatically sets the task caching settings based on the values in the webpack configuration files. ## What is inferred Nx plugins infer the following properties by analyzing the tool configuration. - Command - How the tool is invoked - [Cacheability](/docs/concepts/how-caching-works) - Whether the task will be cached by Nx. When the Inputs have not changed, the Outputs will be restored from the cache. - [Inputs](/docs/guides/tasks--caching/configure-inputs) - Inputs are used by the task to produce Outputs. Inputs are used to determine when the Outputs of a task can be restored from the cache. - [Outputs](/docs/guides/tasks--caching/configure-outputs) - Outputs are the results of a task. Outputs are restored from the cache when the Inputs are the same as a previous run. - [Task Dependencies](/docs/concepts/task-pipeline-configuration) - The list of other tasks which must be completed before running this task. ## Nx uses plugins to build the graph A typical workspace will have many plugins inferring tasks.
Nx processes all the plugins registered in `nx.json` to create project configuration for individual projects and a project and task graph that shows the connections between them all. ### Plugin order matters Plugins are processed in the order that they appear in the `plugins` array in `nx.json`. So, if multiple plugins create a task with the same name, the plugin listed last will win. If, for some reason, you have a project with both a `vite.config.js` file and a `webpack.config.js` file, both the `@nx/vite` plugin and the `@nx/webpack` plugin will try to create a `build` task. The `build` task that is executed will be the task that belongs to the plugin listed lower in the `plugins` array. ### Scope plugins to specific projects Plugins use config files to infer tasks for projects. You can specify which config files are processed by Nx plugins using the `include` and `exclude` properties in the plugin configuration object. ```jsonc // nx.json { "plugins": [ { "plugin": "@nx/jest/plugin", "include": ["packages/**/*"], "exclude": ["**/*-e2e/**/*"], }, ], } ``` The `include` and `exclude` properties are file glob patterns that filter which configuration files the plugin processes. In the example above, the `@nx/jest/plugin` will only infer tasks for projects where the `jest.config.ts` file path matches the `packages/**/*` glob but does not match the `**/*-e2e/**/*` glob. This is useful when you want a plugin to only affect certain projects, or when you need to apply different plugin options to different sets of projects by registering the same plugin multiple times with different `include`/`exclude` patterns. ## View inferred tasks To view the task settings for projects in your workspace, [show the project details](/docs/features/explore-graph) either from the command line or using Nx Console. 
```shell nx show project my-project --web ``` {% project_details%} ```json { "project": { "name": "myreactapp", "type": "app", "data": { "root": "apps/myreactapp", "targets": { "build": { "options": { "cwd": "apps/myreactapp", "command": "vite build" }, "cache": true, "dependsOn": ["^build"], "inputs": [ "production", "^production", { "externalDependencies": ["vite"] } ], "outputs": ["{workspaceRoot}/dist/apps/myreactapp"], "executor": "nx:run-commands", "configurations": {}, "metadata": { "technologies": ["vite"] } }, "serve": { "options": { "cwd": "apps/myreactapp", "command": "vite serve", "continuous": true }, "executor": "nx:run-commands", "configurations": {}, "metadata": { "technologies": ["vite"] } }, "preview": { "options": { "cwd": "apps/myreactapp", "command": "vite preview" }, "executor": "nx:run-commands", "configurations": {}, "metadata": { "technologies": ["vite"] } }, "serve-static": { "executor": "@nx/web:file-server", "options": { "buildTarget": "build", "continuous": true }, "configurations": {} }, "test": { "options": { "cwd": "apps/myreactapp", "command": "vitest run" }, "cache": true, "inputs": [ "default", "^production", { "externalDependencies": ["vitest"] } ], "outputs": ["{workspaceRoot}/coverage/apps/myreactapp"], "executor": "nx:run-commands", "configurations": {}, "metadata": { "technologies": ["vite"] } }, "lint": { "cache": true, "options": { "cwd": "apps/myreactapp", "command": "eslint ." 
}, "inputs": [ "default", "{workspaceRoot}/.eslintrc.json", "{workspaceRoot}/apps/myreactapp/.eslintrc.json", "{workspaceRoot}/tools/eslint-rules/**/*", { "externalDependencies": ["eslint"] } ], "executor": "nx:run-commands", "configurations": {}, "metadata": { "technologies": ["eslint"] } } }, "name": "myreactapp", "$schema": "../../../node_modules/nx/schemas/project-schema.json", "sourceRoot": "apps/myreactapp/src", "projectType": "application", "tags": [], "implicitDependencies": [], "metadata": { "technologies": ["react"] } } }, "sourceMap": { "root": ["apps/myreactapp/project.json", "nx/core/project-json"], "targets": ["apps/myreactapp/project.json", "nx/core/project-json"], "targets.build": ["apps/myreactapp/vite.config.ts", "@nx/vite/plugin"], "targets.build.command": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.build.options": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.build.cache": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.build.dependsOn": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.build.inputs": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.build.outputs": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.build.options.cwd": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.serve": ["apps/myreactapp/vite.config.ts", "@nx/vite/plugin"], "targets.serve.command": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.serve.options": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.serve.options.cwd": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.preview": ["apps/myreactapp/vite.config.ts", "@nx/vite/plugin"], "targets.preview.command": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.preview.options": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.preview.options.cwd": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], 
"targets.serve-static": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.serve-static.executor": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.serve-static.options": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.serve-static.options.buildTarget": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.test": ["apps/myreactapp/vite.config.ts", "@nx/vite/plugin"], "targets.test.command": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.test.options": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.test.cache": ["apps/myreactapp/vite.config.ts", "@nx/vite/plugin"], "targets.test.inputs": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.test.outputs": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.test.options.cwd": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.lint": ["apps/myreactapp/project.json", "@nx/eslint/plugin"], "targets.lint.command": [ "apps/myreactapp/project.json", "@nx/eslint/plugin" ], "targets.lint.cache": ["apps/myreactapp/project.json", "@nx/eslint/plugin"], "targets.lint.options": [ "apps/myreactapp/project.json", "@nx/eslint/plugin" ], "targets.lint.inputs": [ "apps/myreactapp/project.json", "@nx/eslint/plugin" ], "targets.lint.options.cwd": [ "apps/myreactapp/project.json", "@nx/eslint/plugin" ], "name": ["apps/myreactapp/project.json", "nx/core/project-json"], "$schema": ["apps/myreactapp/project.json", "nx/core/project-json"], "sourceRoot": ["apps/myreactapp/project.json", "nx/core/project-json"], "projectType": ["apps/myreactapp/project.json", "nx/core/project-json"], "tags": ["apps/myreactapp/project.json", "nx/core/project-json"] } } ``` {% /project_details %} ## Overriding inferred task configuration You can override the task configuration inferred by plugins in several ways. 
If you want to override the task configuration for multiple projects, [use the `targetDefaults` object](/docs/reference/nx-json#target-defaults) in the `nx.json` file. If you only want to override the task configuration for a specific project, [update that project's configuration](/docs/reference/project-configuration) in `package.json` or `project.json`. This configuration is more specific, so it overrides both the inferred configuration and the `targetDefaults`.

The order of precedence for task configuration, from lowest to highest, is:

1. Inferred task configurations from plugins in `nx.json`.
2. `targetDefaults` in `nx.json`.
3. Project configuration in `package.json` or `project.json`.

More details about how to override task configuration are available in these guides:

- [Configure Inputs for Task Caching](/docs/guides/tasks--caching/configure-inputs)
- [Configure Outputs for Task Caching](/docs/guides/tasks--caching/configure-outputs)
- [Defining a Task Pipeline](/docs/guides/tasks--caching/defining-task-pipeline)
- [Pass Arguments to Commands](/docs/guides/tasks--caching/pass-args-to-commands)

## Existing Nx workspaces

{% aside type="tip" title="Learn by doing" %}
Try the [Reducing Configuration Boilerplate](/docs/getting-started/tutorials/reducing-configuration-boilerplate) tutorial to apply these concepts in your own workspace.
{% /aside %}

If you have an existing Nx workspace and upgrade to the latest Nx version, a migration will automatically set `useInferencePlugins` to `false` in `nx.json`. This property allows you to continue to use Nx without inferred tasks. When `useInferencePlugins` is `false`:

1. A newly generated project will have all targets defined with executors, not with inferred tasks.
2. Running `nx add @nx/some-plugin` will not create a plugin entry for `@nx/some-plugin` in the `nx.json` file, so that plugin will not create inferred tasks.
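In `nx.json`, this setting is a single top-level property:

```json
{
  "useInferencePlugins": false
}
```

Setting it to `true` (the default) re-enables inferred tasks.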
If you want to **migrate** your projects to use inferred tasks, follow the recipe for [migrating to inferred tasks](/docs/guides/tasks--caching/convert-to-inferred).

Even once a repository has fully embraced inferred tasks, `project.json` and executors will still be useful. The `project.json` file is needed to modify inferred task options and to define tasks that cannot be inferred. Some executors perform tasks that cannot be accomplished by running a tool directly from the command line (e.g., batch mode).

---

## Mental Model

Nx is like the VSCode of build tools: a powerful core, driven by metadata, and extensible through [plugins](/docs/concepts/nx-plugins). Nx works with a few core concepts to drive your monorepo efficiently: project graphs, task graphs, affected commands, computation hashing, and caching.

## The project graph

The project graph reflects the source code in your repository and all the external dependencies that aren't authored in your repository, such as Webpack, React, Angular, and so forth.

![project-graph](../../../assets/concepts/mental-model/project-graph.svg)

Nx analyzes your file system to detect projects. Projects are identified by the presence of a `package.json` or `project.json` file. Project identification can also be customized through plugins. You can manually define dependencies between the project nodes, but you rarely have to: Nx analyzes source code, installed dependencies, TypeScript configuration, and more to figure out these dependencies for you. Nx also caches the project graph, so it only reanalyzes the files you have changed.

![project-graph-updated](../../../assets/concepts/mental-model/project-graph-updated.svg)

Nx provides an updated graph after each analysis is done.

## Metadata-driven

Everything in Nx comes with metadata to enable toolability.
Nx gathers information about your projects and tasks and then uses that information to help you understand and interact with your codebase. With the right plugins installed, most of the metadata can be [inferred directly from your existing configuration files](/docs/concepts/inferred-tasks) so you don't have to define it manually. This metadata is used by Nx itself, by the VSCode and WebStorm integrations, by the GitHub integration, and by third-party tools.

![metadata](../../../assets/concepts/mental-model/metadata.png)

These tools are able to implement richer experiences with Nx using this metadata.

## The task graph

Nx uses the project graph to create a task graph. Any time you run anything, Nx creates a task graph from the project graph and then executes the tasks in that graph. For instance, `nx test lib` creates a task graph with a single node:

{% graph height="100px" type="task" %}

```json
{
  "projects": [
    {
      "name": "lib",
      "type": "lib",
      "data": { "tags": [], "targets": { "test": {} } }
    }
  ],
  "taskIds": ["lib:test"],
  "taskGraph": {
    "roots": ["lib:test"],
    "tasks": {
      "lib:test": {
        "id": "lib:test",
        "target": { "project": "lib", "target": "test" },
        "projectRoot": "libs/lib",
        "overrides": {}
      }
    },
    "dependencies": {}
  }
}
```

{% /graph %}

A task is an invocation of a target. If you invoke the same target twice, you create two tasks. Nx uses the [project graph](#the-project-graph) to create the task graph, but the two graphs aren't isomorphic, meaning there isn't a one-to-one correspondence between projects and tasks.
If `app1` and `app2` both depend on `lib` and you run `nx run-many -t test -p app1 app2 lib`, the created task graph looks like this:

**Project Graph:**

{% graph height="200px" type="project" %}

```json
{
  "projects": [
    { "name": "app1", "type": "app", "data": { "tags": [], "targets": { "test": {} } } },
    { "name": "app2", "type": "app", "data": { "tags": [], "targets": { "test": {} } } },
    { "name": "lib", "type": "lib", "data": { "tags": [], "targets": { "test": {} } } }
  ],
  "dependencies": {
    "app1": [{ "source": "app1", "target": "lib", "type": "static" }],
    "app2": [{ "source": "app2", "target": "lib", "type": "static" }],
    "lib": []
  },
  "workspaceLayout": { "appsDir": "apps", "libsDir": "libs" },
  "affectedProjectIds": [],
  "focus": null,
  "groupByFolder": false,
  "exclude": []
}
```

{% /graph %}

**Task Graph:**

{% graph height="200px" type="task" %}

```json
{
  "projects": [
    { "name": "app1", "type": "app", "data": { "tags": [], "targets": { "test": {} } } },
    { "name": "app2", "type": "app", "data": { "tags": [], "targets": { "test": {} } } },
    { "name": "lib", "type": "lib", "data": { "tags": [], "targets": { "test": {} } } }
  ],
  "taskIds": ["app1:test", "app2:test", "lib:test"],
  "taskGraph": {
    "roots": ["app1:test", "app2:test", "lib:test"],
    "tasks": {
      "app1:test": { "id": "app1:test", "target": { "project": "app1", "target": "test" }, "projectRoot": "apps/app1", "overrides": {} },
      "app2:test": { "id": "app2:test", "target": { "project": "app2", "target": "test" }, "projectRoot": "apps/app2", "overrides": {} },
      "lib:test": { "id": "lib:test", "target": { "project": "lib", "target": "test" }, "projectRoot": "libs/lib", "overrides": {} }
    },
    "dependencies": {
      "app1:test": [],
      "app2:test": [],
      "lib:test": []
    }
  }
}
```

{% /graph %}

Even though the apps depend on `lib`, testing `app1` doesn't depend on testing `lib`. This means that these tasks can run in parallel. Now let's look at a `test` target that depends on the `test` targets of its dependencies.
```json
{
  "test": {
    "executor": "@nx/jest:jest",
    "outputs": ["{workspaceRoot}/coverage/apps/app1"],
    "dependsOn": ["^test"],
    "options": {
      "jestConfig": "apps/app1/jest.config.js",
      "passWithNoTests": true
    }
  }
}
```

With this, running the same test command creates the following task graph:

**Project Graph:**

{% graph height="200px" type="project" %}

```json
{
  "projects": [
    { "name": "app1", "type": "app", "data": { "tags": [], "targets": { "test": {} } } },
    { "name": "app2", "type": "app", "data": { "tags": [], "targets": { "test": {} } } },
    { "name": "lib", "type": "lib", "data": { "tags": [], "targets": { "test": {} } } }
  ],
  "dependencies": {
    "app1": [{ "source": "app1", "target": "lib", "type": "static" }],
    "app2": [{ "source": "app2", "target": "lib", "type": "static" }],
    "lib": []
  },
  "workspaceLayout": { "appsDir": "apps", "libsDir": "libs" },
  "affectedProjectIds": [],
  "focus": null,
  "groupByFolder": false,
  "exclude": []
}
```

{% /graph %}

**Task Graph:**

{% graph height="200px" type="task" %}

```json
{
  "projects": [
    { "name": "app1", "type": "app", "data": { "tags": [], "targets": { "test": {} } } },
    { "name": "app2", "type": "app", "data": { "tags": [], "targets": { "test": {} } } },
    { "name": "lib", "type": "lib", "data": { "tags": [], "targets": { "test": {} } } }
  ],
  "taskIds": ["app1:test", "app2:test", "lib:test"],
  "taskGraph": {
    "roots": ["lib:test"],
    "tasks": {
      "app1:test": { "id": "app1:test", "target": { "project": "app1", "target": "test" }, "projectRoot": "apps/app1", "overrides": {} },
      "app2:test": { "id": "app2:test", "target": { "project": "app2", "target": "test" }, "projectRoot": "apps/app2", "overrides": {} },
      "lib:test": { "id": "lib:test", "target": { "project": "lib", "target": "test" }, "projectRoot": "libs/lib", "overrides": {} }
    },
    "dependencies": {
      "app1:test": ["lib:test"],
      "app2:test": ["lib:test"],
      "lib:test": []
    }
  }
}
```

{% /graph %}

This often makes more sense for builds: to build `app1`, you want to build `lib` first. You can also define similar relationships between targets of the same project, including a `test` target that depends on the `build` target. Learn more about configuring task pipelines in [Task Pipeline Configuration](/docs/concepts/task-pipeline-configuration).

A task graph can contain different targets, and those can run in parallel. For instance, as Nx is building `app2`, it can be testing `app1` at the same time.

![task-graph-execution](../../../assets/concepts/mental-model/task-graph-execution.svg)

Nx also runs the tasks in the task graph in the right order. Executing tasks in parallel speeds up your overall execution time.

## Affected commands

When you run `nx test app1`, you are telling Nx to run the `app1:test` task plus all the tasks it depends on.
When you run `nx run-many -t test -p app1 lib`, you are telling Nx to do the same for two tasks: `app1:test` and `lib:test`. When you run `nx run-many -t test`, you are telling Nx to do this for all the projects.

As your workspace grows, retesting all projects becomes too slow. To address this, Nx implements code-change analysis via the [`affected` command](/docs/features/ci-features/affected) to determine the minimum set of projects that need to be retested.

How does it work? When you run `nx affected -t test`, Nx looks at the files you changed in your PR and at the nature of those changes (what exactly you updated in those files), and uses this to figure out the list of projects in the workspace that can be affected by the change. It then runs the `run-many` command with that list. For instance, if my PR changes `lib`, and I then run `nx affected -t test`, Nx figures out that `app1` and `app2` depend on `lib`, so it will invoke `nx run-many -t test -p app1 app2 lib`.

![affected](../../../assets/concepts/mental-model/affected.svg)

Nx analyzes the nature of the changes. For example, if you change the version of Next.js in `package.json`, Nx knows that `app2` cannot be affected by it, so it only retests `app1`.

## Computation hashing and caching

Before running a task, Nx computes a hash based on source files, configuration, dependencies, and other inputs. If the hash matches a previous run, the cached result is replayed, including terminal output and file artifacts. If not, Nx runs the task and stores the result for next time.

![computation-hashing](../../../assets/concepts/mental-model/computation-hashing.svg)

Nx checks the local cache first, then the [remote cache](/docs/features/ci-features/remote-cache) if configured. From the user's point of view, the command ran the same, only a lot faster.
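As a sketch, this is how a target is marked cacheable and its hash inputs and outputs are declared in `nx.json`. The `production` named input and the `dist` output path are illustrative placeholders, not values from this document:

```jsonc
{
  "targetDefaults": {
    "build": {
      // opt this target into computation caching
      "cache": true,
      // files that feed into the hash: the project's production
      // fileset plus the production filesets of its dependencies (^)
      "inputs": ["production", "^production"],
      // artifacts to restore on a cache hit
      "outputs": ["{projectRoot}/dist"]
    }
  }
}
```

Changing any file covered by `inputs` produces a new hash and a fresh task run; otherwise the outputs are restored from cache.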
![cache](../../../assets/concepts/mental-model/cache.svg)

See [How Caching Works](/docs/concepts/how-caching-works) for the complete list of hash inputs, cache configuration options, and optimization details.

## Distributed task execution

For large workspaces, even with caching, running all tasks on a single machine can be slow. [Nx Agents](/docs/features/ci-features/distribute-task-execution) can distribute the task graph across multiple machines, running tasks in parallel while using [remote caching](/docs/features/ci-features/remote-cache) to share artifacts between agents. From your CI's perspective, the results appear as if everything ran on a single machine.

![Distribution](../../../assets/concepts/mental-model/dte.svg)

## In summary

- Nx is able to analyze your source code to create a Project Graph.
- Nx can use the project graph and information about projects' targets to create a Task Graph.
- Nx is able to perform code-change analysis to create the smallest task graph for your PR.
- Nx supports [computation caching](/docs/features/cache-task-results) to never execute the same computation twice. This computation cache is pluggable and can be distributed.

---

## Nx Daemon

In version 13, we introduced the opt-in Nx Daemon, which Nx can leverage to dramatically speed up project graph computation, particularly for large workspaces.

## Why is it needed?

Every time you invoke a target directly, such as `nx test myapp`, or run affected commands, such as `nx affected:test`, Nx first needs to generate a project graph in order to figure out how all the different projects and files within your workspace fit together. Naturally, the larger your workspace gets, the more expensive this project graph generation becomes. Thankfully, because Nx stores its metadata on disk, Nx only recomputes what has changed since the last command invocation.
This helps quite a bit, but the recomputation is not very surgical: there is no way for Nx to know what kind of changes you have made, so it has to consider a wide range of possibilities when recomputing the project graph, even with the cache available.

## What is the Nx Daemon?

The Nx Daemon is a process which runs in the background on your local machine. There is one unique Nx Daemon per Nx workspace, meaning that if you have multiple Nx workspaces active on your machine at the same time, the corresponding Nx Daemon instances will operate independently of one another and can be on different versions of Nx.

{% aside type="note" title="Mac & Linux" %}
On macOS and Linux, the daemon communicates over a Unix socket; on Windows, it uses a named pipe.
{% /aside %}

The Nx Daemon is more efficient at recomputing the project graph because it watches the files in your workspace and updates the project graph right away (intelligently throttling to ensure minimal recomputation). It also keeps everything in memory, so responses tend to be a lot faster.

In order to be most efficient, the Nx Daemon has some built-in mechanisms to automatically shut down (including removing all file watchers) when it is not needed. These include:

- after 3 hours of inactivity (meaning the workspace's Nx Daemon did not receive any requests or detect any file changes in that time)
- when the Nx installation changes

If you ever need to manually shut down the Nx Daemon, you can run `nx reset` within the workspace in question.

## Turning it off

The Nx Daemon is enabled by default when running on your local machine. If you want to turn it off:

- set `useDaemonProcess: false` in the runner options in `nx.json`, or
- set the `NX_DAEMON` environment variable to `false`.

When using Nx in a CI environment, the Nx Daemon is disabled by default.
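The `nx.json` option for turning the daemon off lives under the task runner options; a minimal sketch:

```json
{
  "tasksRunnerOptions": {
    "default": {
      "options": {
        "useDaemonProcess": false
      }
    }
  }
}
```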
Whether the process runs is determined by the following function: [https://github.com/nrwl/nx/blob/master/packages/nx/src/utils/is-ci.ts](https://github.com/nrwl/nx/blob/master/packages/nx/src/utils/is-ci.ts)

## Logs

To see information about the running Nx Daemon (such as its background process ID and log output file), run `nx daemon`. Once you have the path to that log file, you can either open it in your IDE or stream updates in a separate terminal window by running `tail -f {REPLACE_WITH_LOG_PATH}`, for example.

## Customizing the socket location

The Nx Daemon uses a Unix socket to communicate between the daemon and the Nx processes. By default, this socket is placed in a temp directory. If you are using Nx in a docker-compose environment, however, you may want to run the daemon manually and control its location to enable sharing the daemon among your docker containers. To do so, set the `NX_DAEMON_SOCKET_DIR` environment variable to a shared directory.

## Daemon behavior in containers

Nx automatically disables the daemon in Docker containers and CI environments. The daemon's performance benefits come from maintaining state between commands and watching for file changes; in short-lived environments where each command runs in a fresh container, this state cannot be reused, so the overhead of starting a background process provides no benefit.

### Automatic detection

Nx detects Docker containers by checking for `/.dockerenv` or Docker references in `/proc/self/cgroup`. When detected, the daemon is disabled unless explicitly enabled with `NX_DAEMON=true`.
### Enabling the daemon in containers If you have a long-running container with a persistent filesystem (e.g., a dev container), you can enable the daemon: ```dockerfile ENV NX_DAEMON=true ``` When enabling the daemon in containers, be aware of these considerations: - **Volume mounts** can cause inode/mtime changes that interfere with file watching - **Container restarts** invalidate the daemon's socket, requiring a new daemon process - **Permission differences** between the container and host may affect socket communication For docker-compose setups with shared daemon sockets, see the [Customizing the socket location](#customizing-the-socket-location) section above. {% aside type="note" title="When to enable the daemon" %} Only enable the daemon in containers that persist across multiple Nx commands. For ephemeral CI containers or frequently restarted dev containers, the default (disabled) behavior is more reliable. {% /aside %} --- ## What Are Nx Plugins? Nx plugins help developers use a tool or framework with Nx. They allow the plugin author who knows the best way to use a tool with Nx to codify their expertise and allow the whole community to reuse those solutions. For example, plugins can accomplish the following: - [Configure Nx cache settings](/docs/concepts/inferred-tasks) for a tool. The [`@nx/webpack`](/docs/technologies/build-tools/webpack/introduction) plugin can automatically configure the [inputs](/docs/guides/tasks--caching/configure-inputs) and [outputs](/docs/guides/tasks--caching/configure-outputs) for a `build` task based on the settings in the `webpack.config.js` file it uses. - [Update tooling configuration](/docs/features/automate-updating-dependencies) when upgrading the tool version. 
When Storybook 7 introduced a [new format](https://storybook.js.org/blog/storybook-csf3-is-here) for their configuration files, anyone using the [`@nx/storybook`](/docs/technologies/test-tools/storybook/introduction) plugin could automatically apply those changes to their repository when upgrading.
- [Set up a tool](/docs/features/generate-code) for the first time. With the [`@nx/playwright`](/docs/technologies/test-tools/playwright/introduction) plugin installed, you can use the `@nx/playwright:configuration` code generator to set up Playwright tests in an existing project.
- [Run a tool in an advanced way](/docs/concepts/executors-and-configurations). The [`@nx/js`](/docs/technologies/typescript/introduction) plugin's [`@nx/js:tsc` executor](/docs/technologies/typescript/executors#tsc) combines Nx's understanding of your repository with TypeScript's native batch mode to make your builds [even more performant](/docs/technologies/typescript/guides/enable-tsc-batch-mode).

## Plugin features

{% linkcard title="Infer tasks" href="/docs/concepts/inferred-tasks" description="Automatically configure Nx settings for tasks based on tooling configuration" /%}
{% linkcard title="Generate Code" href="/docs/features/generate-code" description="Generate and modify code to set up and use the tool or framework" /%}
{% linkcard title="Maintain Dependencies" href="/docs/features/automate-updating-dependencies" description="Automatically update package versions and tooling configuration" /%}
{% linkcard title="Enhance Tooling with Executors" href="/docs/concepts/executors-and-configurations" description="Run a tool in an advanced way that may not be possible from the command line" /%}

## Types of plugins

{% aside type="tip" title="Learn by doing" %}
Try the [Reducing Configuration Boilerplate](/docs/getting-started/tutorials/reducing-configuration-boilerplate) tutorial to apply these concepts in your own workspace.
{% /aside %}

{% linkcard title="Official and Community Plugins" href="/docs/plugin-registry" description="Browse the plugin registry to discover plugins created by the Nx core team and the community" /%}
{% linkcard title="Build Your Own Plugin" href="/docs/extending-nx/organization-specific-plugin" description="Build your own plugin to use internally or share with the community" /%}

---

## Sync Generators

As of Nx 19.8, you can use sync generators to ensure that your repository is maintained in a correct state. One specific application is using the project graph to update files: sync generators can update global configuration files or scripts, or run at the task level to ensure that files are in sync before a task is run.

Examples of sync generators:

- Update a custom CI script with binning strategies based on the current project graph
- Update TypeScript config files with project references based on the current project graph
- Ensure code is formatted in a specific way before CI is run

## Task sync generators

Sync generators can be associated with a particular task. Nx will use the sync generator to ensure that code is correctly configured before running the task. Nx does this in different ways, depending on whether the task is being run on a developer machine or in CI.

On a developer machine, the sync generator is run in `--dry-run` mode, and if files would be changed by the generator, the user is prompted to run the generator or skip it. This prompt can be disabled by setting the `sync.applyChanges` property to `true` or `false` in the `nx.json` file.

```json title="nx.json" {4-6}
{
  "$schema": "packages/nx/schemas/nx-schema.json",
  ...
  "sync": {
    "applyChanges": true
  }
}
```

{% aside type="caution" title="Opting out of automatic sync" %}
If you set `sync.applyChanges` to `false`, then developers must run `nx sync` manually before pushing changes. Otherwise, CI may fail due to the workspace being out of sync.
{% /aside %}

In CI, the sync generator is run in `--dry-run` mode, and if files would be changed by the generator, the task fails with an error provided by the sync generator. The sync generator can be skipped in CI by passing the `--skip-sync` flag when executing the task, or you can skip an individual sync generator by adding that generator to `sync.disabledTaskSyncGenerators` in `nx.json`.

```json title="nx.json" {4-6}
{
  "$schema": "packages/nx/schemas/nx-schema.json",
  ...
  "sync": {
    "disabledTaskSyncGenerators": ["@nx/js:typescript-sync"]
  }
}
```

Use the project details view to **find registered sync generators** for a given task:

```shell
nx show project <project-name>
```

The above command opens the project details view, and the registered sync generators are listed under **Sync Generators** for each target. Most sync generators are inferred when using an [inference plugin](/docs/concepts/inferred-tasks). For example, the `@nx/js/typescript` plugin registers the `@nx/js:typescript-sync` generator on `build` and `typecheck` targets.

{% project_details title="Project Details View" expandedTargets=["build"] %}

```json
{
  "project": {
    "name": "foo",
    "data": {
      "root": "packages/foo",
      "projectType": "library",
      "targets": {
        "build": {
          "dependsOn": ["^build"],
          "cache": true,
          "inputs": [
            "{workspaceRoot}/tsconfig.base.json",
            "{projectRoot}/tsconfig.lib.json",
            "{projectRoot}/src/**/*.ts"
          ],
          "outputs": ["{workspaceRoot}/packages/foo/dist"],
          "syncGenerators": ["@nx/js:typescript-sync"],
          "executor": "nx:run-commands",
          "options": {
            "command": "tsc --build tsconfig.lib.json --pretty --verbose"
          }
        }
      }
    }
  },
  "sourceMap": {
    "targets": ["packages/foo/tsconfig.ts", "@nx/js/typescript"],
    "targets.build": ["packages/foo/tsconfig.ts", "@nx/js/typescript"]
  }
}
```

{% /project_details %}

Task sync generators can be thought of like the `dependsOn` property, but for generators instead of task dependencies.
To [register a generator](/docs/extending-nx/create-sync-generator) as a sync generator for a particular task, add the generator to the `syncGenerators` property of the task configuration. ## Global sync generators Global sync generators are not associated with a particular task and are executed only when the `nx sync` or `nx sync:check` command is explicitly run. They are [registered](/docs/extending-nx/create-sync-generator) in the `nx.json` file with the `sync.globalGenerators` property. ## Sync the project graph and the file system Nx processes the file system in order to [create the project graph](/docs/features/explore-graph) which is used to run tasks in the correct order and determine project dependencies. Sync generators allow you to also go the other direction and use the project graph to update the file system. **File System:** ```text └─ myorg ├─ apps │ ├─ app1 │ └─ app2 ├─ libs │ └─ lib ├─ nx.json └─ package.json ``` **Project Graph:** {% graph title="Project Graph" height="200px" type="project" %} ```json { "projects": [ { "name": "app1", "type": "app", "data": { "tags": [], "targets": { "test": {} } } }, { "name": "app2", "type": "app", "data": { "tags": [], "targets": { "test": {} } } }, { "name": "lib", "type": "lib", "data": { "tags": [], "targets": { "test": {} } } } ], "dependencies": { "app1": [ { "source": "app1", "target": "lib", "type": "static" } ], "app2": [ { "source": "app2", "target": "lib", "type": "static" } ], "lib": [] }, "workspaceLayout": { "appsDir": "apps", "libsDir": "libs" }, "affectedProjectIds": [], "focus": null, "groupByFolder": false, "exclude": [] } ``` {% /graph %} The ability to update the file system from the project graph makes it possible to use the Nx project graph to change the behavior of other tools that are not part of the Nx ecosystem. ## Run `nx sync:check` in CI Task sync generators are executed whenever their task is run, but global sync generators need to be triggered manually with `nx sync`. 
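As described above, global sync generators are registered under `sync.globalGenerators` in `nx.json`. A minimal sketch, where the generator name is a hypothetical placeholder:

```json
{
  "sync": {
    "globalGenerators": ["my-plugin:my-sync-generator"]
  }
}
```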
In order to effectively use sync generators, make sure to add `nx sync:check` to the beginning of your CI scripts so that CI can fail quickly if the code is out of sync. It is also helpful to run `nx sync` in a pre-commit or pre-push Git hook to encourage developers to commit code that is already in sync. --- ## Synthetic Monorepos Most organizations don't have a single giant monorepo. They have a handful of monorepos per team or domain, plus dozens of standalone repos. Consolidating everything into one repository is not just a technical challenge. The organizational side (bringing teams along, changing workflows, ensuring adoption) is often harder than the code migration itself. Synthetic monorepos let you get monorepo benefits without that consolidation. ## What is a synthetic monorepo? A synthetic monorepo connects separate repositories into a unified dependency graph without moving any code. Which repo depends on which, what a change affects downstream, how projects relate across teams: all of that becomes visible automatically. ![A synthetic monorepo connecting multiple monorepos and standalone repos into a unified dependency graph](../../../assets/concepts/synthetic-monorepo.svg) Unlike a traditional monorepo where all code lives in one repository, a synthetic monorepo leaves each repository where it is. Instead, it builds a cross-repo graph that tooling can reason about, just as if the code were in one place. ## What synthetic monorepos enable A synthetic monorepo addresses several downsides of a polyrepo setup: **Visibility** — An automatic cross-repo dependency graph shows which repo depends on which and what a change affects downstream. Always up to date, discovered from actual code — not a manually maintained spreadsheet or catalog. Nx implements this through the [Workspace Graph](/docs/enterprise/polygraph). **Coordination** — Cross-repo changes no longer require manually sequencing PRs, managing compatibility, and coordinating release order. 
Tooling on top of the graph enables impact analysis, coordinated changes, and conformance checking across repo boundaries. **Governance** — Organizational standards apply across every connected repo through [conformance rules](/docs/enterprise/conformance). Scheduled [custom workflows](/docs/enterprise/custom-workflows) check repos continuously — even ones nobody has touched in months. Detection and enforcement happen automatically, not through tickets and follow-ups. **CI intelligence** — [Affected detection](/docs/concepts/mental-model#affected-commands), [remote caching](/docs/concepts/how-caching-works), and [distributed task execution](/docs/concepts/ci-concepts/parallelization-distribution) work across the full graph, not just within a single repo. **AI agents** — AI coding agents are [dramatically less effective in polyrepos](https://youtu.be/alIto5fqrfk) — they can only see one repo at a time, so cross-repo features require you to manually shuttle context between sessions. A synthetic monorepo gives agents cross-repo visibility, enabling coordinated changes, parallel execution, and automatic PR creation across boundaries. [Self-healing CI](/docs/features/ci-features/self-healing-ci) catches failures automatically. ## When to use a synthetic monorepo vs. a real monorepo A real monorepo is the best option when you can consolidate. It gives you atomic commits, a single toolchain, and the simplest mental model. A synthetic monorepo is the better starting point when: - **Consolidation isn't feasible yet:** team autonomy concerns, divergent CI setups, or hundreds of repos make migration impractical. - **You need cross-repo visibility now:** you can't wait months for a migration to see how projects relate across teams. - **Teams need to stay autonomous:** each team keeps their repo, workflow, and release cadence while still participating in a unified graph. The two aren't mutually exclusive. 
Start synthetic for org-wide visibility, then consolidate tightly coupled teams into real monorepos where it makes sense. ## Synthetic monorepos with Nx Polygraph Nx implements synthetic monorepos through [Nx Polygraph](/docs/enterprise/polygraph). Polygraph connects existing repositories into a unified, intelligent graph that powers the visibility, coordination, and CI features described above. It works with any repo, even those that don't use Nx, and requires zero changes to target repos. Learn more about [getting started with Nx Polygraph](/docs/enterprise/polygraph). --- ## What is a Task Pipeline If you have a monorepo workspace (or modularized app), you rarely just run one task. Almost certainly there are relationships among the projects in the workspace and hence tasks need to follow a certain order. As you can see in the graph visualization below, the `myreactapp` project depends on other projects. Therefore, when running the build for `myreactapp`, these dependencies need to be built first so that `myreactapp` can use the resulting build artifacts. {% graph height="450px" %} ```json { "projects": [ { "name": "myreactapp", "type": "app", "data": { "tags": [] } }, { "name": "shared-ui", "type": "lib", "data": { "tags": [] } }, { "name": "feat-products", "type": "lib", "data": { "tags": [] } } ], "dependencies": { "myreactapp": [ { "source": "myreactapp", "target": "feat-products", "type": "static" } ], "shared-ui": [], "feat-products": [ { "source": "feat-products", "target": "shared-ui", "type": "static" } ] }, "workspaceLayout": { "appsDir": "", "libsDir": "" }, "affectedProjectIds": [], "focus": null, "groupByFolder": false } ``` {% /graph %} While you could manage these relationships on your own and set up custom scripts to build all projects in the proper order (e.g. 
first build `shared-ui`, then `feat-products`, and finally `myreactapp`), that kind of approach is not scalable and would need constant maintenance as you keep changing and adding projects to your workspace.

This becomes even more evident when you run tasks in parallel. You cannot just naively run all of them at the same time. Instead, the task orchestrator needs to respect the order of the tasks so that it builds libraries first and then resumes building the apps that depend on those libraries in parallel.

![task-graph-execution](../../../assets/concepts/mental-model/task-graph-execution.svg)

You define task dependencies in the form of "rules", which Nx then follows when running tasks. There's a [detailed recipe](/docs/guides/tasks--caching/defining-task-pipeline), but here's the high-level overview:

```jsonc title="nx.json"
{
  ...
  "targetDefaults": {
    "build": {
      "dependsOn": ["^build", "prebuild"]
    },
    "test": {
      "dependsOn": ["build"]
    }
  }
}
```

{% aside type="caution" title="Older versions of Nx" %}
Older versions of Nx used `targetDependencies` instead of `targetDefaults`. `targetDependencies` was removed in version 16, with `targetDefaults` replacing its use case.
{% /aside %}

When running `nx test myproj`, the above configuration tells Nx to:

1. Run the `test` command for `myproj`.
2. But since there's a dependency defined from `test` to `build` (see `"dependsOn": ["build"]`), Nx runs `build` for `myproj` first.
3. `build` itself defines a dependency on `prebuild` (on the same project) as well as on `build` of all the dependencies. Therefore, it will run the `prebuild` script and the `build` script for all the dependencies.

Note that Nx doesn't have to run all builds before it starts running tests. The task orchestrator will run as many tasks in parallel as possible as long as the constraints are met.

These rules can be defined globally in the `nx.json` file or locally per project in the `package.json` or `project.json` files.
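For example, the same rule that makes `test` depend on `build`, defined locally in a single project's `project.json`, might look like this (the project name is a placeholder):

```json
{
  "name": "myproj",
  "targets": {
    "test": {
      "dependsOn": ["build"]
    }
  }
}
```

A rule defined this way applies only to that project, while the `targetDefaults` version applies to every project with a matching target.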
## Configure it for your own project {% aside type="tip" title="Learn by doing" %} Try the [Configuring Tasks](/docs/getting-started/tutorials/configuring-tasks) tutorial to apply these concepts in your own workspace. {% /aside %} Learn about all the details of how to configure [task pipelines in the corresponding recipe section](/docs/guides/tasks--caching/defining-task-pipeline). --- ## Managing Configuration Files Besides providing caching and task orchestration, Nx also helps incorporate numerous tools and frameworks into your repo. With all these pieces of software commingling, you can end up with a lot of configuration files. Nx plugins help to abstract away some of the difficulties of managing this configuration, but the configuration is all still accessible, in case there is a particular setting that you need to change. ## Different kinds of configuration When discussing configuration, it helps to categorize the configuration settings along two dimensions: - **Type** - The two different types we'll discuss are Nx configuration and tooling configuration. Tooling can be any framework or tool that you use in your repo (e.g., React, Jest, Playwright, TypeScript, etc.) - **Specificity** - There are two different levels of specificity: global and project-specific. Project-specific configuration is merged into and overwrites global configuration. For example, Jest has a global `/jest.config.ts` file and a project-specific `/apps/my-app/jest.config.ts` file that extends it. | | Nx | Tooling | | ---------------- | --------------------------- | ----------------------------- | | Global | `/nx.json` | `/jest.config.ts` | | Project-specific | `/apps/my-app/project.json` | `/apps/my-app/jest.config.ts` | ## How does Nx help manage tooling configuration? In a repository with many different projects and many different tools, there will be a lot of tooling configuration. Nx helps reduce the complexity of managing that configuration in two ways: 1.
Abstracting away common tooling configuration settings so that if your project is using the tool in the most common way, you won't need to worry about configuration at all. The default settings for any Nx plugin executor are intended to work without modification for most projects in the community. 2. Allowing you to [provide `targetDefaults`](/docs/guides/tasks--caching/reduce-repetitive-configuration) so that the most common settings for projects in your repo can all be defined in one place. Then, only projects that are exceptions need to overwrite those settings. With the judicious application of this method, larger repositories can actually have fewer lines of configuration after adding Nx than before. ## Determining the value of a configuration property If you need to track down the value of a specific configuration property (say `runInBand` for `jest` on the `/apps/my-app` project), you need to look in the following locations. The configuration settings are merged with priority being given to the file higher up in the list. 1. In `/apps/my-app/project.json`, the `options` listed under the `test` target that uses the `@nx/jest:jest` executor. 2. In `/nx.json`, the `targetDefaults` listed for the `test` target. 3. The `/apps/my-app/jest.config.ts` file, which one of the `test` target options references. 4. The root `/jest.config.ts` file, which the project-level Jest config extends. ```text repo/ ├── apps/ │ └── my-app/ │ ├── jest.config.ts │ └── project.json ├── jest.config.ts └── nx.json ``` ## More information {% aside type="tip" title="Learn by doing" %} Try the [Configuring Tasks](/docs/getting-started/tutorials/configuring-tasks) tutorial to apply these concepts in your own workspace. {% /aside %} - [Nx Configuration](/docs/reference/nx-json) - [Project Configuration](/docs/reference/project-configuration) --- ## TypeScript Project Linking {% youtube src="https://youtu.be/D9D8KNffyBk" title="TypeScript Monorepos Done Right!"
/%} The naive way to reference code in a separate project is to use a relative path in the `import` statement. ```ts import { someFunction } from '../../teamA/otherProject'; const result = someFunction(); ``` The problem with this approach is that all your import statements become tied to your folder structure. Developers need to know the full path to any project from which they want to import code. Also, if `otherProject` ever moves, there will be superfluous code changes across the entire repository. A more ergonomic solution is to reference your local projects as if they were external npm packages and then use a project linking mechanism to automatically resolve the project file path behind the scenes. ```ts import { someFunction } from '@myorg/otherProject'; const result = someFunction(); ``` There are two different methods that Nx supports for linking TypeScript projects: package manager workspaces and TypeScript path aliases. Nx supported project linking with TS path aliases before package managers offered a workspaces-based approach. The Nx Team has since added full support for workspaces because (1) it has become more common across the TypeScript ecosystem and (2) packages are resolved using native Node.js module resolution instead of relying on TypeScript. Nx provides a cohesive experience whether a repository uses TypeScript path aliases (without project references) or package manager workspaces (with TypeScript project references enabled). ## Project linking with workspaces Create a new Nx workspace that links projects with package manager workspaces: ```shell npx create-nx-workspace ``` {% aside type="note" title="Opt-out of Workspaces" %} You can opt out of workspaces by running `npx create-nx-workspace --no-workspaces`. {% /aside %} ### Set up package manager workspaces The configuration for package manager workspaces varies based on which package manager you're using.
{% tabs %} {% tabitem label="npm" %} ```json title="package.json" { "workspaces": ["apps/*", "packages/*"] } ``` Defining the `workspaces` property in the root `package.json` file lets npm know to look for other `package.json` files in the specified folders. With this configuration in place, all the dependencies for the individual projects will be installed in the root `node_modules` folder when `npm install` is run in the root folder. Also, the projects themselves will be linked in the root `node_modules` folder to be accessed as if they were npm packages. If you want to reference a local library project with its own `build` task, you should include the library in the `devDependencies` of the application/library's `package.json` with `*` specified as the library's version. `*` tells npm to use whatever version of the project is available. ```json title="/apps/my-app/package.json" { "devDependencies": { "@my-org/some-project": "*" } } ``` {% /tabitem %} {% tabitem label="yarn" %} ```json title="package.json" { "workspaces": ["apps/*", "packages/*"] } ``` Defining the `workspaces` property in the root `package.json` file lets yarn know to look for other `package.json` files in the specified folders. With this configuration in place, all the dependencies for the individual projects will be installed in the root `node_modules` folder when `yarn` is run in the root folder. Also, the projects themselves will be linked in the root `node_modules` folder to be accessed as if they were npm packages. If you want to reference a local library project with its own `build` task, you should include the library in the `devDependencies` of the application/library's `package.json` with `workspace:*` specified as the library's version. [`workspace:*` tells yarn that the project is in the same repository](https://yarnpkg.com/features/workspaces) and not an npm package. 
You want to specify local projects as `devDependencies` instead of `dependencies` so that the library is not included twice in the production bundle of the application. ```json title="/apps/my-app/package.json" { "devDependencies": { "@my-org/some-project": "workspace:*" } } ``` {% /tabitem %} {% tabitem label="bun" %} ```json title="package.json" { "workspaces": ["apps/*", "packages/*"] } ``` Defining the `workspaces` property in the root `package.json` file lets bun know to look for other `package.json` files in the specified folders. With this configuration in place, all the dependencies for the individual projects will be installed in the root `node_modules` folder when `bun install` is run in the root folder. Also, the projects themselves will be linked in the root `node_modules` folder to be accessed as if they were npm packages. If you want to reference a local library project with its own `build` task, you should include the library in the `devDependencies` of the application/library's `package.json` with `workspace:*` specified as the library's version. [`workspace:*` tells bun that the project is in the same repository](https://bun.sh/docs/install/workspaces) and not an npm package. You want to specify local projects as `devDependencies` instead of `dependencies` so that the library is not included twice in the production bundle of the application. ```json title="/apps/my-app/package.json" { "devDependencies": { "@my-org/some-project": "workspace:*" } } ``` {% /tabitem %} {% tabitem label="pnpm" %} ```yaml title="pnpm-workspace.yaml" packages: - 'apps/*' - 'packages/*' ``` Defining the `packages` property in the root `pnpm-workspace.yaml` file lets pnpm know to look for project `package.json` files in the specified folders. With this configuration in place, all the dependencies for the individual projects will be installed in the root `node_modules` folder when `pnpm install` is run in the root folder.
If you want to reference a local library project from an application, you need to include the library in the `devDependencies` of the application's `package.json` with `workspace:*` specified as the library's version. [`workspace:*` tells pnpm that the project is in the same repository](https://pnpm.io/workspaces#workspace-protocol-workspace) and not an npm package. You want to specify local projects as `devDependencies` instead of `dependencies` so that the library is not included twice in the production bundle of the application. ```json title="/apps/my-app/package.json" { "devDependencies": { "@my-org/some-project": "workspace:*" } } ``` {% /tabitem %} {% /tabs %} ### Set up TypeScript project references With workspaces enabled, you can also configure TypeScript project references to speed up your build and typecheck tasks. The root `tsconfig.base.json` should contain a `compilerOptions` property and no other properties. `compilerOptions.composite` and `compilerOptions.declaration` should be set to `true`. `compilerOptions.paths` should not be set. ```jsonc title="tsconfig.base.json" { "compilerOptions": { // Required compiler options "composite": true, "declaration": true, "declarationMap": true, // Other options... }, } ``` The root `tsconfig.json` file should extend `tsconfig.base.json` and not include any files. It needs to have `references` for every project in the repository so that editor tooling works correctly. ```jsonc title="tsconfig.json" { "extends": "./tsconfig.base.json", "files": [], // intentionally empty "references": [ // UPDATED BY PROJECT GENERATORS // All projects in the repository ], } ``` #### Individual project TypeScript configuration Each project's `tsconfig.json` file should extend the `tsconfig.base.json` file and list `references` to the project's dependencies. 
```jsonc title="packages/cart/tsconfig.json" { "extends": "../../tsconfig.base.json", "files": [], // intentionally empty "references": [ // UPDATED BY NX SYNC // All project dependencies { "path": "../utils", }, // This project's other tsconfig.*.json files { "path": "./tsconfig.lib.json", }, { "path": "./tsconfig.spec.json", }, ], } ``` Each project's `tsconfig.lib.json` file extends the `tsconfig.base.json` file and adds `references` to the `tsconfig.lib.json` files of project dependencies. ```jsonc title="packages/cart/tsconfig.lib.json" { "extends": "../../tsconfig.base.json", "compilerOptions": { // Any overrides }, "include": ["src/**/*.ts"], "exclude": [ // exclude config and test files ], "references": [ // UPDATED BY NX SYNC // tsconfig.lib.json files for project dependencies { "path": "../utils/tsconfig.lib.json", }, ], } ``` The project's `tsconfig.spec.json` does not need to reference project dependencies. ```jsonc title="packages/cart/tsconfig.spec.json" { "extends": "../../tsconfig.base.json", "compilerOptions": { // Any overrides }, "include": [ // test files ], "references": [ // tsconfig.lib.json for this project { "path": "./tsconfig.lib.json", }, ], } ``` ### TypeScript project references performance benefits {% youtube src="https://youtu.be/O2xBQJMTs9E" title="Why TypeScript Project References Make Your CI Faster!" /%} Using TypeScript project references improves both the speed and memory usage of build and typecheck tasks. The repository below contains benchmarks showing the difference between running typecheck with and without using TypeScript project references. {% github_repository title="TypeScript Project References Benchmark" url="https://github.com/jaysoo/typecheck-timings" /%} Here are the baseline typecheck task performance results. 
```text Typecheck without using project references: 186 seconds, max memory 6.14 GB ``` Using project references allows the TypeScript compiler to individually check the types for each project and store the results of that calculation in a `.tsbuildinfo` file for later use. Because of this, the TypeScript compiler does not need to load the entire codebase into memory at the same time, which you can see from the decreased memory usage on the first run with project references enabled. ```text Typecheck with project references first run: 175 seconds, max memory 945 MB ``` Once the `.tsbuildinfo` files have been created, subsequent runs will be much faster. ```text Typecheck with all `.tsbuildinfo` files created: 25 seconds, max memory 429 MB ``` Even if some projects have been updated and individual projects need to be type checked again, the TypeScript compiler can still use the cached `.tsbuildinfo` files for any projects that were not affected. This is very similar to the way Nx caching and affected features work. ```text Typecheck (1 pkg updated): 36.33 seconds, max memory 655.14 MB Typecheck (5 pkg updated): 48.21 seconds, max memory 702.96 MB Typecheck (25 pkg updated): 65.25 seconds, max memory 666.78 MB Typecheck (100 pkg updated): 80.69 seconds, max memory 664.58 MB Typecheck (1 nested leaf pkg updated): 26.66 seconds, max memory 407.54 MB Typecheck (2 nested leaf pkg updated): 31.17 seconds, max memory 889.86 MB Typecheck (1 nested root pkg updated): 26.67 seconds, max memory 393.78 MB ``` These performance benefits will be more noticeable for larger repositories, but even small code bases will see some benefits. ### Local TypeScript path aliases If you define TS path aliases in an individual project's tsconfig files, you should not define them also in the root `tsconfig.base.json` file because TypeScript does not merge the paths. The paths defined in the root file would be completely overwritten by the ones defined in the project tsconfig. 
For instance, you could define paths like this in an application's tsconfig file. ```jsonc title="/apps/my-remix-app/tsconfig.app.json" { "compilerOptions": { "paths": { "#app/*": ["./app/*"], "#tests/*": ["./tests/*"], "@/icon-name": [ "./app/components/ui/icons/name.d.ts", "./types/icon-name.d.ts", ], }, }, } ``` ## Project linking with TypeScript path aliases {% aside type="caution" title="Path Aliases Overwrite Extended Configuration Files" %} If you define path aliases in a project's specific `tsconfig.*.json` file, those path aliases will overwrite the path aliases defined in the root `tsconfig.base.json`. You can't use both project-level path aliases and root path aliases. {% /aside %} Linking projects with TypeScript path aliases is configured entirely in the tsconfig files. You can still use package manager workspaces to enable you to define separate third-party dependencies for individual projects, but the local project linking is done by TypeScript instead of the package manager. The paths for each library are defined in the root `tsconfig.base.json` and each project's `tsconfig.json` should extend that file. Note that application projects do not need to have a path defined because no projects will import code from a top-level application. ```jsonc title="/tsconfig.base.json" { "compilerOptions": { // common compiler option defaults for all projects // ... // These compiler options must be false or undefined "composite": false, "declaration": false, "paths": { // These paths are automatically added by Nx library generators "@myorg/shared-ui": ["packages/shared-ui/src/index.ts"], // ... }, }, } ``` # Features --- ## Features Learn the core features of Nx with in depth guides. {% index_page_cards path="features" /%} --- ## Automate Updating Dependencies {% youtube src="https://youtu.be/A0FjwsTlZ8A" title="How Automated Code Migrations Work" /%} Keeping your tooling up to date is crucial for the health of your project. 
Tooling maintenance work can be tedious and time-consuming, though. The **Nx migrate** functionality provides a way for you to - automatically update your **`package.json` dependencies** - migrate your **configuration files** (e.g. Jest, ESLint, Nx config) - **adjust your source code** to match the new versions of packages (e.g., migrating across breaking changes) To update your workspace, run: ```shell npx nx@latest migrate latest ``` {% aside type="note" title="Visual migration tool from Nx Console" %} Want a more visual and guided way to migrate? Check out the [Migrate UI](/docs/guides/nx-console/console-migrate-ui) that comes with the [Nx Console extension](/docs/getting-started/editor-setup). {% /aside %} ## How does it work? Nx knows where its configuration files are located and ensures they match the expected format. This automated update process, commonly referred to as "migration," becomes even **more powerful when you leverage [Nx plugins](/docs/plugin-registry)**. Each plugin can provide migrations for its area of competency. For example, the [Nx React plugin](/docs/technologies/react/introduction) knows where to look for React and Nx specific configuration files and knows how to apply certain changes when updating to a given version of React. In the example below, the React plugin defines a migration script (`./src/migrations/.../add-babel-core`) that runs when upgrading to Nx `16.7.0-beta.2` (or higher). ```json {% meta="{7,8}" %} // migrations.json { "generators": { ... "add-babel-core": { ... "version": "16.7.0-beta.2", "implementation": "./src/migrations/update-16-7-0/add-babel-core" }, }, } ``` When running `nx migrate latest`, Nx parses all the available plugins and their migration files and applies the necessary changes to your workspace. ## How do I upgrade my Nx workspace? Updating your Nx workspace happens in three steps: 1. The **installed dependencies**, including the `package.json` and `node_modules`, are updated. 2.
Nx produces a `migrations.json` file containing the **migrations to be run** based on your workspace configuration. You can inspect and adjust the file. Run the migrations to update your configuration files and source code. 3. Optionally, you can remove the `migrations.json` file or keep it to re-run the migration in different Git branches. You can intervene at each step and make adjustments as needed for your specific workspace. This is especially important in large codebases where you might want to control the changes more granularly. ### Step 1: Update dependencies and generate migrations First, run the `migrate` command: ```shell nx migrate latest ``` Note that you can also specify an exact version by replacing `latest` with `nx@<version>`. {% aside title="Update One Major Version at a Time" %} To avoid potential issues, it is [recommended to update one major version of Nx at a time](/docs/guides/tips-n-tricks/advanced-update#one-major-version-at-a-time-small-steps). {% /aside %} This results in: - The `package.json` being updated - A `migrations.json` being generated if there are pending migrations. At this point, no packages have been installed, and no other files have been touched. Now, you can **inspect `package.json` to see if the changes make sense**. Sometimes the migration can update a package to a version that is either not allowed or conflicts with another package. Feel free to manually apply the desired adjustments. Also look at the `migrations.json` for the type of migrations that are going to be applied. ### Step 2: Run migrations You can now run the actual code migrations that were generated in the `migrations.json` in the previous step. ```shell nx migrate --run-migrations ``` Depending on the migrations that ran, this might **update your source code** and **configuration files** in your workspace. All the changes will be unstaged, ready for you to review and commit yourself.
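For illustration, the generated `migrations.json` at the workspace root lists one entry per pending migration. A hedged sketch, reusing the `add-babel-core` migration from the plugin example above (the exact field set may vary between Nx versions):

```jsonc title="migrations.json"
{
  "migrations": [
    {
      // the version this migration applies to, and the plugin that provides it
      "version": "16.7.0-beta.2",
      "package": "@nx/react",
      "name": "add-babel-core"
    }
  ]
}
```

Deleting an entry from the `migrations` array before running `nx migrate --run-migrations` skips that migration.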
{% aside type="tip" title="Migrations are version specific" %} Note that each Nx plugin is able to provide a set of migrations which are relevant to particular versions of the package. Hence `migrations.json` will only contain migrations which are appropriate for the update you are currently applying. {% /aside %} ### Step 3: Clean up After you run all the migrations, you can remove `migrations.json` and commit any outstanding changes. Note: You may want to keep the `migrations.json` until every branch that was created before the migration has been merged. Leaving the `migrations.json` in place allows devs to run `nx migrate --run-migrations` to apply the same migration process to their newly merged code as well. ### Step 4: Update community plugins (Optional) If you have any [Nx community plugins](/docs/plugin-registry) installed, you need to migrate them individually (assuming they provide migration scripts) by using the following command: ```shell nx migrate my-plugin ``` For a list of all the plugins you currently have installed, run: ```shell nx report ``` ## Keep Nx packages on the same version When you run `nx migrate`, the `nx` package and all the `@nx/` packages get updated to the same version. It is important to [keep these versions in sync](/docs/guides/tips-n-tricks/keep-nx-versions-in-sync) to have Nx work properly. As long as you run `nx migrate` instead of manually changing the version numbers, you shouldn't have to worry about it. Also, when you add a new plugin, use `nx add <plugin>` to automatically install the version that matches your repository's version of Nx. ## Need to opt out of some migrations? Sometimes you need to temporarily opt out of some migrations because your workspace is not ready yet. You can manually adjust the `migrations.json` or run the update with the `--interactive` flag to choose which migrations you accept. Find more details in our [Advanced Update Process](/docs/guides/tips-n-tricks/advanced-update) guide.
--- ## Cache Task Results {% youtube src="https://youtu.be/o-6jb78uuP0" title="Remote Caching with Nx Replay" /%} Rebuilding and retesting the same code repeatedly is costly. Nx offers a sophisticated and battle-tested computation caching system that ensures **code is never rebuilt twice**. This: - drastically **speeds up your task execution times** while developing locally and even more [in CI](/docs/features/ci-features/remote-cache) - **saves you money on CI/CD costs** by reducing the number of tasks that need to be executed Nx **restores both the terminal output and the files** created from running the task (e.g., your build or dist directory). If you want to learn more about the conceptual model behind Nx caching, read [How Caching Works](/docs/concepts/how-caching-works). ## Define cacheable tasks {% tabs syncKey="nx-version" %} {% tabitem label="Nx >= 17" %} To enable caching for `build` and `test`, edit the `targetDefaults` property in `nx.json` to include the `build` and `test` tasks: ```json // nx.json { "targetDefaults": { "build": { "cache": true }, "test": { "cache": true } } } ``` {% /tabitem %} {% tabitem label="Nx < 17" %} To enable caching for `build` and `test`, edit the `cacheableOperations` property in `nx.json` to include the `build` and `test` tasks: ```json // nx.json { "tasksRunnerOptions": { "default": { "runner": "nx/tasks-runners/default", "options": { "cacheableOperations": ["build", "test"] } } } } ``` {% /tabitem %} {% /tabs %} {% aside type="note" title="Cacheable operations need to be side effect free" %} This means that given the same input they should always result in the same output. As an example, e2e test runs that hit the backend API cannot be cached as the backend might influence the result of the test run. {% /aside %} ## Enable remote caching By default, Nx caches task results locally. The biggest benefit of caching comes from using remote caching in CI, where you can **share the cache between different runs**. 
Nx comes with a managed remote caching solution built on top of Nx Cloud. To enable remote caching, connect your workspace to [Nx Cloud](https://nx.dev/nx-cloud) by running the following command: ```shell npx nx@latest connect ``` Learn more about [remote caching with Nx Cloud](/docs/features/ci-features/remote-cache). ## Fine-tune caching with inputs and outputs The Nx caching feature starts with sensible defaults, but you can also **fine-tune the defaults** to control exactly what gets cached and when. There are two main options that control caching: - **Inputs -** define what gets included as part of the calculated hash (e.g. files, environment variables, etc.) - **Outputs -** define folders where files might be placed as part of the task execution. You can define these inputs and outputs at the project level (`project.json`) or globally for all projects (in `nx.json`). Take the following example: we want to exclude all `*.md` files from the cache so that whenever we change the `README.md` (or any other markdown file), it does _not_ invalidate the build cache. We also know that the build output will be stored in a folder named after the project name in the `dist` folder at the root of the workspace. To achieve this, we can add `inputs` and `outputs` definitions globally for all projects or at a per-project level: {% tabs %} {% tabitem label="Globally" %} ```json // nx.json { "targetDefaults": { "build": { "inputs": ["{projectRoot}/**/*", "!{projectRoot}/**/*.md"], "outputs": ["{workspaceRoot}/dist/{projectName}"] } } } ``` {% /tabitem %} {% tabitem label="Project Level (project.json)" %} ```json // packages/some-project/project.json { "name": "some-project", "targets": { "build": { ... "inputs": ["!{projectRoot}/**/*.md"], "outputs": ["{workspaceRoot}/dist/apps/some-project"], ... } ... } } ``` {% /tabitem %} {% tabitem label="Project Level (package.json)" %} ```json // packages/some-project/package.json { "name": "some-project", "nx": { "targets": { "build": { ...
"inputs": ["!{projectRoot}/**/*.md"], "outputs": ["{workspaceRoot}/dist/apps/some-project"], ... } ... } } } ``` {% /tabitem %} {% /tabs %} Note that you only need to define output locations if they differ from the usual `dist` or `build` directory, which Nx automatically recognizes. Learn more [about configuring inputs including `namedInputs`](/docs/guides/tasks--caching/configure-inputs). ## Configure caching automatically When using [Nx plugins](/docs/concepts/nx-plugins), many tasks have caching configured automatically, saving you the effort of manual setup. **Nx plugins can [automatically infer tasks](/docs/concepts/inferred-tasks) and configure caching** based on your underlying tooling configuration files. For example, if you add the `@nx/vite` plugin using the following command... ```shell npx nx add @nx/vite ``` ...it automatically detects your `vite.config.ts` file, infers the tasks you'd be able to run, such as `build`, and **automatically configures the cache settings** for these tasks as well as the [task pipeline](/docs/concepts/task-pipeline-configuration) (e.g., triggering dependent builds). This means **you don't need to manually specify cacheable operations for Vite tasks** and the cache settings such as inputs and outputs are always in sync with the `vite.config.ts` file. To view the task settings that have been automatically configured by a plugin, use the following command: ```shell nx show project <project-name> --web ``` Alternatively, you can view these directly in your editor by installing [Nx Console](/docs/getting-started/editor-setup). Learn more details about [Nx plugins](/docs/concepts/nx-plugins) and [inferred tasks](/docs/concepts/inferred-tasks). ## Troubleshoot cache settings Caching is hard.
If you run into issues, check out the following resources: - [Debug cache misses](/docs/troubleshooting/troubleshoot-cache-misses) - [Turn off or skip the cache](/docs/guides/tasks--caching/skipping-cache) - [Change the cache location](/docs/guides/tasks--caching/change-cache-location) - [Clear the local or remote cache](/docs/reference/nx-commands#nx-reset) --- ## Orchestration & CI with Nx Cloud {% youtube src="https://www.youtube.com/watch?v=cDBihpB3SbI" title="Nx and Nx Cloud" width="100%" /%} CI is challenging, and it's **not your fault**. It's a fundamental issue with how the current, traditional CI execution model works. Nx Cloud adopts a new **task-based** CI model that overcomes the slowness and unreliability of the VM-based approach. Nx Cloud improves many aspects of the CI/CD process: - **Speed** - 30% - 70% faster CI (based on reports from our clients) - **Cost** - 40% - 75% reduction in CI costs (observed on the Nx OSS monorepo) - **Reliability** - by automatically identifying flaky tasks (e2e tests in particular) and re-running them ## Connect your workspace to Nx Cloud Run the following command in your Nx workspace (make sure you have it pushed to a remote repository first): ```shell npx nx connect ``` This connects your workspace to Nx Cloud and enables remote caching and CI features. For more details, [follow our in-depth guide](/docs/guides/nx-cloud/setup-ci) for setting up CI with Nx. ## How Nx Cloud improves CI In traditional CI models, work is statically assigned to CI machines. This creates inefficiencies that many teams experience at scale. Nx Cloud uses a **task-based approach to dynamically assign tasks** to agent machines. CI becomes scalable, maintainable, and more reliable because Nx Cloud coordinates work among agent machines automatically and acts on individual tasks directly. For example: - An agent machine fails in a setup step — Nx Cloud automatically reassigns the work to other agent machines.
- More work needs to run in CI — add more agent machines and Nx Cloud automatically assigns available work. - Known flaky tasks force reruns that waste CI time — Nx Cloud automatically detects flaky tasks and reruns them in the current CI execution. [Learn how our customers use Nx Cloud](https://nx.dev/blog?filterBy=customer+story) to scale their workspaces and be more efficient. ## Nx Cloud features {% index_page_cards path="features/ci-features" /%} ## Learn more - [Blog post: Reliable CI: A new execution model fixing both flakiness and slowness](https://nx.dev/blog/reliable-ci-a-new-execution-model-fixing-both-flakiness-and-slowness) - [Live stream: Unlock the secret of fast CI - Hands-on session](https://www.youtube.com/live/rkLKaqLeDa0) - [YouTube: 10x Faster e2e Tests](https://www.youtube.com/watch?v=0YxcxIR7QU0) --- ## Run Only Tasks Affected by a PR {% youtube src="https://youtu.be/q-cu5Lw3DoE" title="Only Run Tasks for Projects That Changed" /%} As your workspace grows, re-testing, re-building, and re-linting **all projects becomes too slow**. To address this, Nx comes with an "affected" command. Using this command, Nx - determines the minimum set of **projects that are affected by the change** - only runs tasks on those affected projects This drastically improves the speed of your CI and reduces the amount of compute needed. This advantage is further enhanced when [paired with remote caching and distribution](#best-paired-with-remote-caching-and-distribution).
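In practice, the affected command is combined with one or more targets. A minimal sketch run from a workspace root (the target and branch names are illustrative; `-t`, `--base`, and `--head` are standard Nx CLI flags):

```shell
# Run lint, test and build only for the projects affected by your changes
npx nx affected -t lint test build

# Compare against an explicit base and head instead of the defaults
npx nx affected -t build --base=main --head=HEAD
```

In CI, `--base` typically points at the branch the PR targets, so only the diff against that branch determines the affected set.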
{% graph title="Making a change in lib10 only affects a sub-part of the project graph (shown in purple)" height="400px" %} ```json { "projects": [ { "type": "app", "name": "app1", "data": {} }, { "type": "app", "name": "app2", "data": {} }, { "type": "lib", "name": "lib1", "data": {} }, { "type": "lib", "name": "lib2", "data": {} }, { "type": "lib", "name": "lib3", "data": {} }, { "type": "lib", "name": "lib4", "data": {} }, { "type": "lib", "name": "lib5", "data": {} }, { "type": "lib", "name": "lib6", "data": {} }, { "type": "lib", "name": "lib7", "data": {} }, { "type": "lib", "name": "lib8", "data": {} }, { "type": "lib", "name": "lib9", "data": {} }, { "type": "lib", "name": "lib10", "data": {} }, { "type": "lib", "name": "lib11", "data": {} }, { "type": "lib", "name": "lib12", "data": {} } ], "groupByFolder": false, "workspaceLayout": { "appsDir": "apps", "libsDir": "libs" }, "dependencies": { "app1": [ { "target": "lib1", "source": "app1", "type": "direct" }, { "target": "lib2", "source": "app1", "type": "direct" } ], "app2": [ { "target": "lib4", "source": "app2", "type": "direct" }, { "target": "lib5", "source": "app2", "type": "direct" }, { "target": "lib6", "source": "app2", "type": "direct" } ], "lib1": [ { "target": "lib7", "source": "lib1", "type": "direct" }, { "target": "lib8", "source": "lib1", "type": "direct" } ], "lib2": [ { "target": "lib3", "source": "lib2", "type": "direct" } ], "lib3": [ { "target": "lib8", "source": "lib3", "type": "direct" } ], "lib4": [ { "target": "lib3", "source": "lib4", "type": "direct" }, { "target": "lib9", "source": "lib4", "type": "direct" }, { "target": "lib10", "source": "lib4", "type": "direct" } ], "lib5": [ { "target": "lib10", "source": "lib5", "type": "direct" }, { "target": "lib11", "source": "lib5", "type": "direct" }, { "target": "lib12", "source": "lib5", "type": "direct" } ], "lib6": [ { "target": "lib12", "source": "lib6", "type": "direct" } ], "lib7": [], "lib8": [], "lib9": [], "lib10": [], "lib11": 
[], "lib12": [] }, "affectedProjectIds": ["lib10", "lib4", "lib5", "app2"] } ``` {% /graph %} ## Using Nx affected commands To leverage this feature, use the following command when running your tasks, particularly on CI: ```shell nx affected -t ``` When you run `nx affected -t test`, Nx will: - Use Git to determine the files you changed in your PR. - Use the [project graph](/docs/features/explore-graph) to determine which projects the files belong to. - Determine which projects depend on the projects you modified. Once the projects are identified, Nx runs the tasks you specified on that subset of projects. You can also visualize the affected projects using the [Nx graph](/docs/features/explore-graph). Simply run: ```shell nx graph --affected ``` ## Best paired with remote caching and distribution Using `nx affected` is a powerful tool to reduce the amount of compute that needs to be run. However, this might not be sufficient to significantly speed up your CI pipeline. For example: - If you're modifying a **project that is used by a large portion** of your monorepo projects, you might end up running tasks for almost all the projects in the workspace. - If you have a set of 10 projects affected by a PR and you continue making changes, you will **always end up running tasks for those 10 projects**. The set of affected projects doesn't change but is always calculated with respect to your last successful run on the main branch. This is why Nx Affected is best paired with [remote caching](/docs/features/ci-features/remote-cache) and [distributed task execution](/docs/features/ci-features/distribute-task-execution). ## Configure affected on CI To understand which projects are affected, Nx uses the Git history and the [project graph](/docs/features/explore-graph). Git knows which files changed, and the Nx project graph knows which projects those files belong to. The affected command takes a `base` and `head` commit. 
The default `base` is your `main` branch, and the default `head` is your current file system. This is generally what you want when developing locally, but in CI, you need to customize these values. ```shell nx affected -t build --base=origin/main --head=$PR_BRANCH_NAME # where PR_BRANCH_NAME is defined by your CI system nx affected -t build --base=origin/main~1 --head=origin/main # rerun what is affected by the last commit in main ``` You can also set the base and head SHAs as environment variables: ```shell NX_BASE=origin/main~1 NX_HEAD=origin/main ``` **The recommended approach is to set the base SHA to the latest successful commit** on the `main` branch. This ensures that all changes since the last successful CI run are accounted for. See our guides to get the [last successful CI run for your CI provider](/docs/guides/nx-cloud/setup-ci#get-the-commit-of-the-last-successful-build). ## Ignoring files from affected commands Nx provides two methods to exclude glob patterns (files and folders) from `affected:*` commands: - Glob patterns defined in your `.gitignore` file are ignored. - Glob patterns defined in an optional `.nxignore` file are ignored. ## Marking projects affected by dependency updates By default, Nx will mark all projects as affected whenever your package manager's lock file changes. This behavior is a failsafe in case Nx misses a project that should be affected by a dependency update. If you'd like to opt into smarter behavior, you can configure Nx to only mark projects as affected if they actually depend on the updated packages. ```json // nx.json { "pluginsConfig": { "@nx/js": { "projectsAffectedByDependencyUpdates": "auto" } } } ``` The flag `projectsAffectedByDependencyUpdates` can be set to `auto`, `all`, or an array that contains project specifiers. The default value is `all`. ## Not using git If you aren't using Git, you can pass `--files` to any affected command to indicate what files have been changed. 
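For illustration, here is a minimal sketch of passing changed files explicitly when Git isn't available; the `--files` flag is the one documented above, but the file paths shown are hypothetical:

```shell
# Run builds only for projects touched by these (hypothetical) files,
# without relying on Git to detect the changes
nx affected -t build --files=libs/ui/src/index.ts,libs/util/src/format.ts
```

Nx maps each listed file to its owning project and then runs tasks for those projects and everything that depends on them, just as in the Git-based flow.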
--- ## Distribute Task Execution (Nx Agents) {% youtube src="https://youtu.be/XS-exYYP_Gg" title="Nx Agents Walkthrough" /%} Nx Agents is a **distributed task execution system that intelligently allocates tasks across multiple machines**, optimizing your CI pipeline for speed and efficiency. While using [Nx Affected](/docs/features/ci-features/affected) and [remote caching (Nx Replay)](/docs/features/ci-features/remote-cache) can significantly speed up your CI pipeline, you might still encounter bottlenecks as your codebase scales. Combining affected runs and remote caching with task distribution is key to maintaining low CI times. Nx Agents handles this distribution efficiently, avoiding the complexity and maintenance required if you were to set it up manually. ![Nx Cloud visualization of how tasks are being distributed with Nx Agents](../../../../assets/features/nx-agents-live-chart.avif) Nx Agents offer several key advantages: - **Declarative Configuration:** No maintenance is required as your monorepo evolves, thanks to a declarative setup. - **Efficient Task Replay:** By leveraging [remote caching](/docs/features/ci-features/remote-cache), tasks can be replayed efficiently across machines, enhancing distribution speed. - **Intelligent Task Distribution:** Tasks are distributed based on historical run times and dependencies, ensuring correct and optimal execution. - **Dynamic Resource Allocation:** Agents are [allocated dynamically based on the size of the PR](/docs/features/ci-features/dynamic-agents), balancing cost and speed. - **Seamless CI Integration:** Easily adopt Nx Agents with your [existing CI provider](/docs/guides/nx-cloud/setup-ci), requiring minimal setup changes. - **Simple Activation:** Enable distribution with just a [single line of code](#enable-nx-agents) in your CI configuration. ## Enable Nx Agents To enable task distribution with Nx Agents, make sure your Nx workspace is connected to Nx Cloud. 
If you haven't connected your workspace to Nx Cloud yet, run the following command: ```shell npx nx@latest connect ``` Check out the [connect to Nx Cloud recipe](/docs/guides/nx-cloud/setup-ci) for more details. Then, adjust your CI pipeline configuration to **enable task distribution**. If you don't have a CI config yet, you can generate a new one using the following command: ```shell npx nx g ci-workflow ``` The key line in your CI config is the `start-ci-run` command: ```yaml {% meta="{15}" %} // .github/workflows/ci.yml name: CI ... jobs: main: runs-on: ubuntu-latest steps: ... - uses: actions/checkout@v4 with: fetch-depth: 0 filter: tree:0 - run: pnpm dlx nx start-ci-run --distribute-on="3 linux-medium-js" --stop-agents-after="build" # Cache node_modules - uses: actions/setup-node@v4 with: node-version: 20 cache: 'pnpm' ... # Nx Affected runs only tasks affected by the changes in this PR/commit. Learn more: https://nx.dev/ci/features/affected - run: pnpm exec nx affected -t lint test build ``` This command tells Nx Cloud to: - Start a CI run (`npx nx start-ci-run`) - Collect all Nx commands that are being issued (e.g., `pnpm exec nx affected -t lint test build`) and - Distribute them across 3 agents (`3 linux-medium-js`) where `linux-medium-js` is a predefined agent [launch template](/docs/reference/nx-cloud/launch-templates). ### Configure Nx Agents on your CI Provider Every organization manages their CI/CD pipelines differently, so the guides don't cover org-specific aspects of CI/CD (e.g., deployment). They mainly focus on configuring Nx correctly using Nx Agents and [Nx Replay](/docs/features/ci-features/remote-cache). Read our [setup guides for your CI provider of choice](/docs/guides/nx-cloud/setup-ci). ## How Nx Agents work ![Distribute Task Execution with Nx Agents](../../../../assets/features/nx-agents-orchestration-diagram.svg) _**Nx Agents are declarative**_ in that you only specify the number of agents and the type of agent you want to use. 
Nx Cloud then picks up the Nx commands that are being issued on your CI and distributes them automatically. This results in **low maintenance and a much more efficient distribution strategy**. A non-declarative approach would be one where you define which tasks or projects get executed on which machine, requiring you to adjust the configuration as your codebase changes. _**Nx Agents use a task-centric approach**_ to distribution. Current CI systems use VM-centric approaches, where tasks must be predefined for specific machines, often leading to inefficiencies as your codebase grows. Instead of defining which tasks run on which machine upfront, Nx Agents dynamically process tasks based on availability and task dependencies/ordering. Tasks are picked up by agents based on the task's required processing time (from historical data) and task dependency/ordering (from the Nx graph). This results in faster, more resource-efficient processing, and is also more resilient to failures since any other agent can pick up work if one agent fails during bootup. Read more [on our blog post](https://nx.dev/blog/reliable-ci-a-new-execution-model-fixing-both-flakiness-and-slowness). _**Nx Agents are cost and resource-efficient**_ as the tasks are automatically distributed - **optimizing for speed while also ensuring resource utilization is high** and idle time is low. You can also [dynamically adjust the number of agents](/docs/features/ci-features/dynamic-agents) based on the size of the PR, and we're working on [some more AI-powered features](/docs/features/ci-features/self-healing-ci) to optimize this even further. In addition, [remote caching](/docs/features/ci-features/remote-cache) guarantees tasks are not run twice, and artifacts are shared efficiently among agents. _**Nx Agents are non-invasive**_ in that you don't need to completely overhaul your existing CI configuration or your Nx workspace to use them. 
You can start using Nx Agents with your existing CI provider by adding the `nx start-ci-run...` command mentioned previously. In addition, all artifacts and logs are played back to the main job so you can keep processing them as if they were run on the main job. Hence, your existing post-processing steps should still keep working as before. For a more thorough explanation of how Nx Agents optimize your CI pipeline, read this [guide to parallelization and distribution in CI](/docs/concepts/ci-concepts/parallelization-distribution). ## Nx Agents features {% cardgrid %} {% linkcard title="Create Custom Launch Templates" description="Define your own launch templates to set up agents in the exact right way" href="/docs/reference/nx-cloud/launch-templates" /%} {% linkcard title="Dynamically Allocate Agents" description="Assign a different number of agents to a pipeline based on the size of the PR" href="/docs/features/ci-features/dynamic-agents" /%} {% linkcard title="Automatically Split E2E Tasks" description="Split large e2e tasks into a separate task for each spec file" href="/docs/features/ci-features/split-e2e-tasks" /%} {% linkcard title="Identify and Re-run Flaky Tasks" description="Re-run flaky tasks in CI whenever they fail" href="/docs/features/ci-features/flaky-tasks" /%} {% /cardgrid %} ## Relevant repositories and examples By integrating Nx Agents into your CI pipeline, you can significantly reduce build times, optimize resource use, and maintain a scalable, efficient development workflow. 
- [Reliable CI: A New Execution Model Fixing Both Flakiness and Slowness](https://nx.dev/blog/reliable-ci-a-new-execution-model-fixing-both-flakiness-and-slowness) - [Nx: On how to make your CI 16 times faster with a small config change](https://github.com/vsavkin/interstellar) - ["Lerna & Distributed Task Execution" Example](https://github.com/vsavkin/lerna-dte) --- ## Dynamically Allocate Agents By default, when you set up [Nx Agents](/docs/features/ci-features/distribute-task-execution) you specify the number and type of agents to use. ```yaml {% meta="{9}" %} // .github/workflows/main.yaml ... jobs: - job: main name: Main Job ... steps: ... - run: npx nx start-ci-run --distribute-on="8 linux-medium-js" --stop-agents-after="e2e-ci" - ... ``` This works great, but may not be the most cost-effective way to run your tasks. The goal is to **balance cost and speed**. For example, you might want to run a small PR on a few agents to save costs, but use many agents for a large PR to get the fastest possible build time. ## Configure dynamic agents based on PR size You can configure Nx Cloud to execute a different number of agents based on the size of your PR's affected changes. Define any number of **changesets** (the number of agents and types of agents) to use for different sized PRs and pass them in a configuration file to your `start-ci-run` command. Start by creating a file called `distribution-config.yaml` in the `.nx/workflows` directory of your repo. This file will contain a `distribute-on` property that will be used to define the changesets to use for your PR. You can name your changesets anything you want. {% aside type="caution" title="The order of your changesets matters!" %} Define your changesets in order of increasing size (i.e. smallest changesets are defined before larger changesets). Nx Cloud uses the position of your changesets as part of its calculations to dynamically determine the correct changeset to use for your PR. 
{% /aside %} ```yaml // .nx/workflows/distribution-config.yaml distribute-on: small-changeset: 3 linux-medium-js medium-changeset: 6 linux-medium-js large-changeset: 10 linux-medium-js ``` You can also specify a `default` changeset if you only want one changeset to be used for all PRs. Note that `default` is a reserved keyword so do not use it if you would like to define multiple changesets. ```yaml // .nx/workflows/distribution-config.yaml distribute-on: default: 3 linux-medium-js ``` You can have as many changesets as you want. Based on the number of changesets specified, each changeset is assigned an equal percentage out of 100. Nx Cloud can determine the percentage of affected projects in your PR and use that value to evaluate which changeset to use. {% callout type="deepdive" title="How is the size of the PR determined?" %} Nx Cloud calculates the relationship between the number of [affected projects](/docs/features/ci-features/affected) and the total number of projects in the workspace to determine the size of a PR. {% /callout %} ## Setting up dynamic agents in your CI pipeline In the example below, each changeset would be assigned an equal percentage range out of 100%. If Nx Cloud determines that 30% of your projects have been affected, then it will use the medium changeset to distribute the workload on. If Nx Cloud determines that 55% of your projects have been affected, it will use the large changeset. 
```yaml // .nx/workflows/distribution-config.yaml distribute-on: small-changeset: 3 linux-medium-js # Distribute on small if 1-25% of projects affected in PR medium-changeset: 6 linux-medium-js # Distribute on medium if 26-50% of projects affected in PR large-changeset: 10 linux-medium-js # Distribute on large if 51-75% of projects affected in PR extra-large-changeset: 15 linux-medium-js # Distribute on extra-large if 76-100% of projects affected in PR ``` You can then reference your distribution configuration in your CI pipeline configuration: ```yaml {% meta="{9}" %} // .github/workflows/main.yaml ... jobs: - job: main name: Main Job ... steps: ... - run: npx nx start-ci-run --distribute-on=".nx/workflows/distribution-config.yaml" --stop-agents-after="e2e-ci" - ... ``` Now your agents will distribute your tasks dynamically—scaling and adapting to your PR sizes. This feature helps save costs on smaller PRs while maintaining the high performance necessary for large PRs. --- ## Identify and Re-run Flaky Tasks Tasks that fail only sometimes and only in certain environments are called "flaky tasks". They are enormously time-consuming to identify and debug. Nx Cloud can **reliably detect flaky tasks** and **automatically schedule them to be re-run** on a different agent. Ideally, as a developer you don't even notice flaky tasks anymore, as they're automatically re-run and handled for you. ## Enable flaky task detection Flaky Task Detection is enabled by default if your workspace is connected to Nx Cloud and leverages [Nx Agents](/docs/features/ci-features/distribute-task-execution). To connect your workspace to Nx Cloud run: ```shell npx nx@latest connect ``` See the [connect to Nx Cloud recipe](/docs/guides/nx-cloud/setup-ci) for all the details. ## How Nx identifies flaky tasks Nx leverages its cache mechanism to identify flaky tasks. - Nx creates a **hash of all the inputs** for a task whenever it is run. 
- If Nx ever encounters a task that fails with a particular set of inputs and then succeeds with those same inputs, Nx **knows for a fact that the task is flaky**. Nx can't know with certainty when the task has been fixed to no longer be flaky, so if a particular task has **no flakiness incidents for 2 weeks**, the `flaky` flag is removed for that task. ![Flaky tasks in CI](../../../../assets/features/ci-features/flaky-tasks-ci.png) In this image, the `e2e-ci--src/e2e/app.cy.ts` task is a flaky task that has been automatically retried once. There is a `1 retry` indicator to show that it has been retried and, once expanded, you can see tabs that contain the logs for `Attempt 1` and `Attempt 2`. With this UI, you can easily compare the output between a successful and unsuccessful run of a flaky task. ## Automatically re-run flaky tasks When a flaky task fails in CI with [distributed task execution](/docs/features/ci-features/distribute-task-execution) enabled, Nx will **automatically send that task to a different agent** and run it again (up to 2 tries in total). It's important to run the task on a different agent to ensure that the agent itself or the other tasks that were run on that agent are not the reason for the flakiness. ## Flaky task analytics {% aside type="note" title="Enterprise Feature" %} Workspace flaky task analytics is currently available for organizations on the Enterprise plan. Reach out if your organization is [interested in Nx Enterprise](https://nx.dev/enterprise?utm_source=nx.dev&utm_medium=callout&utm_campaign=flaky-task-analytics). {% /aside %} Nx Cloud provides analytics to help you understand and manage flaky tasks across your workspace. The analytics dashboard gives you insights into which tasks are flaky, how often they fail, and how much time is being wasted on reruns. 
![Flaky Tasks dashboard](../../../../assets/features/ci-features/nx-cloud-flaky-tasks-metrics-chart.avif) The dashboard displays key metrics over the time range selected (7 days vs 30 days) to give you a quick overview of your workspace health. - **Active flaky tasks** - The total number of tasks in your workspace that have a flake rate greater than 0 within the selected time window. - **Average flake rate** - A weighted average flake rate across all tasks in your workspace. This metric uses the sample size to weight each task's flake rate proportionally, so a task that ran 1000 times with 5% flake rate has more impact than one that ran 10 times with 50% flake rate. - **High risk tasks** - The number of tasks with a flake rate higher than 20%, indicating severe reliability issues that need immediate attention. The chart shown provides a visual representation of your flaky tasks, helping you quickly identify which tasks need the most attention. Tasks are plotted based on their **impact score**, which is calculated as `flake_rate × sample_size`. This means frequently-run flaky tasks are weighted higher than rarely-run flaky tasks. Priority levels are determined using percentile-based thresholds that scale across organizations of any size: - **High priority** (red) - Top 10% of tasks by impact score (90th percentile and above). These tasks have severe flakiness and should be addressed immediately. - **Medium priority** (yellow) - Next 23% of tasks by impact score (67th-90th percentile). These tasks have moderate flakiness with sufficient data. - **Low priority** (gray) - Bottom 67% of tasks by impact score. These tasks have minor flakiness or not enough data to be concerning. Tasks on the right side of the chart typically represent the highest priority items that need attention. The scatter plot shows up to 50 results, sorted by the most recently flaked tasks. 
### Flaky task table The table provides detailed information about each flaky task in your workspace: ![Flaky Tasks Analytics Table](../../../../assets/features/ci-features/nx-cloud-flaky-tasks-table.avif) By default, the table loads your most recent flaky tasks. Each row includes: - **Task** - The project and target combination (e.g., `my-app:test`) - **Flake rate** - Measures how often a task succeeds due to flakiness. Specifically, it represents the percentage of total successes that came from unreliable (flaky) task hashes: `flaky_successes / (flaky_successes + non_flaky_successes)`. This tells you: "Of all the times this task succeeded, how many successes came from unreliable code?" - **Total reruns** - The number of times a task was executed more than once due to flakiness. This counts the "extra" executions that happened because the task failed and needed to be retried. Calculated as: `total_executions - unique_hash_count` - **Time wasted** - An estimate of the total time spent on reruns, calculated by multiplying the total reruns by the average task duration - **Last failure** - The timestamp of the most recent failure across all contributing task hashes #### Flaky task detail view Click on any row in the table to view detailed information about a specific flaky task. The **Overview** tab shows summary statistics and trends for the selected task, such as flake rate, time wasted, and automatic deflake counts. ![Flaky Task Detail Overview](../../../../assets/features/ci-features/nx-cloud-flaky-tasks-details.avif) The **Activity** tab displays a timeline of all executions, showing when the task failed and succeeded, so you can jump directly into the runs. ![Flaky Task Detail Activity](../../../../assets/features/ci-features/nx-cloud-flaky-tasks-detail-activity.avif) The **Environments** tab provides insights into the different environments where the task was executed, helping identify if certain environments contribute to flakiness. 
![Flaky Task Detail Environments](../../../../assets/features/ci-features/nx-cloud-flaky-tasks-detail-environment.avif) --- ## GitHub Integration Any CI tool requires tight integration with your existing version control system. Nx Cloud offers first-class integration with GitHub in the following ways. ## Easy workspace setup ![Screenshot of Nx Cloud connecting a GitHub repository](../../../../assets/features/ci-features/github-onboarding.avif) Get started quickly with Nx Cloud using our GitHub connection process. Connect your workspace by selecting your repo and organization from GitHub, and Nx Cloud will create a pull request with all the necessary configuration. User access is automatically connected to GitHub, and a PR is created to connect your workspace. Your repo now has [distributed caching](/docs/features/ci-features/remote-cache) in less than 5 minutes. You can also create a new workspace from a template for experimentation. This workspace will come pre-configured with Nx Cloud and examples of core Nx concepts. [Create a new Nx workspace](https://cloud.nx.app/create-nx-workspace) to get started. [Connect your Nx Cloud account to GitHub](/docs/features/ci-features/github-integration#connect-to-github) to use this feature. ## Pull request insights ![Screenshot of Nx Cloud app for GitHub showing pull request insights](../../../../assets/features/ci-features/github-pr-bot.avif) Good CI checks require fast and easy access to results. That's why Nx Cloud will update your PR with the current running status of your tasks and a convenient link to your Nx Cloud results and logs. Take advantage of the enhanced developer experience of structured and searchable logs. You get quick insight into PR task progress, so you're not stuck waiting for every task to complete. And with Nx Replay, developers can quickly replay tasks locally to avoid running tasks that CI has already completed. 
This feature is available in workspaces with the [Nx Cloud GitHub App installed](/docs/guides/nx-cloud/source-control-integration/github#install-the-app). ## Access control ![Diagram showing users syncing from GitHub to Nx Cloud](../../../../assets/features/ci-features/github-user-management.avif) Nx Cloud organization access can be linked to a GitHub organization, so that memberships are automatically synced. This allows Nx Cloud to fit into any existing onboarding or offboarding process. There's no need to manually manage users separately. Get your engineers Nx Cloud access right alongside their GitHub access so they can get to work fast. Use [personal access tokens](/docs/guides/nx-cloud/personal-access-tokens) to further enhance your security. [Connect your Nx Cloud account to GitHub](/docs/features/ci-features/github-integration#connect-to-github) to use this feature. Members of your GitHub organization will also need to connect their GitHub accounts to access the organization. ## Connect to GitHub Get started by connecting your Nx Cloud account to GitHub. This will allow you to access GitHub-powered organizations that you're a member of, easily connect workspaces, and configure automatic access control through GitHub. {% call_to_action title="Connect to GitHub" url="https://cloud.nx.app/profile/vcs-integrations" icon="nxcloud" description="Connect your Nx Cloud account to GitHub in your profile settings" %} Connect to GitHub {% /call_to_action %} Note that it doesn't matter what method you use to log into Nx Cloud, connecting your GitHub account is a separate step. ## Connect to GitHub during initial setup 1. Visit [Nx Cloud](https://cloud.nx.app) and click **Connect a workspace** at the top. 2. Select **Connect existing repository** from the dropdown. 3. Follow the prompts to select a repo. 4. If that repo is controlled by a GitHub organization, you will be prompted to use that organization. 5. 
Follow the prompts to create a pull request to complete your connection to Nx Cloud. {% call_to_action title="Connect a workspace to Nx Cloud" url="https://cloud.nx.app/setup/connect-workspace/github/select" icon="nxcloud" description="Connect an Nx workspace in GitHub to Nx Cloud" %} Connect an Nx workspace in GitHub to Nx Cloud {% /call_to_action %} ## Connect an organization to GitHub after initial setup If you already have an organization in Nx Cloud, and you'd like to use your GitHub organization to manage access to it: 1. Go to the organization in Nx Cloud while logged in as an admin user. 2. Click on **Settings** in the top menu 3. Go to **Connect GitHub organization** in the sidebar 4. Follow the prompts there to connect to GitHub. Note that for every workspace in the Nx Cloud organization, there must be a corresponding repo in the GitHub organization. ## Connect a workspace to GitHub after initial setup If you already have a workspace connected to Nx Cloud, and you'd like to connect it to a GitHub repo to enable PR insights, [install the Nx Cloud GitHub App](/docs/guides/nx-cloud/source-control-integration/github#install-the-app). --- ## Remote Caching (Nx Replay) {% youtube src="https://youtu.be/NF1__N_snog" title="Remote Caching with Nx Replay" /%} Nx [caches task results locally](/docs/features/cache-task-results) to avoid rebuilding the same code twice. Remote caching extends this by **sharing the cache across your team and CI**. 
![Diagram showing Teika sharing his cache with CI, Kimiko and James](../../../../assets/features/distributed-caching.svg) - **Zero config** and **secure** by default - Drastically **speeds up task execution times** during local development, and more critically in CI - **Saves money on CI/CD costs** by reducing the number of tasks that need to be executed (we observed 30-70% faster CI & half the cost) Nx **restores terminal output, along with the files and artifacts** created from running the task (e.g., your build or dist directory). If you want to learn more about the conceptual model behind Nx caching, read [How Caching Works](/docs/concepts/how-caching-works). ## Configure remote caching To use **Nx Replay**, you need to connect your workspace to Nx Cloud (if you haven't already). ```shell npx nx@latest connect ``` See the [connect to Nx Cloud recipe](/docs/guides/nx-cloud/setup-ci) for all the details. ## Why use remote caching (Nx Replay)? Nx Replay directly benefits your organization by: - **Speeding up CI pipelines:** With Nx Replay, tasks that have already been executed in a PR's initial CI pipeline run can **reuse cached results in subsequent runs**. This reduces the need to re-run unaffected tasks, significantly speeding up the CI process for modified PRs. This benefit complements the [affected command](/docs/features/ci-features/affected), which optimizes pipelines by only running tasks for projects that could be impacted by code changes. - **Boosting local developer efficiency:** Depending on [how cache permissions](/docs/guides/nx-cloud/access-tokens) are set for your workspace, developers can reuse cached results from CI on their local machines. As a result, tasks like builds and tests can complete instantly if they were already executed in CI. This accelerates developer workflows without any extra steps required. 
- **Enabling Nx Agents:** Nx Replay is crucial for [Nx Agents](/docs/features/ci-features/distribute-task-execution) to function efficiently. Nx Agents leverage remote caching as a **transport mechanism** for transferring task artifacts between machines as they distribute tasks. When a task depends on another task that may have been executed on a different agent, Nx Replay ensures the necessary artifacts are transferred seamlessly. This allows each agent to execute only its assigned tasks while relying on cached results for dependencies, ensuring tasks run only once and are shared across all agents. [Learn more about Nx Agents](/docs/features/ci-features/distribute-task-execution). ## What gets stored? Nx Cloud stores the following: - **Terminal output:** The terminal output generated when running a task. This includes logs, warnings, and errors. - **Task artifacts:** The output files of a task defined in the [`outputs` property of your project configuration](/docs/guides/tasks--caching/configure-outputs). For example, the build output, test results, or linting reports. - **Hash:** The hash of the inputs to the computation. The inputs include the source code, runtime values, and command line arguments. Note that the hash is included in the cache, but the actual inputs are not. Learn more about [how caching works](/docs/concepts/how-caching-works#what-is-cached). Cache correctness depends on your tasks declaring the right `inputs` and `outputs`. If a task reads a file that isn't in its inputs, cache hits can serve stale results. If it writes outside its declared outputs, those files go missing on a cache restore. [Task sandboxing](/docs/features/ci-features/sandboxing) uses IO tracing to catch these mismatches so you can fix them before they cause problems. ## Security in remote caching Since we work with many large corporations (including banks, insurance companies, and governments), we take security very seriously. 
Nx Cloud provides several features to ensure your data remains safe and secure: - **Immutability:** Each cache entry is immutable, meaning once an entry is created, it cannot be altered. This ensures that cached results cannot be tampered with by malicious parties, preventing the injection of vulnerabilities into your build process. - **Access Control via Tokens:** Nx Cloud allows you to [control who can read from and write to the cache](/docs/guides/nx-cloud/access-tokens). For example, you can configure these settings to restrict cache write access to your CI pipeline while allowing all developers to only read. - **End-to-End Encryption:** Nx Cloud supports end-to-end encryption to protect your data. Task artifacts are encrypted before being sent to the remote cache and decrypted when retrieved. This ensures that even if someone gains access to Nx Cloud servers, they cannot view your stored artifacts. For more details, visit the [encryption documentation](/docs/guides/nx-cloud/encryption). - **Nx Enterprise (Self-Hosting and EU Regions):** For organizations with specific compliance or data residency requirements, Nx Enterprise offers the option to self-host Nx Cloud on your own infrastructure. Additionally, you can choose to host in EU regions, ensuring that your data complies with regional data protection laws. This is available to our [Nx Enterprise customers](https://nx.dev/enterprise). - **SOC Certification:** Nx and Nx Cloud are SOC Type 1 and Type 2 certified, providing an additional layer of assurance that your data is handled according to industry-standard security practices. For more details, you can visit our [security page](https://security.nx.app). ### Configure caching access Caching access can be restricted in terms of read/write access. You can configure this in your [Nx Cloud dashboard](https://nx.app). Learn more about [cache security here](/docs/concepts/ci-concepts/cache-security). ## FAQ ### What if the remote cache is offline? 
Nx Replay automatically syncs the remote cache to the local cache folder. If the remote cache is unavailable, Nx falls back to the local cache, or simply runs the task if it isn't cached. ### Can I self-host my remote cache? If your organization has special restrictions, [reach out to us](https://nx.dev/enterprise/trial). Our enterprise plan includes a range of hosting options, from dedicated EU-region hosting to single-tenant and on-premises deployments. ### How can I skip Nx Cloud caching? To learn how to temporarily skip task caching, head over to [our corresponding docs page](/docs/guides/tasks--caching/skipping-cache#skip-remote-caching-from-nx-cloud). --- ## Task Sandboxing Task sandboxing monitors file system access during task execution and flags any reads or writes that fall outside the declared `inputs` and `outputs` in your [project configuration](/docs/reference/project-configuration) (whether explicit or [inferred](/docs/concepts/inferred-tasks)). It doesn't block access to the rest of the file system, but undeclared dependencies have direct implications for [caching](/docs/features/cache-task-results) correctness, from false cache hits serving stale results to missing output files after a cache restore. {% aside type="note" title="Enterprise Feature" %} Sandboxing is available on the [Nx Enterprise plan](https://nx.dev/enterprise). [Reach out to learn more](https://nx.dev/enterprise). {% /aside %} {% aside type="caution" title="Nx 22.5+ Required" %} Task sandboxing requires Nx 22.5 or later. {% /aside %} ## Why hermeticity matters A task is _hermetic_ when it only reads from its declared inputs and only writes to its declared outputs. Hermetic tasks are safe to cache and replay because their behavior is fully described by their configuration.
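Concretely, hermeticity is a property of a target's declared configuration. As a minimal sketch (the target name and glob patterns are illustrative, not from an actual workspace), a hermetic build target declares everything it reads and everything it writes in `project.json`:

```json
{
  "targets": {
    "build": {
      "cache": true,
      "inputs": ["{projectRoot}/src/**/*", "{projectRoot}/vite.config.ts"],
      "outputs": ["{projectRoot}/dist"]
    }
  }
}
```

If the task touches nothing outside these globs, replaying its cached `dist/` output is always safe.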
![Diagram showing declared inputs flowing into vite build, producing declared outputs, with a cache key derived from the input hash](../../../../assets/features/sandboxing-cache-flow.svg) [Computation caching](/docs/features/cache-task-results) relies on declared `inputs` and `outputs` to determine when a cache entry is valid. If a task reads a file that isn't listed in its inputs, the cache has no way to know that file changed, and it'll serve a stale result. If a task writes files outside its declared outputs, those files won't be captured in the cache and will go missing on a cache hit. Undeclared dependencies are difficult to catch through code review alone. A task might work correctly for months until an unrelated change to an undeclared input causes a cache-related failure that's hard to trace back to the root cause. ### Example: undeclared input (false cache hit) An API app reads `app.yaml` from the project root at startup to configure routes and middleware. If `app.yaml` isn't included in the target's `inputs`, changing the configuration won't invalidate the cache. The next CI run serves stale output built against the old configuration. Sandboxing catches this because it sees the build process reading `app.yaml` even though it isn't declared. Add the file to the target's `inputs` in `project.json` to fix it: ```json {% meta="{4}" %} { "targets": { "build": { "inputs": ["{projectRoot}/app.yaml"] } } } ``` ### Example: undeclared output (missing artifacts on replay) When using TypeScript `*.d.ts` generation (such as `vite-plugin-dts`) to emit type declarations outside of `dist/`, the build produces two output directories: ```text {% meta="{3-4}" %} packages/ my-package/ dist/ types/ src/ package.json ``` If the target's `outputs` only lists `["{projectRoot}/dist"]`, the `types/` directory isn't [stored in the remote cache](/docs/features/ci-features/remote-cache#what-gets-stored). 
On a cache hit, `dist/` is restored but the type declarations are missing, breaking downstream library consumers. Sandboxing catches this because it sees writes to `types/` that aren't covered by the declared outputs. Include both directories in `outputs` so they can be replayed from cache: ```json {% meta="{4}" %} { "targets": { "build": { "outputs": ["{projectRoot}/dist", "{projectRoot}/types"] } } } ``` ## How sandboxing works Sandboxing runs each task in a monitored environment where all file system reads and writes are tracked. When a task accesses a file outside its declared inputs or writes to a path outside its declared outputs, Nx Cloud flags it. You get an audit trail of every file each task touched during execution, warnings when tasks have undeclared dependencies, and confidence that your cache configuration is correct rather than just "working so far." In **Warning** mode (recommended when getting started), violations are reported in the Nx Cloud UI but tasks continue to completion. In **Strict** mode, tasks fail immediately when a violation is detected. See [Cloud settings](#cloud-settings) to configure the enforcement mode. ## Investigating violations When sandboxing detects undeclared file access, a warning banner appears on the CI Pipeline Execution (CIPE) page in Nx Cloud. Runs that contain violations are tagged with a "sandbox violation" badge. ![CIPE page showing sandbox violations detected in 109 tasks, with a run tagged as sandbox violation](../../../../assets/features/sandboxing-cipe-violations.png) Click into a run to see which individual tasks have violations. Click the **Sandbox violations** button to filter the task list to only tasks with undeclared reads or writes. ![Task list showing individual tasks with sandbox violation badges](../../../../assets/features/sandboxing-task-violations.png) Open the task details and switch to the **Sandbox analysis** tab. 
The process tree shows every process spawned during the task, along with the files each one read and wrote. Click a process to see its full file access list. Files flagged as "unexpected read" or "unexpected write" are the ones not covered by your declared `inputs` and `outputs`. ![Sandbox analysis tab showing process tree with unexpected reads and writes highlighted](../../../../assets/features/sandboxing-analysis.png) To export the raw trace data for further analysis, click **View raw sandbox report** to download the JSON report. ![View raw sandbox report button](../../../../assets/features/sandboxing-raw-report.png) ## Inspecting inputs and outputs Check what your tasks currently declare before enabling sandboxing. Use `nx show target` to inspect a specific target's resolved inputs and outputs: {% aside type="caution" title="Nx 22.6+ Required" %} The `--inputs` and `--outputs` flags for `nx show target` require Nx 22.6 or later. {% /aside %} ```shell nx show target <project>:<target> --inputs --outputs ``` Add the `--json` flag for machine-readable output: ```shell nx show target <project>:<target> --inputs --outputs --json ``` For a broader view of the full project configuration, run `nx show project`: ```shell nx show project <project> ``` You can also use the [project details view](/docs/features/explore-graph) in Nx Console: {% project_details title="Project Details View" expandedTargets=["build"] %} ```json { "project": { "name": "myapp", "type": "app", "data": { "root": "apps/myapp", "targets": { "build": { "options": { "cwd": "apps/myapp", "command": "vite build" }, "cache": true, "dependsOn": ["^build"], "inputs": [ "production", "^production", { "externalDependencies": ["vite"] } ], "outputs": ["{projectRoot}/dist"], "executor": "nx:run-commands", "configurations": {} } }, "sourceRoot": "apps/myapp/src", "projectType": "application", "tags": [] } }, "sourceMap": { "root": ["apps/myapp/project.json", "nx/core/project-json"], "targets.build": ["apps/myapp/vite.config.ts", "@nx/vite/plugin"],
"targets.build.inputs": ["apps/myapp/vite.config.ts", "@nx/vite/plugin"], "targets.build.outputs": ["apps/myapp/vite.config.ts", "@nx/vite/plugin"] } } ``` {% /project_details %} The `inputs` and `outputs` arrays show exactly what the cache tracks. Sandboxing compares these declarations against what the task actually reads and writes at runtime and reports discrepancies. ## Enabling sandboxing Sandboxing is available for [Nx Enterprise](https://nx.dev/enterprise) customers on [single-tenant](/docs/enterprise/single-tenant/overview) deployments using [Nx Agents](/docs/features/ci-features/distribute-task-execution). It is not supported with [manual distributed task execution](/docs/guides/nx-cloud/manual-dte). Contact your Nx Enterprise representative to enable sandboxing for your deployment. ### Excluding paths Create a `.nx/workflows/sandboxing-config.yaml` file to exclude paths from sandboxing checks. ```yaml # .nx/workflows/sandboxing-config.yaml exclude-reads: - '**/vite.config.*.timestamp-*' task-exclusions: - project: myapp target: build exclude-reads: - .next/cache/** exclude-writes: - logs/** - target: lint exclude-reads: - .eslintcache/** ``` Global `exclude-reads` and `exclude-writes` apply to all tasks. Use `task-exclusions` to scope exclusions to a specific project, target, or combination of both. Patterns use glob syntax relative to the workspace root. ## Cloud settings Enterprise customers with sandboxing enabled can configure the enforcement mode in the Nx Cloud workspace settings under **Settings > General**. ![Nx Cloud settings sidebar showing General settings](../../../../assets/features/sandboxing-settings-sidebar.png) Three enforcement modes are available: - **Strict** — tasks that violate sandbox isolation fail immediately. - **Warning** — tasks complete but violations are reported in the Nx Cloud UI. - **Off** — sandboxing is disabled. 
![Sandboxing enforcement mode setting with Strict, Warning, and Off options](../../../../assets/features/sandboxing-settings.png) ## Learn more - [Cache task results](/docs/features/cache-task-results) - [Remote cache](/docs/features/ci-features/remote-cache) - [Project configuration reference](/docs/reference/project-configuration) - [Nx Enterprise](https://nx.dev/enterprise) --- ## AI-Powered Self-Healing CI {% youtube src="https://youtu.be/aQUlsilNSQ8" title="How Nx Self-Healing CI Works" /%} Nx Cloud Self-Healing CI is an **AI-powered system that automatically detects, analyzes, and proposes fixes for CI failures**, offering several key advantages: - **Improves Time to Green (TTG):** Automatically proposes fixes when tasks fail, significantly reducing the time to get your PR merge-ready. No more babysitting PRs. - **Keeps You in the Flow:** Get notified about failed PRs and proposed fixes via PR/MR comments or directly in your editor with Nx Console (VS Code, Cursor, or WebStorm). Review, approve, and keep working while AI handles the rest. - **Leverages Deep Context:** AI agents understand your workspace structure, project relationships, and build configurations through the Nx [project graph](/docs/features/explore-graph) and metadata. - **Non-Invasive Integration:** Works with your existing CI provider without overhauling your current setup. ## Enable self-healing CI {% aside title="VCS Integration Required" type="note" %} Your Nx Cloud workspace needs to have a [VCS integration enabled](/docs/guides/nx-cloud/source-control-integration) to use Self-Healing CI. Self-Healing CI supports GitHub, GitLab, Azure DevOps, and Bitbucket. {% /aside %} To enable Self-Healing CI in your workspace, you'll need to connect to Nx Cloud and configure your CI pipeline. 
If you haven't already connected to Nx Cloud, run the following command: ```shell npx nx@latest connect ``` Next, check the [Nx Cloud workspace settings](https://cloud.nx.app/go/workspace/settings/self-healing-ci) in the Nx Cloud web application to ensure that "Self-Healing CI" is enabled. ### Configure your CI pipeline Add a step to your CI configuration's "main" job that runs the `fix-ci` command. The exact name of this job doesn't matter; it's whichever job in your config invokes `nx start-ci-run` and kicks off your `nx run-many` or `nx affected` commands. By default, your CI provider will only run the step if the previous steps succeeded, but by definition we want Self-Healing CI to run precisely when previous steps fail. Therefore, make sure you use a condition of `if: always()` or equivalent: {% tabs %} {% tabitem label="GitHub Actions" %} ```yaml # .github/workflows/ci.yml name: CI jobs: main: runs-on: ubuntu-latest steps: # Your existing steps which check out the repo, start-ci-run, install # dependencies, etc. # These are just illustrative examples... - uses: actions/checkout@v6 - uses: actions/setup-node@v6 - run: npx nx start-ci-run - run: npm ci - run: npx nx affected -t lint test build # NEW: Add this step at the end of your job - run: npx nx fix-ci if: always() # IMPORTANT: Always run ``` {% /tabitem %} {% tabitem label="GitLab CI" %} ```yaml # .gitlab-ci.yml stages: - build main: stage: build script: # Your existing steps which check out the repo, start-ci-run, install # dependencies, etc.
- npx nx start-ci-run - npm ci - npx nx affected -t lint test build # NEW: Add this section at the end of your job if you don't # already have an after_script section after_script: # IMPORTANT: after_script runs regardless of job success/failure # so it's like if: always() on GitHub - npx nx fix-ci ``` {% /tabitem %} {% tabitem label="Azure DevOps" %} ```yaml # azure-pipelines.yml trigger: - main pool: vmImage: 'ubuntu-latest' steps: # Your existing steps which check out the repo, start-ci-run, install # dependencies, etc. # These are just illustrative examples... - task: NodeTool@0 inputs: versionSpec: '22.x' - script: npx nx start-ci-run - script: npm ci - script: npx nx affected -t lint test build # NEW: Add this step at the end of your job - script: npx nx fix-ci condition: always() # IMPORTANT: Always run ``` {% /tabitem %} {% tabitem label="Bitbucket Pipelines" %} ```yaml # bitbucket-pipelines.yml image: node:22 pipelines: pull-requests: '**': - step: name: CI script: # Your existing steps which start-ci-run, install # dependencies, etc. # These are just illustrative examples... - npx nx start-ci-run - npm ci - npx nx affected -t lint test build after-script: # NEW: Add this section at the end of your step # IMPORTANT: after-script runs regardless of step success/failure # so it's like if: always() on GitHub - npx nx fix-ci ``` {% /tabitem %} {% /tabs %} > NOTE: If all tasks succeed then the `fix-ci` command becomes a no-op automatically, so that is why "always" is recommended. {% aside type="note" title="Using manual DTE?" %} If you use [manual distributed task execution](/docs/guides/nx-cloud/manual-dte) instead of the Nx Agents, Self-Healing CI works the same way. Add `nx fix-ci` to both the **main job** (the orchestrator) and each **agent job** with the appropriate "always run" condition. See the [manual DTE guide](/docs/guides/nx-cloud/manual-dte) for complete examples. 
{% /aside %} ## Configuring self-healing CI Self-Healing CI configuration is primarily managed through your [Nx Cloud workspace settings](https://cloud.nx.app/go/workspace/settings/self-healing-ci) in the Nx Cloud dashboard. This provides a centralized, auditable way to control the feature's behavior. ### General settings | Setting | Description | | ----------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | **Enable Self-Healing CI** | Enable Self-Healing CI for PRs in the current workspace | | **GitHub PR comments** | Show Self-Healing CI feedback and actions within GitHub PR comments (in addition to the Nx Cloud UI) | | **Auto-retry flaky tasks** | Automatically re-run tasks that are detected as flaky to improve CI reliability. This works by pushing an empty commit to the PR branch | | **Allow public link access** | Allow anyone with access to the link the ability to apply or reject a suggested change (not recommended unless absolutely necessary) | | **Draft PR handling** | Allow Self-Healing CI to create fixes for draft PRs | | **Protected branch prefixes** | Configure branch prefixes for which fixes should not be generated (e.g., `release/` will match `release/v1.0`). The default branch and branches named `main`, `master`, `trunk`, `dev`, `stable`, or `canary` will never have fixes generated | ### Eligible tasks Control which failing tasks Nx Cloud will actively try and fix for pull request CI pipeline executions. {% aside type="note" title="CLI Override Available" %} This is analogous to the `--fix-tasks` flag on `nx start-ci-run`. Anything set on the CLI will take precedence over the settings here. 
{% /aside %} You can choose between two modes: | Mode | Description | | --------------------- | -------------------------------------------------------------------------------- | | **Any failing task** | Any task that fails during PR CI pipeline executions is eligible (recommended) | | **Specific patterns** | Limit Self-Healing CI to failing PR tasks that match the specified glob patterns | You can also specify **Never fix** patterns to exclude certain tasks from ever being fixed by Self-Healing CI. For example, `*e2e*` would exclude all e2e-related tasks. ### Auto-apply verified code changes {% youtube src="https://youtu.be/30qh5W8zXTY" title="Self-Healing CI Auto-Apply Suggestions" /%} Automatically commit code change suggestions to the PR branch **when ALL of these are true**: 1. The task **matches the glob patterns** configured below 2. The AI agent is **highly confident** that the suggested code change will fix the failing task 3. The suggestion has been **explicitly verified** to fix the failing task {% aside type="note" title="CLI Override Available" %} This is analogous to the `--auto-apply-fixes` flag on `nx start-ci-run`. Anything set on the CLI will take precedence over the settings here. {% /aside %} #### Deterministic Nx checks A built-in preset that auto-fixes failures from `nx format:check`, `nx sync:check`, and `nx conformance:check` commands. The AI agent has special knowledge of these commands and will invoke the corresponding "writable" version to fix the issue (e.g., `nx format`). You can safely enable this preset even if you only use a subset of these commands. #### Additional include patterns Tasks matching these patterns will also have high-confidence, verified code changes auto-applied. For example: `*build*`, `*test*`, `lint`. #### Exclude patterns Tasks matching these patterns will **never** have code changes auto-applied, even if they match the include patterns or presets specified above. For example: `*e2e*`. 
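These include and exclude patterns use shell-style glob semantics. As an illustrative sketch (plain POSIX shell with hypothetical task names, not the Nx CLI), a pattern like `*e2e*` matches any task name that contains `e2e` anywhere:

```shell
# Illustrative only: shell-style glob matching, the same semantics
# a pattern like "*e2e*" uses when matched against task names.
classify() {
  case "$1" in
    *e2e*) echo "never auto-apply" ;;
    *) echo "eligible" ;;
  esac
}

classify "admin-e2e:e2e-ci" # prints "never auto-apply"
classify "admin:lint"       # prints "eligible"
```

Because the wildcard `*` also matches the empty string, `*e2e*` covers names that start or end with `e2e` as well.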
## Configuration with SELF_HEALING.md Create a `.nx/SELF_HEALING.md` file in your repository to provide project-specific instructions to the Self-Healing CI agent. This file contains freeform markdown that the AI agent reads and interprets naturally. {% aside title="Why a dedicated file?" type="note" %} Using `.nx/SELF_HEALING.md` instead of `AGENTS.md` (or equivalent) separates CI-specific instructions from local development context. The file lives in the `.nx` directory alongside other Nx Cloud configuration. {% /aside %} ### Example SELF_HEALING.md ```markdown # Self-Healing Configuration ## Confidence Rules - Fixes involving "test" targets should require high confidence - Formatting fixes can be applied with medium confidence ## Off-Limits Areas - `/src/generated/` - auto-generated, do not modify - `/legacy/` - requires manual review ## Fix Preferences - Prefer updating ESLint rules over adding disable comments - For type errors, prefer explicit types over `any` ## Context See ARCHITECTURE.md for module boundaries. 
``` ### What to include | Section | Purpose | Example | | -------------------- | --------------------------------------------------- | ---------------------------------------------------------------------------- | | **Confidence Rules** | Override how the AI categorizes failure severity | "Failures in `**/migrations/**` should be classified as `environment_state`" | | **Off-Limits Areas** | Directories or files the agent should never modify | "`/src/generated/` - auto-generated code" | | **Fix Preferences** | Guide the agent's approach to common issues | "Prefer updating ESLint rules over adding disable comments" | | **Predefined Fixes** | Specify deterministic solutions for known failures | "For lint failures, always try running `nx lint --fix` first" | | **Context** | Reference other documentation the agent should read | "See ARCHITECTURE.md for module boundaries" | ### Using CLAUDE.md If your repository already has a `CLAUDE.md` file at the root, the Self-Healing CI agent will read it for additional context. When both files exist: - **SELF_HEALING.md takes precedence** for any conflicting instructions - Both files are read, so general context in `CLAUDE.md` is still available - CI-specific instructions should go in `SELF_HEALING.md` This allows teams to maintain `CLAUDE.md` for local development workflows while using `SELF_HEALING.md` for CI-specific behavior. ### Viewing configuration status After a CI run, navigate to the pipeline execution in Nx Cloud and check the **Configurations** tab to see whether `SELF_HEALING.md` was detected and applied. 
## Receiving fix notifications ### In your editor With [Nx Console](/docs/getting-started/editor-setup) installed, you'll receive notifications directly in VS Code, Cursor, or WebStorm when a fix is available: ![Notification in your editor about an AI fix](../../../../assets/features/notification-self-healing-ci.avif) ### On your pull request/Merge request Self-Healing CI posts a comment on your PR with: - A summary of the reasoning behind the fix - A diff view showing the proposed changes - Buttons to apply or reject the fix ![Self-Healing CI GitHub Comment](../../../../assets/features/self-healing-fix-gh-comment.avif) ## Applying and reverting fixes ### Applying a fix You can apply a proposed fix through: 1. **Editor notification** - Click "Apply" in the Nx Console notification 2. **PR/MR comment** - Click the "Apply" button 3. **Nx Cloud UI** - Use the apply button in the diff viewer ### Applying locally for fine-tuning If a fix is 90% correct but needs minor adjustments: 1. Click "Apply Locally" in the GitHub comment or Nx Cloud UI 2. Run the provided command in your terminal 3. Make your adjustments and commit ![Apply Self-Healing Fixes Locally](../../../../assets/features/self-healing-apply-locally.avif) ### Reverting a fix If you accidentally applied a fix, you can: 1. Manually revert the Git commit 2. Use the "Revert changes" button in the Nx Cloud diff viewer ## Advanced: CLI overrides {% aside type="caution" title="Prefer workspace settings" %} We recommend configuring Self-Healing CI through **[workspace settings](https://cloud.nx.app/go/workspace/settings/self-healing-ci)** in the Nx Cloud dashboard. CLI overrides should only be used for temporary, branch-specific adjustments. 
{% /aside %} {% youtube src="https://youtu.be/KSb48zHbaHg" title="Specify Which Tasks to Fix" /%} ### When to use CLI overrides - **Temporarily disable** Self-Healing CI on a sensitive branch - **Test configurations** before committing to workspace settings ### Available flags Pass these flags to `nx start-ci-run`: | Flag | Description | Example | | -------------------- | ------------------------------------------- | ----------------------------- | | `--fix-tasks` | Override which tasks are eligible for fixes | `--fix-tasks="*lint*,*test*"` | | `--auto-apply-fixes` | Override which tasks can be auto-applied | `--auto-apply-fixes="*lint*"` | ### Disabling features via CLI Use an empty string to **disable** a feature for a specific CI run: ```shell # Disable fix generation entirely for this run npx nx start-ci-run --fix-tasks="" # Disable auto-apply entirely for this run npx nx start-ci-run --auto-apply-fixes="" ``` Both flags treat empty string consistently as an explicit "disable" override. ### Pattern syntax Patterns use glob syntax to match task names: ```shell # Only fix lint and test tasks npx nx start-ci-run --fix-tasks="*lint*,*test*" # Fix everything except e2e tasks npx nx start-ci-run --fix-tasks="!*e2e*" # Auto-apply only lint fixes npx nx start-ci-run --auto-apply-fixes="lint" ``` ### Precedence rules When both workspace settings and CLI overrides are present: 1. **CLI override** (if provided) always takes precedence 2. Falls back to **workspace settings** if no CLI override 3. 
Empty string (`""`) is an explicit "disable", not "use defaults" ### Viewing applied configuration After your CI runs, navigate to the CIPE → **Configurations** tab to see: - What workspace settings were relevant at the time of the CI pipeline execution - What CLI overrides were applied, if any - What the final, effective configuration was --- ## Learn more - [Nx AI Documentation](/docs/features/enhance-ai) - [Nx Console Editor Setup](/docs/getting-started/editor-setup) - [Nx Cloud](/docs/features/ci-features) --- ## Automatically Split Slow Tasks by File (Atomizer) {% youtube src="https://youtu.be/0YxcxIR7QU0" title="10x Faster e2e Tests!" width="100%" /%} End-to-end (e2e) tests, integration tests, and large unit test suites are often monolithic tasks that take considerable time to execute. As a result, teams often push them to a nightly or even weekly build rather than running them for each PR. This approach is suboptimal as it increases the risk of merging problematic PRs. Manually splitting these slow tasks can be complex and require ongoing maintenance. Nx Atomizer solves this by **automatically generating runnable targets for each test file**. For example, a task that takes 10 minutes can be split into five 2-minute tasks and distributed across agents. This allows for: - parallelization across multiple machines with [Nx Agents](/docs/features/ci-features/distribute-task-execution) - faster [flakiness detection & retries](/docs/features/ci-features/flaky-tasks) by isolating and re-running only the failed tests ## Enable automated task splitting ### Step 1: Connect to Nx Cloud To use **automated task splitting**, you need to connect your workspace to Nx Cloud (if you haven't already). ```shell npx nx@latest connect ``` See the [connect to Nx Cloud recipe](/docs/guides/nx-cloud/setup-ci) for all the details.
### Step 2: Add the appropriate plugin Run the command for your plugin to set up inferred tasks and enable task splitting: {% tabs syncKey="test-runner" %} {% tabitem label="Cypress" %} ```shell nx add @nx/cypress ``` {% /tabitem %} {% tabitem label="Playwright" %} ```shell nx add @nx/playwright ``` {% /tabitem %} {% tabitem label="Jest" %} ```shell nx add @nx/jest ``` {% /tabitem %} {% tabitem label="Vitest" %} ```shell nx add @nx/vitest ``` {% /tabitem %} {% tabitem label="Gradle" %} ```shell nx add @nx/gradle ``` {% /tabitem %} {% /tabs %} This command will register the appropriate plugin in the `plugins` array of `nx.json`. If you upgraded Nx from an older version, ensure that [inferred tasks](/docs/concepts/inferred-tasks#existing-nx-workspaces) are enabled in `nx.json`: ```json // nx.json { ... // turned on by default; just make sure it is not set to false "useInferencePlugins": true } ``` ## Update an existing project to use automated task splitting If you are already using the `@nx/cypress`, `@nx/playwright`, `@nx/jest`, `@nx/vitest`, or `@nx/gradle` plugin, you need to manually add the appropriate configuration to the `plugins` array of `nx.json`.
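For orientation, a registered plugin entry in the `plugins` array of `nx.json` looks roughly like this sketch (shown for `@nx/cypress`; the option names reflect that plugin's defaults and differ for other plugins — see the guides linked below for the exact configuration):

```json
{
  "plugins": [
    {
      "plugin": "@nx/cypress/plugin",
      "options": {
        "targetName": "e2e",
        "ciTargetName": "e2e-ci"
      }
    }
  ]
}
```

The CI target name (here `e2e-ci`) is the target that gets split into one task per test file.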
Follow the instructions for the plugin you are using: - [Configure Cypress Task Splitting](/docs/technologies/test-tools/cypress/introduction#nxcypress-configuration) - [Configure Playwright Task Splitting](/docs/technologies/test-tools/playwright/introduction#nxplaywright-configuration) - [Configure Jest Task Splitting](/docs/technologies/test-tools/jest/introduction#splitting-e2e-tests) - [Configure Vitest Task Splitting](/docs/technologies/test-tools/vitest/introduction#splitting-e2e-tests) - [Configure Gradle Testing Task Splitting](/docs/technologies/java/gradle/introduction#test-distribution) ## Verify automated task splitting works Run the following command to open the project detail view for your test project: {% tabs %} {% tabitem label="CLI" %} ```shell nx show project my-project-e2e ``` {% /tabitem %} {% tabitem label="Project Detail View" %} {% project_details title="Project Details View" %} ```json { "project": { "name": "admin-e2e", "data": { "metadata": { "targetGroups": { "E2E (CI)": [ "e2e-ci--src/e2e/app.cy.ts", "e2e-ci--src/e2e/login.cy.ts", "e2e-ci" ] } }, "root": "apps/admin-e2e", "projectType": "application", "targets": { "e2e": { "cache": true, "inputs": ["default", "^production"], "outputs": [ "{workspaceRoot}/dist/cypress/apps/admin-e2e/videos", "{workspaceRoot}/dist/cypress/apps/admin-e2e/screenshots" ], "executor": "nx:run-commands", "dependsOn": ["^build"], "options": { "cwd": "apps/admin-e2e", "command": "cypress run" }, "configurations": { "production": { "command": "cypress run --env webServerCommand=\"nx run admin:preview\"" } }, "metadata": { "technologies": ["cypress"] } }, "e2e-ci--src/e2e/app.cy.ts": { "outputs": [ "{workspaceRoot}/dist/cypress/apps/admin-e2e/videos", "{workspaceRoot}/dist/cypress/apps/admin-e2e/screenshots" ], "inputs": [ "default", "^production", { "externalDependencies": ["cypress"] } ], "cache": true, "options": { "cwd": "apps/admin-e2e", "command": "cypress run --env webServerCommand=\"nx run 
admin:serve-static\" --spec src/e2e/app.cy.ts" }, "executor": "nx:run-commands", "configurations": {}, "metadata": { "technologies": ["cypress"] } }, "e2e-ci--src/e2e/login.cy.ts": { "outputs": [ "{workspaceRoot}/dist/cypress/apps/admin-e2e/videos", "{workspaceRoot}/dist/cypress/apps/admin-e2e/screenshots" ], "inputs": [ "default", "^production", { "externalDependencies": ["cypress"] } ], "cache": true, "options": { "cwd": "apps/admin-e2e", "command": "cypress run --env webServerCommand=\"nx run admin:serve-static\" --spec src/e2e/login.cy.ts" }, "executor": "nx:run-commands", "configurations": {}, "metadata": { "technologies": ["cypress"] } }, "e2e-ci": { "executor": "nx:noop", "cache": true, "inputs": [ "default", "^production", { "externalDependencies": ["cypress"] } ], "outputs": [ "{workspaceRoot}/dist/cypress/apps/admin-e2e/videos", "{workspaceRoot}/dist/cypress/apps/admin-e2e/screenshots" ], "dependsOn": [ { "target": "e2e-ci--src/e2e/app.cy.ts", "projects": "self", "params": "forward", "options": "forward" }, { "target": "e2e-ci--src/e2e/login.cy.ts", "projects": "self", "params": "forward", "options": "forward" } ], "options": {}, "configurations": {}, "metadata": { "technologies": ["cypress"] } }, "lint": { "executor": "@nx/eslint:lint", "inputs": ["default", "{workspaceRoot}/.eslintrc.json"], "cache": true, "outputs": ["{options.outputFile}"], "options": {}, "configurations": {}, "metadata": { "technologies": ["eslint"] } } }, "name": "admin-e2e", "$schema": "../../node_modules/nx/schemas/project-schema.json", "sourceRoot": "apps/admin-e2e/src", "tags": [], "implicitDependencies": ["admin"] } }, "sourceMap": { "root": ["apps/admin-e2e/project.json", "nx/core/project-json"], "projectType": ["apps/admin-e2e/project.json", "nx/core/project-json"], "targets": ["apps/admin-e2e/project.json", "nx/core/project-json"], "targets.e2e": ["apps/admin-e2e/project.json", "nx/core/target-defaults"], "targets.e2e.options": [ "apps/admin-e2e/cypress.config.ts", 
"@nx/cypress/plugin" ], "targets.e2e.cache": [ "apps/admin-e2e/project.json", "nx/core/target-defaults" ], "targets.e2e.inputs": [ "apps/admin-e2e/project.json", "nx/core/target-defaults" ], "targets.e2e.outputs": [ "apps/admin-e2e/cypress.config.ts", "@nx/cypress/plugin" ], "targets.e2e.configurations": [ "apps/admin-e2e/cypress.config.ts", "@nx/cypress/plugin" ], "targets.e2e.executor": [ "apps/admin-e2e/cypress.config.ts", "@nx/cypress/plugin" ], "targets.e2e.options.cwd": [ "apps/admin-e2e/cypress.config.ts", "@nx/cypress/plugin" ], "targets.e2e.options.command": [ "apps/admin-e2e/cypress.config.ts", "@nx/cypress/plugin" ], "targets.e2e.configurations.production": [ "apps/admin-e2e/cypress.config.ts", "@nx/cypress/plugin" ], "targets.e2e.configurations.production.command": [ "apps/admin-e2e/cypress.config.ts", "@nx/cypress/plugin" ], "targets.e2e-ci--src/e2e/app.cy.ts": [ "apps/admin-e2e/cypress.config.ts", "@nx/cypress/plugin" ], "targets.e2e-ci--src/e2e/app.cy.ts.outputs": [ "apps/admin-e2e/cypress.config.ts", "@nx/cypress/plugin" ], "targets.e2e-ci--src/e2e/app.cy.ts.inputs": [ "apps/admin-e2e/cypress.config.ts", "@nx/cypress/plugin" ], "targets.e2e-ci--src/e2e/app.cy.ts.cache": [ "apps/admin-e2e/cypress.config.ts", "@nx/cypress/plugin" ], "targets.e2e-ci--src/e2e/app.cy.ts.options": [ "apps/admin-e2e/cypress.config.ts", "@nx/cypress/plugin" ], "targets.e2e-ci--src/e2e/app.cy.ts.executor": [ "apps/admin-e2e/cypress.config.ts", "@nx/cypress/plugin" ], "targets.e2e-ci--src/e2e/app.cy.ts.options.cwd": [ "apps/admin-e2e/cypress.config.ts", "@nx/cypress/plugin" ], "targets.e2e-ci--src/e2e/app.cy.ts.options.command": [ "apps/admin-e2e/cypress.config.ts", "@nx/cypress/plugin" ], "targets.e2e-ci--src/e2e/login.cy.ts": [ "apps/admin-e2e/cypress.config.ts", "@nx/cypress/plugin" ], "targets.e2e-ci--src/e2e/login.cy.ts.outputs": [ "apps/admin-e2e/cypress.config.ts", "@nx/cypress/plugin" ], "targets.e2e-ci--src/e2e/login.cy.ts.inputs": [ 
"apps/admin-e2e/cypress.config.ts", "@nx/cypress/plugin" ], "targets.e2e-ci--src/e2e/login.cy.ts.cache": [ "apps/admin-e2e/cypress.config.ts", "@nx/cypress/plugin" ], "targets.e2e-ci--src/e2e/login.cy.ts.options": [ "apps/admin-e2e/cypress.config.ts", "@nx/cypress/plugin" ], "targets.e2e-ci--src/e2e/login.cy.ts.executor": [ "apps/admin-e2e/cypress.config.ts", "@nx/cypress/plugin" ], "targets.e2e-ci--src/e2e/login.cy.ts.options.cwd": [ "apps/admin-e2e/cypress.config.ts", "@nx/cypress/plugin" ], "targets.e2e-ci--src/e2e/login.cy.ts.options.command": [ "apps/admin-e2e/cypress.config.ts", "@nx/cypress/plugin" ], "targets.e2e-ci": [ "apps/admin-e2e/cypress.config.ts", "@nx/cypress/plugin" ], "targets.e2e-ci.executor": [ "apps/admin-e2e/cypress.config.ts", "@nx/cypress/plugin" ], "targets.e2e-ci.cache": [ "apps/admin-e2e/cypress.config.ts", "@nx/cypress/plugin" ], "targets.e2e-ci.inputs": [ "apps/admin-e2e/cypress.config.ts", "@nx/cypress/plugin" ], "targets.e2e-ci.outputs": [ "apps/admin-e2e/cypress.config.ts", "@nx/cypress/plugin" ], "targets.e2e-ci.dependsOn": [ "apps/admin-e2e/cypress.config.ts", "@nx/cypress/plugin" ], "targets.e2e.dependsOn": [ "apps/admin-e2e/project.json", "nx/core/target-defaults" ], "targets.lint": ["apps/admin-e2e/project.json", "nx/core/project-json"], "targets.lint.executor": [ "apps/admin-e2e/project.json", "nx/core/project-json" ], "targets.lint.inputs": [ "apps/admin-e2e/project.json", "nx/core/target-defaults" ], "targets.lint.cache": [ "apps/admin-e2e/project.json", "nx/core/target-defaults" ], "name": ["apps/admin-e2e/project.json", "nx/core/project-json"], "$schema": ["apps/admin-e2e/project.json", "nx/core/project-json"], "sourceRoot": ["apps/admin-e2e/project.json", "nx/core/project-json"], "tags": ["apps/admin-e2e/project.json", "nx/core/project-json"], "implicitDependencies": [ "apps/admin-e2e/project.json", "nx/core/project-json" ], "implicitDependencies.admin": [ "apps/admin-e2e/project.json", "nx/core/project-json" ], 
"targets.lint.outputs": [ "apps/admin-e2e/project.json", "nx/core/project-json" ] } } ``` {% /project_details %} {% /tabitem %} {% /tabs %} If you configured Nx Atomizer properly, you'll see that there are tasks named `e2e`, `e2e-ci` (or similar for other test types) and a task for each test file. During local development, you'll want to continue using the base task (e.g., `e2e`, `test`) as it is more efficient on a single machine. ```shell nx e2e my-project-e2e ``` The `-ci` variant task truly shines when configured and run on CI. ## Configure automated task splitting on CI Update your CI pipeline to run the `-ci` variant task (e.g., `e2e-ci`, `test-ci`), which will automatically run all the inferred tasks for the individual test files. Here's an example of a GitHub Actions workflow: ```yaml {% meta="{17,27}" %} // .github/workflows/ci.yml name: CI # ... jobs: main: runs-on: ubuntu-latest steps: - uses: actions/checkout@v4 with: fetch-depth: 0 filter: tree:0 - uses: pnpm/action-setup@v4 with: version: 9 - run: pnpm dlx nx start-ci-run --distribute-on="3 linux-medium-js" --stop-agents-after="e2e-ci" - uses: actions/setup-node@v3 with: node-version: 20 cache: 'pnpm' - run: pnpm install --frozen-lockfile - uses: nrwl/nx-set-shas@v4 - run: pnpm exec nx affected -t lint test build e2e-ci ``` Learn more about configuring your [CI provider by following these detailed recipes](/docs/guides/nx-cloud/setup-ci). --- ## Enforce Module Boundaries If you partition your code into well-defined cohesive units, even a small organization will end up with a dozen apps and dozens or hundreds of libs. If all of them can depend on each other freely, chaos will ensue, and the workspace will become unmanageable. To help with that, Nx provides powerful mechanisms to enforce architectural boundaries and ensure projects can only depend on each other according to your organization's rules. You can declaratively define constraints using project tags and enforce them automatically. 
{% youtube src="https://www.youtube.com/embed/q0en5vlOsWY" title="Applying Module Boundaries" /%} ## How to enforce boundaries Nx offers two complementary approaches to enforce module boundaries: **ESLint Integration** - For JavaScript/TypeScript projects, enforce boundaries on code imports using the `@nx/enforce-module-boundaries` ESLint rule. This checks TypeScript imports and `package.json` dependencies during linting. **Language-Agnostic Conformance** - For any project type (e.g. Java, Python, PHP, JavaScript, etc.), use the [Conformance plugin's Enforce Project Boundaries rule](/docs/enterprise/conformance). This rule checks dependencies in the Nx graph during `nx conformance:check`. Requires [Nx Enterprise plan](https://nx.dev/enterprise). Both approaches use the same tag-based constraint system described below. ## Tags Nx comes with a generic mechanism for expressing constraints on project dependencies: tags. First, use your project configuration (in `project.json` or `package.json`) to annotate your projects with `tags`. In this example, we will use three tags: `scope:client`. `scope:admin`, `scope:shared`. {% tabs syncKey="project-config-file" %} {% tabitem label="package.json" %} ```jsonc // client/package.json { // ... more project configuration here "nx": { "tags": ["scope:client"], }, } ``` ```jsonc // admin/package.json { // ... more project configuration here "nx": { "tags": ["scope:admin"], }, } ``` ```jsonc // utils/package.json { // ... more project configuration here "nx": { "tags": ["scope:shared"], }, } ``` {% /tabitem %} {% tabitem label="project.json" %} ```jsonc // client/project.json { // ... more project configuration here "tags": ["scope:client"], } ``` ```jsonc // admin/project.json { // ... more project configuration here "tags": ["scope:admin"], } ``` ```jsonc // utils/project.json { // ... 
more project configuration here "tags": ["scope:shared"], } ``` {% /tabitem %} {% /tabs %} ## Configure boundary rules Once you have tagged your projects, configure the dependency constraints based on your chosen approach: {% tabs syncKey="eslint-config-preference" %} {% tabitem label="ESLint" %} For JavaScript/TypeScript projects, configure the `@nx/enforce-module-boundaries` ESLint rule: ```shell nx add @nx/eslint-plugin @nx/devkit ``` And configure the rule in your ESLint configuration: {% tabs syncKey="eslint-config-preference" %} {% tabitem label="Flat Config" %} ```javascript // eslint.config.mjs import nx from '@nx/eslint-plugin'; export default [ ...nx.configs['flat/base'], ...nx.configs['flat/typescript'], ...nx.configs['flat/javascript'], { files: ['**/*.ts', '**/*.tsx', '**/*.js', '**/*.jsx'], rules: { '@nx/enforce-module-boundaries': [ 'error', { allow: [], // update depConstraints based on your tags depConstraints: [ { sourceTag: 'scope:shared', onlyDependOnLibsWithTags: ['scope:shared'], }, { sourceTag: 'scope:admin', onlyDependOnLibsWithTags: ['scope:shared', 'scope:admin'], }, { sourceTag: 'scope:client', onlyDependOnLibsWithTags: ['scope:shared', 'scope:client'], }, ], }, ], }, }, ]; ``` {% /tabitem %} {% tabitem label="Legacy (.eslintrc.json)" %} ```jsonc // .eslintrc.json { // ... more ESLint config here // @nx/enforce-module-boundaries should already exist within an "overrides" block using `"files": ["*.ts", "*.tsx", "*.js", "*.jsx",]` "@nx/enforce-module-boundaries": [ "error", { "allow": [], // update depConstraints based on your tags "depConstraints": [ { "sourceTag": "scope:shared", "onlyDependOnLibsWithTags": ["scope:shared"], }, { "sourceTag": "scope:admin", "onlyDependOnLibsWithTags": ["scope:shared", "scope:admin"], }, { "sourceTag": "scope:client", "onlyDependOnLibsWithTags": ["scope:shared", "scope:client"], }, ], }, ], // ... 
more ESLint config here } ``` {% /tabitem %} {% /tabs %} If you violate the constraints, you will get an error when linting: ```plaintext A project tagged with "scope:admin" can only depend on projects tagged with "scope:shared" or "scope:admin". ``` Read more about [ESLint rule options](/docs/technologies/eslint/eslint-plugin/guides/enforce-module-boundaries). {% /tabitem %} {% tabitem label="Conformance" %} For any project type or to enforce boundaries on the complete dependency graph, use the Conformance plugin: ```shell nx add @nx/conformance ``` Configure rules in your `nx.json`: ```jsonc // nx.json { "conformance": { "rules": [ { "rule": "@nx/conformance/enforce-project-boundaries", "options": { "depConstraints": [ { "sourceTag": "scope:shared", "onlyDependOnProjectsWithTags": ["scope:shared"], }, { "sourceTag": "scope:admin", "onlyDependOnProjectsWithTags": ["scope:shared", "scope:admin"], }, { "sourceTag": "scope:client", "onlyDependOnProjectsWithTags": ["scope:shared", "scope:client"], }, ], }, }, ], }, } ``` Run conformance checks in CI: ```yaml - name: Enforce all conformance rules run: npx nx conformance:check ``` Learn more about [Conformance rules](/docs/enterprise/conformance). {% /tabitem %} {% /tabs %} With these constraints in place, `scope:client` projects can only depend on projects with `scope:client` or `scope:shared`. And `scope:admin` projects can only depend on projects with `scope:admin` or `scope:shared`. So `scope:client` and `scope:admin` cannot depend on each other. Projects without any tags cannot depend on any other projects. The exception to this rule is explicitly allowing all tags (see below). ### Tag formats - `*`: allow all tags Example: projects with any tags (including untagged) can depend on any other project. ```jsonc { "sourceTag": "*", "onlyDependOnLibsWithTags": ["*"], } ``` - `string`: allow exact tags Example: projects tagged with `scope:client` can only depend on projects tagged with `scope:util`.
```jsonc { "sourceTag": "scope:client", "onlyDependOnLibsWithTags": ["scope:util"], } ``` - `regex`: allow tags matching the regular expression Example: projects tagged with `scope:client` can depend on projects with a tag matching the regular expression `/^scope.*/`. In this case, the `scope:util`, `scope:client`, etc. are all allowed tags for dependencies. ```json { "sourceTag": "scope:client", "onlyDependOnLibsWithTags": ["/^scope.*/"] } ``` - `glob`: allow tags matching the glob Example: projects with a tag starting with `scope:` can depend on projects with a tag that starts with `scope:*`. In this case `scope:a`, `scope:b`, etc are all allowed tags for dependencies. ```json { "sourceTag": "scope:*", "onlyDependOnLibsWithTags": ["scope:*"] } ``` Globbing supports only the basic use of `*`. For more complex scenarios use the `regex` above. --- ## Enhance Your AI Coding Agent AI agents are moving beyond autocomplete. They can now operate independently across projects. But most setups hit a wall: agents lack workspace context (seeing files, not architecture), generate inconsistent code, and have a hard time to interact with CI. Nx monorepos solve this by enabling cross-project reasoning and by providing the structured metadata and CI integration that agents need to work autonomously: - Deep **workspace architecture** understanding and project relationships - **Code generators** for fast, predictable scaffolding - **CI pipeline integration** to fix failures autonomously - The ability to **iterate until CI is green** without human intervention ## Setup To configure your Nx workspace for AI agents, run: ```shell npx nx configure-ai-agents ``` This sets up: - **Agent configuration files**: `CLAUDE.md`, `AGENTS.md` with workspace-specific guidelines - **Agent skills**: Domain-specific knowledge for monorepo workflows — workspace exploration, code generation, task execution, CI monitoring, and package linking. 
Skills teach agents _how_ to work with Nx rather than dumping data into context. - **Nx MCP server**: Provides connectivity to Nx Cloud CI pipelines, self-healing fixes, running processes, and Nx documentation — things agents can't easily reach on their own ### Creating new workspaces For creating new Nx workspaces, use `npx create-nx-workspace@latest --template=nrwl/<template>` (e.g. `react-template`, `angular-template`, `typescript-template`). For existing projects, use `npx nx init`. ## What this enables ### Self-healing CI integration Nx Cloud provides AI-powered [Self-Healing CI](/docs/features/ci-features/self-healing-ci) that analyzes failed runs and proposes verified fixes. With `configure-ai-agents`, your local agent connects to this CI counterpart via skills and the Nx MCP, gaining full context about run information, failures, and suggested fixes. Your agent can autonomously iterate until CI passes: ```text Commit this work, create a PR, and monitor CI until it's green. ``` The workflow: 1. Agent pushes changes and creates PR 2. Monitors CI pipeline 3. Receives failure context from Nx Cloud and Self-Healing CI 4. Accepts proposed fix or pulls context locally and manually applies it 5. Repeats until CI is green This reduces context-switching—you review the final PR rather than intervening at each failure. ### Workspace architecture understanding Nx exposes the project graph and relevant metadata to AI agents. This helps them move faster and more precisely: - Identify all applications and libraries in the workspace - Understand project relationships and dependencies - Recognize project types and ownership via tags - Determine which projects are affected by changes - Suggest where to implement new functionality based on existing structure This architectural awareness is critical for agents operating in large monorepos where understanding project relationships determines the quality of generated code.
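The "which projects are affected?" reasoning can be sketched in a few lines. The edge shape below (project name mapped to its outgoing `{ source, target }` edges) is borrowed from the project-graph JSON examples in this document and is an assumption for illustration, not Nx's internal API; finding everything affected by a change is then a reverse graph walk:

```typescript
// Sketch of affected-project detection over a project graph.
// The `deps` shape is an assumption for illustration, not Nx's internal API.
interface Edge {
  source: string;
  target: string;
}

function affectedBy(changed: string, deps: Record<string, Edge[]>): Set<string> {
  // Invert the graph: for each project, collect the projects that depend on it.
  const dependents = new Map<string, string[]>();
  for (const edges of Object.values(deps)) {
    for (const { source, target } of edges) {
      dependents.set(target, [...(dependents.get(target) ?? []), source]);
    }
  }
  // Walk "upwards" from the changed project through its dependents.
  const affected = new Set<string>([changed]);
  const queue = [changed];
  while (queue.length > 0) {
    const current = queue.pop()!;
    for (const dependent of dependents.get(current) ?? []) {
      if (!affected.has(dependent)) {
        affected.add(dependent);
        queue.push(dependent);
      }
    }
  }
  return affected;
}

// Edges taken from the sample cart graph used elsewhere in this document.
const deps: Record<string, Edge[]> = {
  'shared-product-state': [
    { source: 'shared-product-state', target: 'shared-product-data' },
    { source: 'shared-product-state', target: 'shared-product-types' },
  ],
  'shared-product-data': [
    { source: 'shared-product-data', target: 'shared-product-types' },
  ],
  'cart-cart-page': [
    { source: 'cart-cart-page', target: 'shared-product-state' },
  ],
  cart: [{ source: 'cart', target: 'cart-cart-page' }],
};

// A change to shared-product-types ripples all the way up to the cart app.
const affected = affectedBy('shared-product-types', deps);
```

This is the same computation that powers `nx affected`: dependents of a changed project, computed transitively over the project graph.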
### Predictable, fast code generation AI-generated code is token-intensive, slow, and not guaranteed to align with patterns in other projects. Nx generators solve this by providing predictable scaffolding that agents can invoke and then adapt. Your AI agent can: 1. Find generators from [Nx plugins](/docs/plugin-registry) or custom [local workspace generators](/docs/extending-nx/local-generators) 2. Run the generator with correct options 3. Make small adjustments based on the specific situation This approach is faster, produces consistent code across projects, and reduces hallucinations. ## Learn more - [Autonomous AI Agents at Scale](https://nx.dev/blog/ai-agents-and-continuity): Infrastructure requirements for AI agent workflows - [Why Nx and AI Work So Well Together](https://nx.dev/blog/nx-and-ai-why-they-work-together): The foundation for AI-powered development - [Nx MCP Server Reference](/docs/reference/nx-mcp): Complete tool reference and setup instructions --- ## Explore your Workspace Nx understands your workspace as a collection of projects. Each project can be explored to view the different tasks which can be run. The projects in the workspace have dependencies between them and form a graph known as the **Project Graph**. Nx uses this project graph in many ways to make informed decisions such as which tasks to run, when the results of a task can be restored from cache, and more. In addition to the project graph, Nx also runs your tasks as a **Task Graph**. This is a separate graph of tasks and their dependencies which is based on the project graph and determines the way in which the tasks are executed. Nx allows you to interactively explore your workspace through a UI which shows the information above. Using this tool is _vital_ to understanding both your workspace and how Nx behaves. This page teaches you how to use this tool to explore projects, the project graph, and the task graphs for your workspace.
## Explore projects in your workspace Projects in Nx are the different parts of the monorepo which can have tasks run for them. The best way to see what projects are in your workspace is to view the [project graph](#explore-the-project-graph) which will be covered in the next section. Another way is to look at the **Projects** pane in [Nx Console](/docs/getting-started/editor-setup) or run `nx show projects` to show a list of projects in your terminal. You can see more details about a specific project in Nx Console or by running `nx show project <project-name> --web`. Both methods will show something like the example below: {% project_details %} ```json { "project": { "name": "myreactapp", "type": "app", "data": { "root": "apps/myreactapp", "targets": { "build": { "options": { "cwd": "apps/myreactapp", "command": "vite build" }, "cache": true, "dependsOn": ["^build"], "inputs": [ "production", "^production", { "externalDependencies": ["vite"] } ], "outputs": ["{workspaceRoot}/dist/apps/myreactapp"], "executor": "nx:run-commands", "configurations": {}, "metadata": { "technologies": ["vite"] } }, "serve": { "options": { "cwd": "apps/myreactapp", "command": "vite serve", "continuous": true }, "executor": "nx:run-commands", "configurations": {}, "metadata": { "technologies": ["vite"] } }, "preview": { "options": { "cwd": "apps/myreactapp", "command": "vite preview" }, "executor": "nx:run-commands", "configurations": {}, "metadata": { "technologies": ["vite"] } }, "serve-static": { "executor": "@nx/web:file-server", "options": { "buildTarget": "build", "continuous": true }, "configurations": {} }, "test": { "options": { "cwd": "apps/myreactapp", "command": "vitest run" }, "cache": true, "inputs": [ "default", "^production", { "externalDependencies": ["vitest"] } ], "outputs": ["{workspaceRoot}/coverage/apps/myreactapp"], "executor": "nx:run-commands", "configurations": {}, "metadata": { "technologies": ["vite"] } }, "lint": { "cache": true, "options": { "cwd": "apps/myreactapp",
"command": "eslint ." }, "inputs": [ "default", "{workspaceRoot}/.eslintrc.json", "{workspaceRoot}/apps/myreactapp/.eslintrc.json", "{workspaceRoot}/tools/eslint-rules/**/*", { "externalDependencies": ["eslint"] } ], "executor": "nx:run-commands", "configurations": {}, "metadata": { "technologies": ["eslint"] } } }, "name": "myreactapp", "$schema": "../../../node_modules/nx/schemas/project-schema.json", "sourceRoot": "apps/myreactapp/src", "projectType": "application", "tags": [], "implicitDependencies": [], "metadata": { "technologies": ["react"] } } }, "sourceMap": { "root": ["apps/myreactapp/project.json", "nx/core/project-json"], "targets": ["apps/myreactapp/project.json", "nx/core/project-json"], "targets.build": ["apps/myreactapp/vite.config.ts", "@nx/vite/plugin"], "targets.build.command": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.build.options": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.build.cache": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.build.dependsOn": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.build.inputs": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.build.outputs": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.build.options.cwd": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.serve": ["apps/myreactapp/vite.config.ts", "@nx/vite/plugin"], "targets.serve.command": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.serve.options": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.serve.options.cwd": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.preview": ["apps/myreactapp/vite.config.ts", "@nx/vite/plugin"], "targets.preview.command": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.preview.options": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.preview.options.cwd": [ "apps/myreactapp/vite.config.ts", 
"@nx/vite/plugin" ], "targets.serve-static": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.serve-static.executor": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.serve-static.options": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.serve-static.options.buildTarget": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.test": ["apps/myreactapp/vite.config.ts", "@nx/vite/plugin"], "targets.test.command": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.test.options": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.test.cache": ["apps/myreactapp/vite.config.ts", "@nx/vite/plugin"], "targets.test.inputs": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.test.outputs": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.test.options.cwd": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.lint": ["apps/myreactapp/project.json", "@nx/eslint/plugin"], "targets.lint.command": [ "apps/myreactapp/project.json", "@nx/eslint/plugin" ], "targets.lint.cache": ["apps/myreactapp/project.json", "@nx/eslint/plugin"], "targets.lint.options": [ "apps/myreactapp/project.json", "@nx/eslint/plugin" ], "targets.lint.inputs": [ "apps/myreactapp/project.json", "@nx/eslint/plugin" ], "targets.lint.options.cwd": [ "apps/myreactapp/project.json", "@nx/eslint/plugin" ], "name": ["apps/myreactapp/project.json", "nx/core/project-json"], "$schema": ["apps/myreactapp/project.json", "nx/core/project-json"], "sourceRoot": ["apps/myreactapp/project.json", "nx/core/project-json"], "projectType": ["apps/myreactapp/project.json", "nx/core/project-json"], "tags": ["apps/myreactapp/project.json", "nx/core/project-json"] } } ``` {% /project_details %} The view shows a list of targets which can be [run by Nx](/docs/features/run-tasks). Each target has different options which determine how Nx runs the task. 
## Explore the project graph Nx understands the projects in your workspace as a graph and uses this understanding to behave intelligently. Exploring this graph visually is vital to understanding how your code is structured and how Nx behaves. It always stays up to date without any manual maintenance, since it is calculated by analyzing your source code. ### Launching the project graph To launch the project graph visualization for your workspace, use [Nx Console](/docs/getting-started/editor-setup) or run: ```shell npx nx graph ``` This will open a browser window with an interactive view of the project graph of your workspace. ### Focusing on valuable projects Viewing the entire graph can be unmanageable even for smaller repositories, so there are several ways to narrow the focus of the visualization down to the most useful part of the graph at the moment. 1. Focus on a specific project and then use the proximity and group by folder controls in the sidebar to modify the graph around that project. You can also start the graph with a project focused by running `nx graph --focus <project-name>`. 2. Use the search bar to find all projects with names that contain a certain string. 3. Manually hide or show projects in the sidebar. Once the graph is displayed, you can explore deeper by clicking on nodes and edges in the graph. Click on a node to show a tooltip which also has a link to view more details about the project. You can trace the dependency chain between two projects by choosing a **Start** and **End** point in the project tooltips. Click on any dependency line to find which file(s) created the dependency. Composite nodes represent a set of projects in the same folder and can be expanded in place to show all the individual projects and their dependencies. You can also "focus" a composite node to render a graph of just the projects inside that node. Composite nodes are essential to navigate a graph of even a moderate size.
Try playing around with a [fully interactive graph on a sample repo](https://nrwl-nx-examples-dep-graph.netlify.app/?focus=cart) or look at the more limited example below: **Project View:** {% graph height="450px" title="Project View" %} ```json { "composite": false, "projects": [ { "name": "shared-product-state", "type": "lib", "data": { "root": "shared/product-state", "tags": ["scope:shared", "type:state"] } }, { "name": "shared-product-types", "type": "lib", "data": { "root": "shared/product-types", "tags": ["type:types", "scope:shared"] } }, { "name": "shared-product-data", "type": "lib", "data": { "root": "shared/product-data", "tags": ["type:data", "scope:shared"] } }, { "name": "cart-cart-page", "type": "lib", "data": { "root": "cart/cart-page", "tags": ["scope:cart", "type:feature"] } }, { "name": "shared-styles", "type": "lib", "data": { "root": "shared/styles", "tags": ["scope:shared", "type:styles"] } }, { "name": "e2e-cart", "type": "e2e", "data": { "root": "e2e/cart", "tags": ["scope:cart", "type:e2e"] } }, { "name": "cart", "type": "app", "data": { "root": "cart", "tags": ["type:app", "scope:cart"] } } ], "dependencies": { "shared-product-state": [ { "source": "shared-product-state", "target": "shared-product-data", "type": "static" }, { "source": "shared-product-state", "target": "shared-product-types", "type": "static" } ], "shared-product-types": [], "shared-product-data": [ { "source": "shared-product-data", "target": "shared-product-types", "type": "static" } ], "shared-e2e-utils": [], "cart-cart-page": [ { "source": "cart-cart-page", "target": "shared-product-state", "type": "static" } ], "shared-styles": [], "e2e-cart": [ { "source": "e2e-cart", "target": "cart", "type": "implicit" } ], "cart": [ { "source": "cart", "target": "shared-styles", "type": "implicit" }, { "source": "cart", "target": "cart-cart-page", "type": "static" } ] }, "workspaceLayout": { "appsDir": "apps", "libsDir": "libs" }, "affectedProjectIds": [], "focus": null, 
"groupByFolder": false, "exclude": [], "enableTooltips": false } ``` {% /graph %} **Composite View (Nx 20+):** {% graph height="450px" title="Composite View (Nx 20+)" %} ```json { "composite": true, "projects": [ { "name": "shared-product-state", "type": "lib", "data": { "root": "shared/product-state", "tags": ["scope:shared", "type:state"] } }, { "name": "shared-product-types", "type": "lib", "data": { "root": "shared/product-types", "tags": ["type:types", "scope:shared"] } }, { "name": "shared-product-data", "type": "lib", "data": { "root": "shared/product-data", "tags": ["type:data", "scope:shared"] } }, { "name": "cart-cart-page", "type": "lib", "data": { "root": "cart/cart-page", "tags": ["scope:cart", "type:feature"] } }, { "name": "shared-styles", "type": "lib", "data": { "root": "shared/styles", "tags": ["scope:shared", "type:styles"] } }, { "name": "e2e-cart", "type": "e2e", "data": { "root": "e2e/cart", "tags": ["scope:cart", "type:e2e"] } }, { "name": "cart", "type": "app", "data": { "root": "cart/cart", "tags": ["type:app", "scope:cart"] } } ], "dependencies": { "shared-product-state": [ { "source": "shared-product-state", "target": "shared-product-data", "type": "static" }, { "source": "shared-product-state", "target": "shared-product-types", "type": "static" } ], "shared-product-types": [], "shared-product-data": [ { "source": "shared-product-data", "target": "shared-product-types", "type": "static" } ], "shared-e2e-utils": [], "cart-cart-page": [ { "source": "cart-cart-page", "target": "shared-product-state", "type": "static" } ], "shared-styles": [], "e2e-cart": [ { "source": "e2e-cart", "target": "cart", "type": "implicit" } ], "cart": [ { "source": "cart", "target": "shared-styles", "type": "implicit" }, { "source": "cart", "target": "cart-cart-page", "type": "static" } ] }, "workspaceLayout": { "appsDir": "apps", "libsDir": "libs" }, "affectedProjectIds": [], "focus": null, "groupByFolder": false, "exclude": [], "enableTooltips": false } ``` {% 
/graph %} ### Export project graph to JSON If you prefer to analyze the underlying data of the project graph with a script or some other tool, you can run: ```shell nx graph --file=output.json ``` This will give you all the information that is used to create the project graph visualization. ### Export the project graph as an image There is a floating action button in the bottom right of the project graph view which will save the graph as a `.png` file. Sharing this image with other developers is a great way to express how a project fits into the workspace. Some moments when you might want to share these images: - When providing a high-level overview of the workspace - When introducing new project(s) into the workspace - When changing how project(s) are related - To share which other projects are directly affected by changes you are making ## Explore the task graph Nx uses the project graph of your workspace to determine the order in which to [run tasks](/docs/features/run-tasks). Pass the `--graph` flag to view the **task graph** which is executed by Nx when running a command. ```shell nx build myreactapp --graph # View the graph for building myreactapp nx run-many --targets build --graph # View the graph for building all projects nx affected --targets build --graph # View the graph for building the affected projects ``` Click on the nodes of this graph to see more information about the task such as: - Which executor was used to run the command - Which [inputs](/docs/guides/tasks--caching/configure-inputs) are used to calculate the computation hash. - A link to see more details about the project which the task belongs to Dependencies in this graph mean that Nx will need to wait for all task dependencies to complete successfully before running the task.
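The dependency rule above amounts to a topological ordering of the task graph. Here is a minimal sketch of that idea; it is an illustration only (Nx's real scheduler also runs independent tasks in parallel and consults the cache), and the `mylib`/`myutils` project names are hypothetical:

```typescript
// Derive one valid execution order for a task graph where each task lists
// the tasks that must finish before it. Illustrative sketch, not Nx's scheduler.
function executionOrder(deps: Record<string, string[]>): string[] {
  const order: string[] = [];
  const state = new Map<string, 'visiting' | 'done'>();

  const visit = (task: string): void => {
    if (state.get(task) === 'done') return;
    if (state.get(task) === 'visiting') {
      throw new Error(`Circular dependency involving "${task}"`);
    }
    state.set(task, 'visiting');
    for (const dep of deps[task] ?? []) visit(dep);
    state.set(task, 'done');
    order.push(task); // every dependency is already in `order` at this point
  };

  for (const task of Object.keys(deps)) visit(task);
  return order;
}

// "myreactapp:build" depending on its libraries' builds mirrors the
// `"dependsOn": ["^build"]` configuration shown earlier in this page;
// mylib and myutils are hypothetical library projects.
const order = executionOrder({
  'myreactapp:build': ['mylib:build', 'myutils:build'],
  'mylib:build': ['myutils:build'],
  'myutils:build': [],
});
```

In this sketch `myutils:build` always lands before `mylib:build`, which lands before `myreactapp:build`, which is exactly the guarantee the task graph gives you.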
--- ## Generate Code {% youtube src="https://youtu.be/hSM6MgWOYr8" title="Generate Code" /%} Code generators are like automation scripts that help you scaffold projects, enforce best practices, and automate repetitive tasks. Essentially, they are TypeScript functions that accept parameters and help boost your productivity by: - Allowing you to **scaffold new projects** or **augment existing projects** with new features, like [adding Storybook support](/docs/technologies/test-tools/storybook/introduction#generating-storybook-configuration) - **Automating repetitive tasks** in your development workflow - Ensuring your **code is consistent and follows best practices** ## Invoke generators Generators come as part of [Nx plugins](/docs/concepts/nx-plugins) and can be invoked with the `nx generate` command (or `nx g`) using the following syntax: `nx g <plugin>:<generator> [options]`. Here's an example of generating a React library: ```shell nx g @nx/react:lib packages/mylib ``` You can also specify just the generator name and Nx will prompt you to pick between the installed plugins that provide a generator with that name. ```shell nx g lib packages/mylib ``` When running this command, you could be prompted to choose between the `@nx/react` and `@nx/js` plugins that each provide a library generator. To see a list of available generators in a given plugin, run `nx list <plugin-name>`. As an example, to list all generators in the `@nx/react` plugin: ```shell nx list @nx/react ``` ### Use Nx Console If you prefer a visual interface, then [Nx Console](/docs/getting-started/editor-setup) is an excellent alternative. It provides a way to visually find and run generators: ![Using Nx Console to run generators](../../../assets/nx-console/nx-console-gen-code.avif) Nx Console is an IDE extension that can be [installed here](/docs/getting-started/editor-setup). ## Build your own generator You can also customize existing generators by overwriting their behavior or create completely new ones.
This is a powerful mechanism as it allows you to: - **automate** your organization's specific processes and workflows - **standardize** how and where projects are created in your workspace to make sure they reflect your organization's best practices and coding standards - **ensure** that your codebase follows your organization's best practices and style guides At their core, generators are just functions with a specific signature and input options that get invoked by Nx. Something like the following: ```typescript // generator.ts import { Tree, formatFiles, installPackagesTask } from '@nx/devkit'; export default async function (tree: Tree, schema: any) { // Your implementation here // ... await formatFiles(tree); return () => { installPackagesTask(tree); }; } ``` To help build generators, Nx provides the `@nx/devkit` package containing utilities and helpers. Learn more about creating your own generators on [our docs page](/docs/extending-nx/local-generators) or watch the video below: {% youtube src="https://www.youtube.com/embed/myqfGDWC2go" title="Scaffold new Pkgs in a PNPM Workspaces Monorepo" caption="Demonstrates how to use Nx generators in a PNPM workspace to automate the creation of libraries" /%} --- ## Maintain TypeScript Monorepos Keeping all the industry-standard tools involved in a large TypeScript monorepo correctly configured and working well together is a difficult task. And the more tools you add, the more opportunity there is for tools to conflict with each other in some way. In addition to [generating default configuration files](/docs/features/generate-code) and [automatically updating dependencies](/docs/features/automate-updating-dependencies) to versions that we know work together, Nx makes managing all the tools in your monorepo easier in two ways: - Rather than adding another tool that you have to configure, Nx configures itself to match the existing configuration of other tools. 
- Nx also enhances certain tools to be more usable in a monorepo context. ## Auto-configuration Whenever possible, Nx will detect the existing configuration settings of other tools and update itself to match. ### Project detection with workspaces If your repository is using package manager workspaces, Nx will use those settings to find all the [projects](/docs/reference/project-configuration) in your repository. So you don't need to define a project for your package manager and separately identify the project for Nx. The `workspaces` configuration allows Nx to detect the project graph. ```json // package.json { "workspaces": ["apps/*", "packages/*"] } ``` {% graph height="200px" title="Project View" %} ```json { "composite": false, "projects": [ { "name": "product-state", "type": "lib", "data": { "root": "packages/cart/product-state", "tags": ["scope:cart", "type:state"] } }, { "name": "ui-buttons", "type": "lib", "data": { "root": "packages/ui/buttons", "tags": ["scope:shared", "type:ui"] } }, { "name": "cart", "type": "app", "data": { "root": "apps/cart", "tags": ["type:app", "scope:cart"] } } ], "dependencies": { "product-state": [], "ui-buttons": [], "cart": [ { "source": "cart", "target": "product-state", "type": "static" }, { "source": "cart", "target": "ui-buttons", "type": "static" } ] }, "workspaceLayout": { "appsDir": "apps", "libsDir": "libs" }, "affectedProjectIds": [], "focus": null, "groupByFolder": false, "exclude": [], "enableTooltips": false } ``` {% /graph %} ### Inferred tasks with tooling plugins Nx [plugins](/docs/concepts/nx-plugins) for tools like Vite, TypeScript, Playwright, and Jest automatically [infer task configuration](/docs/concepts/inferred-tasks) from your existing tooling config files — keeping them as the single source of truth. In the example below, because the `/apps/cart/vite.config.ts` file exists, Nx knows that the `cart` project can run a `build` task using Vite. 
If you expand the `build` task, you can also see that Nx configured the output directory for the [cache](/docs/features/cache-task-results) to match the `build.outDir` provided in the Vite configuration file. ```ts // /apps/cart/vite.config.ts import { defineConfig } from 'vite'; import react from '@vitejs/plugin-react'; export default defineConfig({ root: __dirname, cacheDir: '../../node_modules/.vite/apps/cart', plugins: [react()], build: { outDir: './dist', emptyOutDir: true, reportCompressedSize: true, commonjsOptions: { transformMixedEsModules: true, }, }, }); ``` {% project_details %} ```json { "project": { "name": "cart", "type": "app", "data": { "root": "apps/cart", "targets": { "build": { "options": { "cwd": "apps/cart", "command": "vite build" }, "cache": true, "dependsOn": ["^build"], "inputs": [ "production", "^production", { "externalDependencies": ["vite"] } ], "outputs": ["{projectRoot}/dist"], "executor": "nx:run-commands", "configurations": {}, "metadata": { "technologies": ["vite"] } } }, "name": "cart", "$schema": "../../../node_modules/nx/schemas/project-schema.json", "sourceRoot": "apps/cart/src", "projectType": "application", "tags": [], "implicitDependencies": [], "metadata": { "technologies": ["react"] } } }, "sourceMap": { "root": ["apps/cart/project.json", "nx/core/project-json"], "targets": ["apps/cart/project.json", "nx/core/project-json"], "targets.build": ["apps/cart/vite.config.ts", "@nx/vite/plugin"], "targets.build.command": ["apps/cart/vite.config.ts", "@nx/vite/plugin"], "targets.build.options": ["apps/cart/vite.config.ts", "@nx/vite/plugin"], "targets.build.cache": ["apps/cart/vite.config.ts", "@nx/vite/plugin"], "targets.build.dependsOn": ["apps/cart/vite.config.ts", "@nx/vite/plugin"], "targets.build.inputs": ["apps/cart/vite.config.ts", "@nx/vite/plugin"], "targets.build.outputs": ["apps/cart/vite.config.ts", "@nx/vite/plugin"], "targets.build.options.cwd": [ "apps/cart/vite.config.ts", "@nx/vite/plugin" ], "name":
["apps/cart/project.json", "nx/core/project-json"], "$schema": ["apps/cart/project.json", "nx/core/project-json"], "sourceRoot": ["apps/cart/project.json", "nx/core/project-json"], "projectType": ["apps/cart/project.json", "nx/core/project-json"], "tags": ["apps/cart/project.json", "nx/core/project-json"] } } ``` {% /project_details %} ## Enhance tools for monorepos Nx does not just reduce its own configuration burden, it also improves the functionality of your existing tools so that they work better in a monorepo context. ### Keep TypeScript project references in sync TypeScript provides a feature called [Project References](https://www.typescriptlang.org/docs/handbook/project-references.html) that allows the TypeScript compiler to build and typecheck each project independently. When each project is typechecked, the TypeScript compiler will output an intermediate `*.tsbuildinfo` file that can be used by other projects instead of re-typechecking all dependencies. This feature can provide [significant performance improvements](/docs/concepts/typescript-project-linking#typescript-project-references-performance-benefits), particularly in a large monorepo. The main downside of this feature is that you have to manually define each project's references (dependencies) in the appropriate `tsconfig.*.json` file. This process is tedious to set up and very difficult to maintain as the repository changes over time. Nx can help by using a [sync generator](/docs/concepts/sync-generators) to automatically update the references defined in the `tsconfig.json` files based on the project graph it already knows about. 
{% graph height="200px" title="Project View" %} ```json { "composite": false, "projects": [ { "name": "product-state", "type": "lib", "data": { "root": "packages/cart/product-state", "tags": ["scope:cart", "type:state"] } }, { "name": "ui-buttons", "type": "lib", "data": { "root": "packages/ui/buttons", "tags": ["scope:shared", "type:ui"] } }, { "name": "cart", "type": "app", "data": { "root": "apps/cart", "tags": ["type:app", "scope:cart"] } } ], "dependencies": { "product-state": [], "ui-buttons": [], "cart": [ { "source": "cart", "target": "product-state", "type": "static" }, { "source": "cart", "target": "ui-buttons", "type": "static" } ] }, "workspaceLayout": { "appsDir": "apps", "libsDir": "libs" }, "affectedProjectIds": [], "focus": null, "groupByFolder": false, "exclude": [], "enableTooltips": false } ``` {% /graph %} ```jsonc // apps/cart/tsconfig.json { "extends": "../../tsconfig.base.json", "files": [], // intentionally empty "references": [ // UPDATED BY NX SYNC // All project dependencies { "path": "../../packages/product-state", }, { "path": "../../packages/ui/buttons", }, // This project's other tsconfig.*.json files { "path": "./tsconfig.lib.json", }, { "path": "./tsconfig.spec.json", }, ], } ``` Later, if someone adds another dependency to the `cart` app and then runs the `build` task, Nx will detect that the project references are out of sync and ask if the references should be updated. ```plaintext {% title="nx build cart" frame="terminal" %} NX The workspace is out of sync [@nx/js:typescript-sync]: Some TypeScript configuration files are missing project references to the projects they depend on or contain outdated project references. This will result in an error in CI. ? Would you like to sync the identified changes to get your workspace up to date? 
… ❯ Yes, sync the changes and run the tasks No, run the tasks without syncing the changes ``` --- ## Manage Releases Once you have leveraged Nx's powerful code generation and task running capabilities to build your libraries and applications, you will want to share them with your users. {% linkcard title="Free Course: Versioning and Releasing NPM packages with Nx" href="https://www.epicweb.dev/tutorials/versioning-and-releasing-npm-packages-with-nx" /%} Nx provides a set of tools to help you manage your releases called `nx release`. > We recommend always starting with `--dry-run`, because publishing is difficult to undo. ```shell nx release --dry-run ``` ## What makes up a release? A release can be thought of in three main phases: 1. **Versioning** - The process of determining the next version of your projects, and updating any projects that depend on them to use the new version. 2. **Changelog** - The process of deriving a changelog from your commit messages or [version plan](/docs/guides/nx-release/file-based-versioning-version-plans) files, which can be used to communicate the changes to your users. 3. **Publishing** - The process of publishing your projects to a registry, such as npm for TypeScript/JavaScript libraries, crates.io for Rust, or Docker registries for container images. ## Running releases The `nx release` command is used to run the release process from end to end. It is a wrapper around the three main phases of a release to provide maximum convenience and ease of use. By default, when you run `nx release` it will prompt you for a version keyword (e.g. major, minor, patch) or a custom version number. The release command will then run the three phases of the release process in order: versioning, changelog generation, and publishing. When trying it out for the first time, you need to pass the `--first-release` flag since there is no previous release to compare against for changelog purposes.
It is strongly recommended to use the `--dry-run` flag to see what will be published in the first release without actually pushing anything to the registry. ```shell nx release --first-release --dry-run ``` {% aside type="tip" title="Semantic Versioning" %} By default, the version follows semantic versioning (semver) rules. To disable this behavior, set `release.releaseTag.requireSemver` to `false` in your `nx.json` file. This allows you to use custom versioning schemes. {% /aside %} ## Set up your workspace Follow our guides to set up Nx Release for your workspace. {% cardgrid %} {% linkcard title="TypeScript/JavaScript to NPM" description="Publish TypeScript and JavaScript packages to NPM or private registries with semantic versioning." href="/docs/guides/nx-release/release-npm-packages" /%} {% linkcard title="Docker Images" description="Version and publish Docker images with calendar-based versioning for continuous deployment." href="/docs/guides/nx-release/release-docker-images" /%} {% linkcard title="Rust Crates" description="Publish Rust packages to crates.io with cargo integration." href="/docs/guides/nx-release/publish-rust-crates" /%} {% /cardgrid %} ## Basic configuration Configure Nx Release in your `nx.json` file: ```jsonc // nx.json { "release": { "projects": ["packages/*"], }, } ``` The nx release command is customizable. You can customize the versioning, changelog, and publishing phases of the release process independently through a mixture of configuration and CLI arguments. See the [configuration reference](/docs/reference/nx-json#release) for all available options. ## Using the programmatic API for Nx release A powerful feature of Nx Release is the fact that it is designed to be used via a Node.js programmatic API in addition to the `nx release` CLI. Releases are a hugely complex and nuanced process, filled with many special cases and idiosyncratic preferences, and it is impossible for a CLI to be able to support all of them out of the box. 
By having a first-class programmatic API, you can go beyond the CLI and create custom release workflows that are highly dynamic and tailored to your specific needs. See our dedicated guide on the [programmatic API](/docs/guides/nx-release/programmatic-api) to learn more and see some example release scripts. ## Learn more ### Configuration & customization - **[Version Projects Independently](/docs/guides/nx-release/release-projects-independently)** - Version projects independently or together - **[Release Groups](/docs/guides/nx-release/release-groups)** - Organize projects into release groups with specific configuration for each group - **[Conventional Commits](/docs/guides/nx-release/automatically-version-with-conventional-commits)** - Automate versioning based on commit messages - **[Custom Registries](/docs/guides/nx-release/configure-custom-registries)** - Publish to private or alternative registries - **[CI/CD Integration](/docs/guides/nx-release/publish-in-ci-cd)** - Automate releases in your pipeline - **[Changelog Customization](/docs/guides/nx-release/configure-changelog-format)** - Control changelog generation and formatting - **[Custom Commit Types](/docs/guides/nx-release/customize-conventional-commit-types)** - Define custom conventional commit types - **[Version Prefixes](/docs/guides/nx-release/configuration-version-prefix)** - Configure version prefix patterns ### Workflows - **[Automate with GitHub Actions](/docs/guides/nx-release/automate-github-releases)** - Set up automated releases in GitHub workflows - **[Release Projects Independently](/docs/guides/nx-release/release-projects-independently)** - Manage independent versioning for projects - **[Use Conventional Commits](/docs/guides/nx-release/automatically-version-with-conventional-commits)** - Enable automatic versioning from commits - **[Build Before Versioning](/docs/guides/nx-release/build-before-versioning)** - Run builds before version updates ### References - **[`nx.json` configuration 
options](/docs/reference/nx-json#release)** - All available options for configuring `nx release` - **[`nx release` command](/docs/reference/nx-commands#nx-release)** - Run versioning, changelog generation, and publishing - **[`nx release version` command](/docs/reference/nx-commands#nx-release-version)** - Run only the versioning step - **[`nx release changelog` command](/docs/reference/nx-commands#nx-release-changelog)** - Run only the changelog generation step - **[`nx release publish` command](/docs/reference/nx-commands#nx-release-publish)** - Run only the publishing step - **[`nx release plan` command](/docs/reference/nx-commands#nx-release-plan)** - Create a version plan file for file-based versioning --- ## Run Tasks {% youtube src="https://youtu.be/aEdfYiA5U34" title="Run tasks with Nx" /%} In a monorepo setup, you don't just run tasks for a single project; you might have hundreds to manage. To help with this, Nx provides a powerful task runner that allows you to: - easily **run multiple targets** for multiple projects **in parallel** - define **task pipelines** to run tasks in the correct order - only run tasks for **projects affected by a given change** - **speed up task execution** with [caching](/docs/features/cache-task-results) ## Define tasks Nx tasks can be created from existing `package.json` scripts, [inferred from tooling configuration files](/docs/concepts/inferred-tasks), or defined in a `project.json` file. Nx combines these three sources to determine the tasks for a particular project. {% tabs syncKey="project-config-file" %} {% tabitem label="package.json" %} ```json // libs/mylib/package.json { "name": "mylib", "scripts": { "build": "tsc -p tsconfig.lib.json", "test": "jest" } } ``` {% /tabitem %} {% tabitem label="project.json" %} ```json // libs/mylib/project.json { "root": "libs/mylib", "targets": { "build": { "command": "tsc -p tsconfig.lib.json" }, "test": { "executor": "@nx/jest:jest", "options": { /* ... 
*/ } } } } ``` {% /tabitem %} {% tabitem label="Inferred by Nx Plugins" %} [Nx plugins](/docs/concepts/nx-plugins) can detect your tooling configuration files (e.g. `vite.config.ts` or `.eslintrc.json`) and automatically configure runnable tasks, including [Nx cache](/docs/features/cache-task-results) settings. For example, the `@nx/jest` plugin will automatically create a `test` task for a project that uses Jest. The target names can be configured in the `nx.json` file: ```json // nx.json { ... "plugins": [ { "plugin": "@nx/vite/plugin", "options": { "buildTargetName": "build", "testTargetName": "test", "serveTargetName": "serve", "previewTargetName": "preview", "serveStaticTargetName": "serve-static" } }, { "plugin": "@nx/eslint/plugin", "options": { "targetName": "lint" } }, { "plugin": "@nx/jest/plugin", "options": { "targetName": "test" } } ], ... } ``` Learn more about [inferred tasks here](/docs/concepts/inferred-tasks). {% /tabitem %} {% /tabs %} The [project configuration docs](/docs/reference/project-configuration) have the details for all the available configuration options. ## Run tasks Nx uses the following syntax: ![Syntax for Running Tasks in Nx](../../../assets/features/run-target-syntax.svg) {% aside type="tip" title="Terminal UI" %} In Nx 21, task output is displayed in an [interactive terminal UI](/docs/guides/tasks--caching/terminal-ui) that allows you to actively choose which task output to display, search through the list of tasks and display multiple tasks side by side. {% /aside %} ### Run a single task To run the `test` task for the `header` project run this command: ```shell npx nx test header ``` ### Run tasks for multiple projects You can use the `run-many` command to run a task for multiple projects. Here are a couple of examples.
Run the `build` task for all projects in the repo: ```shell npx nx run-many -t build ``` Run the `build`, `lint` and `test` task for all projects in the repo: ```shell npx nx run-many -t build lint test ``` Run the `build`, `lint`, and `test` tasks only on the `header` and `footer` projects: ```shell npx nx run-many -t build lint test -p header footer ``` Nx parallelizes these tasks, ensuring they **run in the correct order based on their dependencies** and [task pipeline configuration](/docs/concepts/task-pipeline-configuration). You can also [control how many tasks run in parallel at once](/docs/guides/tasks--caching/run-tasks-in-parallel). Learn more about the [run-many](/docs/reference/nx-commands#nx-run-many) command. ### Run tasks on projects affected by a PR You can also run a command for all the projects affected by your PR like this: ```shell npx nx affected -t test ``` Learn more about the [affected command here](/docs/features/ci-features/affected). ## Defining a task pipeline It is pretty common to have dependencies between tasks, requiring one task to be run before another. For example, you might want to run the `build` target on the `header` project before running the `build` target on the `app` project. Nx can automatically detect the dependencies between projects (see [project graph](/docs/features/explore-graph)). 
{% graph height="450px" %} ```json { "projects": [ { "name": "myreactapp", "type": "app", "data": { "tags": [] } }, { "name": "shared-ui", "type": "lib", "data": { "tags": [] } }, { "name": "feat-products", "type": "lib", "data": { "tags": [] } } ], "dependencies": { "myreactapp": [ { "source": "myreactapp", "target": "feat-products", "type": "static" } ], "shared-ui": [], "feat-products": [ { "source": "feat-products", "target": "shared-ui", "type": "static" } ] }, "workspaceLayout": { "appsDir": "", "libsDir": "" }, "affectedProjectIds": [], "focus": null, "groupByFolder": false } ``` {% /graph %} However, you need to specify for which targets this ordering is important. In the following example we are telling Nx that before running the `build` target it needs to run the `build` target on all the projects the current project depends on: ```json // nx.json { ... "targetDefaults": { "build": { "dependsOn": ["^build"] } } } ``` This means that if we run `nx build myreactapp`, Nx will first execute `build` on `shared-ui` and `feat-products` before running `build` on `myreactapp`. You can define these task dependencies globally for your workspace in `nx.json` or individually in each project's `project.json` file. Learn more about: - [What a task pipeline is all about](/docs/concepts/task-pipeline-configuration) - [How to configure a task pipeline](/docs/guides/tasks--caching/defining-task-pipeline) ## Reduce repetitive configuration Learn more about leveraging `targetDefaults` to reduce repetitive configuration in the [dedicated recipe](/docs/guides/tasks--caching/reduce-repetitive-configuration). ## Run root-level tasks Sometimes, you need tasks that apply to the entire codebase rather than a single project. To still benefit from caching, you can run these tasks through the "Nx pipeline". 
Define them in the root-level `package.json` or `project.json` as follows: {% tabs syncKey="project-config-file" %} {% tabitem label="package.json" %} ```json // package.json { "name": "myorg", "scripts": { "docs": "node ./generateDocsSite.js" }, "nx": {} } ``` > Note the `nx: {}` property on the `package.json`. This is necessary to inform Nx about this root-level project. The property can also be expanded to specify cache inputs and outputs. If you want Nx to cache the task, but prefer to use npm (or pnpm/yarn) to run the script (i.e. `npm run docs`) you can use the [nx exec](/docs/reference/nx-commands#nx-exec) command: ```json // package.json { "name": "myorg", "scripts": { "docs": "nx exec -- node ./generateDocsSite.js" }, "nx": {} } ``` {% /tabitem %} {% tabitem label="project.json" %} ```json // project.json { "name": "myorg", ... "targets": { "docs": { "command": "node ./generateDocsSite.js" } } } ``` {% /tabitem %} {% /tabs %} To invoke the task, use: ```shell npx nx docs ``` Learn more about root-level tasks on [our dedicated recipe page](/docs/guides/tasks--caching/root-level-scripts). # Guides --- ## Guides {% index_page_cards path="guides" /%} --- ## Adopting Nx {% index_page_cards path="guides/adopting-nx" /%} --- ## Adding Nx to your Existing Project Nx can be added to any type of project, not just monorepos. A large benefit of Nx is its caching feature for package scripts. Each project usually has a set of scripts in the `package.json`: ```json // package.json { ... "scripts": { "build": "next build", "lint": "eslint ./src", "test": "node ./run-tests.js" } } ``` You can make these scripts faster by leveraging Nx [caching capabilities](/docs/features/cache-task-results). For example: - You change some spec files: in that case the `build` task can be cached and doesn't have to re-run. - You update your docs, changing a couple of markdown files: then there's no need to re-run builds, tests, linting on your CI. 
All you might want to do is trigger the Docusaurus build. Nx also [speeds up your CI ⚡](#fast-ci) with [remote caching](/docs/features/ci-features/remote-cache) and [distributed task execution](/docs/features/ci-features/distribute-task-execution). ## Install Nx on a non-Monorepo project Run the following command: ```shell npx nx@latest init ``` Running this command will ask you a few questions about your workspace and then set up Nx for you accordingly. The setup process detects tools used in your workspace and suggests installing Nx plugins to integrate them with Nx. Running those tools through Nx will have caching enabled when possible, providing you with a faster alternative for running those tools. You can start with a few to see how it works and then add more with the [`nx add`](/docs/reference/nx-commands#nx-add) command later. You can also decide to add them all and get the full experience right away because adding plugins will not break your existing workflow. The first thing you may notice is that Nx updates your `package.json` scripts during the setup process. Nx plugins set up Nx commands that run the underlying tool with caching enabled. When a `package.json` script runs a command which can be run through Nx, Nx will replace that script in the `package.json` scripts with an Nx command that has caching automatically enabled. Wherever those `package.json` scripts are used, including in your CI, they will become faster when possible. Let's go through an example where the `@nx/next/plugin` and `@nx/eslint/plugin` plugins are added to a workspace with the following `package.json`. ```diff // package.json { "name": "my-workspace", ...
"scripts": { - "build": "next build && echo 'Build complete'", + "build": "nx next:build && echo 'Build complete'", - "lint": "eslint ./src", + "lint": "nx eslint:lint", "test": "node ./run-tests.js" }, + "nx": {} } ``` The `@nx/next/plugin` plugin adds a `next:build` target which runs `next build` and sets up caching correctly. In other words, running `nx next:build` is the same as running `next build` with the added benefit of it being cacheable. Hence, Nx replaces `next build` in the `package.json` `build` script to add caching to anywhere running `npm run build`. Similarly, `@nx/eslint/plugin` sets up the `nx eslint:lint` command to run `eslint ./src` with caching enabled. The `test` script was not recognized by any Nx plugin, so it was left as is. After Nx has been setup, running `npm run build` or `npm run lint` multiple times, will be instant when possible. You can also run any npm scripts directly through Nx with `nx build` or `nx lint` which will run the `npm run build` and `npm run lint` scripts respectively. In the later portion of the setup flow, Nx will ask if you would like some of those npm scripts to be cacheable. By making those cacheable, running `nx build` rather than `npm run build` will add another layer of cacheability. However, `nx build` must be run instead of `npm run build` to take advantage of the cache. ## Inferred tasks You may have noticed that `@nx/next` provides `dev` and `start` tasks in addition to the `next:build` task. Those tasks were created by the `@nx/next/plugin` plugin from your existing Next.js configuration. You can see the configuration for the Nx Plugins in `nx.json`: ```json // nx.json { "plugins": [ { "plugin": "@nx/eslint/plugin", "options": { "targetName": "eslint:lint" } }, { "plugin": "@nx/next/plugin", "options": { "buildTargetName": "next:build", "devTargetName": "dev", "startTargetName": "start" } } ] } ``` Each plugin can accept options to customize the projects which they create. 
You can see more information about configuring the plugins on the [`@nx/next/plugin`](/docs/technologies/react/next/introduction) and [`@nx/eslint/plugin`](/docs/technologies/eslint/introduction) plugin pages. To view all available tasks, open the Project Details view with Nx Console or use the terminal to launch the project details in a browser window. ```shell nx show project my-workspace --web ``` {% project_details title="Project Details View" %} ```json { "project": { "name": "my-workspace", "data": { "root": ".", "targets": { "eslint:lint": { "cache": true, "options": { "cwd": ".", "command": "eslint ./src" }, "inputs": [ "default", "{workspaceRoot}/.eslintrc", "{workspaceRoot}/tools/eslint-rules/**/*", { "externalDependencies": ["eslint"] } ], "executor": "nx:run-commands", "configurations": {}, "metadata": { "technologies": ["eslint"] } }, "next:build": { "options": { "cwd": ".", "command": "next build" }, "dependsOn": ["^build"], "cache": true, "inputs": [ "default", "^default", { "externalDependencies": ["next"] } ], "outputs": ["{projectRoot}/.next", "{projectRoot}/.next/!(cache)"], "executor": "nx:run-commands", "configurations": {}, "metadata": { "technologies": ["next"] } }, "dev": { "options": { "cwd": ".", "command": "next dev", "continuous": true }, "executor": "nx:run-commands", "configurations": {}, "metadata": { "technologies": ["next"] } }, "start": { "options": { "cwd": ".", "command": "next start", "continuous": true }, "dependsOn": ["build"], "executor": "nx:run-commands", "configurations": {}, "metadata": { "technologies": ["next"] } } }, "sourceRoot": ".", "name": "my-workspace", "projectType": "library", "implicitDependencies": [], "tags": [] } }, "sourceMap": { "root": ["package.json", "nx/core/package-json-workspaces"], "targets": ["package.json", "nx/core/package-json-workspaces"], "targets.eslint:lint": [".eslintrc.json", "@nx/eslint/plugin"], "targets.eslint:lint.command": [".eslintrc.json", "@nx/eslint/plugin"], 
"targets.eslint:lint.cache": [".eslintrc.json", "@nx/eslint/plugin"], "targets.eslint:lint.options": [".eslintrc.json", "@nx/eslint/plugin"], "targets.eslint:lint.inputs": [".eslintrc.json", "@nx/eslint/plugin"], "targets.eslint:lint.options.cwd": [".eslintrc.json", "@nx/eslint/plugin"], "targets.next:build": ["next.config.js", "@nx/next/plugin"], "targets.next:build.command": ["next.config.js", "@nx/next/plugin"], "targets.next:build.options": ["next.config.js", "@nx/next/plugin"], "targets.next:build.dependsOn": ["next.config.js", "@nx/next/plugin"], "targets.next:build.cache": ["next.config.js", "@nx/next/plugin"], "targets.next:build.inputs": ["next.config.js", "@nx/next/plugin"], "targets.next:build.outputs": ["next.config.js", "@nx/next/plugin"], "targets.next:build.options.cwd": ["next.config.js", "@nx/next/plugin"], "targets.dev": ["next.config.js", "@nx/next/plugin"], "targets.dev.command": ["next.config.js", "@nx/next/plugin"], "targets.dev.options": ["next.config.js", "@nx/next/plugin"], "targets.dev.options.cwd": ["next.config.js", "@nx/next/plugin"], "targets.start": ["next.config.js", "@nx/next/plugin"], "targets.start.command": ["next.config.js", "@nx/next/plugin"], "targets.start.options": ["next.config.js", "@nx/next/plugin"], "targets.start.dependsOn": ["next.config.js", "@nx/next/plugin"], "targets.start.options.cwd": ["next.config.js", "@nx/next/plugin"], "sourceRoot": ["package.json", "nx/core/package-json-workspaces"], "name": ["package.json", "nx/core/package-json-workspaces"], "projectType": ["package.json", "nx/core/package-json-workspaces"], "targets.nx-release-publish": [ "package.json", "nx/core/package-json-workspaces" ], "targets.nx-release-publish.dependsOn": [ "package.json", "nx/core/package-json-workspaces" ], "targets.nx-release-publish.executor": [ "package.json", "nx/core/package-json-workspaces" ], "targets.nx-release-publish.options": [ "package.json", "nx/core/package-json-workspaces" ] } } ``` {% /project_details %} The 
project detail view lists all available tasks, the configuration values for those tasks and where those configuration values are being set. ## Configure an existing script to run with Nx If you want to run one of your existing scripts with Nx, you need to tell Nx about it. 1. Preface the script with `nx exec -- ` to have `npm run test` invoke the command with Nx. 2. Define caching settings. The `nx exec` command allows you to keep using `npm test` or `npm run test` (or other package managers' alternatives) as you're accustomed to, while still getting the benefits of making those operations cacheable. Configuring the `test` script from the example above to run with Nx would look something like this: ```json // package.json { "name": "my-workspace", ... "scripts": { "build": "nx next:build", "lint": "nx eslint:lint", "test": "nx exec -- node ./run-tests.js" }, ... "nx": { "targets": { "test": { "cache": true, "inputs": [ "default", "^default" ], "outputs": [] } } } } ``` Now if you run `npm run test` or `nx test` twice, the results will be retrieved from the cache. The `inputs` used in this example are as cautious as possible, so you can significantly improve the value of the cache by [customizing Nx Inputs](/docs/guides/tasks--caching/configure-inputs) for each task. ## Fast CI ⚡ This tutorial walked you through how Nx can improve the local development experience, but the biggest difference Nx makes is in CI. As repositories get bigger, making sure that the CI is fast, reliable and maintainable can get very challenging. Nx provides a solution. - Nx reduces wasted time in CI with the [`affected` command](/docs/features/ci-features/affected). - Nx Replay's [remote caching](/docs/features/ci-features/remote-cache) will reuse task artifacts from different CI executions, making sure you will never run the same computation twice.
- Nx Agents [efficiently distribute tasks across machines](/docs/concepts/ci-concepts/parallelization-distribution), ensuring constant CI time regardless of the repository size. The right number of machines is allocated for each PR to ensure good performance without wasting compute.
- Nx Atomizer [automatically splits](/docs/features/ci-features/split-e2e-tasks) large e2e tests to distribute them across machines. Nx can also automatically [identify and rerun flaky e2e tests](/docs/features/ci-features/flaky-tasks).

### Connect to Nx Cloud

Nx Cloud is a companion app for your CI system that provides remote caching, task distribution, e2e test deflaking, better DX and more.

Now that we're working on the CI pipeline, it is important for your changes to be pushed to a GitHub repository.

1. Commit your existing changes with `git add . && git commit -am "updates"`
2. [Create a new GitHub repository](https://github.com/new)
3. Follow GitHub's instructions to push your existing code to the repository

Now connect your repository to Nx Cloud with the following command:

```shell
npx nx@latest connect
```

A browser window will open to register your repository in your [Nx Cloud](https://cloud.nx.app) account. The link is also printed to the terminal in case the window does not open, or you closed it before finishing the steps. The app will guide you to create a PR to enable Nx Cloud on your repository.

![](../../../../assets/guides/adopting-nx/nx-cloud-github-connect.avif)

Once the PR is created, merge it into your main branch.

![](../../../../assets/guides/adopting-nx/github-cloud-pr-merged.avif)

And make sure you pull the latest changes locally:

```shell
git pull
```

You should now have an `nxCloudId` property specified in the `nx.json` file.

### Create a CI workflow

Use the following command to generate a CI workflow file.
```shell
npx nx generate ci-workflow --ci=github
```

This generator creates a `.github/workflows/ci.yml` file that contains a CI pipeline that will run the `lint`, `test`, `build` and `e2e` tasks for projects that are affected by any given PR. Since we are using Nx Cloud, the pipeline will also distribute tasks across multiple machines to ensure fast and reliable CI runs.

The key lines in the CI pipeline are:

```yml {% meta="{10-14,21-22}" %}
# .github/workflows/ci.yml
name: CI
# ...
jobs:
  main:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
          filter: tree:0
      # This enables task distribution via Nx Cloud
      # Run this command as early as possible, before dependencies are installed
      # Learn more at https://nx.dev/ci/reference/nx-cloud-cli#npx-nxcloud-startcirun
      # Connect your workspace by running "nx connect" and uncomment this
      - run: npx nx start-ci-run --distribute-on="3 linux-medium-js" --stop-agents-after="build"
      - uses: actions/setup-node@v3
        with:
          node-version: 20
          cache: 'npm'
      - run: npm ci
      - uses: nrwl/nx-set-shas@v4
      # Nx Affected runs only tasks affected by the changes in this PR/commit. Learn more: https://nx.dev/ci/features/affected
      - run: npx nx affected -t lint test build
```

### Open a pull request

Commit the changes and open a new PR on GitHub.

```shell
git add .
git commit -m 'add CI workflow file'
git push origin add-workflow
```

When you view the PR on GitHub, you will see a comment from Nx Cloud that reports on the status of the CI run.

![Nx Cloud report](../../../../assets/guides/adopting-nx/github-pr-cloud-report.avif)

The `See all runs` link goes to a page with the progress and results of tasks that were run in the CI pipeline.

![Run details](../../../../assets/guides/adopting-nx/nx-cloud-run-details.avif)

For more information about how Nx can improve your CI pipeline, check out our [CI setup guides](/docs/guides/nx-cloud/setup-ci).
## Learn more

{% cardgrid %}
{% linkcard title="Customizing Inputs and Named Inputs" description="Learn more about how to fine-tune caching with custom inputs" href="/docs/guides/tasks--caching/configure-inputs" /%}
{% linkcard title="Cache Task Results" description="Learn more about how caching works" href="/docs/features/cache-task-results" /%}
{% linkcard title="Adding Nx to NPM/Yarn/PNPM Workspace" description="Learn more about how to add Nx to an existing monorepo" href="/docs/guides/adopting-nx/adding-to-monorepo" /%}
{% /cardgrid %}

---

## Adding Nx to NPM/Yarn/PNPM Workspace

{% aside type="note" title="Migrating from Lerna?" %}
Interested in migrating from [Lerna](https://github.com/lerna/lerna) in particular? In case you missed it, as of v6, Lerna is powered by Nx under the hood. As a result, Lerna gets all the modern features such as caching and task pipelines. Read more at [https://lerna.js.org/upgrade](https://lerna.js.org/upgrade).
{% /aside %}

Nx has first-class support for [monorepos](/docs/getting-started/tutorials/crafting-your-workspace). If you have an existing NPM, Yarn or PNPM-based monorepo setup, you can easily add Nx to get:

- fast [task scheduling](/docs/features/run-tasks)
- high-performance task [caching](/docs/features/cache-task-results)
- [fast CI ⚡](#fast-ci) with [remote caching](/docs/features/ci-features/remote-cache) and [distributed task execution](/docs/features/ci-features/distribute-task-execution)

This is a low-impact operation: all that needs to be done is to install the `nx` package at the root level and add an `nx.json` for configuring caching and task pipelines.
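To make that concrete, here is a minimal sketch of what such a root-level `nx.json` could look like. The target names and the `outputs` glob below are illustrative assumptions, not necessarily what `nx init` will generate for your particular workspace:

```json
// nx.json (illustrative sketch)
{
  "targetDefaults": {
    "build": {
      // build a project's dependencies before building the project itself
      "dependsOn": ["^build"],
      // cache build results so unchanged projects are restored instantly
      "cache": true,
      "outputs": ["{projectRoot}/dist"]
    },
    "test": {
      "cache": true
    }
  }
}
```

In practice you rarely write this file by hand: the `nx init` command described next generates one tailored to the tools it detects in your workspace.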
{% course_video src="https://youtu.be/3hW53b1IJ84" courseTitle="From PNPM Workspaces to Distributed CI" courseUrl="https://nx.dev/courses/pnpm-nx-next/lessons-01-nx-init" /%}

## Installing Nx

Run the following command to automatically set up Nx:

```shell
npx nx@latest init
```

Running this command will ask you a few questions about your workspace and then set up Nx accordingly. The setup process detects the tools used in your workspace and suggests installing Nx plugins to integrate them with Nx. Running those tools through Nx will have caching enabled when possible, providing you with a faster alternative for running those tools. You can start with a few plugins to see how it works and then add more later with the [`nx add`](/docs/reference/nx-commands#nx-add) command. You can also decide to add them all and get the full experience right away, because adding plugins will not break your existing workflow.

The first thing you may notice is that Nx updates your `package.json` scripts during the setup process. Nx plugins set up Nx commands that run the underlying tool with caching enabled. When a `package.json` script runs a command that can be run through Nx, Nx will replace that script with an Nx command that has caching automatically enabled. Anywhere those `package.json` scripts are used, including in CI, they will become faster when possible.

Let's go through an example where the `@nx/next/plugin` and `@nx/eslint/plugin` plugins are added to a workspace with the following `package.json`.

```diff
// package.json
{
  "name": "my-workspace",
  ...
  "scripts": {
-   "build": "next build && echo 'Build complete'",
+   "build": "nx next:build && echo 'Build complete'",
-   "lint": "eslint ./src",
+   "lint": "nx eslint:lint",
    "test": "node ./run-tests.js"
  },
  ...
}
```

The `@nx/next/plugin` plugin adds a `next:build` target which runs `next build` and sets up caching correctly.
In other words, running `nx next:build` is the same as running `next build`, with the added benefit of being cacheable. Hence, Nx replaces `next build` in the `package.json` `build` script so that anywhere `npm run build` is run gets caching. Similarly, `@nx/eslint/plugin` sets up the `nx eslint:lint` command to run `eslint ./src` with caching enabled. The `test` script was not recognized by any Nx plugin, so it was left as is.

After Nx has been set up, running `npm run build` or `npm run lint` multiple times will be instant when possible. You can also run any npm script directly through Nx with `nx build` or `nx lint`, which will run the `npm run build` and `npm run lint` scripts respectively.

Later in the setup flow, Nx will ask if you would like some of those npm scripts to be cacheable. By making those cacheable, running `nx build` rather than `npm run build` will add another layer of cacheability. However, `nx build` must then be run instead of `npm run build` to take advantage of the cache.

## Inferred tasks

You may have noticed that `@nx/next` provides `dev` and `start` tasks in addition to the `next:build` task. Those tasks were created by the `@nx/next/plugin` plugin from your existing Next.js configuration. You can see the configuration for the Nx plugins in `nx.json`:

```json
// nx.json
{
  "plugins": [
    {
      "plugin": "@nx/eslint/plugin",
      "options": {
        "targetName": "eslint:lint"
      }
    },
    {
      "plugin": "@nx/next/plugin",
      "options": {
        "buildTargetName": "next:build",
        "devTargetName": "dev",
        "startTargetName": "start"
      }
    }
  ]
}
```

Each plugin accepts options to customize the projects it creates. You can see more information about configuring the plugins on the [`@nx/next/plugin`](/docs/technologies/react/next/introduction) and [`@nx/eslint/plugin`](/docs/technologies/eslint/introduction) plugin pages.

To view all available tasks, open the Project Details view with Nx Console or use the terminal to launch the project details in a browser window.
```shell nx show project my-workspace --web ``` {% project_details title="Project Details View" %} ```json { "project": { "name": "my-workspace", "data": { "root": ".", "targets": { "eslint:lint": { "cache": true, "options": { "cwd": ".", "command": "eslint ./src" }, "inputs": [ "default", "{workspaceRoot}/.eslintrc", "{workspaceRoot}/tools/eslint-rules/**/*", { "externalDependencies": ["eslint"] } ], "executor": "nx:run-commands", "configurations": {}, "metadata": { "technologies": ["eslint"] } }, "next:build": { "options": { "cwd": ".", "command": "next build" }, "dependsOn": ["^build"], "cache": true, "inputs": [ "default", "^default", { "externalDependencies": ["next"] } ], "outputs": ["{projectRoot}/.next", "{projectRoot}/.next/!(cache)"], "executor": "nx:run-commands", "configurations": {}, "metadata": { "technologies": ["next"] } }, "dev": { "options": { "cwd": ".", "command": "next dev", "continuous": true }, "executor": "nx:run-commands", "configurations": {}, "metadata": { "technologies": ["next"] } }, "start": { "options": { "cwd": ".", "command": "next start", "continuous": true }, "dependsOn": ["build"], "executor": "nx:run-commands", "configurations": {}, "metadata": { "technologies": ["next"] } } }, "sourceRoot": ".", "name": "my-workspace", "projectType": "library", "implicitDependencies": [], "tags": [] } }, "sourceMap": { "root": ["package.json", "nx/core/package-json-workspaces"], "targets": ["package.json", "nx/core/package-json-workspaces"], "targets.eslint:lint": [".eslintrc.json", "@nx/eslint/plugin"], "targets.eslint:lint.command": [".eslintrc.json", "@nx/eslint/plugin"], "targets.eslint:lint.cache": [".eslintrc.json", "@nx/eslint/plugin"], "targets.eslint:lint.options": [".eslintrc.json", "@nx/eslint/plugin"], "targets.eslint:lint.inputs": [".eslintrc.json", "@nx/eslint/plugin"], "targets.eslint:lint.options.cwd": [".eslintrc.json", "@nx/eslint/plugin"], "targets.next:build": ["next.config.js", "@nx/next/plugin"], 
"targets.next:build.command": ["next.config.js", "@nx/next/plugin"], "targets.next:build.options": ["next.config.js", "@nx/next/plugin"], "targets.next:build.dependsOn": ["next.config.js", "@nx/next/plugin"], "targets.next:build.cache": ["next.config.js", "@nx/next/plugin"], "targets.next:build.inputs": ["next.config.js", "@nx/next/plugin"], "targets.next:build.outputs": ["next.config.js", "@nx/next/plugin"], "targets.next:build.options.cwd": ["next.config.js", "@nx/next/plugin"], "targets.dev": ["next.config.js", "@nx/next/plugin"], "targets.dev.command": ["next.config.js", "@nx/next/plugin"], "targets.dev.options": ["next.config.js", "@nx/next/plugin"], "targets.dev.options.cwd": ["next.config.js", "@nx/next/plugin"], "targets.start": ["next.config.js", "@nx/next/plugin"], "targets.start.command": ["next.config.js", "@nx/next/plugin"], "targets.start.options": ["next.config.js", "@nx/next/plugin"], "targets.start.dependsOn": ["next.config.js", "@nx/next/plugin"], "targets.start.options.cwd": ["next.config.js", "@nx/next/plugin"], "sourceRoot": ["package.json", "nx/core/package-json-workspaces"], "name": ["package.json", "nx/core/package-json-workspaces"], "projectType": ["package.json", "nx/core/package-json-workspaces"], "targets.nx-release-publish": [ "package.json", "nx/core/package-json-workspaces" ], "targets.nx-release-publish.dependsOn": [ "package.json", "nx/core/package-json-workspaces" ], "targets.nx-release-publish.executor": [ "package.json", "nx/core/package-json-workspaces" ], "targets.nx-release-publish.options": [ "package.json", "nx/core/package-json-workspaces" ] } } ``` {% /project_details %} The project detail view lists all available tasks, the configuration values for those tasks and where those configuration values are being set. ## Configure an existing script to run with Nx If you want to run one of your existing scripts with Nx, you need to tell Nx about it. 1. 
Preface the script with `nx exec -- ` to have `npm run test` invoke the command with Nx.
2. Define caching settings.

The `nx exec` command allows you to keep using `npm test` or `npm run test` (or another package manager's equivalent) as you're accustomed to, while still getting the benefits of making those operations cacheable. Configuring the `test` script from the example above to run with Nx would look something like this:

```json
// package.json
{
  "name": "my-workspace",
  ...
  "scripts": {
    "build": "nx next:build",
    "lint": "nx eslint:lint",
    "test": "nx exec -- node ./run-tests.js"
  },
  ...
  "nx": {
    "targets": {
      "test": {
        "cache": true,
        "inputs": ["default", "^default"],
        "outputs": []
      }
    }
  }
}
```

Now if you run `npm run test` or `nx test` twice, the second run will be retrieved from the cache. The `inputs` used in this example are as cautious as possible, so you can significantly improve the value of the cache by [customizing Nx Inputs](/docs/guides/tasks--caching/configure-inputs) for each task.

## Incrementally adopting Nx

All the features of Nx can be enabled independently of each other, so Nx can easily be adopted incrementally: use Nx just for a subset of your scripts at first and then gradually add more. For example, use Nx to run your builds:

```shell
npx nx run-many -t build
```

But keep using NPM/Yarn/PNPM workspace commands for your tests and other scripts. Here's an example of using PNPM commands to run tests across packages:

```shell
pnpm run -r test
```

This allows for incrementally adopting Nx in your existing workspace.

## Fast CI ⚡

This tutorial walked you through how Nx can improve the local development experience, but the biggest difference Nx makes is in CI. As repositories get bigger, making sure that CI is fast, reliable and maintainable can get very challenging. Nx provides a solution.

- Nx reduces wasted time in CI with the [`affected` command](/docs/features/ci-features/affected).
- Nx Replay's [remote caching](/docs/features/ci-features/remote-cache) will reuse task artifacts from different CI executions, making sure you never run the same computation twice.
- Nx Agents [efficiently distribute tasks across machines](/docs/concepts/ci-concepts/parallelization-distribution), ensuring constant CI time regardless of the repository size. The right number of machines is allocated for each PR to ensure good performance without wasting compute.
- Nx Atomizer [automatically splits](/docs/features/ci-features/split-e2e-tasks) large e2e tests to distribute them across machines. Nx can also automatically [identify and rerun flaky e2e tests](/docs/features/ci-features/flaky-tasks).

### Connect to Nx Cloud

Nx Cloud is a companion app for your CI system that provides remote caching, task distribution, e2e test deflaking, better DX and more.

Now that we're working on the CI pipeline, it is important for your changes to be pushed to a GitHub repository.

1. Commit your existing changes with `git add . && git commit -am "updates"`
2. [Create a new GitHub repository](https://github.com/new)
3. Follow GitHub's instructions to push your existing code to the repository

Now connect your repository to Nx Cloud with the following command:

```shell
npx nx@latest connect
```

A browser window will open to register your repository in your [Nx Cloud](https://cloud.nx.app) account. The link is also printed to the terminal in case the window does not open, or you closed it before finishing the steps. The app will guide you to create a PR to enable Nx Cloud on your repository.

![](../../../../assets/guides/adopting-nx/nx-cloud-github-connect.avif)

Once the PR is created, merge it into your main branch.

![](../../../../assets/guides/adopting-nx/github-cloud-pr-merged.avif)

And make sure you pull the latest changes locally:

```shell
git pull
```

You should now have an `nxCloudId` property specified in the `nx.json` file.
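For reference, connecting only adds a single property to the file; the ID value shown here is a made-up placeholder, as your workspace receives its own ID from Nx Cloud:

```json
// nx.json (excerpt; the ID below is a placeholder)
{
  "nxCloudId": "0123456789abcdef01234567"
}
```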
### Create a CI workflow

Use the following command to generate a CI workflow file.

```shell
npx nx generate ci-workflow --ci=github
```

This generator creates a `.github/workflows/ci.yml` file that contains a CI pipeline that will run the `lint`, `test`, `build` and `e2e` tasks for projects that are affected by any given PR. Since we are using Nx Cloud, the pipeline will also distribute tasks across multiple machines to ensure fast and reliable CI runs.

The key lines in the CI pipeline are:

```yml {% meta="{11-15,22-23}" %}
# .github/workflows/ci.yml
name: CI
# ...
jobs:
  main:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
          filter: tree:0
      # This enables task distribution via Nx Cloud
      # Run this command as early as possible, before dependencies are installed
      # Learn more at https://nx.dev/ci/reference/nx-cloud-cli#npx-nxcloud-startcirun
      # Connect your workspace by running "nx connect" and uncomment this
      - run: npx nx start-ci-run --distribute-on="3 linux-medium-js" --stop-agents-after="build"
      - uses: actions/setup-node@v3
        with:
          node-version: 20
          cache: 'npm'
      - run: npm ci
      - uses: nrwl/nx-set-shas@v4
      # Nx Affected runs only tasks affected by the changes in this PR/commit. Learn more: https://nx.dev/ci/features/affected
      - run: npx nx affected -t lint test build
```

### Open a pull request

Commit the changes and open a new PR on GitHub.

```shell
git add .
git commit -m 'add CI workflow file'
git push origin add-workflow
```

When you view the PR on GitHub, you will see a comment from Nx Cloud that reports on the status of the CI run.

![Nx Cloud report](../../../../assets/guides/adopting-nx/github-pr-cloud-report.avif)

The `See all runs` link goes to a page with the progress and results of tasks that were run in the CI pipeline.
![Run details](../../../../assets/guides/adopting-nx/nx-cloud-run-details.avif)

For more information about how Nx can improve your CI pipeline, check out our [detailed tutorial for your CI provider](/docs/guides/nx-cloud/setup-ci).

## Learn more

{% cardgrid %}
{% linkcard title="Cache Task Results" description="Learn more about how caching works" href="/docs/features/cache-task-results" /%}
{% linkcard title="Task Pipeline Configuration" description="Learn more about how to set up task dependencies" href="/docs/concepts/task-pipeline-configuration" /%}
{% linkcard title="Nx Ignore" description="Learn about how to ignore certain projects using .nxignore" href="/docs/reference/nxignore" /%}
{% linkcard title="Migrating from Turborepo to Nx" description="Read about Migrating from Turborepo to Nx" href="/docs/guides/adopting-nx/from-turborepo" /%}
{% /cardgrid %}

---

## Migrating from Turborepo to Nx

{% aside type="note" title="Looking for a comparison?" %}
For a data-driven comparison of Nx and Turborepo covering setup complexity, CI performance, and advanced capabilities, see [Nx vs Turborepo](/docs/guides/adopting-nx/nx-vs-turborepo).
{% /aside %}

Nx is a superset of Turborepo, so migrating requires minimal effort. The diff is tiny: an `nx.json` file (equivalent to `turbo.json`), the `nx` package added to `package.json`, and a `.gitignore` entry for the Nx cache. No changes to your existing projects or scripts are needed.

## Easy automated migration example

1. Let's create a new Turborepo workspace using the recommended `create-turbo` command:

   ```shell
   npx create-turbo@latest
   ```

2. Once that is finished, all we need to do to make it a valid Nx workspace is run `nx init`:

   ```shell
   npx nx@latest init
   ```

That's it!
As you can see, the diff is tiny: ```diff .gitignore | 3 +++ # Ignore the Nx cache package.json | 1 + # Add the "nx" package package-lock.json | nx.json | # Equivalent to turbo.json ``` - An `nx.json` file that is equivalent to the `turbo.json` file was added - The `package.json` file was updated to add the `nx` dev dependency (and the `package-lock.json` was updated accordingly) - The `.gitignore` entry for the Nx cache was added automatically It's important to remember that Nx is a superset of Turborepo: it can do everything Turborepo can do and much more, so there is absolutely no special configuration needed; Nx just works on the Turborepo workspace. ### Example: Basic configuration comparison To help with understanding the new `nx.json` file, let's compare it to the `turbo.json` file: ```json { "$schema": "https://turbo.build/schema.json", // Nx will automatically use an appropriate terminal output style for the tasks you run "ui": "tui", "tasks": { "build": { // This syntax of build depending on the build of its dependencies using ^ is the same // in Nx "dependsOn": ["^build"], // Inputs and outputs in Turborepo are relative to a particular package, whereas in Nx they are consistently relative to the workspace root, which is why Nx provides {projectRoot} and {projectName} helpers "inputs": ["$TURBO_DEFAULT$", ".env*"], "outputs": [".next/**", "!.next/cache/**"] }, "lint": { // Turborepo tasks are assumed to be cacheable by default, so there is no explicit configuration here. In Nx, the "cache" value is explicitly set to true.
"dependsOn": ["^lint"] }, "check-types": { "dependsOn": ["^check-types"] }, "dev": { // Nx uses "continuous" instead of "persistent" "persistent": true } } } ``` After running `nx init`, you'll automatically have an equivalent `nx.json`: ```json { "$schema": "./node_modules/nx/schemas/nx-schema.json", "targetDefaults": { "build": { "dependsOn": ["^build"], "inputs": ["{projectRoot}/**/*", "{projectRoot}/.env*"], "outputs": ["{projectRoot}/.next/**", "!{projectRoot}/.next/cache/**"], "cache": true }, "lint": { "dependsOn": ["^lint"], "cache": true }, "check-types": { "dependsOn": ["^check-types"], "cache": true }, "dev": { "continuous": true } } } ``` ## Configuration migration guide Most settings in the old `turbo.json` file can be converted directly into `nx.json` equivalents. Here's how to map each configuration property: ### Global configuration | Turborepo Property | Nx Equivalent | | ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | `cacheDir` | Set in [`cacheDirectory`](/docs/reference/nx-json#task-options) | | `daemon` | Use [`NX_DAEMON=false` or set `useDaemonProcess: false`](/docs/concepts/nx-daemon#turning-it-off) in `nx.json` | | `envMode` | Nx core does not block any environment variables. See [React](/docs/technologies/react/guides/use-environment-variables-in-react) and [Angular](/docs/technologies/angular/guides/use-environment-variables-in-angular) guides | | `globalDependencies` | Add to the [`sharedGlobals` `namedInput`](/docs/guides/tasks--caching/configure-inputs) | | `globalEnv` | Add to the [`sharedGlobals` `namedInput`](/docs/guides/tasks--caching/configure-inputs) as an [`env` input](/docs/reference/inputs#environment-variables) | | `globalPassThroughEnv` | N/A. 
See [Defining Environment Variables](/docs/guides/tips-n-tricks/define-environment-variables) | | `remoteCache` | See [Nx Replay](/docs/features/ci-features/remote-cache) | | `ui` | Nx will intelligently pick the most appropriate terminal output style, but it can be overridden with [`--output-style`](/docs/reference/nx-commands#outputstyle) | ### Task configuration | Turborepo Property | Nx Equivalent | | ------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------- | | `extends` | N/A. Projects always extend `targetDefaults` from `nx.json` | | `dependsOn` | [Same syntax](/docs/reference/project-configuration#dependson) | | `env` | Define [env `inputs`](/docs/reference/inputs#environment-variables) | | `passThroughEnv` | N/A. See [Defining Environment Variables](/docs/guides/tips-n-tricks/define-environment-variables) | | `outputs` | [Similar syntax](/docs/reference/project-configuration#outputs) | | `cache` | [Similar syntax](/docs/reference/project-configuration#cache) | | `inputs` | [Similar syntax](/docs/reference/inputs#source-files) | | `outputLogs` | Use [`--output-style`](/docs/reference/nx-commands#outputstyle) | | `persistent` | Use [`continuous`](/docs/guides/tasks--caching/defining-task-pipeline#continuous-task-dependencies) | | `interactive` | N/A. Tasks marked [`continuous`](/docs/guides/tasks--caching/defining-task-pipeline#continuous-task-dependencies) can accept stdin automatically. 
| ## Command equivalents Here's how Turborepo commands map to Nx: | Turborepo Command | Nx Equivalent | | ---------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `turbo run test lint build` | [`nx run-many -t test lint build`](/docs/reference/nx-commands#nx-run-many) | | `turbo run build --affected` | [`nx affected -t build`](/docs/reference/nx-commands#nx-affected) | | `turbo devtools` | [`nx graph`](/docs/reference/nx-commands#nx-graph) for full interactive experience, also available in [Nx Console](/docs/getting-started/editor-setup) | | `--cache-dir` | Set in [`nx.json` under `cacheDirectory`](/docs/reference/nx-json#task-options) | | `--concurrency` | [`--parallel`](/docs/reference/nx-commands#parallel) | | `--continue` | [Use `--nx-bail`](/docs/reference/nx-commands#nxbail) with the inverse value | | `--cpuprofile` | Use [`NX_PROFILE=profile.json`](/docs/troubleshooting/performance-profiling) | | `--cwd` | Available in [`run-commands` executor](/docs/reference/nx/executors#run-commands#cwd) | | `--daemon` | Use [`NX_DAEMON=false` or set `useDaemonProcess: false`](/docs/concepts/nx-daemon#turning-it-off) | | `--dry-run` | Use [`nx show target : --inputs --outputs`](/docs/reference/nx-commands#nx-show-target) to preview inputs and outputs (available since 22.6.0+) | | `--env-mode` | See [React](/docs/technologies/react/guides/use-environment-variables-in-react) and [Angular](/docs/technologies/angular/guides/use-environment-variables-in-angular) guides | | `--filter` | Use lots of advanced project matcher syntax like [`-p admin-*` or `-p tag:api-*`](/docs/reference/nx-commands#nx-run-many) | | `--force` | [`nx reset`](/docs/reference/nx-commands#nx-reset) and then run the command again | | `--framework-inference` | N/A. 
[Nx plugins infer tasks automatically as a first class feature](/docs/concepts/inferred-tasks) | | `--global-deps` | Use the [`sharedGlobals` `namedInput`](/docs/guides/tasks--caching/configure-inputs). Nx is far more flexible with composable [`namedInputs`](/docs/reference/inputs) | | `--graph` | [Similar syntax](/docs/reference/nx-commands#graph) or [`nx graph`](/docs/reference/nx-commands#nx-graph) for full interactive experience | | `--heap` | N/A. Use [`--verbose`](/docs/reference/nx-commands#verbose) | | `--ignore` | Use [`.nxignore`](/docs/reference/nxignore) or `.gitignore` | | `--log-order` | Use [`--output-style`](/docs/reference/nx-commands#outputstyle) | | `--no-cache` | Use [`--skip-nx-cache`](/docs/reference/nx-commands#skipnxcache) | | `--output-logs` | Use [`--output-style`](/docs/reference/nx-commands#outputstyle) | | `--only` | N/A | | `--parallel` | N/A | | `--preflight` | N/A | | `--summarize` | N/A | | `--token` | Set [Nx Cloud CI Access Token](/docs/guides/nx-cloud/access-tokens#setting-ci-access-tokens) | | `--team` | See `--token` for Nx Cloud workspace selection | | `--trace` | N/A. Use [`--verbose`](/docs/reference/nx-commands#verbose) | | `--verbosity` | Use [`--verbose`](/docs/reference/nx-commands#verbose) | | `turbo gen` | [Use `nx generate`](/docs/features/generate-code) | | `turbo login` | `nx login` - [Create an Nx Cloud account](/docs/reference/nx-commands#nx-login) | | `turbo link` | `nx connect` - [Connect a workspace to an Nx Cloud account](/docs/reference/nx-commands#nx-connect) | For a complete list of Nx commands and options, see the [Nx CLI documentation](/docs/reference/nx-commands). --- ## Import an Existing Project into an Nx Workspace {% youtube src="https://youtu.be/hnbwoV2-620" title="Importing an existing project into your monorepo" /%} Nx can help with the process of moving an existing project from another repository into an Nx workspace. 
In order to communicate clearly about this process, we'll call the repository we're moving the project out of the "source repository" and the repository we're moving the project into the "destination repository". Here's an example of what those repositories might look like. **Source Repository** {% filetree %} - inventory-app/ - ... (other files) - public/ - ... (public files) - src/ - assets/ - App.css - App.tsx - index.css - main.tsx - .eslintrc.cjs - index.html - package.json - README.md - tsconfig.json - tsconfig.node.json - vite.config.ts {% /filetree %} **Destination Repository** {% filetree %} - myorg/ - ... (other files) - packages/ - ... (shared packages) - apps/ - account/ - ... (account app files) - cart/ - ... (cart app files) - users/ - ... (users app files) - .eslintrc.json - .gitignore - nx.json - package.json - README.md - tsconfig.base.json {% /filetree %} In this example, the source repository contains a single application while the destination repository is already a monorepo. You can also import a project from a sub-directory of the source repository (if the source repository is a monorepo, for instance). The `nx import` command can be run with no arguments and you will be prompted for the required arguments: ```shell nx import ``` Make sure to run `nx import` from within the **destination repository**. You can also directly specify arguments from the terminal, like one of these commands: ```shell nx import [sourceRepository] [destinationDirectory] nx import ../inventory-app apps/inventory nx import https://github.com/myorg/inventory-app.git apps/inventory ``` {% aside type="tip" title="Source Repository Local or Remote" %} The sourceRepository argument for `nx import` can be either a local file path to the source git repository on your local machine or a git URL to the hosted git repository.
{% /aside %} The `nx import` command will: - Maintain the git history from the source repository - Suggest adding plugins to the destination repository based on the newly added project code Every code base is different, so you will still need to manually: - Manage any dependency conflicts between the two code bases - Migrate over code outside the source project's root folder that the source project depends on ## Manage dependencies If both repositories are managed with npm workspaces, the imported project will have all its required dependencies defined in its `package.json` file that is moved over. You'll need to make sure that the destination repository includes the `destinationDirectory` in the `workspaces` defined in the root `package.json`. If the destination repository does not use npm workspaces, it will ease the import process to temporarily enable it. With npm workspaces enabled, you can easily import a self-contained project and gain the benefits of code sharing, atomic commits and shared infrastructure. Once the import process is complete, you can make a follow-up PR that merges the dependencies into the root `package.json` and disables npm workspaces again. ## Migrate external code and configuration Few projects are completely isolated from the rest of the repository where they are located. After `nx import` has run, here are a few types of external code references that you should account for: - Project configuration files that extend root configuration files - Scripts outside the project folder that are required by the project - Local project dependencies that are not present or have a different name in the destination repository {% aside type="note" title="Importing Multiple Projects" %} If multiple projects need to be imported into the destination repository, try to import as many of the projects together as possible. 
If projects need to be imported with separate `nx import` commands, start with the leaf projects in the dependency graph (the projects without any dependencies) and then work your way up to the top-level applications. This way every project that is imported into the destination repository will have its required dependencies available. {% /aside %} --- ## Manual Migration of Existing Code Bases The easiest way to start using Nx is to run the `nx init` command. ```shell npx nx@latest init ``` If you don't want to run the script, this guide walks you through doing everything the script does manually. ## Install `nx` as a `devDependency` We'll start by installing the `nx` package: {% tabs syncKey="install-type" %} {% tabitem label="npm" %} ```shell npm add -D nx@latest ``` {% /tabitem %} {% tabitem label="yarn" %} ```shell yarn add -D nx@latest ``` {% /tabitem %} {% tabitem label="pnpm" %} ```shell pnpm add -D nx@latest ``` {% /tabitem %} {% tabitem label="bun" %} ```shell bun add -D nx@latest ``` {% /tabitem %} {% /tabs %} ## Create a basic `nx.json` file Next, we'll create a blank `nx.json` configuration file for Nx: ```json // nx.json {} ``` ## Add `.nx` directories to `.gitignore` Next, we'll add the Nx default task cache directory and the project graph cache directory to the list of files to be ignored by Git. Update the `.gitignore` file, adding the `.nx/cache` and `.nx/workspace-data` entries: ```text // .gitignore ... .nx/cache .nx/workspace-data ``` ## Set up caching for a task Now, let's set up caching for a script in your `package.json` file. 
Let's say you have a `build` script that looks like this: ```json // package.json { "scripts": { "build": "tsc -p tsconfig.json" } } ``` In order for Nx to cache this task, you need to: - run the script through Nx using `nx exec --` - configure caching settings in the `nx` property of `package.json` The new `package.json` will look like this: ```json // package.json { "scripts": { "build": "nx exec -- tsc -p tsconfig.json" }, "nx": { "targets": { "build": { "cache": true, "inputs": [ "{projectRoot}/**/*.ts", "{projectRoot}/tsconfig.json", { "externalDependencies": ["typescript"] } ], "outputs": ["{projectRoot}/dist"] } } } } ``` Now, if you run `npm run build` or `nx build` twice, the second run will be retrieved from the cache instead of being executed. ## Enable more features of Nx as needed You could stop here if you want, but there are many more features of Nx that might be useful to you. Many [plugins](/docs/plugin-registry) can automate the process of configuring the way Nx runs your tools. Plugins also provide [code generators](/docs/features/generate-code) and [automatic dependency updates](/docs/features/automate-updating-dependencies). You can speed up your CI with [remote caching](/docs/features/ci-features/remote-cache) and [distributed task execution](/docs/features/ci-features/distribute-task-execution). --- ## Nx vs Turborepo Both Nx and Turborepo seem to cover very similar ground. They provide task scheduling, caching (local and remote), and affected detection. The differences emerge as your needs grow. While **Turbo covers just the basics**, **Nx provides solutions along the entire software growth lifecycle**, even when you need more advanced features such as distributed CI, polyglot builds, or AI-powered CI workflows. That's where the gap widens. This doesn't come at the cost of adoption complexity though. Nx is designed to be modular from the ground up and can be **adopted incrementally as you need more**. 
{% aside type="note" title="Benchmarks" %} All benchmarks on this page use the same [pnpm workspace](https://github.com/meeroslav/pnpm-workspace-baseline) migrated with both tools. {% /aside %} This page starts with the basics, like onboarding, and progressively moves into more advanced capabilities. | Topic | Nx | Turborepo | | --------------------------------------------------- | ----------------------------------------------------------------- | --------------------------------------------------------- | | [Onboarding](#onboarding) | Zero-config or guided `nx init` (+3 lines) | Manual `turbo.json` (+144 lines) | | [Running tasks](#running-tasks) | Runs `package.json` scripts, optional plugin-based task inference | Runs `package.json` scripts, requires `turbo.json` config | | [Caching](#caching) | Explicit opt-in, composable `namedInputs` | Cached by default, flat input lists | | [Task sandboxing](#task-sandboxing) | IO tracing + cache poisoning protection | Not available | | [Code generation](#code-generation) | Programmatic generators with AST transforms and graph awareness | Template-based file scaffolding (Plop) | | [Module boundary rules](#module-boundary-rules) | Tag-based lint rule + conformance rules (polyglot) | Experimental `turbo boundaries` (since 2024) | | [Polyglot support](#polyglot-support) | Native support for JS/TS, Java, .NET, Python, Rust | Any CLI via `package.json` scripts, no native graph | | [AI integration](#ai-integration) | Agent skills, MCP, `configure-ai-agents`, self-healing CI | Official skill, no MCP or CI integration | | [CI solution](#running-nx-vs-turbo-on-ci) | Nx Cloud: distribution (9m 20s), self-healing, flaky detection | No CI solution (19m 18s with manual binning) | | [Cross-repo coordination](#cross-repo-coordination) | Polygraph (synthetic monorepo) | Not available | | [Release management](#release-management) | Built-in versioning, changelogs, and publishing | Requires manual setup or 3rd party tools | | 
[Observability](#observability) | Integrated dashboards and AI-powered run analysis | Experimental OpenTelemetry (OTLP) export | | [Developer experience](#developer-experience) | TUI, IDE extensions, and interactive project graph | Basic TUI and LSP support | ## Onboarding Nx works with your existing `package.json` scripts out of the box. Add the `nx` package to your workspace and you immediately get task orchestration, affected detection, and local caching, without writing any task configuration. Turborepo requires every task to be explicitly declared in `turbo.json` before anything runs, even if the same tasks already work with your package manager directly: ```plaintext • turbo 2.8.7 × Missing tasks in project ╰─▶ × Could not find task `build` in project ``` For a more guided experience, run `npx nx init`. The interactive setup detects your existing tooling, asks which tasks should be cacheable, and scaffolds an `nx.json` with the right configuration. If you have Next.js and ESLint, for example, Nx automatically infers `build`, `dev`, `start`, and `lint` targets without you declaring them. Looking at the raw impact on a repository using the same [pnpm workspace](https://github.com/meeroslav/pnpm-workspace-baseline) migrated with both tools: | Setup | Config written | Net impact on codebase | | ------------ | ------------------ | ------------------------------------------------------------------------- | | Nx (minimal) | 3 lines | [+3 lines](https://github.com/meeroslav/pnpm-workspace-baseline/pull/4) | | Nx (guided) | 85 lines | [-15 lines](https://github.com/meeroslav/pnpm-workspace-baseline/pull/2) | | Turborepo | 122 lines (manual) | [+144 lines](https://github.com/meeroslav/pnpm-workspace-baseline/pull/3) | For a full walkthrough, see [Adding Nx to your Existing Project](/docs/guides/adopting-nx/adding-to-existing-project). If you're coming from Turborepo, see [Migrating from Turborepo to Nx](/docs/guides/adopting-nx/from-turborepo). 
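To give a concrete sense of how small the minimal setup is: since Nx discovers projects and `package.json` scripts on its own, a minimal `nx.json` only needs to declare which tasks should be cached. The following is a rough sketch, not the exact diff from the linked PR, and the `build` target name is an assumption:

```json
// nx.json — minimal sketch; "build" is an assumed target name
{
  "targetDefaults": {
    "build": { "cache": true }
  }
}
```

Everything else (project detection, affected analysis, task orchestration) works without further configuration.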
## Running tasks Once installed, both tools run your existing `package.json` scripts. If your project has a `build` script, `nx build` runs it, just like `turbo run build`. No rewiring needed. ```shell nx run-many -t build test lint ``` Nx also provides [`nx affected`](/docs/features/ci-features/affected) to run only tasks affected by your current changes, which works immediately without configuration. Where Nx goes further is with [plugins](/docs/concepts/inferred-tasks). Adding an Nx plugin like `@nx/vite` automatically infers tasks from your existing tool configuration (e.g. `vite.config.mts`), so you don't need to maintain manual script definitions. Plugins also read your tool's config to set cache `inputs` and `outputs` automatically, meaning caching works correctly from the start without manual tuning. ## Caching Both tools cache task results, but the defaults and depth of caching support differ significantly. Better cache configuration means fewer false positives, fewer unnecessary re-runs, and faster CI. **Turborepo caches every task by default**, so you opt out of caching with `cache: false` on tasks like `dev`. **Nx does the opposite: nothing is cached unless you explicitly set `cache: true`.** This is a more cautious opt-in model, since not all tasks are cacheable by default. Nx also provides [`namedInputs`](/docs/reference/inputs), reusable input patterns that you can compose across targets. You define a pattern once (like "production sources") and reference it everywhere. Turborepo's `inputs` are flat lists with no composition, so every target repeats the same exclusions. 
Here's the same workspace configured with both tools: {% tabs %} {% tabitem label="nx.json" %} ```json { "namedInputs": { "default": ["{projectRoot}/**/*", "sharedGlobals"], "sharedGlobals": ["{workspaceRoot}/.github/workflows/ci.yml"], "production": [ "default", "!{projectRoot}/**/?(*.)+(spec|test).[jt]s?(x)?(.snap)", "!{projectRoot}/tsconfig.spec.json", "!{projectRoot}/vitest.config.[jt]s", "!{projectRoot}/jest.config.[jt]s" ] }, "targetDefaults": { "build": { "dependsOn": ["^build"], "inputs": ["production", "^production"], "outputs": ["{projectRoot}/dist"], "cache": true, "configurations": { "prod": {} } }, "check-types": { "dependsOn": ["^check-types"], "inputs": ["production", "^production"], "cache": true }, "test": { "inputs": ["default", "^production"], "outputs": ["{projectRoot}/coverage"], "cache": true }, "lint": { "inputs": ["default", "{workspaceRoot}/eslint.config.mjs"], "cache": true }, "dev": { "continuous": true }, "@org/shop-e2e:test:e2e": { "dependsOn": ["@org/shop:build"] }, "@org/admin-e2e:test:e2e": { "dependsOn": ["@org/admin:build"] } } } ``` {% /tabitem %} {% tabitem label="turbo.json" %} ```json { "tasks": { "build": { "outputs": ["dist/**"], "dependsOn": ["^build"], "inputs": [ "$TURBO_DEFAULT$", "!eslint.config.mjs", "!**/*.spec.ts", "!**/*.test.ts", "!**/*.spec.tsx", "!**/*.test.tsx", "!**/*.spec.js", "!**/*.test.js", "!**/*.spec.jsx", "!**/*.test.jsx", "!tsconfig.spec.json", "!vitest.config.ts", "!jest.config.js" ] }, "build:prod": { "outputs": ["dist/**"], "dependsOn": ["^build"], "inputs": [ "$TURBO_DEFAULT$", "!eslint.config.mjs", "!**/*.spec.ts", "!**/*.test.ts", "!**/*.spec.tsx", "!**/*.test.tsx", "!**/*.spec.js", "!**/*.test.js", "!**/*.spec.jsx", "!**/*.test.jsx", "!tsconfig.spec.json", "!vitest.config.ts", "!jest.config.js" ] }, "check-types": { "dependsOn": ["^check-types"], "inputs": [ "$TURBO_DEFAULT$", "!eslint.config.mjs", "!**/*.spec.ts", "!**/*.test.ts", "!**/*.spec.tsx", "!**/*.test.tsx", "!**/*.spec.js", 
"!**/*.test.js", "!**/*.spec.jsx", "!**/*.test.jsx", "!tsconfig.spec.json", "!vitest.config.ts", "!jest.config.js" ] }, "test": {}, "lint": {}, "dev": { "persistent": true, "cache": false }, "@org/shop-e2e#test:e2e": { "dependsOn": ["@org/shop#build"] }, "@org/admin-e2e#test:e2e": { "dependsOn": ["@org/admin#build"] } } } ``` {% /tabitem %} {% /tabs %} The `production` pattern in Nx is defined once and reused across `build` and `test`. A change to a spec file won't invalidate the build cache because `production` explicitly excludes test files. In Turborepo, the same exclusion list is repeated across `build`, `build:prod`, and `check-types`. There's no way to define it once and reuse it. ## Task sandboxing A cache is only valuable if you can trust it. Turborepo has no task sandboxing. During execution, tasks can read and write anywhere on the filesystem. A task can read files that aren't declared as inputs and produce undeclared outputs that get cached and replayed into a different context. The result: false cache hits, missing artifacts, and hard-to-trace failures. Nx provides [task sandboxing](/docs/features/ci-features/sandboxing) that monitors filesystem access during execution and flags any reads or writes outside declared `inputs` and `outputs`. Undeclared dependencies are surfaced automatically rather than discovered through debugging production incidents. This matters for security too. [CVE-2025-36852](https://www.cve.org/CVERecord?id=CVE-2025-36852) (CREEP) demonstrated that build systems without cache isolation are vulnerable to cache poisoning, where any contributor with PR access can inject compromised artifacts into production. Nx Cloud prevents this through branch-scoped cache isolation. For more details, see [cache security](/docs/concepts/ci-concepts/cache-security). Task sandboxing is an architectural difference, not a configuration problem. There's no workaround on the Turborepo side. 
## Code generation Both tools offer code generation, but the depth differs significantly. Turborepo provides `turbo gen`, a thin wrapper around [Plop.js](https://plopjs.com/). It can scaffold new workspaces and create files from Handlebars templates, but it's limited to template-based file creation and simple string append/prepend operations. There's no AST-level code modification, no awareness of the project graph, and no migration/codemod system. Nx generators are built on top of [Nx Devkit](/docs/extending-nx/intro), a full programmatic API for workspace manipulation. Generators can read and modify the project graph, perform AST-level TypeScript transforms, and compose with other generators. You can create [local workspace generators](/docs/features/generate-code#creating-custom-generators) that encode your team's specific patterns. The real value isn't raw scaffolding; AI can do that too. It's deterministic, convention-aware generation. **AI agents can invoke your generators to produce code matching your patterns from the start.** This is faster and more token-efficient than generating everything from scratch. ## Module boundary rules In large workspaces, unchecked dependencies between projects lead to architectural drift. Both tools offer mechanisms to enforce boundaries, but with different maturity and scope. Nx has provided [module boundary rules](/docs/features/enforce-module-boundaries) since its earliest versions. You assign tags to projects and define which tags can depend on which, enforced as a lint rule. For polyglot workspaces where ESLint isn't available, Nx also offers [conformance rules](/docs/enterprise/conformance) that work across any language. This becomes especially important with AI coding agents. Boundary rules act as guardrails, preventing agents from creating arbitrary cross-project dependencies that violate your architecture. 
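As an illustration of the tag-based approach, a workspace might constrain dependencies in its ESLint configuration like this (a sketch; the `scope:*` tags are hypothetical examples):

```json
// .eslintrc.json (excerpt) — the scope:* tags are hypothetical
{
  "rules": {
    "@nx/enforce-module-boundaries": [
      "error",
      {
        "depConstraints": [
          {
            "sourceTag": "scope:shop",
            "onlyDependOnLibsWithTags": ["scope:shop", "scope:shared"]
          },
          {
            "sourceTag": "scope:shared",
            "onlyDependOnLibsWithTags": ["scope:shared"]
          }
        ]
      }
    ]
  }
}
```

With this in place, a lint run fails if a `scope:shared` library imports from a `scope:shop` project, and the same failure stops an AI agent's change before it lands.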
Turborepo added experimental [`turbo boundaries`](https://turborepo.dev/docs/reference/boundaries) in 2024, which can define allowed dependencies in `turbo.json` and visualize them in their devtools graph view. ## Polyglot support Nx provides [first-party plugins](/docs/plugin-registry) for Maven, Gradle, .NET, and Docker, plus community plugins for Python (UV, Poetry), Rust (Cargo), Go, and PHP. Each plugin provides automatic dependency detection, target inference, caching, affected detection, and distribution. Turborepo can orchestrate any language by wrapping CLI commands in `package.json` scripts. However, non-JS projects still require a `package.json`, and Turborepo provides no automatic dependency graph analysis or target inference for those languages. You must define everything manually. **This difference is critical for AI readiness.** When your backend is in Go and your frontend is in Next.js, an AI agent with Nx can see the full cross-language dependency chain. With Turborepo, those services are "islands," and an agent has no way to reason about how a change in the Go API affects the frontend. ## AI integration Nx actively embraces AI and autonomous agents across the entire development lifecycle, not just individual features. Running [`nx configure-ai-agents`](/blog/nx-ai-agent-skills) sets up everything your AI agent needs in one command: agent skills, an MCP server, and `CLAUDE.md` / `AGENTS.md` guidelines. It works across **Claude Code, Cursor, GitHub Copilot, Gemini, Codex, and OpenCode**. - **[Agent skills](/blog/why-we-deleted-most-of-our-mcp-tools)** teach agents _how_ to work in your monorepo: when to use generators, how to explore the project graph, how to run tasks efficiently. Skills are loaded incrementally, keeping context focused and token-efficient. 
- **[Self-healing CI](/docs/features/ci-features/self-healing-ci)** is a specialized AI agent that runs on CI, monitors runs, diagnoses broken tasks, provides verified fixes, and automatically identifies and [re-runs flaky tasks](/docs/features/ci-features/flaky-tasks). - Dedicated skills and an **[MCP server](/docs/reference/nx-mcp)** allow the local coding agent to connect and [coordinate with the remote CI agent](/blog/autonomous-ai-workflows-with-nx), creating fully autonomous push-fix-verify loops. - The **Nx CLI is [optimized for agentic use](/blog/making-nx-agent-ready)**: commands like `nx init`, `nx import`, and `create-nx-workspace` detect when they're called by an agent and emit structured JSON output instead of interactive prompts, reducing wasted tokens and retries. Turborepo provides an [official skill](https://skills.sh/vercel/turborepo-skills) covering task configuration and caching strategies, plus a `turbo docs` command. However, it offers no CI integration for agents and no AI-powered self-healing CI system. ## Running Nx vs Turbo on CI Nx works on any CI provider out of the box. Run `nx affected` or `nx run-many` in your existing pipeline and you get caching, affected detection, and task orchestration without additional setup. For teams that need more, [Nx Cloud](/docs/features/ci-features) layers on remote caching, intelligent task distribution across machines, self-healing CI, and flaky task detection, all integrated directly into your existing CI provider. Turborepo has no CI-specific solution. It runs tasks on CI the same way it does locally, with no built-in distribution, failure recovery, or CI-aware features. ![Nx Cloud CI report integrated into a GitHub PR](../../../../assets/guides/adopting-nx/ci-report.avif) ### Single-machine CI performance On a single CI runner with no cache hits, Nx is measurably faster. Its Rust-powered task scheduler produces a more optimal execution order, and its file hashing and cache restoration are more efficient. 
![Nx 21m 56s vs Turborepo 25m 32s on a single CI runner](../../../../assets/guides/adopting-nx/single-machine.avif) | Tool | Duration | Difference | | --------- | -------- | ----------- | | Nx | 21m 56s | N/A | | Turborepo | 25m 32s | ~16% slower | A 16% gap may sound modest, but on a 30-minute pipeline that's nearly 4 minutes saved on every run. Note that these numbers are without any cache optimization, just both tools running out of the box on the same codebase. ### Distributed CI When a single machine is no longer enough, the tools diverge significantly. On the same workspace distributed across 4 machines: | Metric | Nx Agents | Turborepo (binning) | | -------------- | -------------- | ------------------- | | Total duration | 9m 20s | 19m 18s | | Agent spread | 5m 1s - 9m 16s | 2m 50s - 18m 20s | {% tabs %} {% tabitem label="Nx Agents" %} **Nx Agents** splits work across multiple CI machines at the individual task level, dynamically balancing load based on historical timing data. The scheduler keeps all agents busy with minimal idle time. ![Nx Agents distributing work evenly across 4 machines](../../../../assets/guides/adopting-nx/nx-dte.avif) {% /tabitem %} {% tabitem label="Turborepo (binning)" %} **Turborepo** has no built-in distribution. Scaling to multiple machines requires manual "binning," where you statically assign tasks to runners and rebalance by hand as the codebase evolves. ![Turborepo binning with severely unbalanced agent utilization](../../../../assets/guides/adopting-nx/turbo-binning.avif) {% /tabitem %} {% /tabs %} Turborepo is 2x slower even on this small sample. One Turborepo agent ran for 18 minutes while another sat idle after 3 minutes. The gap grows with more machines. **The core difference is declarative vs imperative**: - With Nx you declare _what_ should run on CI, and Nx Cloud figures out the optimal distribution automatically. Nx Agents adapts automatically and gets smarter over time. 
- With Turborepo, you imperatively assign tasks to machines and maintain that mapping as the codebase evolves. To learn more about distribution on CI, read [Distribute Task Execution (Nx Agents)](/docs/features/ci-features/distribute-task-execution). Nx also provides **Atomizer**, which automatically splits slow e2e and integration test suites into per-file tasks that can run in parallel across machines. For more information, see [split e2e tasks](/docs/features/ci-features/split-e2e-tasks). ### CI throughput Raw CI speed is one part of throughput. The other is avoiding failed PRs. A failing PR means a developer has to stop, inspect logs, provide a fix, push again, and wait for another CI run. As AI agents ship more PRs, CI quickly becomes the bottleneck. Nx Cloud's [self-healing CI](/docs/features/ci-features/self-healing-ci) places a specialized AI agent on CI that analyzes failures, proposes fixes, and auto-applies verified fixes. This reduces friction on the developer side, allowing PRs to move forward without manual intervention. [Flaky task detection](/docs/features/ci-features/flaky-tasks) identifies flaky tests and re-runs them in isolation, preventing false failures from blocking your pipeline. ![Time to green data: ~1h reduction per PR, 1h 24m developer time saved, 22.6% fewer context switches](../../../../assets/guides/adopting-nx/time-to-green.avif) _The metrics shown above are extracted directly from the Nx Cloud dashboard._ Turborepo offers neither. A flaky test fails the build, and every failure requires a full human round trip. ### Observability Both tools offer visibility into your pipelines, but through different models. **Nx Cloud** provides integrated dashboards out of the box. You get detailed run reports, timing data, cache hit/miss trends, and historical performance analysis without any external setup. These insights are also queryable via the MCP server, letting AI assistants analyze CI performance conversationally. 
![Nx DTE across 12 Nx Agents - all agents have minimum 97% utilization](../../../../assets/guides/adopting-nx/dte-chart.avif) _Nx Agent utilization chart, showing even distribution across CI runners._ For deep dives into resource utilization, see [CI Resource Usage](/docs/guides/nx-cloud/ci-resource-usage). **Turborepo** exposes run metrics via experimental **OpenTelemetry (OTLP)**. This is useful if you already have a mature observability stack (like Datadog or Grafana) and want to route build metrics into it, though it requires significant manual setup and maintenance of your own collector and visualization layer. ## Cross-repo coordination Most organizations don't have a single monorepo. They have several monorepos plus standalone repos across teams. Nx Cloud's [Polygraph](/docs/enterprise/polygraph) creates a [synthetic monorepo](/docs/concepts/synthetic-monorepos): a unified dependency graph across separate repositories without moving any code. AI agents can read the cross-repo graph, coordinate changes across repos, and manage PRs across repo boundaries. Turborepo has no cross-repo coordination. ## Release management Versioning and publishing libraries in a monorepo is a complex orchestration task. You need to identify what changed, determine the next version for each package, update internal dependencies, generate changelogs, and publish to registries. **Nx provides a first-party, unified release system** via [`nx release`](/docs/features/manage-releases) that automates the entire lifecycle of versioning, changelog generation, and publishing with a single, highly configurable command. Turborepo has no built-in release solution. Teams usually end up manually configuring third-party tools like [Changesets](https://changesets.js.org/) or [Lerna](https://lerna.js.org/), adding another layer of manual setup and tool maintenance to the workspace. 
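For a sense of the `nx release` workflow, a release can be previewed before it is applied; this is a sketch, and your `nx.json` release configuration determines which projects, changelog formats, and registries are involved:

```shell
# Preview version bumps, changelog entries, and publish steps without changing anything
nx release --dry-run

# Run the release for real: version, generate changelogs, and publish
nx release
```

Running with `--dry-run` first is a common safety step, since publishing to a registry is not easily reversible.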
## Developer experience ### Project graph Both tools can visualize the dependency graph, but the implementations differ significantly at scale. Turborepo renders every node at once, which becomes unreadable past a few dozen projects. Nx provides an interactive graph that lets you filter, group, and search projects, then drill into any node for details on demand. {% tabs %} {% tabitem label="Nx (grouped)" %} ![Nx project graph with projects grouped by directory](../../../../assets/guides/adopting-nx/nx-graph-grouping.avif) {% /tabitem %} {% tabitem label="Nx (ungrouped)" %} ![Nx project graph with clean nodes and clear connections](../../../../assets/guides/adopting-nx/nx-graph.avif) {% /tabitem %} {% tabitem label="Turborepo" %} ![Turborepo graph rendering all nodes at once](../../../../assets/guides/adopting-nx/turbo-repo-graph.avif) {% /tabitem %} {% /tabs %} The Nx graph is also available inside [Nx Console](/docs/getting-started/editor-setup), so you can explore dependencies without leaving your editor. {% aside type="tip" title="Ready to migrate?" %} For step-by-step migration instructions, including configuration mapping and command equivalents, see [Migrating from Turborepo to Nx](/docs/guides/adopting-nx/from-turborepo). {% /aside %} ### Terminal UI Nx ships with a full [terminal UI](/docs/guides/tasks--caching/terminal-ui) that adapts to what you're running. For multi-task runs, it shows a task overview with filtering, pinning, and layout switching. For a single task with no dependencies, it drops into a simplified view with the output front and center. Colors, order, and indentation from your underlying tools are preserved. Turborepo offers a simpler, opt-in TUI that shows task output in a split view. ### IDE extensions [Nx Console](/docs/getting-started/editor-setup) provides extensions for VS Code and WebStorm/IntelliJ with close to 2 million installations. 
You can run tasks, explore the project graph, scaffold with generators, and inspect Nx Cloud CI runs directly from your editor. It also includes a language server that provides autocompletion in `nx.json` and `project.json` files. Turborepo provides basic LSP support for `turbo.json`. --- ## Preserving Git Histories when Migrating Projects to Nx {% aside type="note" title="Automatically import with 'nx import'" %} In Nx 19.8 we introduced `nx import`, which helps you import projects into your Nx workspace, including preserving the Git history. [Read more in the corresponding doc](/docs/guides/adopting-nx/import-project). {% /aside %} The nature of a monorepo is to swallow up standalone projects as your organization buys into the benefits of a monorepo workflow. As your monorepo consumes other projects though, it's important to ensure that git history for those projects is preserved inside of your Nx workspace. Git has some helpful tools for this, and this guide covers some of the common pitfalls and gotchas of the task. ## Merging in a standalone project To merge in another project, we'll essentially use the standard `git merge` command, but with a few lesser-known options/caveats. If your standalone project was not an Nx workspace, it's likely that your migration work will also entail moving directories to match a typical Nx workspace structure. You can find more information in the [Manual migration](/docs/guides/adopting-nx/manual) page, but when migrating an existing project, you'll want to ensure that you use [`git mv`](https://git-scm.com/docs/git-mv) when moving a file or directory to ensure that file history from the old standalone repo is not lost! In order to avoid merge conflicts later, it's best to first do the folder reorganization in the _standalone project repo_. 
For example, assuming you want the standalone app to end up at `apps/my-standalone-app` in the monorepo and your main branch is called `main`: ```shell cd my-standalone-app git checkout main git fetch git checkout -b monorepo-migration main mkdir -p apps/my-standalone-app git ls-files | sed 's!/.*!!' | uniq | xargs -i git mv {} apps/my-standalone-app ``` Check if you need to move back the `.gitignore` file to the root and/or update any paths so you don't commit previously ignored files/folders. If all is well, proceed with the commit and push. ```shell git commit -m "Move files in preparation for monorepo migration" git push --set-upstream origin monorepo-migration ``` Next, in your monorepo, we'll add a remote pointing at the repository URL where the standalone app is located: ```shell git remote add my-standalone-app <url-to-standalone-app-repo> git fetch my-standalone-app ``` Then we'll run ```shell git merge my-standalone-app/monorepo-migration --allow-unrelated-histories ``` Note that without the `--allow-unrelated-histories` option, the command would fail with the message: `fatal: refusing to merge unrelated histories`. --- ## Prepare Applications for Deployment via CI {% aside type="note" title="Using TS Solution Setup?" %} If your workspace uses TS project references (the default in Nx 20+), use the [prune workflow](/docs/technologies/node/guides/deploying-node-projects) instead. The `generatePackageJson` approach below applies to workspaces without TS Solution Setup. {% /aside %} A common approach to deploying applications is via docker containers. Some applications can be built into bundles that are environment agnostic, while others depend on OS-specific packages being installed. For these situations, having just bundled code is not enough; we also need to have a `package.json`. Nx supports the generation of the project's `package.json` by identifying all the project's dependencies. The generated `package.json` is created next to the built artifacts (usually at `dist/apps/name-of-the-app`). 
Additionally, we should generate a pruned lock file based on the generated `package.json`. This makes the installation in the container significantly faster, as we only need to install a subset of the packages. Nx offers two varieties of Webpack plugin that can be used to generate `package.json`. ## Basic plugin configuration The `@nx/webpack/plugin` plugin is compatible with a conventional webpack configuration setup, which offers a smooth integration with the Webpack CLI. It is configured in the `plugins` array in `nx.json`. ```json // nx.json { "plugins": [ { "plugin": "@nx/webpack/plugin", "options": { "buildTargetName": "build", "serveTargetName": "serve", "serveStaticTargetName": "serve-static", "previewStaticTargetName": "preview" } } ] } ``` Here, `build`, `serve`, `serve-static`, and `preview`, in conjunction with your `webpack.config.js`, are the names of the targets used to _build_, _serve_, _serve statically_, and _preview_ the application, respectively. ### NxAppWebpackPlugin The [`NxAppWebpackPlugin`](/docs/technologies/build-tools/webpack/guides/webpack-plugins#nxappwebpackplugin) plugin takes a `main` entry file and produces a bundle in the output directory as defined in `output.path`. You can also pass the `index` option if it is a web app, which will handle outputting scripts and stylesheets in the output file. To generate a `package.json` we would declare it in the plugin options. 
```js // apps/acme/app/webpack.config.js const { NxAppWebpackPlugin } = require('@nx/webpack/app-plugin'); const { join } = require('path'); module.exports = { output: { path: join(__dirname, '../../dist/apps/acme'), }, devServer: { port: 4200, }, plugins: [ new NxAppWebpackPlugin({ tsConfig: './tsconfig.app.json', compiler: 'swc', main: './src/main.tsx', index: './src/index.html', styles: ['./src/styles.css'], generatePackageJson: true, }), ], }; ``` ## Programmatic usage If you are using a custom setup that does not support the creation of a `package.json` or a lock file, you can still use Nx to generate them via the `createPackageJson` and `createLockFile` functions, which are exported from `@nx/js`: {% tabs %} {% tabitem label="Custom script" %} If you need to use a custom script to build your application, it should look similar to the following: ```javascript // scripts/create-package-json.js const { createProjectGraphAsync, readCachedProjectGraph, detectPackageManager, writeJsonFile, } = require('@nx/devkit'); const { createLockFile, createPackageJson, getLockFileName, } = require('@nx/js'); const { writeFileSync } = require('fs'); async function main() { const outputDir = 'dist'; // You can replace this with the output directory you want to use // Detect the package manager you are using (npm, yarn, pnpm, bun) const pm = detectPackageManager(); let projectGraph = readCachedProjectGraph(); if (!projectGraph) { projectGraph = await createProjectGraphAsync(); } // You can replace this with the name of the project if you want. 
const projectName = process.env.NX_TASK_TARGET_PROJECT; const packageJson = createPackageJson(projectName, projectGraph, { isProduction: true, // Used to strip any non-prod dependencies root: projectGraph.nodes[projectName].data.root, }); const lockFile = createLockFile(packageJson, projectGraph, pm); const lockFileName = getLockFileName(pm); writeJsonFile(`${outputDir}/package.json`, packageJson); writeFileSync(`${outputDir}/${lockFileName}`, lockFile, { encoding: 'utf8', }); //... Any additional steps you want to run } main(); ``` Then to run the script, update your `package.json` to include the following: ```json // package.json { "scripts": { "copy-package-json": "node scripts/create-package-json.js", "custom-build": "nx build && npm run copy-package-json" } } ``` Now you can run `npm run custom-build` to build your application and generate the `package.json` and lock file. You can replace _npm_ with _yarn_, _pnpm_, or _bun_ if you are using those package managers.
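Conceptually, `createPackageJson` walks the project graph and keeps only the packages your project actually depends on, dropping dev-only dependencies for a production build. A simplified, hypothetical sketch of that pruning step (not the real `@nx/js` implementation — `prunePackageJson` and `usedPackages` are illustrative names):

```javascript
// Hypothetical sketch: keep only the dependencies a project actually uses.
// In the real implementation, `usedPackages` would come from the project graph.
function prunePackageJson(fullPackageJson, usedPackages) {
  const pruned = { ...fullPackageJson, dependencies: {} };
  for (const [name, version] of Object.entries(
    fullPackageJson.dependencies ?? {}
  )) {
    if (usedPackages.has(name)) {
      pruned.dependencies[name] = version;
    }
  }
  // devDependencies are dropped entirely for a production build
  delete pruned.devDependencies;
  return pruned;
}

const pruned = prunePackageJson(
  {
    name: 'acme',
    dependencies: { react: '18.0.0', lodash: '4.17.21' },
    devDependencies: { jest: '29.0.0' },
  },
  new Set(['react'])
);
console.log(pruned); // { name: 'acme', dependencies: { react: '18.0.0' } }
```

Installing from such a pruned `package.json` (plus the matching pruned lock file) is what keeps container builds fast.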
{% /tabitem %} {% tabitem label="Custom executor" %} ```typescript // Custom executor import { Schema } from './schema'; import { createPackageJson, createLockFile, getLockFileName } from '@nx/js'; import { writeFileSync } from 'fs'; import { detectPackageManager, ExecutorContext, writeJsonFile, } from '@nx/devkit'; export default async function buildExecutor( options: Schema, context: ExecutorContext ) { // ...your executor code const packageManager = detectPackageManager(); const packageJson = createPackageJson( context.projectName, context.projectGraph, { root: context.root, isProduction: true, // We want to strip any non-prod dependencies } ); // do any additional manipulations to "package.json" here const lockFile = createLockFile( packageJson, context.projectGraph, packageManager ); const lockFileName = getLockFileName(packageManager); writeJsonFile(`${options.outputPath}/package.json`, packageJson); writeFileSync(`${options.outputPath}/${lockFileName}`, lockFile, { encoding: 'utf-8', }); // any subsequent executor code } ``` {% /tabitem %} {% /tabs %} {% aside type="note" %} **What about Vite?** Vite is a build tool that is great for development, and we want to make sure that it is also great for production. We are working on an `NxVitePlugin` plugin for Vite that will have parity with the `NxWebpackPlugin`. Stay tuned for updates. 
{% /aside %} --- ## Enforce Module Boundaries {% index_page_cards path="guides/enforce-module-boundaries" /%} --- ## Ban Dependencies with Certain Tags Specifying which tags a project is allowed to depend on can sometimes lead to a long list of possible options: ```jsonc { "sourceTag": "scope:client", // we actually want to say it cannot depend on `scope:admin` "onlyDependOnLibsWithTags": [ "scope:shared", "scope:utils", "scope:core", "scope:client", ], } ``` The `notDependOnLibsWithTags` property inverts this condition by explicitly specifying which tag(s) a project cannot depend on: ```jsonc { "sourceTag": "scope:client", // we accept any tag except for `scope:admin` "notDependOnLibsWithTags": ["scope:admin"], } ``` In contrast to `onlyDependOnLibsWithTags`, `notDependOnLibsWithTags` also checks the _entire dependency tree_ to make sure there are no transitive dependencies that violate the rule. You can also use a combination of these two rules to restrict certain types of projects from being imported: ```jsonc { "sourceTag": "type:react", "onlyDependOnLibsWithTags": [ "type:react", "type:utils", "type:animation", "type:model", ], // make sure no `angular` code ends up being referenced by react projects "notDependOnLibsWithTags": ["type:angular"], } ``` --- ## Ban External Imports **This constraint is only available for projects using ESLint.** You may want to constrain which external packages a project may import. For example, you may want to prevent backend projects from importing packages related to your frontend framework. You can ban these imports using the `bannedExternalImports` property in your dependency constraints configuration. A common example of this is for backend projects that use NestJS and frontend projects that use Angular. Both frameworks contain a class named `Injectable`. It's very easy for a developer to import the wrong one by mistake, especially when using auto-import in an IDE.
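For instance, both of the following auto-import suggestions can appear side by side in a full-stack workspace, and only one is correct for a frontend project (illustrative snippet, not tied to any particular workspace):

```typescript
// Frontend code should use the Angular decorator:
import { Injectable } from '@angular/core';

// ...but an IDE auto-import can just as easily pick the NestJS one:
// import { Injectable } from '@nestjs/common';

@Injectable({ providedIn: 'root' })
export class UserService {}
```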
To prevent this, add tags to define the type of project to distinguish between backend and frontend projects. Each tag should define its own list of banned external imports. {% tabs syncKey="eslint-config-preference" %} {% tabitem label="Flat Config" %} ```javascript // eslint.config.mjs import nx from '@nx/eslint-plugin'; export default [ ...nx.configs['flat/base'], ...nx.configs['flat/typescript'], ...nx.configs['flat/javascript'], { files: ['**/*.ts', '**/*.tsx', '**/*.js', '**/*.jsx'], rules: { '@nx/enforce-module-boundaries': [ 'error', { allow: [], // update depConstraints based on your tags depConstraints: [ // projects tagged with "frontend" can't import from "@nestjs/common" { sourceTag: 'frontend', bannedExternalImports: ['@nestjs/common'], }, // projects tagged with "backend" can't import from "@angular/core" { sourceTag: 'backend', bannedExternalImports: ['@angular/core'], }, ], }, ], }, }, ]; ``` {% /tabitem %} {% tabitem label="Legacy (.eslintrc.json)" %} ```jsonc // .eslintrc.json { // ... more ESLint config here // @nx/enforce-module-boundaries should already exist at the top-level of your config "@nx/enforce-module-boundaries": [ "error", { "allow": [], // update depConstraints based on your tags "depConstraints": [ // projects tagged with "frontend" can't import from "@nestjs/common" { "sourceTag": "frontend", "bannedExternalImports": ["@nestjs/common"], }, // projects tagged with "backend" can't import from "@angular/core" { "sourceTag": "backend", "bannedExternalImports": ["@angular/core"], }, ], }, ], // ... more ESLint config here } ``` {% /tabitem %} {% /tabs %} Another common example is ensuring that util libraries stay framework-free by banning imports from these frameworks. You can use the wildcard `*` to match multiple packages, e.g. `react*` matches `react`, but also `react-dom`, `react-native`, etc. You can also have multiple wildcards, e.g. `*react*` matches any package with the word `react` in its name.
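The wildcard semantics can be illustrated with a small glob-to-regex sketch (a hypothetical model of the matching behavior, not the rule's actual implementation):

```javascript
// Escape regex metacharacters so everything except '*' is matched literally.
function escapeRegExp(s) {
  return s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}

// '*' matches any sequence of characters; the rest of the pattern is literal.
function matchesWildcard(pattern, packageName) {
  const regex = new RegExp(
    '^' + pattern.split('*').map(escapeRegExp).join('.*') + '$'
  );
  return regex.test(packageName);
}

// 'react*' matches packages that start with "react"
console.log(matchesWildcard('react*', 'react-dom')); // true
console.log(matchesWildcard('react*', 'preact')); // false
// '*react*' matches any package containing "react"
console.log(matchesWildcard('*react*', 'preact')); // true
```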
A workspace using React would have a configuration like this. {% tabs syncKey="eslint-config-preference" %} {% tabitem label="Flat Config" %} ```javascript // eslint.config.mjs import nx from '@nx/eslint-plugin'; export default [ ...nx.configs['flat/base'], ...nx.configs['flat/typescript'], ...nx.configs['flat/javascript'], { files: ['**/*.ts', '**/*.tsx', '**/*.js', '**/*.jsx'], rules: { '@nx/enforce-module-boundaries': [ 'error', { allow: [], // update depConstraints based on your tags depConstraints: [ // projects tagged with "type:util" can't import from "react" or related projects { sourceTag: 'type:util', bannedExternalImports: ['*react*'], }, ], }, ], }, }, ]; ``` {% /tabitem %} {% tabitem label="Legacy (.eslintrc.json)" %} ```jsonc // .eslintrc.json { // ... more ESLint config here // @nx/enforce-module-boundaries should already exist at the top-level of your config "@nx/enforce-module-boundaries": [ "error", { "allow": [], // update depConstraints based on your tags "depConstraints": [ // projects tagged with "type:util" can't import from "react" or related projects { "sourceTag": "type:util", "bannedExternalImports": ["*react*"], }, ], }, ], // ... more ESLint config here } ``` {% /tabitem %} {% /tabs %} ## Allowlist external imports with `allowedExternalImports` If you need a more restrictive approach, you can use the `allowedExternalImports` option to ensure that a project only imports from a specific set of packages. This is useful if you want to enforce separation of concerns _(e.g. keeping your domain logic clean from infrastructure concerns, or ui libraries clean from data access concerns)_ or keep some parts of your codebase framework-free or library-free. 
{% tabs syncKey="eslint-config-preference" %} {% tabitem label="Flat Config" %} ```javascript // eslint.config.mjs import nx from '@nx/eslint-plugin'; export default [ ...nx.configs['flat/base'], ...nx.configs['flat/typescript'], ...nx.configs['flat/javascript'], { files: ['**/*.ts', '**/*.tsx', '**/*.js', '**/*.jsx'], rules: { '@nx/enforce-module-boundaries': [ 'error', { allow: [], // update depConstraints based on your tags depConstraints: [ // limiting the dependencies of util libraries to the bare minimum // projects tagged with "type:util" can only import from "date-fns" { sourceTag: 'type:util', allowedExternalImports: ['date-fns'], }, // ui libraries clean from data access concerns // projects tagged with "type:ui" can only import packages matching "@angular/*" except "@angular/common/http" { sourceTag: 'type:ui', allowedExternalImports: ['@angular/*'], bannedExternalImports: ['@angular/common/http'], }, // keeping the domain logic clean from infrastructure concerns // projects tagged with "type:core" can't import any external packages. { sourceTag: 'type:core', allowedExternalImports: [], }, ], }, ], }, }, ]; ``` {% /tabitem %} {% tabitem label="Legacy (.eslintrc.json)" %} ```jsonc // .eslintrc.json { // ... 
more ESLint config here // @nx/enforce-module-boundaries should already exist at the top-level of your config "@nx/enforce-module-boundaries": [ "error", { "allow": [], // update depConstraints based on your tags "depConstraints": [ // limiting the dependencies of util libraries to the bare minimum // projects tagged with "type:util" can only import from "date-fns" { "sourceTag": "type:util", "allowedExternalImports": ["date-fns"], }, // ui libraries clean from data access concerns // projects tagged with "type:ui" can only import packages matching "@angular/*" except "@angular/common/http" { "sourceTag": "type:ui", "allowedExternalImports": ["@angular/*"], "bannedExternalImports": ["@angular/common/http"], }, // keeping the domain logic clean from infrastructure concerns // projects tagged with "type:core" can't import any external packages. { "sourceTag": "type:core", "allowedExternalImports": [], }, ], }, ], } ``` {% /tabitem %} {% /tabs %} --- ## Tag in Multiple Dimensions The example listed in [Enforce Module Boundaries](/docs/features/enforce-module-boundaries#tags) shows using a single dimension: `scope`. It's the most commonly used one, but you may find other dimensions useful. You can define which projects contain components, state management code, and features, so you can, for instance, disallow projects containing presentational UI components from depending on state management code. You can define which projects are experimental and which are stable, so stable applications cannot depend on experimental projects, etc. You can define which projects have server-side code and which have client-side code to make sure your node app doesn't bundle in your frontend framework. Let's consider our previous three scopes - `scope:client`, `scope:admin`, `scope:shared`. By using just a single dimension, our `client-e2e` application would be able to import the `client` application or `client-feature-main`.
This is likely not something we want to allow, as it uses a framework that our E2E project doesn't have. Let's add another dimension - `type`. Some of our projects are applications, some are UI features, and some are just plain helper libraries. Let's define four new tags: `type:app`, `type:feature`, `type:ui`, and `type:util`. Our project configurations might now look like this: ```jsonc {% title="client" %} { // ... more project configuration here "tags": ["scope:client", "type:app"], } ``` ```jsonc {% title="client-e2e" %} { // ... more project configuration here "tags": ["scope:client", "type:app"], "implicitDependencies": ["client"], } ``` ```jsonc {% title="admin" %} { // ... more project configuration here "tags": ["scope:admin", "type:app"], } ``` ```jsonc {% title="admin-e2e" %} { // ... more project configuration here "tags": ["scope:admin", "type:app"], "implicitDependencies": ["admin"], } ``` ```jsonc {% title="client-feature-main" %} { // ... more project configuration here "tags": ["scope:client", "type:feature"], } ``` ```jsonc {% title="admin-feature-permissions" %} { // ... more project configuration here "tags": ["scope:admin", "type:feature"], } ``` ```jsonc {% title="components-shared" %} { // ... more project configuration here "tags": ["scope:shared", "type:ui"], } ``` ```jsonc {% title="utils" %} { // ...
more project configuration here "tags": ["scope:shared", "type:util"], } ``` We can now restrict projects within the same group to depend on each other based on the type: - `app` can only depend on `feature`, `ui` or `util`, but not other apps - `feature` cannot depend on app or another feature - `ui` can only depend on other `ui` - everyone can depend on `util` including `util` itself {% tabs syncKey="eslint-config-preference" %} {% tabitem label="Flat Config" %} ```javascript // eslint.config.mjs import nx from '@nx/eslint-plugin'; export default [ ...nx.configs['flat/base'], ...nx.configs['flat/typescript'], ...nx.configs['flat/javascript'], { files: ['**/*.ts', '**/*.tsx', '**/*.js', '**/*.jsx'], rules: { '@nx/enforce-module-boundaries': [ 'error', { allow: [], // update depConstraints based on your tags depConstraints: [ { sourceTag: 'scope:shared', onlyDependOnLibsWithTags: ['scope:shared'], }, { sourceTag: 'scope:admin', onlyDependOnLibsWithTags: ['scope:shared', 'scope:admin'], }, { sourceTag: 'scope:client', onlyDependOnLibsWithTags: ['scope:shared', 'scope:client'], }, { sourceTag: 'type:app', onlyDependOnLibsWithTags: [ 'type:feature', 'type:ui', 'type:util', ], }, { sourceTag: 'type:feature', onlyDependOnLibsWithTags: ['type:ui', 'type:util'], }, { sourceTag: 'type:ui', onlyDependOnLibsWithTags: ['type:ui', 'type:util'], }, { sourceTag: 'type:util', onlyDependOnLibsWithTags: ['type:util'], }, ], }, ], }, }, ]; ``` {% /tabitem %} {% tabitem label="Legacy (.eslintrc.json)" %} ```jsonc // .eslintrc.json { // ... 
more ESLint config here // @nx/enforce-module-boundaries should already exist at the top-level of your config "@nx/enforce-module-boundaries": [ "error", { "allow": [], // update depConstraints based on your tags "depConstraints": [ { "sourceTag": "scope:shared", "onlyDependOnLibsWithTags": ["scope:shared"], }, { "sourceTag": "scope:admin", "onlyDependOnLibsWithTags": ["scope:shared", "scope:admin"], }, { "sourceTag": "scope:client", "onlyDependOnLibsWithTags": ["scope:shared", "scope:client"], }, { "sourceTag": "type:app", "onlyDependOnLibsWithTags": ["type:feature", "type:ui", "type:util"], }, { "sourceTag": "type:feature", "onlyDependOnLibsWithTags": ["type:ui", "type:util"], }, { "sourceTag": "type:ui", "onlyDependOnLibsWithTags": ["type:ui", "type:util"], }, { "sourceTag": "type:util", "onlyDependOnLibsWithTags": ["type:util"], }, ], }, ], // ... more ESLint config here } ``` {% /tabitem %} {% /tabs %} There are no limits to the number of tags, but as you add more tags the complexity of your dependency constraints rises exponentially. It's always good to draw a diagram and carefully plan the boundaries. ## Matching multiple source tags Matching just a single source tag is sometimes not enough for solving complex restrictions. To avoid creating ad-hoc tags that are only meant for specific constraints, you can also combine multiple tags with `allSourceTags`. 
Each tag in the array must be matched for a constraint to be applied: {% tabs syncKey="eslint-config-preference" %} {% tabitem label="Flat Config" %} ```javascript // eslint.config.mjs import nx from '@nx/eslint-plugin'; export default [ ...nx.configs['flat/base'], ...nx.configs['flat/typescript'], ...nx.configs['flat/javascript'], { files: ['**/*.ts', '**/*.tsx', '**/*.js', '**/*.jsx'], rules: { '@nx/enforce-module-boundaries': [ 'error', { allow: [], // update depConstraints based on your tags depConstraints: [ { // this constraint applies to all "admin" projects sourceTag: 'scope:admin', onlyDependOnLibsWithTags: ['scope:shared', 'scope:admin'], }, { sourceTag: 'type:ui', onlyDependOnLibsWithTags: ['type:ui', 'type:util'], }, { // we don't want our admin ui components to depend on anything except utilities, // and we also want to ban router imports allSourceTags: ['scope:admin', 'type:ui'], onlyDependOnLibsWithTags: ['type:util'], bannedExternalImports: ['*router*'], }, ], }, ], }, }, ]; ``` {% /tabitem %} {% tabitem label="Legacy (.eslintrc.json)" %} ```jsonc // .eslintrc.json { // ... more ESLint config here // @nx/enforce-module-boundaries should already exist at the top-level of your config "@nx/enforce-module-boundaries": [ "error", { "allow": [], // update depConstraints based on your tags "depConstraints": [ { // this constraint applies to all "admin" projects "sourceTag": "scope:admin", "onlyDependOnLibsWithTags": ["scope:shared", "scope:admin"], }, { "sourceTag": "type:ui", "onlyDependOnLibsWithTags": ["type:ui", "type:util"], }, { // we don't want our admin ui components to depend on anything except utilities, // and we also want to ban router imports "allSourceTags": ["scope:admin", "type:ui"], "onlyDependOnLibsWithTags": ["type:util"], "bannedExternalImports": ["*router*"], }, ], }, ], // ... 
more ESLint config here } ``` {% /tabitem %} {% /tabs %} ## Further reading - [Article: Taming Code Organization with Module Boundaries in Nx](https://nx.dev/blog/mastering-the-project-boundaries-in-nx) --- ## Tags Allow List Sometimes there are specific situations where you want to break the tag rules you've set up for project dependencies. Each project can set an `allow` property in the project configuration to override the tagging rules that have been set up. - `"allow": ["@myorg/mylib/testing"]` allows importing `'@myorg/mylib/testing'`. - `"allow": ["@myorg/mylib/*"]` allows importing `'@myorg/mylib/a'` but not `'@myorg/mylib/a/b'`. - `"allow": ["@myorg/mylib/**"]` allows importing `'@myorg/mylib/a'` and `'@myorg/mylib/a/b'`. - `"allow": ["@myorg/**/testing"]` allows importing `'@myorg/mylib/testing'` and `'@myorg/nested/lib/testing'`. --- ## Guides {% index_page_cards path="guides/installation" /%} --- ## Install Nx in a Non-JavaScript Repository Nx can manage its own installation without requiring a `package.json` file or a `node_modules` folder. This type of installation is useful for repositories that may not contain any JavaScript or TypeScript (e.g. .NET or Java-based workspaces that want to leverage Nx features). In this setup, the Nx CLI is contained entirely within a `.nx` folder. ## Globally install Nx First, globally install Nx using Homebrew (macOS and Linux) or with a manually installed version of Node (any OS).
{% tabs syncKey="install-type" %} {% tabitem label="Homebrew" %} If you have [Homebrew installed](https://brew.sh/), you can install Nx globally with this command: ```shell brew install nx ``` {% /tabitem %} {% tabitem label="Node" %} If you have [Node installed](https://nodejs.org/en/download), you can install Nx globally with this command: ```shell npm install --global nx ``` {% /tabitem %} {% /tabs %} ## Usage You can install Nx in the `.nx/installation` directory of your repository by running `nx init` in a directory without a `package.json` file. ```shell nx init ``` When Nx is installed in `.nx`, you can run Nx via a global Nx installation or via the `nx` and `nx.bat` scripts that were created. In either case, the wrapper (`.nx/nxw.js`) will be invoked and ensure that the current workspace is up to date prior to invoking Nx. {% tabs %} {% tabitem label="Global Install" %} ```shell nx build my-project nx generate application nx graph ``` {% /tabitem %} {% tabitem label="nx shell script" %} ```shell ./nx build my-project ./nx generate application ./nx graph ``` {% /tabitem %} {% tabitem label="nx.bat" %} ```shell ./nx.bat build my-project ./nx.bat generate application ./nx.bat graph ``` {% /tabitem %} {% /tabs %} --- ## Update Your Global Nx Installation There are some cases where an issue could arise when using an outdated global installation of Nx. If the structure of your Nx workspace no longer matches what the globally installed copy of Nx expects, it may fail to hand off to your local installation properly and instead error. This commonly results in errors such as: - `Could not find Nx modules in this workspace.` - `The current directory isn't part of an Nx workspace.` If you find yourself in this position, you will need to update your global installation of Nx. In most cases, you can update a globally installed npm package by rerunning the command you used to install it.
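For example, if Nx was originally installed globally with npm, rerunning the install with the `latest` tag refreshes it:

```shell
npm install --global nx@latest
```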
If you cannot remember which package manager you installed Nx globally with, or are still encountering issues, you can locate other installations of Nx with these commands: {% tabs syncKey="install-type" %} {% tabitem label="npm" %} ```shell npm list --global nx ``` {% /tabitem %} {% tabitem label="yarn" %} **yarn 2+** ```shell yarn dlx list nx ``` **yarn 1.x** ```shell yarn global list nx ``` {% /tabitem %} {% tabitem label="pnpm" %} ```shell pnpm list --global nx ``` {% /tabitem %} {% /tabs %} You can then remove the extra global installations by running the following commands for the duplicate installations: {% tabs syncKey="install-type" %} {% tabitem label="npm" %} ```shell npm rm --global nx ``` {% /tabitem %} {% tabitem label="yarn" %} ```shell yarn global remove nx ``` {% /tabitem %} {% tabitem label="pnpm" %} ```shell pnpm rm --global nx ``` {% /tabitem %} {% /tabs %} Finally, to complete your global installation update, simply reinstall it as described [above](#update-your-global-nx-installation). --- ## Nx Cloud Guides {% index_page_cards path="guides/nx-cloud" /%} --- ## Nx CLI and CI Access Tokens {% youtube src="https://youtu.be/vBokLJ_F8qs" title="Configure CI access tokens" /%} The permissions and membership define what developers can access on [nx.app](https://cloud.nx.app?utm_source=nx.dev&utm_medium=docs&utm_campaign=nx-cloud-security), but they don't affect what happens when you run Nx commands in CI. To manage that, you need to provision CI access tokens in your workspace settings, under the `Access Control` tab. Learn more about [cache security best practices](/docs/concepts/ci-concepts/cache-security). ![Access Control Settings Page](../../../../assets/nx-cloud/access-control-settings.avif) ## Access types {% aside type="caution" title="Use Caution With Read-Write Tokens" %} The `read-write` tokens allow full write access to your remote cache. They should only be used in trusted environments.
{% /aside %} There are currently two types of CI access tokens for Nx Cloud's runner that you can use with your workspace. Both support distributed task execution and allow Nx Cloud to store metadata about runs. - `read-only` - `read-write` ### Read only access The `read-only` access tokens can only read from the global remote cache. Task results produced with this type of access token will be stored in an isolated remote cache accessible _only_ by that specific branch in a CI context, and cannot influence the global shared cache. The isolated remote cache produced with a `read-only` token is accessible to all machines or agents in the same CI execution, enabling cache sharing during distributed task execution. ### Read & write access The `read-write` access tokens allow task results to be stored in the remote cache for other machines or CI pipelines to download and replay. This access level should only be used in trusted environments such as protected branches within your CI pipeline. ## Setting CI access tokens You can configure an access token in CI by setting the `NX_CLOUD_ACCESS_TOKEN` environment variable. The `NX_CLOUD_ACCESS_TOKEN` takes precedence over any authentication method in your `nx.json`. We recommend setting up a `read-write` token for your protected branches in CI and a `read-only` token for unprotected branches. You can leverage your CI provider's environment variable management to accomplish this. ### Azure DevOps Azure DevOps provides various [mechanisms to limit access to secrets](https://learn.microsoft.com/en-us/azure/devops/pipelines/security/secrets?view=azure-devops#limit-access-to-secret-variables). We'll be using _Variable groups_ in this process, but you can achieve the same result leveraging [Azure Key Vault](https://learn.microsoft.com/en-us/azure/key-vault/general/overview). 1. In your project, navigate to Pipelines > Library. ![Variable group settings page](../../../../assets/nx-cloud/ado-library-start.avif) 2.
Create a new _Variable group_ called _protected_. - If you already have a variable group for protected environments, we recommend reusing that variable group. 3. Add the `NX_CLOUD_ACCESS_TOKEN` environment variable with the `read-write` token from Nx Cloud. ![create protected variable group](../../../../assets/nx-cloud/ado-protected-var-group.avif) 4. In _Pipeline permissions_, add your current pipeline configuration. ![variable group pipeline permission settings](../../../../assets/nx-cloud/ado-pipeline-permission.avif) 5. In _Approvals and checks_, add a new _Branch control_ check. ![variable group branch control settings](../../../../assets/nx-cloud/ado-add-branch-control.avif) 6. Create the _Branch control_ check, allowing only your protected branches and checking the _Verify branch protection_ option. ![variable group branch control settings](../../../../assets/nx-cloud/ado-protected-branch.avif) 7. Create another variable group called _unprotected_. 8. Add the `NX_CLOUD_ACCESS_TOKEN` environment variable with the `read-only` token from Nx Cloud. 9. In _Pipeline permissions_, add your current pipeline configuration. 10. In _Approvals and checks_, add a new _Branch control_ check with the `*` wildcard for branches, leaving _Verify branch protection_ unchecked. ![unprotected variable group settings](../../../../assets/nx-cloud/ado-unprotected-branch.avif) 11. Now you should see two _Variable groups_, one for _protected_ and one for _unprotected_ usage. ![completed variable group setup](../../../../assets/nx-cloud/ado-library-end.avif) 12. Update your pipeline to include the two variable groups, with conditional access for the _protected_ variable group. Example usage: ```yaml {% meta="{3-5}" %} # azure-pipelines.yml variables: - group: unprotected - ${{ if eq(variables['Build.SourceBranchName'], 'main') }}: - group: protected ``` {% aside type="tip" title="Can't someone change the variable group?"
%} Since we use the _Verify branch protection_ option, CI can only read the variable when running on a protected branch. If a developer tries to edit the pipeline to use the _protected_ variable group, the pipeline will error out since the permissions require running on a protected branch. Take caution though: if you allow team members direct write access to a protected branch, they could modify the pipeline to write to the Nx cache without a code review first. {% /aside %} ### BitBucket Cloud BitBucket Cloud supports setting environment variables per environment, called _Deployment variables_. You can read the [official BitBucket Pipelines documentation](https://support.atlassian.com/bitbucket-cloud/docs/variables-and-secrets/#Deployment-variables) for more details. 1. In your repository, navigate to the _Repository settings_ > _Deployment_. 2. Select an environment you have configured for protected branches, or create a new one and protect your primary branches. - Note: selecting branch protection rules is a premium feature of BitBucket Cloud. ![Use deployments variables to provide protected environment variable access](../../../../assets/nx-cloud/bitbucket-deployment-env.avif) 3. Set the environment variable `NX_CLOUD_ACCESS_TOKEN` with the `read-write` token from Nx Cloud. 4. Navigate to the _Repository settings_ > _Repository variables_ tab and set the variable `NX_CLOUD_ACCESS_TOKEN` with the `read-only` token from Nx Cloud. ![add read-only Nx Cloud access token to bitbucket](../../../../assets/nx-cloud/bitbucket-repo-vars.avif) 5. Update the `bitbucket-pipelines.yml` file to include the deployment name mentioned in step 2. Example usage: ```yaml {% meta="{7}" %} # bitbucket-pipelines.yml pipelines: branches: main: - step: name: 'main checks' deployment: Production ... ``` ### CircleCI CircleCI allows creating _contexts_ and restricting them based on various rules.
You can read the [official CircleCI documentation](https://circleci.com/docs/contexts/#restrict-a-context) for more details. 1. In your organization, navigate to _Organization settings_ > _Contexts_ and create a new context. - If you already have a context for protected environments, we recommend reusing that context. ![create a new context for protected environments](../../../../assets/nx-cloud/circle-new-context.avif) 2. Click on _Add Expression Restriction_ to restrict the context to protected branches only, e.g., `pipeline.git.branch == "main"` for just the `main` branch. ![restrict context to protected branches](../../../../assets/nx-cloud/circle-expression-restriction.avif) 3. Click on _Add Environment Variable_ and add the `NX_CLOUD_ACCESS_TOKEN` environment variable with the `read-write` token from Nx Cloud. 4. Back on the organization home page, navigate to your projects, then view the pipeline settings. 5. Navigate to _Environment Variables_, click _Add Environment Variable_, and add the `NX_CLOUD_ACCESS_TOKEN` environment variable with the `read-only` token from Nx Cloud. ![add read-only Nx Cloud access token to circleci](../../../../assets/nx-cloud/circle-new-env-var-token.avif) 6. Update your pipeline so the steps that should write to the Nx cache use the correct contexts. Example usage: ```yaml {% meta="{11-20}" %} # .circleci/config.yml jobs: run-tests-protected: - ... run-tests-prs: - ... workflows: my-workflow: jobs: - run-tests-protected: context: - protected-branches filters: branches: only: main - run-tests-prs: filters: branches: ignore: main ``` ### GitHub Actions GitHub allows specifying different secrets for each environment, where an environment can be restricted to specific branches. You can read the [official GitHub Actions documentation](https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions#creating-secrets-for-an-environment) for more details. 1.
In your repository, navigate to the Settings tab. 2. Click on "Environments" and create an environment for your protected branches. - Typically, organizations already have some kind of 'release' or 'protected' environment that can be leveraged. - If you do not have any protected branches, it's recommended to make at least your _default_ branch (i.e., `main`/`master`) a protected branch. 3. Add a restriction for how the environment will be applied, and apply it to all protected branches. ![Select protected branches for the environment restriction configuration](../../../../assets/nx-cloud/github-select-protected-branches.avif) 4. Add the `read-write` access token with the name `NX_CLOUD_ACCESS_TOKEN` to your environment. 5. Click the _Secrets and variables_ > _Actions_ tab in the sidebar. 6. Add the `read-only` access token with the name `NX_CLOUD_ACCESS_TOKEN` to the repository secrets. 7. Now you should see two secrets: one as part of the protected environment and the other among the default repository secrets. ![overview of GitHub Action secret configuration settings with environments set](../../../../assets/nx-cloud/github-secrets-settings.avif) Example usage: ```yaml {% meta="{4-5}" %} # .github/workflows/ci.yml name: CI env: NX_CLOUD_ACCESS_TOKEN: ${{ secrets.NX_CLOUD_ACCESS_TOKEN }} jobs: main: runs-on: ubuntu-latest steps: ... ``` ### GitLab GitLab allows creating variables scoped to specific environments. You can read the [official GitLab documentation](https://docs.gitlab.com/ci/environments/#limit-the-environment-scope-of-a-cicd-variable) for more details. 1. In your project, navigate to _Operate_ > _Environments_ and create a new environment. You do not need to fill out the External Url or GitLab agent. - Most projects already have a production/protected environment, so we recommend using that one if it's already defined. ![define gitlab environment for protected branches](../../../../assets/nx-cloud/gitlab-new-environment.avif) 2.
In your project, navigate to the _Settings_ > _CI/CD_ tab and expand the _Variables_ section. 3. Click on _Add variable_ and fill in the following information: - Type: _Variable_ - Environments: _All_ - Visibility: _Masked and hidden_ - Flags: uncheck _Protected variable_ - Description: "read-only token for nx-cloud" - Key: `NX_CLOUD_ACCESS_TOKEN` - Value: Your `read-only` token from Nx Cloud 4. Click _Add variable_. ![add read-only Nx Cloud access token to gitlab](../../../../assets/nx-cloud/gitlab-variable-settings-readonly.avif) 5. Click on _Add variable_ again and fill in the following information: - Type: _Variable_ - Environments: Your protected environment created in step 1 - Visibility: _Masked and hidden_ - Flags: check _Protected variable_ - Description: "read-write token for nx-cloud" - Key: `NX_CLOUD_ACCESS_TOKEN` - Value: Your `read-write` token from Nx Cloud 6. Click _Add variable_. ![add read-write Nx Cloud access token to gitlab](../../../../assets/nx-cloud/gitlab-variable-settings-readwrite.avif) 7. Now you should see two variables: one protected and tagged to the environment, the other not. ![GitLab project variable configuration screen](../../../../assets/nx-cloud/gitlab-variable-setting.avif) 8. Update your pipeline so that the steps that should write to the Nx cache use the correct environments. Example usage: ```yaml {% meta="{3-4}" %} # .gitlab-ci.yml : environment: name: ``` {% aside type="tip" title="Can't someone change the step environment?" %} Since we use the _Protected variable_ flag, CI can only read the variable when running on a protected branch. If a developer edits the steps so that a PR runs in the environment with the `read-write` token, the token will still not be populated in CI, since their branch is not marked as protected.
Take caution, though: if you allow team members direct write access to a protected branch, they could modify the steps to write to the Nx cache without a code review first. {% /aside %} ### Jenkins Jenkins configuration can be quite extensive, making each Jenkins instance unique. Because of this, we can only provide a minimal viable approach; there may be multiple ways to provide scoped access tokens to your pipelines. The goal is to create two areas within Jenkins, where one is _protected_ and the other is _unprotected_. These map directly to which of your branches should have read/write versus read-only permissions. We recommend treating branches that developers cannot push to directly and that require a code review to merge, as _protected_, and the rest as _unprotected_. 1. Minimally, this can be achieved via the following Jenkins plugins: - [Folders](https://plugins.jenkins.io/cloudbees-folder/), [Credentials](https://plugins.jenkins.io/credentials/), [Credentials Binding](https://plugins.jenkins.io/credentials-binding/) 2. Create folders for the _unprotected_ and _protected_ pipelines. - The names can be anything that makes sense for your organization, such as _releases_ or _PRs_. 3. Go into the _unprotected_ folder and create a credential for `NX_CLOUD_ACCESS_TOKEN` with the `read-only` token from Nx Cloud. 4. Go into the _protected_ folder and create a credential for `NX_CLOUD_ACCESS_TOKEN` with the `read-write` token from Nx Cloud. 5. Use the credential inside your pipeline `Jenkinsfile` with the Credential Binding plugin.
Example usage: ```groovy {% meta="{7-8}" %} // Jenkinsfile pipeline { agent any stages { stage('Build') { steps { withCredentials([string(credentialsId: 'NX_CLOUD_ACCESS_TOKEN', variable: 'NX_CLOUD_ACCESS_TOKEN')]) { sh 'echo "Nx Cloud access token is now set in this context"' } } } } } ``` ### Legacy methods of setting CI access tokens #### Using CI access tokens in nx.json We **do not recommend** committing an access token to your repository, but older versions of Nx do support this. If you open your `nx.json`, you may see something like this: {% tabs %} {% tabitem label="Nx >= 17" %} ```json { "nxCloudAccessToken": "SOMETOKEN" } ``` {% /tabitem %} {% tabitem label="Nx < 17" %} ```json { "tasksRunnerOptions": { "default": { "runner": "nx-cloud", "options": { "accessToken": "SOMETOKEN" } } } } ``` {% /tabitem %} {% /tabs %} {% aside type="caution" title="Nx Cloud authentication is changing" %} From Nx 19.7, new workspaces are connected to Nx Cloud with a property called `nxCloudId` instead, and we recommend developers use [`nx login`](/docs/reference/nx-cloud-cli#npx-nxcloud-login) to provision their own local [personal access tokens](/docs/guides/nx-cloud/personal-access-tokens) for user-based authentication. {% /aside %} #### Using `nx-cloud.env` You can set an environment variable locally via the `nx-cloud.env` file. The Nx Cloud CLI will look in this file to load custom configuration like `NX_CLOUD_ACCESS_TOKEN`. These environment variables take precedence over the configuration in `nx.json`. --- ## Track CI Resource Usage Nx Cloud tracks CPU and memory usage for each task in your CI pipeline. Use this data to find resource bottlenecks, debug out-of-memory errors, and pick the right agent size for your workload. {% aside type="note" title="Requirements" %} Requires Nx 22.1 or higher.
The CI resource usage feature with Nx Cloud requires an [Enterprise plan](https://nx.dev/enterprise?utm_source=nx.dev&utm_medium=documentation-guide&utm_campaign=nx-cloud-task-metrics). {% /aside %} ## Resource usage with Nx Agents With [Nx Agents](/docs/features/ci-features/distribute-task-execution), resource metrics are collected automatically. You can view this data in the Nx Cloud dashboard for any CI pipeline execution. ### Viewing the analysis summary Open any CI pipeline execution in Nx Cloud and go to the analysis section. You'll see a list of agents used for the run, along with: - Average and maximum CPU usage - Average and maximum memory usage - Machine specs for that resource class This gives you a quick look at how resources were used across all agents. ![Resource usage summary showing agents with CPU and memory stats](../../../../assets/guides/nx-cloud/agent-resource-usage-table.png) ### Viewing usage details Click on any agent to see a breakdown of resource usage over time. The detail view shows: - Memory usage by process - CPU usage by process - Resource consumption for each task - Nx CLI overhead This view helps you find exactly which task is using the most resources, not just that "something" in your pipeline is the problem. 
![Resource usage details showing memory and CPU by process](../../../../assets/guides/nx-cloud/resource-chart-details.png) ### Using the detail view The detail view has a few features to help you dig into resource usage: - **Legend**: Click items in the legend to focus on specific tasks or processes ![Using the legend to focus on specific tasks](../../../../assets/guides/nx-cloud/resource-chart-legend.png) - **Timeline scrubber**: Use the scrubber at the bottom to jump to specific points in time or zoom in on peak usage ![Timeline scrubber for navigating resource usage over time](../../../../assets/guides/nx-cloud/resource-chart-scrubber.jpg) - **View modes**: Switch between "stacked" view (total usage at any time) and "individual" view (each process separately) ![Stacked view showing total resource usage](../../../../assets/guides/nx-cloud/resource-stacked-chart-view.png) - **CSV export**: Download the raw data if you need to dig into sub-process details ## Common use cases - **Finding memory-hungry tasks**: Figure out which project eats the most memory when running tasks in parallel. You can then run just that project with lower parallelism instead of slowing down everything. - **Spotting misconfigured tooling**: See when a bundler or build tool is pulling in more files than it should. - **Debugging E2E bottlenecks**: Find out if the slow part is the tests themselves or something in the dependency chain. - **Comparing before and after upgrades**: Check if a dependency upgrade caused a spike in resource usage. - **Detecting memory leaks**: Look for tasks where memory keeps climbing over time. - **Picking the right resource class**: Figure out the right agent size when moving to Nx Agents from GitHub Actions or other CI providers. ## Manual metrics upload If you're running your own CI runners instead of Nx Agents, you can still collect resource metrics and upload them to Nx Cloud. 
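At a high level, the collection side can be sketched in shell; the directory and archive names here are illustrative, and the `nx` invocation is left as a commented placeholder:

```shell
# Write metrics to a custom directory (illustrative path) rather than the default
export NX_CLOUD_METRICS_DIRECTORY="$PWD/ci-metrics"
mkdir -p "$NX_CLOUD_METRICS_DIRECTORY"

# Run your Nx tasks here, e.g.:
# npx nx affected -t build test lint

# Archive the directory so your CI provider can store it as an artifact
tar -czf nx-metrics.tgz -C "$PWD" ci-metrics
```

The archive can then be downloaded after the run and uploaded in the Nx Cloud analysis screen.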
### How it works Nx writes resource metrics to a local directory during task execution. To view this data in Nx Cloud: 1. Save the metrics directory as a CI artifact 2. Download the artifact after the run finishes 3. Upload the metrics file in the Nx Cloud analysis screen ### Configuration Metrics collection is currently on by default for enterprise users with Nx version 22.1 or higher. To disable it, set the `NX_CLOUD_DISABLE_METRICS_COLLECTION` environment variable: ```shell export NX_CLOUD_DISABLE_METRICS_COLLECTION=true ``` By default, metrics are written to the local Nx cache directory (`.nx/cache/metrics`). You can change this directory by setting the `NX_CLOUD_METRICS_DIRECTORY` environment variable. ```shell export NX_CLOUD_METRICS_DIRECTORY=/path/to/metrics ``` ### Saving metrics as CI artifacts Set up your CI to save the metrics directory as an artifact so you can download it later. The following examples assume that the default cache directory (`.nx/cache/metrics`) is used. If you override the metrics directory, adjust the paths in the artifact upload step accordingly.
{% tabs syncKey="ci-provider" %} {% tabitem label="GitHub Actions" %} ```yaml # .github/workflows/ci.yml jobs: build: runs-on: ubuntu-latest steps: - uses: actions/checkout@v4 - uses: actions/setup-node@v4 - run: npm ci - run: npx nx affected -t build test lint - name: Upload metrics artifact if: always() uses: actions/upload-artifact@v4 with: name: nx-metrics path: ${{ github.workspace }}/.nx/cache/metrics ``` {% /tabitem %} {% tabitem label="GitLab CI" %} ```yaml # .gitlab-ci.yml build: script: - npm ci - npx nx affected -t build test lint artifacts: paths: - .nx/cache/metrics when: always ``` {% /tabitem %} {% tabitem label="CircleCI" %} ```yaml # .circleci/config.yml jobs: build: docker: - image: cimg/node:lts steps: - checkout - run: npm ci - run: npx nx affected -t build test lint - store_artifacts: path: .nx/cache/metrics destination: nx-metrics ``` {% /tabitem %} {% /tabs %} ### Uploading metrics to Nx Cloud Once your CI run finishes, download the metrics artifact from your CI provider. Then go to the CI pipeline execution in Nx Cloud, open the analysis screen, and upload the metrics file. ![Manual upload interface in the Nx Cloud analysis screen](../../../../assets/guides/nx-cloud/resource-usage-manual-upload.png) --- ## Reduce the Number of Affected Projects in a CI Pipeline Execution When it comes to troubleshooting long-running CI pipeline executions, there are different tools available to help you identify the potential issues. One such tool is the **Affected Project Graph** feature on the CI Pipeline Execution page. ## Getting to the CI pipeline execution affected project graph To access the affected project graph for the CI pipeline execution, navigate to the CI pipeline execution details page and click on the **Affected Project Graph** navigation item. 
![CIPE Affected Project Graph](../../../../assets/nx-cloud/cipe-affected-project-graph-nav-item.png) The affected project graph visualizes the projects that are part of the **current** CI pipeline execution. ## Identifying a potential over-run of a CI pipeline execution This recipe walks through a scenario where the affected project graph is used to identify a potential over-run of a CI pipeline execution. This is our repository structure: {% filetree %} - apps/ - web/ - web-e2e/ - nx-graph-test/ - nx-graph-test-e2e/ - recipes/ - client/ - client-e2e/ - libs/ - ui/ (button and tooltip components) - forms/ - input/ - tooltip/ {% /filetree %} Our most recent CI pipeline execution affects everything in the repository. ![CIPE Affected Project Graph -- all tasks](../../../../assets/nx-cloud/cipe-affected-project-graph-every-tasks.png) Likewise, the affected project graph for the CI pipeline execution also visualizes all projects because everything is affected. ![CIPE Affected Project Graph -- everything affected](../../../../assets/nx-cloud/cipe-affected-project-graph-every-projects.png) ## Create a new CI pipeline execution with a code change Our `ui` library has two components: `button` and `tooltip`. From the graph, we can see that both our apps `client` and `web` depend on the `ui` library: `client` uses the `tooltip` component and `web` uses the `button` component. Let's make an update to the `tooltip` component and see how it affects our next CI pipeline execution. Pushing this change to our repository will trigger a new CI pipeline execution. ![CIPE Affected Project Graph -- new CIPE](../../../../assets/nx-cloud/cipe-affected-project-graph-tooltip-tasks.png) This CI pipeline execution contains 14 tasks that are affected by the change we made to the `tooltip` component.
![CIPE Affected Project Graph -- new CIPE tasks](../../../../assets/nx-cloud/cipe-affected-project-graph-tooltip-affected.png) The affected project graph also shows that the change to the `tooltip` component, which is part of the `ui` library, affects both the `client` and `web` apps. At this point, we can ask ourselves: "Should a change to the `tooltip` component affect both the `client` and `web` apps, or should it only affect the `client` app?" Our goal should be to always have the most efficient CI pipeline executions possible. Decreasing the number of affected projects reduces the number of tasks, which in turn reduces the overall CI pipeline execution time. ## Break up the source of the affected projects To achieve our goal, we can break up the `ui` library into two separate libraries: `button` and `tooltip`. > Check out our [blog post](https://nx.dev/blog/improve-architecture-and-ci-times-with-projects) about splitting large projects into smaller ones. Once we have done this, we will end up with the following project graph: ![CIPE Affected Project Graph -- break up ui](../../../../assets/nx-cloud/cipe-affected-project-graph-break-up-ui.png) Let's make a change to the `button` component this time and see how it affects our next CI pipeline execution. ![CIPE Affected Project Graph -- button tasks](../../../../assets/nx-cloud/cipe-affected-project-graph-button-tasks.png) We've reduced the number of affected tasks from 14 to 8. ![CIPE Affected Project Graph -- button affected](../../../../assets/nx-cloud/cipe-affected-project-graph-button-affected.png) And the affected project graph also reflects that change properly. {% aside type="tip" %} **Does your Affected Project Graph only show affected projects and not touched ones?** - If your commit has changes to one of the global inputs, your projects will be affected but no specific project is touched directly.
- Make sure you are calling `start-ci-run` to use Nx Agents, so that touched projects are recorded. Learn more about [Nx Agents](/docs/features/ci-features/distribute-task-execution) {% /aside %} --- ## Enable AI Features This guide walks you through enabling AI features in Nx Cloud for both cloud-hosted and on-prem installations. ## Nx Cloud hosted setup To enable AI features for your organization, go to [your organization's settings](https://cloud.nx.app/go/organization/edit) on Nx Cloud and select the organization where you want to enable AI. In the **settings** menu, find the "AI Features" section and toggle it to "On". ![enable ai features](../../../../assets/features/ci-features/ai-features.avif) Ensure that you **accept the AI terms** to start using the AI features. {% aside type="note" title="AI Features Availability" %} AI features are available on Hobby, Team and Enterprise [Nx Cloud plans](https://nx.dev/pricing). {% /aside %} ## Self-healing CI for enterprise on-prem installations To enable Self-Healing CI for enterprise on-prem installations, follow these steps: ### Setup Add the following configuration to your `helm-values.yaml` file: ```yaml nxApi: deployment: env: - name: NX_CLOUD_AI_ENABLED value: 'true' - name: NX_CLOUD_AI_TOKEN_PROVIDER_TYPE value: 'fixed' - name: NX_CLOUD_ANTHROPIC_API_KEY value: 'sk-ant-...' frontend: deployment: env: - name: NX_CLOUD_AI_ENABLED value: 'true' ``` Then, enable AI features in the [organization settings](https://cloud.nx.app/go/organization/edit) and enable Self-Healing CI in the [workspace settings](https://cloud.nx.app/go/workspace/settings). ### Running self-healing CI on-prem #### Automatic DTE agents If you are using DTE agents, the self-healing CI step will be added automatically when the setting is enabled in the workspace settings. #### Manual DTE For manual DTE configurations, the `nx fix-ci` command must be included in the agent configuration after running Nx tasks.
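As a sketch, an agent job in a manual DTE setup might end like this in GitHub Actions syntax (job and step layout here are illustrative, not a complete workflow):

```yaml
# Illustrative excerpt of a manual DTE agent job
agent:
  runs-on: ubuntu-latest
  steps:
    - run: npm ci
    - run: npx nx start-agent
    # Self-healing must run even when earlier steps fail
    - run: npx nx fix-ci
      if: always()
```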
**Important:** This command must run **always**, meaning that even when previous Nx tasks fail, the `fix-ci` command should still execute. #### Non-DTE For non-DTE setups, refer to the [Self-Healing CI documentation](/docs/features/ci-features/self-healing-ci) for detailed configuration instructions. ## Regional availability AI features are not available for the EU cluster in public cloud installations due to regional restrictions. However, on-prem customers in the EU can still use these features by providing their own Anthropic API key (for Self-Healing CI) and enabling the required environment variables. --- ## Enable End-to-End Encryption To turn on end-to-end encryption, specify an encryption key in one of two ways: - Set the `nxCloudEncryptionKey` property in `nx.json` - Set the `NX_CLOUD_ENCRYPTION_KEY` environment variable The key can be any string up to 32 characters long. Providing an encryption key tells Nx to encrypt task artifacts on your machine before they are sent to the remote cache. Then, when cached results are downloaded to your machine, they are decrypted before use. This ensures that even if someone gained access to the Nx Cloud servers, they wouldn't be able to view your task artifacts. ## Metadata All the artifacts Nx Cloud uses to replay a task for you are encrypted. That means that even if someone gets access to your Nx Cloud storage bucket, they will not be able to tamper with the files and terminal output that are restored when the task is replayed on your CI or developers' machines. We also store an unencrypted version of the terminal output separately that is accessible only to invited members of the workspace on the Nx Cloud web app, so they can see why certain tasks failed. This unencrypted output is only used in the browser and is not used when replaying the task. ## Summary Data is encrypted both at rest and in transit. - Every communication with the Nx Cloud API is encrypted in transit, including fetching/storing artifacts.
- When using Nx Public Cloud, the stored metadata is encrypted. - When using Nx Public Cloud and e2e encryption, stored artifacts are encrypted. - When using the on-prem version of Nx Cloud, the stored metadata is encrypted if you run MongoDB yourself with encryption on. - When using the on-prem version of Nx Cloud, stored artifacts are encrypted using e2e encryption. --- ## Connecting Nx Cloud to your existing Google identity provider If your organization uses [Google Identity](https://cloud.google.com/identity) or [Google Workspaces](https://workspace.google.com/intl/en_uk/) to manage employee accounts and permissions, your Nx Cloud workspace members can reuse the same accounts to sign in to Nx Cloud and view runs, cache stats, etc. Besides being more convenient for employees, as they don't have to sign in again, it also has a security benefit: if an employee leaves the company and their Google account is disabled, they won't be able to sign in to Nx Cloud anymore. By default, when you invite a member by email, they can create a separate Nx Cloud account using their work email address. **If their primary email address gets disabled, they will still be able to sign in with their Nx Cloud account, unless you explicitly revoke their membership from the Members page.** If you'd like them to sign in with Google directly, which ensures they automatically lose access to their Nx Cloud account if their email gets disabled, you need to enable this option when inviting them: "_Require Social OAuth Sign-In_". They will then only be able to accept the invite if they sign in with Google directly. ![Require Google OAuth Sign-In toggle](../../../../assets/nx-cloud/require-google-signin.webp) ## SAML integration Direct integration with SAML identity providers is a feature of [Nx Enterprise](https://nx.dev/enterprise).
You can, however, connect your existing SAML provider to Google, and then use the method above to invite employees: - [Azure AD](https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/google-apps-tutorial) - [Okta](https://www.okta.com/integrations/google-workspace/#overview) --- ## Manual Distributed Task Execution Using [Nx Agents](/docs/features/ci-features/distribute-task-execution) is the easiest way to distribute task execution, but your organization may not be able to use hosted Nx Agents. You can set up distributed task execution on your own CI provider using the recipes below. {% tabs syncKey="ci-provider" %} {% tabitem label="GitHub" %} Our [reusable GitHub workflow](https://github.com/nrwl/ci) represents a good set of defaults that works for a large number of our users. However, reusable GitHub workflows come with their [limitations](https://docs.github.com/en/actions/using-workflows/reusing-workflows). If the reusable workflow above doesn't satisfy your needs, you should create a custom workflow. If you were to rewrite the reusable workflow yourself, it would look something like this: ```yaml # .github/workflows/ci.yml name: CI on: push: branches: - main pull_request: # Needed for nx-set-shas when run on the main branch permissions: actions: read contents: read env: NX_CLOUD_DISTRIBUTED_EXECUTION: true # this enables DTE NX_BRANCH: ${{ github.event.number || github.ref_name }} NX_CLOUD_ACCESS_TOKEN: ${{ secrets.NX_CLOUD_ACCESS_TOKEN }} NPM_TOKEN: ${{ secrets.NPM_TOKEN }} # this is needed if our pipeline publishes to npm jobs: main: name: Nx Cloud - Main Job runs-on: ubuntu-latest steps: - uses: actions/checkout@v4 with: # We need to fetch all branches and commits so that Nx affected has a base to compare against.
fetch-depth: 0 filter: tree:0 # Set node/npm/yarn versions using volta - uses: volta-cli/action@v4 with: package-json-path: '${{ github.workspace }}/package.json' - name: Use the package manager cache if available uses: actions/setup-node@v3 with: node-version: 20 cache: 'npm' - name: Install dependencies run: npm ci - name: Derive appropriate SHAs for base and head for `nx affected` commands uses: nrwl/nx-set-shas@v4 - name: Initialize the Nx Cloud distributed CI run and stop agents when the build tasks are done run: npx nx start-ci-run --distribute-on="manual" --stop-agents-after=e2e-ci - name: Check the formatting run: npx nx record -- nx format:check - name: Lint, test, build, and run e2e run: npx nx affected -t lint,test,build,e2e-ci --configuration=ci # Self-Healing CI: recommend fixes for failures. Learn more: https://nx.dev/ci/features/self-healing-ci - name: Self-Healing CI run: npx nx fix-ci if: always() agents: name: Agent ${{ matrix.agent }} runs-on: ubuntu-latest strategy: matrix: # Add more agents here as your repository expands agent: [1, 2, 3] steps: - name: Checkout uses: actions/checkout@v4 # Set node/npm/yarn versions using volta - uses: volta-cli/action@v4 with: package-json-path: '${{ github.workspace }}/package.json' - name: Use the package manager cache if available uses: actions/setup-node@v3 with: node-version: 20 cache: 'npm' - name: Install dependencies run: npm ci - name: Start Nx Agent ${{ matrix.agent }} run: npx nx start-agent env: NX_AGENT_NAME: ${{ matrix.agent }} # Self-Healing CI: recommend fixes for failures. Learn more: https://nx.dev/ci/features/self-healing-ci - name: Self-Healing CI run: npx nx fix-ci if: always() env: NX_AGENT_NAME: ${{ matrix.agent }} ``` There are comments throughout the workflow to help you understand what is happening in each section. 
{% /tabitem %} {% tabitem label="Circle CI" %} Run agents directly on Circle CI with the workflow below: ```yaml # .circleci/config.yml version: 2.1 orbs: nx: nrwl/nx@1.5.1 jobs: main: docker: - image: cimg/node:lts-browsers steps: - checkout - run: npm ci - nx/set-shas # Tell Nx Cloud to use DTE and stop agents when the e2e-ci tasks are done - run: npx nx start-ci-run --distribute-on="manual" --stop-agents-after=e2e-ci # Send logs to Nx Cloud for any CLI command - run: npx nx record -- nx format:check # Lint, test, build and run e2e on agent jobs for everything affected by a change - run: npx nx affected --base=$NX_BASE --head=$NX_HEAD -t lint,test,build,e2e-ci --parallel=2 --configuration=ci # Self-Healing CI: recommend fixes for failures. Learn more: https://nx.dev/ci/features/self-healing-ci - run: npx nx fix-ci when: always agent: docker: - image: cimg/node:lts-browsers parameters: ordinal: type: integer steps: - checkout - run: npm ci # Wait for instructions from Nx Cloud - run: command: npx nx start-agent no_output_timeout: 60m environment: NX_AGENT_NAME: << parameters.ordinal >> # Self-Healing CI: recommend fixes for failures. Learn more: https://nx.dev/ci/features/self-healing-ci - run: command: npx nx fix-ci environment: NX_AGENT_NAME: << parameters.ordinal >> when: always workflows: build: jobs: - agent: matrix: parameters: ordinal: [1, 2, 3] - main ``` This configuration is setting up two types of jobs: a main job and three agent jobs. The main job tells Nx Cloud to use DTE and then runs normal Nx commands as if this were a single-pipeline setup. Once the commands are done, it notifies Nx Cloud to stop the agent jobs. The agent jobs set up the repo and then wait for Nx Cloud to assign them tasks.
{% /tabitem %} {% tabitem label="Azure" %} Run agents directly on Azure Pipelines with the workflow below: ```yaml # azure-pipelines.yml trigger: - main pr: - main variables: CI: 'true' ${{ if eq(variables['Build.Reason'], 'PullRequest') }}: NX_BRANCH: $(System.PullRequest.PullRequestNumber) TARGET_BRANCH: $[replace(variables['System.PullRequest.TargetBranch'],'refs/heads/','origin/')] BASE_SHA: $(git merge-base $(TARGET_BRANCH) HEAD) ${{ if ne(variables['Build.Reason'], 'PullRequest') }}: NX_BRANCH: $(Build.SourceBranchName) BASE_SHA: $(git rev-parse HEAD~1) HEAD_SHA: $(git rev-parse HEAD) jobs: - job: agents strategy: parallel: 3 displayName: Nx Cloud Agent pool: vmImage: 'ubuntu-latest' steps: - checkout: self fetchDepth: '0' fetchFilter: tree:0 persistCredentials: true - script: npm ci - script: npx nx start-agent env: NX_AGENT_NAME: $(System.JobPositionInPhase) # Self-Healing CI: recommend fixes for failures. Learn more: https://nx.dev/ci/features/self-healing-ci - script: npx nx fix-ci condition: always() env: NX_AGENT_NAME: $(System.JobPositionInPhase) - job: main displayName: Nx Cloud Main pool: vmImage: 'ubuntu-latest' steps: # Get last successful commit from Azure DevOps CLI - bash: | LAST_SHA=$(az pipelines build list --branch $(Build.SourceBranchName) --definition-ids $(System.DefinitionId) --result succeeded --top 1 --query "[0].triggerInfo.\"ci.sourceSha\"") if [ -z "$LAST_SHA" ] then echo "Last successful commit not found.
Using fallback 'HEAD~1': $BASE_SHA" else echo "Last successful commit SHA: $LAST_SHA" echo "##vso[task.setvariable variable=BASE_SHA]$LAST_SHA" fi displayName: 'Get last successful commit SHA' condition: ne(variables['Build.Reason'], 'PullRequest') env: AZURE_DEVOPS_EXT_PAT: $(System.AccessToken) - script: git branch --track main origin/main - script: npm ci - script: npx nx start-ci-run --distribute-on="manual" --stop-agents-after="e2e-ci" - script: npx nx record -- nx format:check --base=$(BASE_SHA) --head=$(HEAD_SHA) - script: npx nx affected --base=$(BASE_SHA) --head=$(HEAD_SHA) -t lint,test,build,e2e-ci --parallel=2 --configuration=ci # Self-Healing CI: recommend fixes for failures. Learn more: https://nx.dev/ci/features/self-healing-ci - script: npx nx fix-ci condition: always() ``` This configuration is setting up two types of jobs: a main job and three agent jobs. The main job tells Nx Cloud to use DTE and then runs normal Nx commands as if this were a single-pipeline setup. Once the commands are done, it notifies Nx Cloud to stop the agent jobs. The agent jobs set up the repo and then wait for Nx Cloud to assign them tasks. {% /tabitem %} {% tabitem label="Bitbucket" %} Run agents directly on Bitbucket Pipelines with the workflow below: ```yaml # bitbucket-pipelines.yml image: node:20 clone: depth: full definitions: steps: - step: &agent name: Agent script: - export NX_BRANCH=$BITBUCKET_PR_ID - export NX_AGENT_NAME=$BITBUCKET_STEP_UUID - npm ci - npx nx start-agent after-script: # Self-Healing CI: recommend fixes for failures.
Learn more: https://nx.dev/ci/features/self-healing-ci - export NX_AGENT_NAME=$BITBUCKET_STEP_UUID - npx nx fix-ci pipelines: pull-requests: '**': - parallel: - step: name: CI script: - export NX_BRANCH=$BITBUCKET_PR_ID - npm ci - npx nx start-ci-run --distribute-on="manual" --stop-agents-after="e2e-ci" - npx nx record -- nx format:check - npx nx affected --target=lint,test,build,e2e-ci --parallel=2 after-script: # Self-Healing CI: recommend fixes for failures. Learn more: https://nx.dev/ci/features/self-healing-ci - npx nx fix-ci - step: *agent - step: *agent - step: *agent ``` This configuration is setting up two types of jobs: a main job and three agent jobs. The main job tells Nx Cloud to use DTE and then runs normal Nx commands as if this were a single-pipeline setup. Once the commands are done, it notifies Nx Cloud to stop the agent jobs. The agent jobs set up the repo and then wait for Nx Cloud to assign them tasks. {% /tabitem %} {% tabitem label="GitLab" %} Run agents directly on GitLab with the workflow below: ```yaml # .gitlab-ci.yml image: node:18 # Creating template for DTE agents .dte-agent: interruptible: true cache: key: files: - yarn.lock paths: - '.yarn-cache/' script: - yarn install --cache-folder .yarn-cache --prefer-offline --frozen-lockfile - export NX_AGENT_NAME=$CI_JOB_ID - yarn nx start-agent # Self-Healing CI: recommend fixes for failures.
Learn more: https://nx.dev/ci/features/self-healing-ci after_script: - export NX_AGENT_NAME=$CI_JOB_ID - yarn nx fix-ci # Creating template for a job running DTE (orchestrator) .base-pipeline: interruptible: true only: - main - merge_requests cache: key: files: - yarn.lock paths: - '.yarn-cache/' before_script: - yarn install --cache-folder .yarn-cache --prefer-offline --frozen-lockfile - NX_HEAD=$CI_COMMIT_SHA - NX_BASE=${CI_MERGE_REQUEST_DIFF_BASE_SHA:-$CI_COMMIT_BEFORE_SHA} artifacts: expire_in: 5 days paths: - dist # Main job running DTE nx-dte: stage: affected extends: .base-pipeline script: - yarn nx start-ci-run --distribute-on="manual" --stop-agents-after=e2e-ci - yarn nx record -- nx format:check --base=$NX_BASE --head=$NX_HEAD - yarn nx affected --base=$NX_BASE --head=$NX_HEAD -t lint,test,build,e2e-ci --parallel=2 # Self-Healing CI: recommend fixes for failures. Learn more: https://nx.dev/ci/features/self-healing-ci after_script: - yarn nx fix-ci # Create as many agents as you want nx-dte-agent1: extends: .dte-agent stage: affected nx-dte-agent2: extends: .dte-agent stage: affected nx-dte-agent3: extends: .dte-agent stage: affected ``` This configuration is setting up two types of jobs: a main job and three agent jobs. The main job tells Nx Cloud to use DTE and then runs normal Nx commands as if this were a single-pipeline setup. Once the commands are done, it notifies Nx Cloud to stop the agent jobs. The agent jobs set up the repo and then wait for Nx Cloud to assign them tasks.
{% /tabitem %}
{% tabitem label="Jenkins" %}

Run agents directly on Jenkins with the workflow below:

```groovy
// Jenkinsfile
pipeline {
  agent none
  environment {
    NX_BRANCH = env.BRANCH_NAME.replace('PR-', '')
  }
  stages {
    stage('Pipeline') {
      parallel {
        stage('Main') {
          when { branch 'main' }
          agent any
          steps {
            sh "npm ci"
            sh "npx nx start-ci-run --distribute-on='manual' --stop-agents-after='e2e-ci'"
            sh "npx nx record -- nx format:check"
            sh "npx nx affected --base=HEAD~1 -t lint,test,build,e2e-ci --configuration=ci --parallel=2"
          }
          post {
            // Self-Healing CI: recommend fixes for failures. Learn more: https://nx.dev/ci/features/self-healing-ci
            always {
              sh "npx nx fix-ci"
            }
          }
        }
        stage('PR') {
          when { not { branch 'main' } }
          agent any
          steps {
            sh "npm ci"
            sh "npx nx start-ci-run --distribute-on='manual' --stop-agents-after='e2e-ci'"
            sh "npx nx record -- nx format:check"
            sh "npx nx affected --base origin/${env.CHANGE_TARGET} -t lint,test,build,e2e-ci --parallel=2 --configuration=ci"
          }
          post {
            // Self-Healing CI: recommend fixes for failures. Learn more: https://nx.dev/ci/features/self-healing-ci
            always {
              sh "npx nx fix-ci"
            }
          }
        }
        // Add as many agents as you want
        stage('Agent1') {
          agent any
          environment {
            NX_AGENT_NAME = '1'
          }
          steps {
            sh "npm ci"
            sh "npx nx start-agent"
          }
          post {
            // Self-Healing CI: recommend fixes for failures. Learn more: https://nx.dev/ci/features/self-healing-ci
            always {
              sh "npx nx fix-ci"
            }
          }
        }
        stage('Agent2') {
          agent any
          environment {
            NX_AGENT_NAME = '2'
          }
          steps {
            sh "npm ci"
            sh "npx nx start-agent"
          }
          post {
            // Self-Healing CI: recommend fixes for failures. Learn more: https://nx.dev/ci/features/self-healing-ci
            always {
              sh "npx nx fix-ci"
            }
          }
        }
        stage('Agent3') {
          agent any
          environment {
            NX_AGENT_NAME = '3'
          }
          steps {
            sh "npm ci"
            sh "npx nx start-agent"
          }
          post {
            // Self-Healing CI: recommend fixes for failures. Learn more: https://nx.dev/ci/features/self-healing-ci
            always {
              sh "npx nx fix-ci"
            }
          }
        }
      }
    }
  }
}
```

This configuration sets up two types of jobs: a main job and three agent jobs. The main job tells Nx Cloud to use DTE and then runs normal Nx commands as if this were a single-pipeline setup. Once the commands are done, it notifies Nx Cloud to stop the agent jobs. The agent jobs set up the repo and then wait for Nx Cloud to assign them tasks.

{% /tabitem %}
{% /tabs %}

{% aside type="caution" title="Two types of parallelization" %}
The agent configuration and the `--parallel` flag both parallelize tasks, but in different ways. In the examples above, there will be 3 agents running tasks and each agent will try to run 2 tasks at once. If a particular CI run only has 2 tasks, only one agent will be used.
{% /aside %}

## Rerunning jobs with DTE

Rerunning only failed jobs results in agent jobs not running, which causes the CI pipeline to hang and eventually time out. This is a common pitfall when using a CI provider's "rerun failed jobs" (or equivalent) feature, since agent jobs always complete successfully. To enforce rerunning all jobs, you can set up your CI pipeline to exit early with a helpful error. For example:

> You reran only failed jobs, but CI requires rerunning all jobs.
> Rerun all jobs in the pipeline to prevent this error.

At a high level:

1. Create a job that always succeeds and uploads an artifact on the pipeline with the run attempt number of the pipeline.
2. The main and agent jobs can read the artifact file when starting and assert they are on the same retry attempt.
3. If the reattempt number does not match, error with a message stating to rerun all jobs. Otherwise, the pipelines are on the same rerun and can proceed as normal.

---

## Optimize Your Time to Green (TTG)

Time to Green (TTG) is the **time from when a pull request (PR) opens and triggers CI to the moment all checks are green and the PR is review-ready**.
TTG is a practical sub-metric of Time to Merge (TTM): by compressing TTG (lower is better), you remove the biggest day‑to‑day bottlenecks that developers feel, which in turn improves overall TTM.

## Why is this important?

The biggest day‑to‑day waste in engineering teams is **constant context switching and PR babysitting**. The common loop is:

- 🧑‍💻 Write code
- 🧑‍💻 Push PR
- ⏳ CI runs
- ❌ CI fails 2 minutes later
- ⏳ Discover it much later
- 🧑‍💻 Switch context to debug and trigger a CI run
- ⏳ Re-running CI
- ❌ Flaky test fails CI
- 🧑‍💻 Switch context to debug and trigger a CI run
- ⏳ Re-running CI
- ✅ CI is finally green
- 🧑‍💻 Reach out to someone to review

This delay compounds across teams and drastically slows delivery. **Nx Cloud fixes this.**

## How to improve TTG

High TTG usually comes from three sources: slow failure discovery, disruptive PR babysitting, and raw execution time. Tackle them in this order.

**Prerequisite: Connect your workspace to Nx Cloud**

If you haven't already, run the following command to connect your workspace to Nx Cloud:

```shell
npx nx@latest connect
```

### 1) Get failure feedback immediately (avoid late discovery)

When you don't notice CI failed, you lose time before you can act. Tighten the loop so failures surface where you're working.

**What to do:** See failures immediately where you work by getting a notification in your editor: install [Nx Console](/docs/getting-started/editor-setup).

### 2) Eliminate PR babysitting (minimize context switching)

The expensive loop is switching branches to fix, re‑pushing, waiting, and repeating; especially with flakes.

**What to do:**

- Approve fixes instead of branch‑hopping: enable **[Self‑Healing CI](/docs/features/ci-features/self-healing-ci#configure-your-ci-pipeline)** to analyze failed tasks, propose and verify fixes, and commit to your PR after approval.
- Also enable **[flaky task detection and retries](/docs/features/ci-features/flaky-tasks)** to automatically re-run flaky tasks in the background while you keep working undisturbed.

### 3) Shorten actual CI time (make the pipeline fast)

Once feedback and context switching are handled, compress the compute side.

**What to do:**

- Reuse work with **[Remote caching (Nx Replay)](/docs/features/ci-features/remote-cache)**.
- Run more in parallel with **[Distributed task execution (Nx Agents)](/docs/features/ci-features/distribute-task-execution)**.
- Scale long suites with **[E2E test splitting](/docs/features/ci-features/split-e2e-tasks)** so they finish quickly.

## Measure TTG and diagnose bottlenecks

![TTG metrics](../../../../assets/nx-cloud/nx-cloud-ttg-stats.avif)

**Doing too much work on PRs**

- Use [Nx Affected](/docs/features/ci-features/affected) to only run what changed.

**Cache hit rate is low**

- Ensure tasks are cacheable and deterministic. Define `outputs` and configure `inputs`/`namedInputs` correctly. See: [Configure Inputs](/docs/guides/tasks--caching/configure-inputs), [Configure Outputs](/docs/guides/tasks--caching/configure-outputs), [Inputs Reference](/docs/reference/inputs)
- Standardize Node/PNPM versions across dev and CI; avoid environment variables that unintentionally affect inputs.
- Use [Remote caching (Nx Replay)](/docs/features/ci-features/remote-cache) to share results between CI and dev machines.

**Agents are idle, or queue time is high**

- Increase or right-size distributed capacity and parallelism with [Nx Agents](/docs/features/ci-features/distribute-task-execution).
- Use [Dynamic Agents](/docs/features/ci-features/dynamic-agents) to scale based on PR size.
- Remove unnecessary serialization (global locks, [overly strict `dependsOn`](/docs/guides/tasks--caching/defining-task-pipeline)).

**E2E suites take too long**

- Enable [E2E test splitting](/docs/features/ci-features/split-e2e-tasks) so large suites run across agents.
- Ensure tests are shardable (no hidden global state, independent specs).

**Flaky task rate is high**

- Enable [Flaky task detection and automatic retries](/docs/features/ci-features/flaky-tasks).
- Isolate and quarantine persistently flaky suites to keep pipelines green.

**Late failure discovery / PR babysitting**

- Install [Nx Console](/docs/getting-started/editor-setup) for instant failure and fix notifications in your editor.
- Enable [Self‑Healing CI](/docs/features/ci-features/self-healing-ci) to propose and validate fixes automatically (ensure the `npx nx fix-ci` step runs with `if: always()`).

## Talk to us

If you still need help, feel free to [reach out to us](/contact).

---

## Nx Cloud and Personal Access Tokens

{% youtube src="https://www.youtube.com/watch?v=vX-wgI1zlao" title="Configure your Nx Cloud Personal Access Token" /%}

From Nx 19.7, repositories are connected to Nx Cloud via a property in `nx.json` called `nxCloudId`. By default, this value allows anyone who clones the repository `read-write` access to Nx Cloud features for that workspace. These permissions can be updated in the workspace settings. To disallow access for anonymous users, or to allow `read-write` access for known users, all users must provision their own personal access token. To do that, they need to use [`npx nx login`](/docs/reference/nx-cloud-cli#npx-nxcloud-login).

{% aside type="caution" title="Personal Access Tokens require the `nxCloudId` field in `nx.json`" %}
Ensure that you have the `nxCloudId` property in your `nx.json` file to connect to Nx Cloud with a Personal Access Token. If you have been using `nxCloudAccessToken`, you can convert it to `nxCloudId` by running [`npx nx-cloud convert-to-nx-cloud-id`](/docs/reference/nx-cloud-cli#npx-nxcloud-converttonxcloudid).
{% /aside %}

{% tabs %}
{% tabitem label="Nx >= 19.7" %}

```json
// nx.json
{
  "nxCloudId": "SOMEID"
}
```

{% /tabitem %}
{% tabitem label="Nx <= 19.6" %}

```json
// nx.json
{
  "tasksRunnerOptions": {
    "default": {
      "runner": "nx-cloud",
      "options": {
        "nxCloudId": "SOMEID"
      }
    }
  }
}
```

To utilize personal access tokens and the Nx Cloud ID with Nx <= 19.6, the `nx-cloud` npm package must also be installed in your workspace's `package.json`:

```json
// package.json
{
  "devDependencies": {
    "nx-cloud": "latest"
  }
}
```

{% /tabitem %}
{% /tabs %}

## Personal access tokens (PATs)

When you run [`npx nx login`](/docs/reference/nx-cloud-cli#npx-nxcloud-login), you will be directed to the Nx Cloud app, where you will be required to create an account and log in. A new personal access token will be provisioned and saved in a local configuration file in your home folder (the location is displayed when login completes and varies depending on OS).

### View your personal access tokens

You can view your personal access tokens in the Nx Cloud app by navigating to your profile settings. Click your user icon in the top right corner of the app and select `Profile`.

![Profile Settings](../../../../assets/nx-cloud/profile-page.avif)

From there, click on the `Personal access tokens` tab.

![Personal Access Tokens](../../../../assets/nx-cloud/personal-access-tokens-profile.avif)

### Manually create a personal access token

Personal access tokens can also be created manually in the Nx Cloud app. Navigate to your profile settings and click on the `Personal access tokens` tab. Select `New access token`, enter a name for the token, and click `Generate Token`. The token will be displayed on the screen and can be copied to your clipboard. You can then use [nx-cloud configure](/docs/reference/nx-cloud-cli#npx-nxcloud-configure) in your terminal to set the token in your local configuration file.

## Permissions

There are two types of permissions that can be granted to users.
### Workspace ID access level

These are the permissions granted to users who are not [logged in](/docs/reference/nx-cloud-cli#npx-nxcloud-login) or are not members of the Nx Cloud organization for this workspace. By default, all users have `read-write` access to the workspace. This can be updated in the workspace settings to `read-only` or `none`.

While the initial setting for the workspace ID access level is `read-write`, we recommend changing it to `read-only` or `none` for any repository that is visible to people who do not have permission to edit the repository (i.e. open source repositories, or repositories that are visible across an organization but only editable by a specific team).

### Personal access token access level

When a workspace member logs in with a personal access token after running [`npx nx login`](/docs/reference/nx-cloud-cli#npx-nxcloud-login), they are granted access to Nx Cloud features. By default, all personal access tokens have `read-write` access to the remote cache. This can be updated to `read-only` in the workspace settings if required.

## Better security

Without an access token committed to your `nx.json` file, you gain more fine-grained control over who has access to your cache artifacts and who can utilise the Nx Cloud features that you pay for. When you remove a member from your organization, they immediately lose access to all Nx Cloud features, saving you the trouble of cycling any tokens you were previously committing to the repository.

---

## Recording Non-Nx Commands

Build and deploy pipelines often do much more than run builds. Unfortunately, this creates more opportunities for pieces to fail. To minimize the number of different sites you need to visit to diagnose issues, Nx Cloud 13.3 and above is capable of recording and saving output from arbitrary commands.

## Enable command recording

To record a command with Nx Cloud:

1. Identify a command you would like recorded from your CI/CD configuration, or think of one to run on your machine (example: `echo "hello world"`).
2. Prefix your command with `npx nx record --`, or the appropriate execute command of your package manager. The `--` is optional but makes it easier to read what portion of the command will be recorded (example: `npx nx record -- echo "hello world"`).
3. Run the command! Nx Cloud will record output and status codes, and generate a link so you can easily view or share the result. Make sure you run this command from your workspace root or one of its subdirectories so Nx Cloud can properly locate configuration information.

![npx nx record -- echo "hello world"](../../../../assets/nx-cloud/set-up/record-hello-world.webp)

## Locating command output in Nx Cloud

Commands that Nx Cloud stores will appear under your "Runs" view. For easy identification, the stored output is displayed as a "record-output" target being invoked on the "nx-cloud-tasks-runner" project.

![nx record -- nx format:check](https://nx.dev/nx-cloud/set-up/record-format-check.webp)

If you use the Nx Cloud GitHub Integration, links to recorded output will also be displayed, based on the exit code, in the summary comment.

![Nx Cloud Report](../../../../assets/nx-cloud/set-up/record-report.webp)

---

## Setting Up CI for Nx Cloud

Learn how to set up Nx Cloud for your workspace with your preferred CI platform. Get started connecting to Nx Cloud by running:

```shell
nx connect
```

Each platform has specific configurations and optimizations that help you build and test only what is affected, retrieve previous successful builds, and optimize CI performance.
{% course_video src="https://www.youtube.com/watch?v=8mqHXYIl_qI" courseTitle="From PNPM Workspaces to Distributed CI" courseUrl="https://nx.dev/courses/pnpm-nx-next/lessons-06-nx-cloud-setup" /%}

## CI configuration

{% tabs syncKey="ci-provider" %}
{% tabitem label="GitHub" %}

Need a starting point? Generate a new workflow file with the following command:

```shell
nx g ci-workflow --ci=github
```

Below is an example of a GitHub Actions setup, building and testing only what is affected.

```yaml
// .github/workflows/ci.yml
name: CI

on:
  push:
    branches:
      - main
  pull_request:

permissions:
  actions: read
  contents: read

jobs:
  main:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          filter: tree:0
          fetch-depth: 0

      # This enables task distribution via Nx Cloud
      # Run this command as early as possible, before dependencies are installed
      # Learn more at https://nx.dev/ci/reference/nx-cloud-cli#npx-nxcloud-startcirun
      # Connect your workspace by running "nx connect" and uncomment this line to enable task distribution
      # - run: npx nx start-ci-run --distribute-on="3 linux-medium-js" --stop-agents-after="build"

      # Cache node_modules
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: 'npm'

      - run: npm ci
      - uses: nrwl/nx-set-shas@v4

      # Prepend any command with "nx record --" to record its logs to Nx Cloud
      # - run: npx nx record -- echo Hello World
      - run: npx nx affected -t lint test build

      # Nx Cloud recommends fixes for failures to help you get CI green faster. Learn more: https://nx.dev/ci/features/self-healing-ci
      - run: npx nx fix-ci
        if: always()
```

{% /tabitem %}
{% tabitem label="CircleCI" %}

Need a starting point? Generate a new workflow file with the following command:

```shell
nx g ci-workflow --ci=circleci
```

Below is an example of a CircleCI setup, building and testing only what is affected.
```yaml
// .circleci/config.yml
version: 2.1

orbs:
  nx: nrwl/nx@1.7.0

jobs:
  main:
    docker:
      - image: cimg/node:lts-browsers
    steps:
      - checkout

      # This enables task distribution via Nx Cloud
      # Run this command as early as possible, before dependencies are installed
      # Learn more at https://nx.dev/ci/reference/nx-cloud-cli#npx-nxcloud-startcirun
      # Connect your workspace by running "nx connect" and uncomment this line to enable task distribution
      # - run: npx nx start-ci-run --distribute-on="3 linux-medium-js" --stop-agents-after="build"

      - run: npm ci
      - nx/set-shas:
          main-branch-name: 'main'

      # Prepend any command with "nx record --" to record its logs to Nx Cloud
      # - run: npx nx record -- echo Hello World
      - run:
          command: npx nx affected -t lint test build

      # Nx Cloud recommends fixes for failures to help you get CI green faster. Learn more: https://nx.dev/ci/features/self-healing-ci
      - run:
          command: npx nx fix-ci
          when: always

workflows:
  version: 2
  ci:
    jobs:
      - main
```

{% /tabitem %}
{% tabitem label="GitLab" %}

Need a starting point? Generate a new workflow file with the following command:

```shell
nx g ci-workflow --ci=gitlab
```

Below is an example of a GitLab setup, building and testing only what is affected.
```yaml
// .gitlab-ci.yml
image: node:20

variables:
  CI: 'true'

# Main job
CI:
  interruptible: true
  only:
    - main
    - merge_requests
  script:
    # This enables task distribution via Nx Cloud
    # Run this command as early as possible, before dependencies are installed
    # Learn more at https://nx.dev/ci/reference/nx-cloud-cli#npx-nxcloud-startcirun
    # Connect your workspace by running "nx connect" and uncomment this line to enable task distribution
    # - npx nx start-ci-run --distribute-on="3 linux-medium-js" --stop-agents-after="build"
    - npm ci
    - NX_HEAD=$CI_COMMIT_SHA
    - NX_BASE=${CI_MERGE_REQUEST_DIFF_BASE_SHA:-$CI_COMMIT_BEFORE_SHA}
    # Prepend any command with "nx record --" to record its logs to Nx Cloud
    # - npx nx record -- echo Hello World
    - npx nx affected -t lint test build
  # Nx Cloud recommends fixes for failures to help you get CI green faster. Learn more: https://nx.dev/ci/features/self-healing-ci
  after_script:
    - npx nx fix-ci
```

{% /tabitem %}
{% tabitem label="Azure" %}

Need a starting point? Generate a new workflow file with the following command:

```shell
nx g ci-workflow --ci=azure-pipelines
```

Below is an example of an Azure Pipelines setup, building and testing only what is affected.
```yaml
// azure-pipelines.yml
name: CI

trigger:
  - main
pr:
  - main

variables:
  CI: 'true'
  ${{ if eq(variables['Build.Reason'], 'PullRequest') }}:
    NX_BRANCH: $(System.PullRequest.PullRequestNumber)
    TARGET_BRANCH: $[replace(variables['System.PullRequest.TargetBranch'],'refs/heads/','origin/')]
    BASE_SHA: $(git merge-base $(TARGET_BRANCH) HEAD)
  ${{ if ne(variables['Build.Reason'], 'PullRequest') }}:
    NX_BRANCH: $(Build.SourceBranchName)
    BASE_SHA: $(git rev-parse HEAD~1)
  HEAD_SHA: $(git rev-parse HEAD)

jobs:
  - job: main
    pool:
      vmImage: 'ubuntu-latest'
    steps:
      - checkout: self
        fetchDepth: '0'
        fetchFilter: tree:0

      # Set Azure DevOps CLI default settings
      - bash: az devops configure --defaults organization=$(System.TeamFoundationCollectionUri) project=$(System.TeamProject)
        displayName: 'Set default Azure DevOps organization and project'

      # Get last successful commit from Azure DevOps CLI
      - bash: |
          LAST_SHA=$(az pipelines build list --branch $(Build.SourceBranchName) --definition-ids $(System.DefinitionId) --result succeeded --top 1 --query "[0].triggerInfo.\"ci.sourceSha\"")
          if [ -z "$LAST_SHA" ]
          then
            echo "Last successful commit not found. Using fallback 'HEAD~1': $BASE_SHA"
          else
            echo "Last successful commit SHA: $LAST_SHA"
            echo "##vso[task.setvariable variable=BASE_SHA]$LAST_SHA"
          fi
        displayName: 'Get last successful commit SHA'
        condition: ne(variables['Build.Reason'], 'PullRequest')
        env:
          AZURE_DEVOPS_EXT_PAT: $(System.AccessToken)

      # This enables task distribution via Nx Cloud
      # Run this command as early as possible, before dependencies are installed
      # Learn more at https://nx.dev/ci/reference/nx-cloud-cli#npx-nxcloud-startcirun
      # Connect your workspace by running "nx connect" and uncomment this line to enable task distribution
      # - script: npx nx start-ci-run --distribute-on="3 linux-medium-js" --stop-agents-after="build"

      - script: npm ci
      - script: git branch --track main origin/main
        condition: eq(variables['Build.Reason'], 'PullRequest')

      # Prepend any command with "nx record --" to record its logs to Nx Cloud
      # - script: npx nx record -- echo Hello World
      - script: npx nx affected --base=$(BASE_SHA) --head=$(HEAD_SHA) -t lint test build

      # Nx Cloud recommends fixes for failures to help you get CI green faster. Learn more: https://nx.dev/ci/features/self-healing-ci
      - script: npx nx fix-ci
        condition: always()
```

{% /tabitem %}
{% tabitem label="Jenkins" %}

Below is an example of a Jenkins setup, building and testing only what is affected.

```groovy
pipeline {
  agent none
  environment {
    NX_BRANCH = env.BRANCH_NAME.replace('PR-', '')
  }
  stages {
    stage('Pipeline') {
      parallel {
        stage('Main') {
          when { branch 'main' }
          agent any
          steps {
            // This line enables distribution
            // The "--stop-agents-after" is optional, but allows idle agents to shut down once the "e2e-ci" targets have been requested
            // sh "npx nx start-ci-run --distribute-on='3 linux-medium-js' --stop-agents-after='e2e-ci'"
            sh "npm ci"
            // Prepend any command with "nx record --" to record its logs to Nx Cloud
            // This requires connecting your workspace to Nx Cloud. Run "nx connect" to get started w/ Nx Cloud
            // sh "npx nx record -- nx format:check"
            // Without Nx Cloud, run format:check directly
            sh "npx nx format:check"
            sh "npx nx affected --base=HEAD~1 -t lint test build e2e-ci"
          }
        }
        stage('PR') {
          when { not { branch 'main' } }
          agent any
          steps {
            // This line enables distribution
            // The "--stop-agents-after" is optional, but allows idle agents to shut down once the "e2e-ci" targets have been requested
            // sh "npx nx start-ci-run --distribute-on='3 linux-medium-js' --stop-agents-after='e2e-ci'"
            sh "npm ci"
            // Prepend any command with "nx record --" to record its logs to Nx Cloud
            // This requires connecting your workspace to Nx Cloud. Run "nx connect" to get started w/ Nx Cloud
            // sh "npx nx record -- nx format:check"
            // Without Nx Cloud, run format:check directly
            sh "npx nx format:check"
            sh "npx nx affected --base origin/${env.CHANGE_TARGET} -t lint test build e2e-ci"
          }
        }
      }
    }
  }
}
```

{% /tabitem %}
{% tabitem label="Bitbucket" %}

Need a starting point? Generate a new workflow file with the following command:

```shell
nx g ci-workflow --ci=bitbucket-pipelines
```

Below is an example of a Bitbucket Pipelines setup, building and testing only what is affected.
```yaml
// bitbucket-pipelines.yml
image: node:20

clone:
  depth: full

pipelines:
  pull-requests:
    '**':
      - step:
          name: 'Build and test affected apps on Pull Requests'
          script:
            - export NX_BRANCH=$BITBUCKET_PR_ID
            # This enables task distribution via Nx Cloud
            # Run this command as early as possible, before dependencies are installed
            # Learn more at https://nx.dev/ci/reference/nx-cloud-cli#npx-nxcloud-startcirun
            # Connect your workspace by running "nx connect" and uncomment this line to enable task distribution
            # - npx nx start-ci-run --distribute-on="3 linux-medium-js" --stop-agents-after="build"
            - npm ci
            # Prepend any command with "nx record --" to record its logs to Nx Cloud
            # - npx nx record -- echo Hello World
            - npx nx affected --base=origin/main -t lint test build
          # Nx Cloud recommends fixes for failures to help you get CI green faster. Learn more: https://nx.dev/ci/features/self-healing-ci
          after-script:
            - npx nx fix-ci
  branches:
    main:
      - step:
          name: 'Build and test affected apps on "main" branch changes'
          script:
            - export NX_BRANCH=$BITBUCKET_BRANCH
            # This enables task distribution via Nx Cloud
            # Run this command as early as possible, before dependencies are installed
            # Learn more at https://nx.dev/ci/reference/nx-cloud-cli#npx-nxcloud-startcirun
            # Connect your workspace by running "nx connect" and uncomment this line to enable task distribution
            # - npx nx start-ci-run --distribute-on="3 linux-medium-js" --stop-agents-after="build"
            - npm ci
            # Prepend any command with "nx record --" to record its logs to Nx Cloud
            # - npx nx record -- echo Hello World
            - npx nx affected -t lint test build --base=HEAD~1
```

The `pull-requests` and `main` jobs implement the CI workflow.

{% /tabitem %}
{% /tabs %}

## Get the commit of the last successful build

{% tabs syncKey="ci-provider" %}
{% tabitem label="GitHub" %}

GitHub Actions can track the last successful run on the `main` branch and use this as a reference point for the `BASE`.
The [nrwl/nx-set-shas](https://github.com/marketplace/actions/nx-set-shas) action provides a convenient implementation of this functionality, which you can drop into your existing CI workflow. To understand why knowing the last successful build is important for the affected command, check out the [in-depth explanation in the action's docs](https://github.com/marketplace/actions/nx-set-shas#background).

{% /tabitem %}
{% tabitem label="CircleCI" %}

CircleCI can track the last successful run on the `main` branch and use this as a reference point for the `BASE`. The [Nx Orb](https://github.com/nrwl/nx-orb) provides a convenient implementation of this functionality, which you can drop into your existing CI workflow. Specifically, for push commits, `nx/set-shas` populates the `$NX_BASE` environment variable with the commit SHA of the last successful run. To understand why knowing the last successful build is important for the affected command, check out the [in-depth explanation in the Orb's docs](https://github.com/nrwl/nx-orb#background).

### Using CircleCI in a private repository

To use the [Nx Orb](https://github.com/nrwl/nx-orb) with a private repository on your main branch, you need to grant the orb access to your CircleCI API. Create an environment variable called `CIRCLE_API_TOKEN` in the context of the project.

{% aside type="caution" title="Caution" %}
It should be a user token, not the project token.
{% /aside %}

{% /tabitem %}
{% tabitem label="GitLab" %}

GitLab CI/CD uses built-in environment variables to determine the commit range for affected commands. The configuration automatically sets:

- `NX_HEAD=$CI_COMMIT_SHA` - the current commit SHA
- `NX_BASE=${CI_MERGE_REQUEST_DIFF_BASE_SHA:-$CI_COMMIT_BEFORE_SHA}` - the base commit for comparison

This approach works for both merge requests and direct pushes to main, using GitLab's built-in variables to determine the appropriate base commit.
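The `NX_BASE` fallback relies on standard shell parameter expansion, which is easy to verify locally (the SHAs below are placeholders, not real commits):

```shell
CI_COMMIT_BEFORE_SHA="aaa111"         # GitLab sets this on every pipeline
unset CI_MERGE_REQUEST_DIFF_BASE_SHA  # absent on direct pushes to main

# ${VAR:-fallback} expands to VAR when it is set and non-empty, else to fallback
NX_BASE=${CI_MERGE_REQUEST_DIFF_BASE_SHA:-$CI_COMMIT_BEFORE_SHA}
echo "base on push: $NX_BASE" # base on push: aaa111

CI_MERGE_REQUEST_DIFF_BASE_SHA="bbb222" # set on merge request pipelines
NX_BASE=${CI_MERGE_REQUEST_DIFF_BASE_SHA:-$CI_COMMIT_BEFORE_SHA}
echo "base on MR: $NX_BASE" # base on MR: bbb222
```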
{% /tabitem %}
{% tabitem label="Azure" %}

In the example above, we ran a script to retrieve the commit of the last successful build. The idea is to use the [Azure DevOps CLI](https://learn.microsoft.com/en-us/cli/azure/pipelines?view=azure-cli-latest) directly in the [pipeline YAML](https://learn.microsoft.com/en-us/azure/devops/cli/azure-devops-cli-in-yaml?view=azure-devops).

First, we configure the DevOps CLI:

```yaml
# Set Azure DevOps CLI default settings
- bash: az devops configure --defaults organization=$(System.TeamFoundationCollectionUri) project=$(System.TeamProject)
  displayName: 'Set default Azure DevOps organization and project'
```

Then we can query the pipelines API (providing the auth token):

```yaml
# Get last successful commit from Azure DevOps CLI
- bash: |
    LAST_SHA=$(az pipelines build list --branch $(Build.SourceBranchName) --definition-ids $(System.DefinitionId) --result succeeded --top 1 --query "[0].triggerInfo.\"ci.sourceSha\"")
    if [ -z "$LAST_SHA" ]
    then
      echo "Last successful commit not found. Using fallback 'HEAD~1': $BASE_SHA"
    else
      echo "Last successful commit SHA: $LAST_SHA"
      echo "##vso[task.setvariable variable=BASE_SHA]$LAST_SHA"
    fi
  displayName: 'Get last successful commit SHA'
  condition: ne(variables['Build.Reason'], 'PullRequest')
  env:
    AZURE_DEVOPS_EXT_PAT: $(System.AccessToken)
```

We can target a specific build; in this example, we specified:

- the branch (`--branch`)
- the result type (`--result`)
- the number of results (`--top`)

The command returns an entire JSON object with all the information, but we can narrow it down to the desired result with the `--query` param, which uses the [JMESPath](https://jmespath.org/) format ([more details](https://learn.microsoft.com/en-us/cli/azure/query-azure-cli?tabs=concepts%2Cbash)).

Finally, we extract the result into a [custom variable](https://learn.microsoft.com/en-us/azure/devops/pipelines/process/set-variables-scripts?view=azure-devops&tabs=bash) named `BASE_SHA`, used later by the `nx format` and `nx affected` commands.

{% /tabitem %}
{% tabitem label="Jenkins" %}

Unlike `GitHub Actions` and `CircleCI`, you don't have the metadata to help you track the last successful run on `main`. In the example below, the base is set to `HEAD~1` (for push) or the branching point (for pull requests), but a more robust solution would be to tag an SHA in the main job once it succeeds and then use this tag as a base. See the [nx-tag-successful-ci-run](https://github.com/nrwl/nx-tag-successful-ci-run) and [nx-set-shas](https://github.com/nrwl/nx-set-shas) (version 1 implements the tagging mechanism) repositories for more information. We also have to set `NX_BRANCH` explicitly.

{% /tabitem %}
{% tabitem label="Bitbucket" %}

Unlike `GitHub Actions` and `CircleCI`, you don't have the metadata to help you track the last successful run on `main`. In the example below, the base is set to `HEAD~1` (for push) or the branching point (for pull requests), but a more robust solution would be to tag an SHA in the main job once it succeeds and then use this tag as a base. See the [nx-tag-successful-ci-run](https://github.com/nrwl/nx-tag-successful-ci-run) and [nx-set-shas](https://github.com/nrwl/nx-set-shas) (version 1 implements the tagging mechanism) repositories for more information. We also have to set `NX_BRANCH` explicitly.

{% /tabitem %}
{% /tabs %}

## Common features across platforms

### Task distribution with Nx Cloud

All CI platforms support task distribution via Nx Cloud. To enable it:

1. Connect your workspace by running `nx connect`
2. Uncomment the `npx nx start-ci-run` command in your CI configuration
3. Configure the distribution settings based on your needs

Learn more at the [Nx Cloud CI Reference](https://nx.dev/ci/reference/nx-cloud-cli#npx-nxcloud-startcirun).

### Self-healing CI

Nx Cloud can recommend fixes for failures to help you get CI green faster. The `nx fix-ci` command is included in all platform configurations and runs automatically after failures. Learn more about [Self-Healing CI features](https://nx.dev/ci/features/self-healing-ci).

### Recording commands

You can record any command's logs to Nx Cloud by prepending it with `nx record --`. This helps with debugging and monitoring your CI pipeline. Example:

```bash
npx nx record -- echo Hello World
```

---

## Source Control Integration

{% index_page_cards path="guides/nx-cloud/source-control-integration" /%}

---

## Azure DevOps Integration

The Nx Cloud + Azure DevOps Integration lets you access the result of every run—with all its logs and build insights—straight from your PR.

## Connecting your workspace

![Access VCS Setup](../../../../../assets/nx-cloud/set-up/access-vcs-setup.webp)

Once on the VCS Integrations setup page, select "Azure DevOps". You will be prompted to enter the name of your organization and project. You can identify your organization and project by looking at the URL of your project summary page:

```text
// URL format
https://dev.azure.com/[organization]/[project]
```

For example, the URL `https://dev.azure.com/nrwl/my-monorepo-project` has an organization name of "nrwl" and a project name of "my-monorepo-project". You will also need to provide the id of your Azure Git repository. This can either be the internal GUID identifier, if known, or you can use the name of the repository from the URL you use to access it. For example, a URL of `https://dev.azure.com/nrwl/_git/large-monorepo` has the repository id of "large-monorepo".
![Add Azure DevOps Repository](../../../../../assets/nx-cloud/set-up/add-azure-devops-repository.webp) ### Configuring authentication #### Using a personal access token To use a Personal Access Token for authentication, one must be generated with proper permissions. The minimum required permissions are shown in the screenshot below. ![Work Items - Read, Code - Read, Build - Read & execute, Release - Read, write, & execute](../../../../../assets/nx-cloud/set-up/minimal-ado-access-token.webp) Once this token is created, paste the value and then click "Connect". This will verify that Nx Cloud can connect to your repo. Upon a successful test, your configuration is saved, and setup is complete. Please note that Azure DevOps imposes rate limits, which can degrade the performance of the integration, leading to missing data or functionality. To mitigate the impact, we recommend you assign the [Basic + Test plan](https://learn.microsoft.com/en-us/azure/devops/organizations/billing/buy-basic-access-add-users?view=azure-devops#assign-basic-or-basic--test-plans) to the user whose token you utilize for this integration. ### Advanced configuration If your company runs a self-hosted Azure DevOps installation, you may need to override the default URL that Nx Cloud uses to connect to the Azure DevOps API. To do so, check the box labeled "Override Azure DevOps API URL" and enter the correct URL for your organization. --- ## Bitbucket Integration The Nx Cloud + Bitbucket Integration lets you access the result of every run—with all its logs and build insights—straight from your PR. ### Using an API token API tokens can be generated at the user account level and can be scoped to specific applications like Bitbucket. This is the recommended approach for Nx Cloud integration. An API token is a secure credential that allows scripts and other processes to authenticate with Bitbucket Cloud applications. You should treat API tokens as securely as any other password.
#### Creating an API token 1. First, navigate to your [Bitbucket user security settings](https://id.atlassian.com/manage-profile/security/api-tokens) 2. Select "Create API token with scopes" ![Create API Token](../../../../../assets/nx-cloud/set-up/bitbucket-api-tokens-create-screen.jpg) 3. Give your token a name and proceed to the next step 4. When prompted to select the app, choose **Bitbucket**. This ensures the API token can only access Bitbucket APIs and perform git operations: ![Select Bitbucket App](../../../../../assets/nx-cloud/set-up/bitbucket-api-tokens-create-scope.jpg) 5. The required permissions for Nx Cloud are: - `read:pullrequest:bitbucket` - to read pull request information - `write:pullrequest:bitbucket` - to write comments on pull requests - `read:repository:bitbucket` - to read repository contents - `write:repository:bitbucket` - to write files to the repository - `read:user:bitbucket` - to verify the username from an email address - `account:read` - to verify the username from an email address 6. Click "Create token" and copy your newly created API token #### Configuring Nx Cloud with your API Token Once your API token is created, head back to your workspace settings on Nx Cloud to set up the Bitbucket integration: ![Access VCS Setup](../../../../../assets/nx-cloud/set-up/access-vcs-setup.webp) 1. Fill in all the required fields for selecting your Bitbucket repository 2. Username is found on the [account settings](https://bitbucket.org/account/settings/) screen (it is not your email address) 3. Paste your API token created earlier into the API Token box 4. Click "Connect" to finish the setup ### Using an HTTP access token If you are using Bitbucket Data Center (on-prem), you need to enable an [HTTP access token for authentication](https://confluence.atlassian.com/bitbucketserver/http-access-tokens-939515499.html).
{% aside type="note" title="User-linked Access Tokens" %} Due to the type of APIs Nx Cloud needs to call, we need to create an Access Token [**at the user level**](https://confluence.atlassian.com/bitbucketserver/http-access-tokens-939515499.html). Repo-level access tokens will not work. {% /aside %} The minimum required permissions are write access to the repository: ![Create an Access Token](../../../../../assets/nx-cloud/set-up/bitbucket-data-center-access-token.png) Once the Access Token is created, save it in a secure location, then head back to your workspace settings on Nx Cloud to set up the Bitbucket integration: ![Access VCS Setup](../../../../../assets/nx-cloud/set-up/access-vcs-setup.webp) 1. Fill in all the required fields for selecting your Bitbucket repository 2. Username is found on the [account settings](https://your-bitbucket-instance.com/profile) screen (it is not your email address) 3. Paste your Access Token created earlier into the Access Token box 4. Make sure you give Nx Cloud the URL of your Bitbucket instance (this can be in the simple form of `https://your-bitbucket-instance.com`) 5. Click "Connect" to finish the setup --- ## GitHub Integration Integrate Nx Cloud with your source control platform to access run results, logs, and build insights directly from your pull requests or merge requests. This integration is required to use core Nx Cloud features like task distribution and flaky task re-trying. The [Nx Cloud GitHub App](https://github.com/marketplace/official-nx-cloud-app) lets you access the result of every run—with all its logs and build insights—straight from your PR. It will comment on your PR with the results of the latest CI run, with a summary of the results and links to detailed, structured logs. If you’re using Self-Healing CI, Nx Cloud will comment with proposed fixes for CI failures. ## Install the app For the best experience, install the [Nx Cloud GitHub App](https://github.com/marketplace/official-nx-cloud-app).
Using the app provides the most seamless authentication experience. This is not required if you wish to authenticate with a personal access token that you generate yourself. For a detailed breakdown of each permission the GitHub App requires and why, see the [GitHub App Permissions](/docs/guides/nx-cloud/source-control-integration/github-app-permissions) reference. ## Connecting your workspace Once you have installed the Nx Cloud GitHub App, you must link your workspace to the installation. To do this, sign in to Nx Cloud and navigate to the VCS Integrations setup page. This page can be found in your workspace settings; you need to be an admin of the organization to access it. Once on the VCS Integrations setup page, you can choose which VCS you want to connect to your workspace. ![Access VCS Setup](../../../../../assets/nx-cloud/set-up/access-vcs-setup.webp) ### Choosing an authentication method The easiest way to configure the Nx Cloud GitHub Integration is through the Nx Cloud GitHub App, and this method should be preferred for users on [https://cloud.nx.app](https://cloud.nx.app). Users with strict privacy considerations may wish to generate a personal access token (PAT) instead. #### Using the GitHub app To use the Nx Cloud GitHub App for authentication, select the **Use GitHub application** radio button and then click **Connect**. This will verify that Nx Cloud can connect to your repo. Upon a successful test, your configuration is saved. Check if there's any [additional setup required for your CI platform](/docs/guides/nx-cloud/source-control-integration#ci-platform-considerations), then your setup is complete. ![Use GitHub App for Authentication](../../../../../assets/nx-cloud/set-up/use-github-app-auth.webp) #### Using a personal access token Note that users who authenticate with a PAT will not receive Nx Cloud comments with command results and self-healing CI fixes.
GitHub supports two [personal access token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#about-personal-access-tokens) types: classic and fine-grained. To use a personal access token for authentication, one must be generated with proper permissions. The minimum required permissions are shown in the screenshot below. {% tabs %} {% tabitem label="Classic Token" %} ![Minimum GitHub Personal Access Token Permissions](../../../../../assets/nx-cloud/set-up/minimal-github-access-token.webp) {% /tabitem %} {% tabitem label="Fine-Grained Token" %} ![Minimum GitHub Fine-grained Personal Access Token Permissions](../../../../../assets/nx-cloud/set-up/fine-grained-gh-pat-permissions.avif) {% /tabitem %} {% /tabs %} Once this token is created, select the radio button for providing a personal access token, paste the value, and then click "Connect". This will verify that Nx Cloud can connect to your repo. Upon a successful test, your configuration is saved. Check the "_CI Platform Considerations_" section below, and if there are no additional instructions for your platform of choice, setup is complete. ### Advanced configuration If your company runs a self-hosted GitHub installation, you may need to override the default URL that Nx Cloud uses to connect to the GitHub API. To do so, check the box labeled "Override GitHub API URL" and enter the correct URL for your organization. ### Connect to GitHub for more features Get access to [easy workspace setup](/docs/features/ci-features/github-integration#easy-workspace-setup) and [access control through GitHub organizations](/docs/features/ci-features/github-integration#access-control) when you [connect your GitHub account to Nx Cloud](/docs/features/ci-features/github-integration#connect-to-github). ## CI platform considerations If you are using CircleCI, TravisCI, or GitHub Actions, there is nothing else you need to do.
If you are using other CI providers, you need to set the `NX_BRANCH` environment variable in your CI configuration. The variable has to be set to a PR number. For instance, this is how to do it in Azure Pipelines. ### Azure Pipelines ```yml # azure-pipelines.yml variables: NX_BRANCH: $(System.PullRequest.PullRequestNumber) ``` ### CircleCI Make sure [GitHub checks are enabled](https://circleci.com/docs/2.0/enable-checks/#to-enable-github-checks). ### Jenkins [Install the Jenkins plugin](https://plugins.jenkins.io/github-checks/). Ensure this step from the plugin instructions is followed: "Prerequisite: only a GitHub App with proper permissions can publish checks; this guide helps you authenticate your Jenkins as a GitHub App." ## GitHub status checks The Nx Cloud GitHub Integration updates your PR with commit statuses that reflect the real-time progress of your runs. These statuses are generated dynamically based on your running commands. Enforcing these dynamically-named checks within your branch protection rules is not recommended, as it can result in stuck checks displaying `Waiting for status to be reported`. From your repository, go to `Settings -> Branches -> Protect matching branches` and ensure that no Nx Cloud status checks are listed in the `Require status checks to pass before merging` list. Enforcing that status checks pass on your default branch is sufficient. ![Ensure that any NxCloud status check is deselected from your branch protection rules](../../../../../assets/nx-cloud/set-up/do-not-enforce-nx-cloud-status-checks.webp) --- ## GitHub App Permissions The Nx Cloud GitHub App requires the following permissions to provide CI/CD integration and setup experiences. Most information is used transiently during operations and not stored in our systems.
## Required permissions Repository permissions: - `Administration: Read & Write` - `Checks: Read & Write` - `Contents: Read & Write` - `Commit Statuses: Read` - `Issues: Read & Write` - `Metadata: Read` - `Pull requests: Read & Write` - `Workflows: Read & Write` - `Actions: Read` Organization permissions: - `Administration: Read Only` - `Members: Read Only` ## Permission details ### Administration (write) **Used for:** Creating new repositories with a pre-configured Nx workspace during initial onboarding. **When it's used:** Only when you explicitly choose to create a new workspace through Nx Cloud's setup flow. [Single tenant instances](/docs/enterprise/single-tenant/overview) can safely forego this scope and will only lose the ability to create new workspaces through the app. ### Checks (write) **Used for:** Updating CI run statuses so you can see the progress and results of your Nx Cloud pipeline executions directly in GitHub. Also used for Self-Healing CI status check runs in PRs. **When it's used:** Automatically during CI runs to provide real-time status updates. ### Contents (read & write) **Used for:** - **Read:** Detecting your workspace's current Nx version to ensure compatibility. Reading files for Self-Healing CI. - **Write:** Adding Nx Cloud configuration (`nxCloudId` or access token) to your repository during setup. Creating commits and pushing fixes for Self-Healing CI. **When it's used:** During initial setup and configuration, and regularly if Self-Healing CI is enabled. ### Commit statuses (read) **Used for:** Reading commit status information to coordinate with other CI tools and provide accurate pipeline context. **When it's used:** During CI pipeline executions to gather context about your commits. ### Issues (read & write) **Used for:** PR comments (GitHub uses the Issues API for PR comments — see "Pull requests" below for more detail). **When it's used:** During CI runs and when posting status comments. 
### Metadata (read) **Used for:** Accessing basic repository information (name, description, visibility). This is a required baseline permission for most GitHub App functionality. ### Pull requests (read & write) **Used for:** - **Read:** Gathering branch information, SHAs, and metadata necessary for CI pipeline execution and distributed task coordination. - **Write:** Posting comments on PRs with CI pipeline status, command results, and Self-Healing CI fixes. Creating PRs during initial Nx Cloud setup. Creating demo PRs for optional features like Self-Healing CI (only when you opt in). **When it's used:** Read operations occur during CI runs. Write operations occur during setup and when posting status comments. ### Workflows (write) **Used for:** Automatically configuring GitHub Actions workflow files when you opt in to features like Self-Healing CI and distributed task execution. **When it's used:** Only when you explicitly enable these features through the Nx Cloud interface. ### Actions (read) **Used for:** Retrieving GitHub Actions logs so that they can be surfaced on Nx Cloud to help resolve failures before Nx Cloud has a chance to run tasks, and for Polygraph support. **When it's used:** Only when using Polygraph or Nx Cloud MCP tools to get CI information. ## Your data and security Most information accessed through these permissions is used transiently during operations and is not stored. Limited version control metadata (such as branch names, SHAs, and commit information) may be stored as part of your CI pipeline execution records for analytics and debugging purposes. [Nx Cloud is SOC2 Type II certified](https://security.nx.app). We implement industry-standard security practices including encryption at rest and in transit, access logging, and regular security audits. You can revoke access to the Nx Cloud GitHub app at any time through your GitHub settings.
Write operations (creating repos, posting comments, modifying workflows) only occur when explicitly triggered by your actions or when you opt in to specific features. --- ## GitLab Integration The Nx Cloud GitLab Integration lets you access the result of every run—with all its logs and build insights—straight from your Merge Requests. ### Connecting your workspace ![Access VCS Setup](../../../../../assets/nx-cloud/set-up/access-vcs-setup.webp) Once on the VCS Integrations setup page, select "GitLab". You will be prompted to enter your project's ID. ![Locate Gitlab Project ID](../../../../../assets/nx-cloud/set-up/find-gitlab-project-id.avif) To locate the ID for your project, visit the home page of your repository on GitLab. Click the three-dot menu button in the upper right corner and select "Copy project ID" to copy the value to your clipboard. ![Add GitLab Repository](../../../../../assets/nx-cloud/set-up/add-gitlab-repository.webp) ### Using a personal access token To use a Personal Access Token for authentication, one must be generated with proper permissions. The minimum required permissions are shown in the screenshot below. ![Minimum GitLab Personal Access Token Permissions](../../../../../assets/nx-cloud/set-up/minimal-gitlab-access-token.webp) Once this token is created, select the radio button for providing a personal access token, paste the value, and then click "Connect". This will verify that Nx Cloud can connect to your repo. Upon a successful test, your configuration is saved, and setup is complete. ### Advanced configuration If your company runs a self-hosted GitLab installation, you may need to override the default URL that Nx Cloud uses to connect to the GitLab API. To do so, check the box labeled "Override GitLab API URL" and enter the correct URL for your organization. --- ## Set Up Bun with Mise on CI Bun works as a drop-in package manager for Nx workspaces. You can use it to install dependencies and run Nx commands via `bunx`. 
To use bun with [Nx Agents](/docs/features/ci-features/distribute-task-execution), you need a custom launch template because the default templates only support npm, yarn, and pnpm. Using [mise](https://mise.jdx.dev/) to manage your toolchain isn't strictly required, but Nx Cloud launch templates have built-in support for it. Pinning tool versions in a single `mise.toml` gives you a predictable, reproducible setup across local development, CI, and Nx Agents. ## Configure your workspace Create a `mise.toml` at the root of your workspace to pin your toolchain versions: ```toml # mise.toml [tools] node = "24.11.0" bun = "1.3.11" ``` Then install dependencies with bun: ```shell bun install ``` Bun generates a `bun.lock` file. Commit it alongside your `mise.toml` and remove any previous lockfile (`package-lock.json`, `yarn.lock`, or `pnpm-lock.yaml`). Run Nx commands with `bunx`: ```shell bunx nx run-many -t build test lint ``` ## Set up CI Generate a starting CI workflow with: ```shell nx g ci-workflow --ci=github ``` Then adjust the generated configuration to use bun. If you haven't connected your workspace to Nx Cloud yet, run `nx connect` and configure your [CI access token](/docs/guides/nx-cloud/access-tokens).
Below is an example GitHub Actions workflow using bun with Nx Cloud: ```yaml # .github/workflows/ci.yml name: CI on: push: branches: - main pull_request: permissions: actions: read contents: read env: NX_CLOUD_ACCESS_TOKEN: ${{ secrets.NX_CLOUD_ACCESS_TOKEN }} jobs: main: runs-on: ubuntu-latest steps: - uses: actions/checkout@v6 with: fetch-depth: 0 filter: tree:0 - run: npx nx start-ci-run --distribute-on="5 linux-large" --stop-agents-after="e2e" - name: Setup toolchains with mise uses: jdx/mise-action@v3 - name: Install dependencies run: bun install --frozen-lockfile - uses: nrwl/nx-set-shas@v5 - run: bunx nx record -- nx format:check - run: bunx nx run-many -t lint test build - run: bunx nx run-many -t e2e - name: Self-healing run: bunx nx fix-ci if: always() ``` The `jdx/mise-action` reads your `mise.toml` and installs both Node and bun at the pinned versions. No separate `setup-node` or `setup-bun` action is needed. The `nx start-ci-run` command runs before dependencies are installed, so it uses `npx` instead of `bunx`. ## Configure Nx Agents {% aside type="note" title="Nx Cloud required" %} Nx Agents are part of [Nx Cloud](/docs/getting-started/nx-cloud). If you haven't set up Nx Cloud yet, see the [getting started guide](/docs/getting-started/nx-cloud). {% /aside %} The default Nx Cloud launch templates use the [`install-node-modules`](https://github.com/nrwl/nx-cloud-workflows/tree/main/workflow-steps/install-node-modules) workflow step, which doesn't support `bun.lock`. Create a **[custom launch template](/docs/reference/nx-cloud/launch-templates)** that installs bun via mise and runs `bun install` directly.
Add `.nx/workflows/agents.yaml` to your workspace: ```yaml # .nx/workflows/agents.yaml common-init-steps: &common-init-steps - name: Checkout uses: 'nrwl/nx-cloud-workflows/v5/workflow-steps/checkout/main.yaml' - name: Setup toolchains uses: 'nrwl/nx-cloud-workflows/v5/workflow-steps/install-mise/main.yaml' - name: Install dependencies script: | bun install --frozen-lockfile launch-templates: linux-large: resource-class: 'docker_linux_amd64/large' image: 'ubuntu22.04-node24.14-v1' init-steps: *common-init-steps ``` The [`install-mise`](https://github.com/nrwl/nx-cloud-workflows/tree/main/workflow-steps/install-mise) workflow step reads your `mise.toml` and installs the specified toolchains on the agent. All environments (local, CI, and agents) stay in sync through a single config file. Reference the template name in your CI workflow: ```shell npx nx start-ci-run --distribute-on="5 linux-large" --stop-agents-after="e2e" ``` You can define multiple templates with different resource classes for different workloads. For all available options, see the [launch templates reference](/docs/reference/nx-cloud/launch-templates). --- ## Nx Console {% index_page_cards path="guides/nx-console" /%} --- ## Nx Console Generate Command The `Generate` action allows you to choose a generator and then opens a form listing out all the options for that generator. As you make changes to the form, the generator is executed in `--dry-run` mode in a terminal so you can preview the results of running the generator in real time. {% youtube src="https://www.youtube.com/embed/-nUr66MWRiE" title="Nx Console Generate UI Form" /%} **From the Command Palette** You can also launch the `Generate` action from the Command Palette (`⇧⌘P`) by selecting `nx: generate (ui)`. {% youtube src="https://www.youtube.com/embed/Sk2XjFwF8Zo" title="Nx Console Generate UI from Command Palette" /%} You can even construct the generator options while staying entirely within the Command Palette.
Use `⇧⌘P` to open the Command Palette, then select `nx: generate`. After choosing a generator, select any of the listed options to modify the generator command. When you're satisfied with the constructed command, choose the `Execute` command at the top of the list. {% youtube src="https://www.youtube.com/embed/q5NTTqRYq9c" title="Nx Console Generate with Command Palette" /%} --- ## Nx Console Migrate UI The Nx Console Migrate UI provides a visual, guided way to apply migrations in your Nx workspace. This tool simplifies the process of updating your workspace by offering an easy-to-use interface that walks you through each step of the migration process. ## Starting the migration The Migrate UI is available for VSCode and Cursor editors. Make sure you have the [Nx Console extension](/docs/getting-started/editor-setup) installed before you continue. When an update to Nx is available, a badge will appear on the Nx Console icon in your Activity Bar. Bring up the Nx Console view (Hint: type in `> Show Nx Console` in the Command Palette), and you'll see a `Nx Migrate` section in the sidebar. Clicking `Start Migration` starts the migration process. ![](../../../../assets/guides/nx-console/console-migrate-1-start.avif) By default, clicking the migration button starts the migration process by upgrading to the recommended Nx version — the latest version of the next major release. This method ensures you upgrade one major version at a time in order to [avoid breakages](/docs/guides/tips-n-tricks/advanced-update#one-major-version-at-a-time-small-steps). To customize the version, click the pencil icon to provide a specific version to update to. You may also provide additional CLI options such as `--to` or `--from`. ![](../../../../assets/guides/nx-console/console-migrate-2-customize-version.avif) {% aside type="note" title="Ensure clean git status" %} If you have uncommitted changes, stash or commit them before initiating a migration. 
Otherwise, the migration button will be disabled. {% /aside %} Once you start the migration, Nx Console runs the `nx migrate` command to update your dependency versions and generate a `migrations.json` file. You'll be prompted to inspect the changes made to your `package.json` before installing them and proceeding. ![](../../../../assets/guides/nx-console/console-migrate-3-confirm.avif) ## The migration process After confirming the `package.json` changes, the Migrate UI opens and guides you step-by-step through each migration action. Migrations are executed in the order they appear in `migrations.json`. If a migration results in file changes, you will be prompted to review the changes before continuing. You can either `Accept` or `Undo` the migration. ![](../../../../assets/guides/nx-console/console-migrate-4-approve.avif) If a migration step encounters an error, the process pauses so you can inspect the error details. ![](../../../../assets/guides/nx-console/console-migrate-5-error.avif) You can click through to view the migration source code, giving you the opportunity to patch it for your specific use-case or make necessary adjustments to your repository before rerunning the migration. Alternatively, you may choose to skip a problematic migration. ## Finalizing the migration When all migrations are done, or you don't want to run further migrations, you can finish the process by clicking the Finish button. By default, this will squash all commits created during the migration together, but you can opt into preserving them. You will be prompted for the git commit message; once you enter it, the `migrations.json` file is removed and the migration process is finished. ![](../../../../assets/guides/nx-console/console-migrate-6-finish.avif) --- ## Nx Console & Nx Cloud Integration Nx Console for VSCode is integrated with Nx Cloud to help you stay on top of your CI Pipelines without leaving the editor.
If your workspace is connected to Nx Cloud, you will have access to a new view in the Nx Console sidebar that provides at-a-glance information about your running and recent CI pipeline executions. ![Nx Console Nx Cloud View](../../../../assets/guides/nx-console/cloud-view.png) {% aside type="note" %} Nx Console will only show information about CI Pipelines from the last hour and triggered from branches that you have modified locally. If you want to see information about other pipelines, use the Nx Cloud application at [cloud.nx.app](https://cloud.nx.app). {% /aside %} ## Notifications In addition to the view, you will receive notifications when a pipeline completes or a task in it fails. ![Nx Console Nx Cloud Notifications](../../../../assets/guides/nx-console/cloud-notification.png) You can click on the buttons to view the results directly in Nx Cloud or open the Pull Request in the browser. To only be notified on failure or turn off notifications altogether, you can change the `nxConsole.nxCloudNotifications` setting. ## JetBrains This feature is only available in VSCode but coming soon to JetBrains. For now, you can see whether you're connected to Nx Cloud and navigate directly to the Nx Cloud application from the Nx Console Toolwindow. --- ## Nx Console Project Details View Nx Console provides seamless integration with the [Project Details View](/docs/features/explore-graph#explore-projects-in-your-workspace). You can learn more about your project, run tasks or navigate the task graph with just a few clicks! 
![console-pdv-example.png](../../../../assets/guides/nx-console/console-pdv-example.png) You can access the integrated Project Details View in multiple ways: - By clicking on the Preview icon at the top right of your `project.json`, `package.json` or any file that modifies targets (for example `jest.config.ts` or `cypress.config.ts`) - By using the CodeLenses in any of these files - By running the `Nx: Open Project Details to Side` action while any file in a project is open In addition to viewing the Project Details View, CodeLenses in tooling configuration files (like `jest.config.ts`) allow you to run targets via Nx with a single click. If you would like to disable the CodeLens feature, you can do so easily: - In VSCode, simply turn off the `nxConsole.enableCodeLens` setting - In JetBrains IDEs, right-click a CodeLens and select ``Hide `Code Vision: Nx Config Files` Inlay Hints `` --- ## Nx Console Run Command You can construct the executor command options while staying entirely within the Command Palette. Use `⇧⌘P` to open the Command Palette, then select `Nx: Run`. After choosing a project, select a target and any of the listed options to modify the executor command options. When you're satisfied with the constructed command, choose the `Execute` command at the top of the list. You can also use `Nx: Run Target` to select a target first and then a matching project. {% youtube src="https://www.youtube.com/embed/CsUkSyQcxwQ" title="Nx Console Run from Command Palette" /%} --- ## Nx Console Telemetry To ensure that we focus on creating features that benefit your day-to-day workflow, we collect some data from the Nx Console extensions. ## Collected data Here's the information we collect for each extension.
### User data

> None of the information that we ask for is used to track any personal information

| Property    | Description                                                                               |
| ----------- | ----------------------------------------------------------------------------------------- |
| Client ID   | These are retrieved by APIs provided by each editor. We do not generate this information.  |
| User ID     | We use the same value as the Client ID                                                     |
| Session ID  | Generated UUID                                                                             |
| OS          | What operating system are you using?                                                       |
| Editor      | What editor are you using? Visual Studio Code, IntelliJ, etc.                              |
| App Version | What version of the extension is being used?                                               |

### Event data

| Property            | Description                        |
| ------------------- | ---------------------------------- |
| Extension Activated | Extension activation timings       |
| Action Triggered    | Nx Generate, Nx Run, Nx Graph, etc |

## Visual Studio Code

For Visual Studio Code, we use the global telemetry setting provided by the editor. This is controlled by the `telemetry.telemetryLevel` setting.

#### How to disable telemetry for Visual Studio Code

Setting `telemetry.telemetryLevel` to `off` will disable telemetry for Nx Console in Visual Studio Code. Read more about the telemetry settings in Visual Studio Code [here](https://code.visualstudio.com/docs/getstarted/telemetry#_disable-telemetry-reporting).

## JetBrains (IntelliJ, WebStorm, etc.)

When the plugin is first installed, we will prompt you to opt in or out of reporting telemetry.

#### How to disable telemetry for JetBrains editors

To turn off telemetry after opting in, go to **Settings** > **Tools** > **Nx Console** > Uncheck **Enable Telemetry**

---

## Troubleshoot Nx Console Issues

Often, issues with Nx Console are the result of underlying issues with Nx. Make sure to read the [Nx installation troubleshooting docs](/docs/troubleshooting/troubleshoot-nx-install-issues) for more help.
## Enable debug logging If you're experiencing issues with Nx Console, enabling debug logging can help diagnose the problem by providing detailed information about the extension's internal operations. ### VS Code 1. Open Settings (`Ctrl/Cmd + ,`) 2. Search for `nxConsole.enableDebugLogging` 3. Enable the checkbox 4. Restart VS Code or reload the window (`Ctrl/Cmd + Shift + P` → `Developer: Reload Window`) #### Viewing debug logs 1. Open Command Palette (`Ctrl/Cmd + Shift + P`) 2. Run `View: Toggle Output` 3. Select from the dropdown: - **Nx Console** - Main extension output - **Nx Language Server** - Language server diagnostics ### JetBrains IDEs (IntelliJ, WebStorm, etc.) 1. Enable verbose logging: - Go to `Help` → `Diagnostic Tools` → `Debug Log Settings...` - Add: `#dev.nx.console:trace` - Click `OK` and restart the IDE 2. Reproduce the issue 3. Access logs via `Help` → `Show Log in Explorer` (Windows) / `Show Log in Finder` (macOS) / `Show Log in Files` (Linux) 4. Look for `idea.log` file and paste relevant sections when reporting issues ## Configuration settings For a complete reference of all available configuration settings, see the [Nx Console Settings Reference](/docs/reference/nx-console-settings). ## VSCode + nvm issues VSCode loads a version of Node when it starts. It can use versions set via [`nvm`](https://github.com/nvm-sh/nvm) but there are some caveats. - If you've installed Node outside of `nvm` (for example using the Node installer or `brew` on Mac), VSCode will always use that version. You can check by running `nvm list` and looking for a `system` alias. To enable VSCode to pick up your `nvm` version, make sure to uninstall the version of Node that was installed outside of `nvm`. - VSCode will load the `default` alias from `nvm` at startup. You can set it by running `nvm alias default [version]`. The `default` alias needs to be set in your OS' default terminal for VSCode to pick it up. 
Setting it in a VSCode-integrated terminal won't persist after it's closed. Similarly, setting it in a third-party app like iTerm won't influence VSCode by default.
- VSCode only loads the `default` version when the app is first started. This means that in order to change it, you need to close all VSCode windows and restart the app - running `Reload Window` won't work.
- If you work with lots of different Node versions, there are various VSCode extensions available to dynamically run `nvm use` whenever you open a new integrated terminal. Search for `nvm`.
- You can set a static version by using a launch configuration with `runtimeVersion` set. Refer to [this guide](https://code.visualstudio.com/docs/nodejs/nodejs-debugging#_multi-version-support).

We try to make noticing discrepancies easier by showing you the currently loaded Node version on startup. To enable this, toggle the `nxConsole.showNodeVersionOnStartup` setting in VSCode.

## JetBrains WSL support

The Node interpreter under Languages & Frameworks > Node.js needs to be configured to use the Node executable within the WSL distribution. You can read more on the [official JetBrains docs](https://www.jetbrains.com/help/webstorm/how-to-use-wsl-development-environment-in-product.html#ws_wsl_node_interpreter_configure).

---

## Nx Release

{% index_page_cards path="guides/nx-release" /%}

---

## Automate GitHub Releases

Nx Release can automate the creation of [GitHub releases](https://docs.github.com/en/repositories/releasing-projects-on-github/managing-releases-in-a-repository) for you. GitHub releases are a great way to communicate the changes in your projects to your users.

## Authenticating with GitHub

In order to be able to create the release on GitHub, you need to provide a valid token which can be used for authenticating with the GitHub API. Nx release supports two main ways of doing this:

1. In all environments it will preferentially check for an environment variable (the environment variable can either be called `GITHUB_TOKEN` or `GH_TOKEN`). Please ensure that this environment variable is set in your CI environment (and that the token it has been set to has been configured with the appropriate permissions to create releases) before attempting to create a release in CI.
2. It can also detect if you have a valid, authenticated installation of the [official `gh` CLI tool](https://cli.github.com), and leverage that automatically as a fallback when no environment variable is set.

## GitHub release contents

When a GitHub release is created, it will include the changelog that Nx Release generates with entries based on the changes since the last release. Nx Release will parse the `feat` and `fix` type commits according to the [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/) specification and sort them into appropriate sections of the changelog.

Take a look at the [Nx releases page](https://github.com/nrwl/nx/releases) to see examples of GitHub releases generated by Nx Release.

## Enable release creation

To enable GitHub release creation for your workspace, set `release.changelog.workspaceChangelog.createRelease` to `'github'` in `nx.json`:

```jsonc
// nx.json
{
  "release": {
    "changelog": {
      "workspaceChangelog": {
        "createRelease": "github",
      },
    },
  },
}
```

## Preview the release

Use `nx release --dry-run` to preview the GitHub release instead of creating it. This allows you to see what the release will look like without pushing anything to GitHub.

## Disable file creation

Since GitHub releases contain the changelog, you may wish to disable the generation and management of the local `CHANGELOG.md` file.
To do this, set `release.changelog.workspaceChangelog.file` to `false` in `nx.json`:

```jsonc
// nx.json
{
  "release": {
    "changelog": {
      "workspaceChangelog": {
        "file": false,
        "createRelease": "github",
      },
    },
  },
}
```

Note: When configured this way, Nx Release will not delete existing changelog files, just ignore them.

## Project level changelogs

Nx Release supports creating GitHub releases for project level changelogs as well. This is particularly useful when [releasing projects independently](/docs/guides/nx-release/release-projects-independently). To enable this, set `release.changelog.projectChangelogs.createRelease` to `'github'` in `nx.json`:

```jsonc
// nx.json
{
  "release": {
    "changelog": {
      "projectChangelogs": {
        "createRelease": "github",
      },
    },
  },
}
```

{% aside type="caution" title="Project and Workspace GitHub Releases" %}
Nx Release does not support creating GitHub releases for both project level changelogs and the workspace changelog. You will need to choose one or the other.
{% /aside %}

## Customizing the GitHub instance to use GitHub Enterprise Server

If you are not using github.com, and are instead using a self-hosted GitHub Enterprise Server instance, you can use a configuration object instead of the string for "createRelease" to provide the relevant hostname, and optionally override the API base URL, although this is not typically needed as it will default to `https://${hostname}/api/v3`.

```jsonc
// nx.json
{
  "release": {
    "changelog": {
      "workspaceChangelog": {
        "createRelease": {
          "provider": "github-enterprise-server",
          "hostname": "github.example.com",
        },
      },
    },
  },
}
```

---

## Automate GitLab Releases

Nx Release can automate the creation of [GitLab releases](https://docs.gitlab.com/user/project/releases/) for you. GitLab releases are a great way to communicate the changes in your projects to your users.
## Authenticating with GitLab

In order to be able to create the release on GitLab, you need to provide a valid token which can be used for authenticating with the GitLab API. Nx release supports two main ways of doing this:

1. In all environments it will preferentially check for an environment variable (the environment variable can either be called `GITLAB_TOKEN` or `GL_TOKEN`).
2. In GitLab CI it will check for and use the automatically created GitLab token in the `CI_JOB_TOKEN` environment variable.

## GitLab release contents

When a GitLab release is created, it will include the changelog that Nx Release generates with entries based on the changes since the last release. Nx Release will parse the `feat` and `fix` type commits according to the [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/) specification and sort them into appropriate sections of the changelog.

## Enable release creation

To enable GitLab release creation for your workspace, set `release.changelog.workspaceChangelog.createRelease` to `'gitlab'` in `nx.json`:

```jsonc
// nx.json
{
  "release": {
    "changelog": {
      "workspaceChangelog": {
        "createRelease": "gitlab",
      },
    },
  },
}
```

## Preview the release

Use `nx release --dry-run` to preview the GitLab release instead of creating it. This allows you to see what the release will look like without pushing anything to GitLab.

## Disable file creation

Since GitLab releases contain the changelog, you may wish to disable the generation and management of the local `CHANGELOG.md` file. To do this, set `release.changelog.workspaceChangelog.file` to `false` in `nx.json`:

```jsonc
// nx.json
{
  "release": {
    "changelog": {
      "workspaceChangelog": {
        "file": false,
        "createRelease": "gitlab",
      },
    },
  },
}
```

Note: When configured this way, Nx Release will not delete existing changelog files, just ignore them.

## Project level changelogs

Nx Release supports creating GitLab releases for project level changelogs as well.
This is particularly useful when [releasing projects independently](/docs/guides/nx-release/release-projects-independently). To enable this, set `release.changelog.projectChangelogs.createRelease` to `'gitlab'` in `nx.json`:

```jsonc
// nx.json
{
  "release": {
    "changelog": {
      "projectChangelogs": {
        "createRelease": "gitlab",
      },
    },
  },
}
```

{% aside type="caution" title="Project and Workspace GitLab Releases" %}
Nx Release does not support creating GitLab releases for both project level changelogs and the workspace changelog. You will need to choose one or the other.
{% /aside %}

## Customizing the GitLab instance

If you are not using gitlab.com, and are instead using a self-hosted GitLab instance, you can use a configuration object instead of the string for "createRelease" to provide the relevant hostname, and optionally override the API base URL, although this is not typically needed as it will default to `https://${hostname}/api/v4`.

```jsonc
// nx.json
{
  "release": {
    "changelog": {
      "workspaceChangelog": {
        "createRelease": {
          "provider": "gitlab",
          "hostname": "gitlab.example.com",
        },
      },
    },
  },
}
```

---

## Automatically Version with Conventional Commits

If you wish to bypass the versioning prompt, you can configure Nx Release to defer to the [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/) standard to determine the version bump automatically. This is useful for automating the versioning process in a CI/CD pipeline.

See the [`nx release version`](/docs/reference/nx-commands#nx-release-version) CLI reference for all available versioning options.

## Enable automatic versioning

To enable automatic versioning via conventional commits, set the `release.version.conventionalCommits` property to `true` in `nx.json`:

```json
// nx.json
{
  "release": {
    "version": {
      "conventionalCommits": true
    }
  }
}
```

## Determine the version bump

Nx Release will use the commit messages since the last release to determine the version bump.
It will look at the type of each commit and determine the highest version bump from the following list:

- 'feat' -> minor
- 'fix' -> patch

For example, if the git history looks like this:

```text
- fix(pkg-1): fix something
- feat(pkg-2): add a new feature
- chore(pkg-3): update docs
- chore(release): 1.0.0
```

then Nx Release will select the `minor` version bump and elect to release version 1.1.0. This is because there is a `feat` commit since the last release of 1.0.0.

To customize the version bump for different types of commits, or to trigger a version bump with custom commit types, see the [Customize Conventional Commit Types](/docs/guides/nx-release/customize-conventional-commit-types) recipe.

{% aside type="note" title="No changes detected" %}
If Nx Release does not find any relevant commits since the last release, it will skip releasing a new version. This works with [independent releases](/docs/guides/nx-release/release-projects-independently) as well, allowing for only some projects to be released and some to be skipped.
{% /aside %}

## Usage with independent releases

If you are using [independent releases](/docs/guides/nx-release/release-projects-independently), Nx Release will determine the version bump for each project independently. For example, if the git history looks like this:

```text
- fix(pkg-1): fix something
- feat(pkg-2): add a new feature
- chore(pkg-3): update docs
- chore(release): publish
```

Nx Release will select the `patch` version bump for `pkg-1` and `minor` for `pkg-2`. `pkg-3` will be skipped entirely, since it has no `feat` or `fix` commits.

{% aside type="note" title="Determining if a commit affects a project" %}
Note that this determination is made based on the files changed by each commit, _not_ by the scope of the commit message itself. This means that `feat(pkg-2): add a new feature` could trigger a version bump for a project other than `pkg-2` if it updated files in another project.
{% /aside %}

An example partial output of running Nx Release with independent releases and conventional commits enabled:

```plaintext {% frame="terminal" title="nx release" %}
 NX   Running release version for project: pkg-1

pkg-1 🏷️ Resolved the current version as 0.4.0 from git tag "pkg-1@0.4.0", based on releaseTag.pattern "{projectName}@{version}"
pkg-1 📄 Resolved the specifier as "patch" using git history and the conventional commits standard
pkg-1 ❓ Applied semver relative bump "patch", derived from conventional commits data, to get new version 0.4.1
pkg-1 ✍️ New version 0.4.1 written to manifest: packages/pkg-1/package.json

 NX   Running release version for project: pkg-2

pkg-2 🏷️ Resolved the current version as 0.4.0 from git tag "pkg-2@0.4.0", based on releaseTag.pattern "{projectName}@{version}"
pkg-2 📄 Resolved the specifier as "minor" using git history and the conventional commits standard
pkg-2 ❓ Applied semver relative bump "minor", derived from conventional commits data, to get new version 0.5.0
pkg-2 ✍️ New version 0.5.0 written to manifest: packages/pkg-2/package.json

 NX   Running release version for project: pkg-3

pkg-3 🏷️ Resolved the current version as 0.4.0 from git tag "pkg-3@0.4.0", based on releaseTag.pattern "{projectName}@{version}"
pkg-3 🚫 No changes were detected using git history and the conventional commits standard
```

---

## Build Before Versioning

In order to ensure that projects are built before the new version is applied to their package manifest, you can use the `preVersionCommand` property in `nx.json`:

```json
// nx.json
{
  "release": {
    "version": {
      "preVersionCommand": "npx nx run-many -t build"
    }
  }
}
```

This command will run the `build` target for all projects before the version step of Nx Release. Any command can be specified, including non-nx commands.
This step is often required when [publishing from a custom dist directory](/docs/guides/nx-release/updating-version-references#scenario-2-i-want-to-publish-from-a-custom-dist-directory-and-update-references-in-my-both-my-source-and-dist-packagejson-files), as the dist directory must be built before the version is applied to the dist directory's package manifest.

When using release groups in which the member projects are versioned together, you can use `groupPreVersionCommand` and it will be executed before the versioning step for that release group.

```json
// nx.json
{
  "release": {
    "groups": {
      "my-group": {
        "projects": ["my-lib-one", "my-lib-two"],
        "version": {
          "groupPreVersionCommand": "npx nx run-many -t build -p my-lib-one,my-lib-two"
        }
      }
    }
  }
}
```

The `groupPreVersionCommand` will run in addition to the global `preVersionCommand`.

## Build before Docker versioning

In order to ensure that images are built before versioning, use the `preVersionCommand` property in the `docker` section of `nx.json`.

```jsonc
// nx.json
{
  "release": {
    "docker": {
      "preVersionCommand": "npx nx run-many -t docker:build",
    },
  },
}
```

If `preVersionCommand` is not set, the default is `npx nx run-many -t docker:build`, which builds all projects with a `docker:build` target. You can customize this command to be anything that runs prior to Docker versioning.

When using release groups with Docker, use the `groupPreVersionCommand` option to run a command before the versioning step for that group.

```jsonc
// nx.json
{
  "release": {
    "groups": {
      "my-group": {
        "projects": ["api", "microservice"],
        "docker": {
          "groupPreVersionCommand": "npx nx run-many -t docker:build -p api,microservice",
        },
      },
    },
  },
}
```

The `groupPreVersionCommand` will run in addition to the global `preVersionCommand` for Docker.

---

## Configuring Version Prefix for Dependencies

This guide explains how to configure a custom version prefix in Nx Release using the `versionPrefix` option.
The version prefix allows you to automatically add a specific prefix format to dependencies, providing control over how dependency versions are specified in your project's manifest files (such as `package.json`, `Cargo.toml`, etc.).

## The `versionPrefix` option

The `versionPrefix` option controls which prefix is applied to dependency versions during the versioning process. By default, `versionPrefix` is set to `"auto"`, which selects a prefix format (either `""`, `"~"`, `"^"`, or `"="`) by respecting what is already in the manifest file.

For example, having the following `package.json` file as an example manifest:

```json
{
  "name": "my-package",
  "version": "0.1.1",
  "dependencies": {
    "dependency-one": "~1.2.3",
    "dependency-two": "^2.3.4",
    "dependency-three": "3.0.0"
  }
}
```

the next patch bump will be:

```json
{
  "name": "my-package",
  "version": "0.1.2",
  "dependencies": {
    "dependency-one": "~1.2.4",
    "dependency-two": "^2.3.4",
    "dependency-three": "3.0.0"
  }
}
```

preserving the prefix for `dependency-one` and `dependency-two` and continuing to use no prefix for `dependency-three`.

### Available prefix options

You can set `versionPrefix` to one of the following values:

- `"auto"`: Automatically chooses a prefix based on the existing declaration in the manifest file. This is the default value.
- `""`: Uses the exact version without a prefix.
- `"~"`: Specifies compatibility with patch-level updates.
- `"^"`: Specifies compatibility with minor-level updates.
- `"="`: Locks the version to an exact match (the `=` is not commonly used in the JavaScript ecosystem, but is in others such as Cargo for Rust).
Example configuration:

```json
// nx.json
{
  "release": {
    "version": {
      "versionPrefix": "~"
    }
  }
}
```

## Configuring version prefix in `nx.json` or `project.json`

To set the `versionPrefix` option globally or for a specific project, add it to either your `nx.json` or `project.json` configuration files:

```jsonc
{
  "release": {
    "version": {
      "versionPrefix": "^", // or "", "~", "^", "=" depending on your preference
    },
  },
}
```

With the `versionPrefix` option set to `^`, your `package.json` dependencies might look like this:

```json
{
  "name": "my-package",
  "version": "0.1.1",
  "dependencies": {
    "dependency-one": "^1.0.0",
    "dependency-two": "^2.3.4",
    "dependency-three": "^3.0.0"
  }
}
```

This configuration helps enforce a consistent approach to dependency management, allowing flexibility in how updates to dependencies are tracked and managed across your project.

---

## Configure Changelog Format

The default changelog renderer for `nx release` generates a changelog entry for each released project similar to the following:

```md
## 7.9.0 (2024-05-13)

### 🚀 Features

- **rule-tester:** check for missing placeholder data in the message ([#9039](https://github.com/typescript-eslint/typescript-eslint/pull/9039))

### ❤️ Thank You

- Kirk Waiblinger
- Sheetal Nandi
- Vinccool96
```

## Include all metadata

There are a few options available to modify the default changelog renderer output. They can be applied to both `workspaceChangelog` and `projectChangelogs` in exactly the same way. All four options are true by default:

```json
// nx.json
{
  "release": {
    "changelog": {
      "projectChangelogs": {
        "renderOptions": {
          "authors": true,
          "applyUsernameToAuthors": true,
          "commitReferences": true,
          "versionTitleDate": true
        }
      }
    }
  }
}
```

#### `authors`

Whether the commit authors should be added to the bottom of the changelog in a "Thank You" section. Defaults to `true`.
#### `applyUsernameToAuthors`

If `authors` is enabled, controls whether or not to try to map the authors to their GitHub usernames using https://ungh.cc (from https://github.com/unjs/ungh) and the email addresses found in the commits. Defaults to `true`. You should disable this option if you don't want to make any external requests to https://ungh.cc.

NOTE: Prior to Nx v21, this option was named `mapAuthorsToGitHubUsernames`.

#### `commitReferences`

Whether the commit references (such as commit and/or PR links) should be included in the changelog. Defaults to `true`.

#### `versionTitleDate`

Whether to include the date in the version title. It can be set to `false` to disable it, or `true` to enable it with the default format of `YYYY-MM-DD`. Defaults to `true`.

## Remove all metadata

If you prefer a more minimalist changelog, you can set all the options to false, like this:

```json
// nx.json
{
  "release": {
    "changelog": {
      "projectChangelogs": {
        "renderOptions": {
          "authors": false,
          "applyUsernameToAuthors": false,
          "commitReferences": false,
          "versionTitleDate": false
        }
      }
    }
  }
}
```

Which will generate a changelog that looks similar to the following:

```md
## 7.9.0

### 🚀 Features

- **rule-tester:** check for missing placeholder data in the message
```

## Custom changelog renderer {% badge text="Nx 22+" /%}

For complete control over changelog formatting, you can provide a custom changelog renderer implementation. The renderer can be specified as a path to a module that exports a class extending the base `ChangelogRenderer`.
```jsonc
{
  "release": {
    "changelog": {
      "projectChangelogs": {
        "renderer": "./tools/custom-changelog-renderer.ts",
      },
    },
  },
}
```

Your custom renderer must extend the `ChangelogRenderer` class and implement the required methods:

```typescript
// tools/custom-changelog-renderer.ts
import { ChangelogRenderer, ChangelogRenderOptions } from '@nx/js/release';

export default class CustomChangelogRenderer extends ChangelogRenderer {
  async renderMarkdown(
    changes: any[],
    options: ChangelogRenderOptions
  ): Promise<string> {
    // Your custom changelog generation logic
    return '# My Custom Changelog\n\n...';
  }
}
```

{% aside type="note" title="Programmatic API" %}
When using the programmatic `ReleaseClient` API (Nx 22+), you can also pass the renderer implementation class directly instead of a file path, allowing for dynamic renderer selection without file system access.
{% /aside %}

---

## Configure Custom Registries

To publish JavaScript packages, Nx Release uses the `npm` CLI under the hood, which defaults to publishing to the `npm` registry (`https://registry.npmjs.org/`). If you need to publish to a different registry, you can configure the registry in the `.npmrc` file in the root of your workspace or at the project level in the project configuration.

## Set the registry in the root .npmrc file

The easiest way to configure a custom registry is to set it in the `npm` configuration via the root `.npmrc` file. This file is located in the root of your workspace, and Nx Release will use it for publishing all projects.
To set the registry, add the `registry` property to your root `.npmrc` file:

```bash
// .npmrc
registry=https://my-custom-registry.com/
```

### Authenticate to the Registry in CI

To authenticate with a custom registry in CI, you can add authentication tokens to the `.npmrc` file:

```bash
// .npmrc
registry=https://my-custom-registry.com/
//my-custom-registry.com/:_authToken=<token>
```

See the [npm documentation](https://docs.npmjs.com/cli/v10/configuring-npm/npmrc#auth-related-configuration) for more information.

## Configure multiple registries

The recommended way to determine which registry packages are published to is by using [npm scopes](https://docs.npmjs.com/cli/v10/using-npm/scope). All packages with a name that starts with your scope will be published to the registry specified in the `.npmrc` file for that scope. Consider the following example:

```bash
// .npmrc
@my-scope:registry=https://my-custom-registry.com/
//my-custom-registry.com/:_authToken=<token>

@other-scope:registry=https://my-other-registry.com/
//my-other-registry.com/:_authToken=<token>

registry=https://my-default-registry.com/
//my-default-registry.com/:_authToken=<token>
```

With the above `.npmrc`, the following packages would be published to the specified registries:

- `@my-scope/pkg-1` -> `https://my-custom-registry.com/`
- `@other-scope/pkg-2` -> `https://my-other-registry.com/`
- `pkg-3` -> `https://my-default-registry.com/`

## Specify an alternate registry for a single package

In some cases, you may want to configure the registry on a per-package basis instead of by scope. This can be done by setting options in the project's configuration.

{% aside type="note" title="Authentication" %}
All registries set for specific packages must still have authentication tokens set in the root `.npmrc` file for publishing in CI. See [Authenticate to the Registry in CI](#authenticate-to-the-registry-in-ci) for an example.
{% /aside %}

### Set the registry in the project configuration

The project configuration for Nx Release is in two parts - one for the version step and one for the publish step.

#### Update the version step

The version step of Nx Release is responsible for determining the new version of the package. If you have set the `version.currentVersionResolver` to 'registry', then Nx Release will check the remote registry for the current version of the package.

**Note:** If you do not use the 'registry' current version resolver, then this step is not needed.

To set custom registry options for the current version lookup, add the registry and/or tag to the `currentVersionResolverMetadata` in the project configuration:

```json
// project.json
{
  "name": "pkg-5",
  "sourceRoot": "...",
  "targets": { ... },
  "release": {
    "version": {
      "currentVersionResolverMetadata": {
        "registry": "https://my-unique-registry.com/",
        "tag": "next"
      }
    }
  }
}
```

#### Update the publish step

The publish step of Nx Release is responsible for publishing the package to the registry. To set custom registry options for publishing, you can add the `registry` and/or `tag` options for the `nx-release-publish` target in the project configuration:

```json
// project.json
{
  "name": "pkg-5",
  "sourceRoot": "...",
  "targets": {
    ...,
    "nx-release-publish": {
      "options": {
        "registry": "https://my-unique-registry.com/",
        "tag": "next"
      }
    }
  }
}
```

### Set the registry in the package manifest

It is not recommended to set the registry for a package in the `publishConfig` property of its `package.json` file. `npm publish` will always prefer the registry from the `publishConfig` over the `--registry` argument. Because of this, the `--registry` CLI and programmatic API options of Nx Release will no longer be able to override the registry for purposes such as publishing locally for end to end testing.
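The scope-to-registry routing described above can be modeled in a few lines. This is a simplified illustration only — the real resolution is performed by the npm CLI against your `.npmrc` — with registry URLs mirroring the example above:

```typescript
// Simplified model of how npm routes a package to a registry based on its scope.
// The mappings mirror the example .npmrc above; this is illustrative, not npm's code.
const registries: Record<string, string> = {
  '@my-scope': 'https://my-custom-registry.com/',
  '@other-scope': 'https://my-other-registry.com/',
};
const defaultRegistry = 'https://my-default-registry.com/';

function resolveRegistry(packageName: string): string {
  // Scoped package names look like "@scope/name"; unscoped names have no "@"
  const scope = packageName.startsWith('@') ? packageName.split('/')[0] : null;
  if (scope && registries[scope]) {
    return registries[scope];
  }
  // Unscoped packages (and unknown scopes) fall back to the default registry
  return defaultRegistry;
}

console.log(resolveRegistry('@my-scope/pkg-1')); // https://my-custom-registry.com/
console.log(resolveRegistry('pkg-3')); // https://my-default-registry.com/
```

Note that npm applies the scope mapping at publish and install time; nothing in the package itself records which registry it came from.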
---

## Customize Conventional Commit Types

[Nx release](/docs/features/manage-releases) allows you to leverage the [conventional commits](/docs/guides/nx-release/automatically-version-with-conventional-commits) standard to automatically determine the next version increment. By default, this results in:

- `feat(...)` triggering a minor version bump (`1.?.0`)
- `fix(...)` triggering a patch version bump (`1.?.x`)
- `BREAKING CHANGE` in the footer of the commit message, or an exclamation mark after the commit type (`fix(...)!`), triggering a major version bump (`?.0.0`)

{% aside type="note" title="No changes detected" %}
If Nx Release does not find any relevant commits since the last release, it will skip releasing a new version. This works with [independent releases](/docs/guides/nx-release/release-projects-independently) as well, allowing for only some projects to be released while others are skipped.
{% /aside %}

However, you can customize how Nx interprets these conventional commits, for both **versioning** and **changelog** generation.

## Disable a commit type for versioning and changelog generation

To disable a commit type, set it to `false`.

```json
// nx.json
{
  "release": {
    "conventionalCommits": {
      "types": {
        // disable the docs type for versioning and in the changelog
        "docs": false,
        ...
      }
    }
  }
}
```

If you just want to disable a commit type for versioning, but still want it to appear in the changelog, set `semverBump` to `none`.

```json
// nx.json
{
  "release": {
    "conventionalCommits": {
      "types": {
        // disable the docs type for versioning, but still include it in the changelog
        "docs": {
          "semverBump": "none",
          ...
        },
        ...
      }
    }
  }
}
```

## Changing the type of semver version bump

Assume you'd like `docs(...)` commit types to cause a `patch` version bump. You can define that as follows:

```json
// nx.json
{
  "release": {
    "conventionalCommits": {
      "types": {
        "docs": {
          "semverBump": "patch",
          ...
        },
      }
    }
  }
}
```

## Renaming the changelog section for a commit type

To rename the changelog section for a commit type, set the `title` property.

```json
// nx.json
{
  "release": {
    "conventionalCommits": {
      "types": {
        ...
        "docs": {
          ...
          "changelog": {
            "title": "Documentation Changes"
          }
        },
        ...
      }
    }
  }
}
```

## Hiding a commit type from the changelog

To hide a commit type from the changelog, set `changelog` to `false`.

```json
// nx.json
{
  "release": {
    "conventionalCommits": {
      "types": {
        ...
        "chore": {
          "changelog": false
        },
        ...
      }
    }
  }
}
```

Alternatively, you can set `hidden` to `true` to achieve the same result.

```json
// nx.json
{
  "release": {
    "conventionalCommits": {
      "types": {
        ...
        "chore": {
          "changelog": {
            "hidden": true
          }
        },
        ...
      }
    }
  }
}
```

## Defining non-standard commit types

If you want to use custom, non-standard conventional commit types, you can define them in the `types` object. If you don't specify a `semverBump`, Nx will default to `patch`.

```json
// nx.json
{
  "release": {
    "conventionalCommits": {
      "types": {
        "awesome": {}
      }
    }
  }
}
```

## Including invalid commits in the changelog

Nx Release ignores all commits that do not conform to the [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/#summary) standard by default. A special `__INVALID__` type is available in situations where you want to process invalid messages.

{% aside type="caution" title="Invalid != Unmatched" %}
This type will only include **invalid** commits, _e.g. those that do not follow the `<type>: ...` format._ Commits that are otherwise valid, but with a type that is not enabled, will not be matched by this group.
{% /aside %}

This can be useful in cases where you have not managed to be consistent with your use of the Conventional Commits standard (e.g. when applying it retroactively to an existing codebase) but still want a changelog to be generated with the contents of each commit message and/or for invalid commits to still affect project versioning.
{% aside type="note" title="Alternative to Conventional Commits" %}
If you cannot adhere to the Conventional Commits standard for your commits, file based versioning via Nx Release Version Plans could be a good alternative for managing your releases. See our docs on [File Based Versioning](/docs/guides/nx-release/file-based-versioning-version-plans) for more information.
{% /aside %}

```json
// nx.json
{
  "release": {
    "conventionalCommits": {
      "types": {
        "__INVALID__": {
          "semverBump": "patch", // Note: the default is "none"
          "changelog": {
            "title": "Uncategorized changes"
          }
        }
      }
    }
  }
}
```

---

## File Based Versioning (Version Plans)

Tools such as Changesets and Beachball helped popularize the concept of tracking the desired semver version bump in a separate file on disk (which is committed to your repository alongside your code changes). This has the advantage of separating the desired bump from your git commits themselves, which can be very useful if you are not able to enforce that all contributors follow a strict commit message format (e.g. [Conventional Commits](/docs/guides/nx-release/automatically-version-with-conventional-commits)), or if you want multiple commits to be included in the same version bump and therefore not map commits 1:1 with changelog entries.

Nx release supports file based versioning as a first class use-case through a feature called "version plans". The idea behind the name is that you are creating a _plan_ to version; a plan which will be _applied_ sometime in the future when you actually invoke the `nx release` CLI or programmatic API. You can therefore think about version plans as having two main processes:

- creating version plans and
- applying version plans.

Both are covered in this recipe, but first we need to enable the feature itself.
## Enable version plans

To enable version plans as a feature in your workspace, set `release.versionPlans` to `true` in `nx.json`:

```jsonc
// nx.json
{
  "release": {
    "versionPlans": true,
    // other release config such as projects to include in releases etc
    // "projects": ["packages/**/*"]
    // ...
  },
}
```

You can also enable or disable this for specific release groups by setting the property at the group level if you don't want to apply it to all matching projects in your workspace.

## Create version plans

Version plan files live in the `.nx/version-plans/` directory within your workspace (which needs to be tracked by git, so ensure that you are not ignoring the whole `.nx` directory, and instead only the `.nx/workspace-data` and `.nx/cache` directories). The files themselves are written in markdown (`.md` files) and contain Front Matter YAML metadata at the top of the file. The Front Matter YAML section is denoted via triple dashes `---` at the start and end of the section. For example:

```md
// .nx/version-plans/version-plan-1723732065047.md
---
# FRONT MATTER YAML HERE
---

# Regular markdown here
```

We leverage the Front Matter YAML section to store a mapping of project or release group names to desired semver bump types. The general markdown section represents the description of the change(s) made that will be used in any relevant `CHANGELOG.md` files that are generated later at release time.

For example, the following Front Matter YAML section specifies that the `my-app` project should have a `minor` version bump and describes the changes (again, note that there are no constraints on the format of the description, it can contain multiple lines, paragraphs etc):

```md
// .nx/version-plans/version-plan-1723732065047.md
---
my-app: minor
---

This is an awesome change!

A new paragraph describing the change in greater detail. All of this will be included in the CHANGELOG.md.

All of this structure within the markdown section is optional and flexible.
```

Any number of different projects and different desired semver bumps can be combined within a single version plan file (which represents one change and therefore changelog entry, if applicable). For example:

```md
// .nx/version-plans/version-plan-1723732065047.md
---
my-app: minor
my-lib: patch
release-group-a: major
---

One change that affects multiple projects and release groups.
```

The project or release group names specified in the Front Matter YAML section must match the names of the projects and/or release groups in your workspace. If a project or release group is not found, an error will be thrown when applying the version plan as part of running `nx release`.

{% aside type="note" title="Single Version for All Packages" %} If you use a single version for all your packages (see [Release projects independently](/docs/guides/nx-release/release-projects-independently)) your version plan file might look like this:

```md
// .nx/version-plans/version-plan-1723732065047.md
---
__default__: minor
---

This is an awesome change!
```

While you could still specify the name of the project, it is redundant in this case because all projects will be bumped to the same version. {% /aside %}

Because these are just files, they can be created manually or by any custom scripts you may wish to write. They simply have to follow the guidance above around structure, location (`./.nx/version-plans/`) and naming (`.md` extension). The exact file name does not matter, it just needs to be unique within the `.nx/version-plans/` directory.

To make things easier, Nx release comes with a built-in command to help you generate valid version plan files. See the [`nx release plan`](/docs/reference/nx-commands#nx-release-plan) CLI reference for all available options.

```shell
nx release plan
```

When you run this command you will receive a series of interactive prompts which guide you through the process of creating a version plan file.
It will generate a unique name for you and ensure it is written to the correct location.

## Apply version plans at release time

Using version plans does not change how versioning, changelog generation and publishing are invoked; you can still use the `nx release` CLI or programmatic API as you would for any other versioning strategy. The only difference is that Nx release will know to reference your version plan files as the source of truth for the desired version bumps. You still retain the same control around resolving the current version (disk vs registry vs git tags) however you want, and other configuration options around things like git operations are all still applicable.

When you run `nx release` or use the programmatic API, Nx will look for version plan files in the `.nx/version-plans/` directory and apply the desired version bumps to the projects and release groups specified in the Front Matter YAML section of each file. If a project or release group is not found, an error will be thrown and the release will not proceed.

Once a particular version plan has been applied it will be deleted from the `.nx/version-plans/` directory so that it does not inadvertently get applied again in the future. The deleted file will be staged and committed alongside your other changed files that were modified directly as part of the release command (depending on your Nx release configuration).

## Ensure that version plans exist for relevant changes

When making changes to your codebase and using version plans as your versioning strategy, it is likely that you will want to ensure that a version plan file exists for the changes you are making. Attempting to keep track of this manually as part of pull request reviews can be error-prone and time-consuming, so Nx release provides a `nx release plan:check` command which can be used to ensure that a version plan file exists for the changes you are making.
```shell nx release plan:check ``` Running this command will analyze the changed files (supporting the same options you may be familiar with from `nx affected`, such as `--base`, `--head`, `--files`, `--uncommitted`, etc) and then determine which projects have been "touched" as a result. Note that it is specifically touched projects, and not affected in this case, because only directly changed projects are relevant for versioning. The side-effects of versioning independently released dependents are handled by the release process itself (controllable via the `version.updateDependents` option). ### Running release plan:check in CI As mentioned, `nx release plan:check` supports the same options as `nx affected` for determining the range of commits to consider. Therefore, in CI, you must also ensure that the base and head are set appropriately just like you would for `nx affected`. For GitHub Actions, we provide a utility action to do this for you: ```yaml # ...other steps - uses: nrwl/nx-set-shas@v4 # ...other steps including the use of `nx release plan:check` ``` For CircleCI, you can reference our custom orb as a step: ```yaml # ...other steps - nx/set-shas # ...other steps including the use of `nx release plan:check` ``` You can read more about these utilities and why they are needed on their respective READMEs: - https://github.com/nrwl/nx-set-shas?tab=readme-ov-file#background - https://github.com/nrwl/nx-orb#background Nx release will compare the touched projects to the projects and release groups that are specified in the version plan files in the `.nx/version-plans/` directory. If a version plan file does not exist, the command will print an error message and return a non-zero exit code, which can be used to fail CI builds or other automation. By default, all files that have changed are considered, but you may not want all files under a project to require a version plan be created for them. 
For example, you may wish to ignore test-only files from consideration in this check. You can achieve this by setting `versionPlans` to a configuration object instead of a boolean, and setting the `ignorePatternsForPlanCheck` property to an array of glob patterns that should be ignored when checking for version plans. For example:

```jsonc
{
  "release": {
    "versionPlans": {
      "ignorePatternsForPlanCheck": ["**/*.spec.ts"],
    },
  },
}
```

{% aside type="caution" title="Important: Pattern Syntax" %} The `ignorePatternsForPlanCheck` patterns follow [gitignore semantics](https://git-scm.com/docs/gitignore). When using negation patterns (patterns starting with `!`) to "un-ignore" certain files, be aware that:

**Working patterns:**

- `["**/*.spec.ts"]` - ignore all spec files
- `["**/*.ts", "!**/src/**"]` - ignore all .ts files except those in src/ directories

**Non-working patterns:**

- `["*", "!src/"]` - this does NOT work as expected because `*` in gitignore matches at the root level and negation patterns cannot properly un-ignore nested paths

If you want to ignore all files except those in specific directories, use file extension patterns instead of wildcards:

```jsonc
{
  "release": {
    "versionPlans": {
      "ignorePatternsForPlanCheck": ["**/*.ts", "**/*.json", "!**/src/**"],
    },
  },
}
```

{% /aside %}

To see more details about the changed files that were detected and the filtering logic that was used to determine the ultimately changed projects behind the scenes, you can pass `--verbose` to the command:

```shell
nx release plan:check --verbose
```

---

## Programmatic API

{% aside type="note" title="Learn about Nx Release" %} Be sure to read our introduction to [Nx Release](/docs/features/manage-releases) to understand the basics of how Nx Release works and the different phases of a release before moving on to the programmatic API.
{% /aside %} A powerful feature of Nx Release is the fact that it is designed to be used via a Node.js programmatic API in addition to the [`nx release`](/docs/reference/nx-commands#nx-release) CLI. Releases are a hugely complex and nuanced process, filled with many special cases and idiosyncratic preferences, and it is impossible for a CLI to be able to support all of them out of the box. By having a first-class programmatic API, you can go beyond the CLI and create custom release workflows that are highly dynamic and tailored to your specific needs. Just as with the CLI, the programmatic API is broken up into the distinct phases of a release: versioning, changelog generation, and publishing. These are available via the `releaseChangelog`, `releasePublish`, and `releaseVersion` functions, which are importable from the `nx/release` entrypoint. These functions are the exact functions used behind the scenes by the CLI, and so they will read from the "release" config in `nx.json` in just the same way. If you need even more fine grained control over configuration via the programmatic API, see the section on using the [`ReleaseClient` class](#using-the-releaseclient-class) below. ## Using the programmatic API ```ts import { releaseChangelog, releasePublish, releaseVersion } from 'nx/release'; ``` The functions are all asynchronous and return data relevant to their specific phase of the release process. They all support `dryRun` and `verbose` boolean options. For `releaseVersion` and `releaseChangelog`, `dryRun` prevents changes from being applied. For `releasePublish`, `dryRun` is forwarded to the underlying executor (see the [releasePublish section](#releasepublish) for important details). You can inspect their types to see what each one supports in terms of additional config options. 
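Because all three functions accept overlapping options such as `dryRun` and `verbose` (plus project filters), a common pattern is to build one shared options object and spread it into every call so the phases stay consistent. The sketch below uses synchronous stand-ins of our own invention rather than the real `nx/release` imports, purely to show the shape of that pattern:

```typescript
// Stand-in type mirroring the overlapping options the three phase functions share
type CommonOptions = { dryRun: boolean; verbose: boolean; projects?: string[] };

// Hypothetical stub standing in for releaseVersion / releaseChangelog / releasePublish
function runPhase(name: string, options: CommonOptions): string {
  // A real phase would perform versioning, changelog generation, or publishing
  return `${name} ran with dryRun=${options.dryRun}`;
}

// One shared object keeps dryRun/verbose/project filters identical across phases
const common: CommonOptions = { dryRun: true, verbose: false };

const results = ['version', 'changelog', 'publish'].map((phase) =>
  runPhase(phase, { ...common })
);
console.log(results.join('\n'));
```

The same spreading technique applies directly to the real functions, and is especially important for the project/group filters discussed below.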
{% aside type="caution" title="Project or Group Filtering" %} If you apply project or group filtering to the programmatic API via the `projects` or `groups` options that each function supports, be sure to pass the same filters to all three functions. {% /aside %} ### releaseVersion `releaseVersion` will return a `NxReleaseVersionResult` object containing the following properties: - `workspaceVersion`: The new overall version of the workspace. This is only applicable in cases where all projects are versioned together in a single release group. In all other cases, this will be `null`. - `projectsVersionData`: A map of project names to their version data. The version data is a `VersionData` object, which contains the following properties: - `currentVersion`: The current version of the project. This is the version that the project was at before the versioning process began. - `newVersion`: The new version of the project. - `dockerVersion`: The new version of the project if it is a docker project. - `dependentProjects`: A list of projects that depend on the current project. - `releaseGraph`: The release graph that was generated for the nx release config and workspace data. This can be passed to subsequent operations (changelog, publish) to avoid recomputing and improve performance. ```ts const { workspaceVersion, projectsVersionData, releaseGraph } = await releaseVersion({ // E.g. if releasing a specific known version // otherwise if using e.g. conventional-commits this is not needed specifier: '1.0.0', dryRun: true, verbose: true, // ... other options }); console.log(workspaceVersion); console.log(projectsVersionData); console.log(releaseGraph); ``` ### releaseChangelog `releaseChangelog` will return a `NxReleaseChangelogResult` object containing the following properties: - `workspaceChangelog`: The changelog data for the workspace, if applicable based on the nx release config. - `releaseVersion`: Relevant version data for the new changelog entry. 
- `contents`: The changelog entry contents. - `projectChangelogs`: A map of project names to their changelog data, if applicable based on the nx release config. - `releaseVersion`: Relevant version data for the new changelog entry. - `contents`: The changelog entry contents. ```ts const { workspaceChangelog, projectChangelogs } = await releaseChangelog({ // Re-use the existing release graph from the releaseVersion // call (if applicable) to avoid recomputing in each subcommand releaseGraph, // NOTE: One of either version or versionData must be provided versionData: projectsVersionData, // Pass the detailed project version data from the releaseVersion call version: workspaceVersion, // Pass the new workspace version from the releaseVersion call dryRun: true, verbose: true, // ... other options }); ``` ### releasePublish `releasePublish` will return a `PublishProjectsResult` object, which is a map of project names to their publish result which is a simple object with a `code` property representing the exit code of the publish operation for the project. {% aside type="caution" title="dryRun Behavior" %} Unlike `releaseVersion` and `releaseChangelog`, the `dryRun` option for `releasePublish` does **not** prevent the underlying commands from being executed. Instead, the `dryRun` flag is forwarded to the `nx-release-publish` executor as an option, and the `NX_DRY_RUN` environment variable is set to `'true'`. The built-in `@nx/js:release-publish` executor handles this correctly and will skip actual publishing when `dryRun` is `true`. However, **custom `nx-release-publish` executors must implement `dryRun` support themselves** by checking either the `dryRun` option or the `NX_DRY_RUN` environment variable. {% /aside %} ```ts const publishResults = await releasePublish({ // Re-use the existing release graph from the releaseVersion // call (if applicable) to avoid recomputing in each subcommand releaseGraph, dryRun: true, verbose: true, // ... 
other options
});
```

You can optionally pass through the version data (e.g. if you are using a custom publish executor that needs to be aware of versions). It will then be provided to the publish executor options as `nxReleaseVersionData` and can be accessed in the publish executor options like any other option.

```ts
const publishResults = await releasePublish({
  releaseGraph,
  versionData: projectsVersionData,
  dryRun: true,
  verbose: true,
});
```

NOTE: Passing `versionData` to `releasePublish` is not required for the default @nx/js publish executor.

It is recommended to use the `publishResults` to determine the overall success or failure of the release process, for example:

```ts
process.exit(
  Object.values(publishResults).every((result) => result.code === 0) ? 0 : 1
);
```

## Example release script

How you compose these functions in your release script is of course entirely up to you, and you may even want to break them up into multiple files depending on your use-case. The below is purely an example of how you might compose them together into one holistic release script which uses `yargs` to parse script options from the command line (nx release does not require `yargs`; it is simply a common choice for this use-case, and you can parse arguments however you wish).
```ts // scripts/release.ts import { releaseChangelog, releasePublish, releaseVersion } from 'nx/release'; import * as yargs from 'yargs'; (async () => { const options = await yargs .version(false) // don't use the default meaning of version in yargs .option('version', { description: 'Explicit version specifier to use, if overriding conventional commits', type: 'string', }) .option('dryRun', { alias: 'd', description: 'Whether or not to perform a dry-run of the release process, defaults to true', type: 'boolean', default: true, }) .option('verbose', { description: 'Whether or not to enable verbose logging, defaults to false', type: 'boolean', default: false, }) .parseAsync(); const { workspaceVersion, projectsVersionData, releaseGraph } = await releaseVersion({ specifier: options.version, dryRun: options.dryRun, verbose: options.verbose, }); await releaseChangelog({ releaseGraph, // Re-use the existing release graph to avoid recomputing in each subcommand versionData: projectsVersionData, version: workspaceVersion, dryRun: options.dryRun, verbose: options.verbose, }); // publishResults contains a map of project names and their exit codes const publishResults = await releasePublish({ releaseGraph, // Re-use the existing release graph to avoid recomputing in each subcommand dryRun: options.dryRun, verbose: options.verbose, }); process.exit( Object.values(publishResults).every((result) => result.code === 0) ? 0 : 1 ); })(); ``` To perform a dry-run of version `1.0.0`, you would therefore run the script like so: ```sh npx tsx scripts/release.ts --version 1.0.0 ``` (Or by using `ts-node` or any other tool you prefer to run TS scripts.) ## Using the `ReleaseClient` class The standalone functions that we covered in the previous sections are actually just bound methods of the `ReleaseClient` class that has been pre-instantiated for you. 
For an extra layer of control, you can import the `ReleaseClient` directly instead of the standalone functions and use your instance's methods for versioning, changelog generation, and publishing. The reason you might want to do this is configuration. The `ReleaseClient` constructor allows you to either override release configuration found in `nx.json`, or completely replace it. ```ts import { ReleaseClient } from 'nx/release'; const releaseClientWithMergedConfig = new ReleaseClient( { projects: ['project-1', 'project-2'], // ... more nx release config options }, false // Do NOT ignore nx.json config, merge whatever was given in the first parameter with it ); const releaseClientWithIsolatedConfig = new ReleaseClient( { projects: ['project-1', 'project-2'], // ... more nx release config options }, true // Ignore nx.json config, only the configuration given in the first parameter will be used ); ``` Ignoring the Nx Release configurations in `nx.json` can be useful for cases where you have a large, complex workspace and your script only needs to focus on a specific subset in a granular way. One such example would be a script that only focuses on changelog generation for a specific project or projects and does not want to invoke any versioning or publishing logic. ### Using the `ReleaseClient` Once you have the instantiated `ReleaseClient` class, the usage pattern is the same as with the standalone functions. ```ts import { ReleaseClient } from 'nx/release'; const releaseClient = new ReleaseClient({}); const { workspaceVersion, projectsVersionData, releaseGraph } = await releaseClient.releaseVersion({ // ... options }); const { workspaceChangelog, projectChangelogs } = await releaseClient.releaseChangelog({ releaseGraph, // ... other options }); const publishResults = await releaseClient.releasePublish({ releaseGraph, // ... 
other options }); ``` --- ## Publish in CI/CD Nx Release makes it easy to move your publishing process into your CI/CD pipeline across different package ecosystems. ## General concepts ### Automatically skip publishing locally When running `nx release`, after the version updates and changelog generation, you will be prompted with the following question: ```text {% frame="terminal" title="nx release" %} ... ? Do you want to publish these versions? (y/N) › ``` To move publishing into an automated pipeline, you will want to skip publishing when running [`nx release`](/docs/reference/nx-commands#nx-release) locally. To do this automatically, use the `--skip-publish` flag: ```text {% frame="terminal" title="nx release --skip-publish" %} ... Skipped publishing packages. ``` ### Use the publish subcommand Nx Release provides a publishing subcommand ([`nx release publish`](/docs/reference/nx-commands#nx-release-publish)) that performs just the publishing step. Use this in your CI/CD pipeline to publish the packages. ```text {% frame="terminal" title="nx release publish" %} NX Running target nx-release-publish for 3 projects: - pkg-1 - pkg-2 - pkg-3 ... 
``` ## Publishing npm packages ### Example npm publish output ```text {% frame="terminal" title="nx release publish" %} NX Running target nx-release-publish for 3 projects: - pkg-1 - pkg-2 - pkg-3 ————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————— > nx run pkg-1:nx-release-publish 📦 @myorg/pkg-1@0.0.2 === Tarball Contents === 233B README.md 277B package.json 53B src/index.ts 61B src/lib/pkg-1.ts === Tarball Details === name: @myorg/pkg-1 version: 0.0.2 filename: testorg-pkg-1-0.0.2.tgz package size: 531 B unpacked size: 624 B shasum: {shasum} integrity: {integrity} total files: 12 Published to https://registry.npmjs.org with tag "latest" > nx run pkg-2:nx-release-publish 📦 @myorg/pkg-2@0.0.2 === Tarball Contents === 233B README.md 277B package.json 53B src/index.ts 61B src/lib/pkg-2.ts === Tarball Details === name: @myorg/pkg-2 version: 0.0.2 filename: testorg-pkg-2-0.0.2.tgz package size: 531 B unpacked size: 624 B shasum: {shasum} integrity: {integrity} total files: 12 Published to https://registry.npmjs.org with tag "latest" > nx run pkg-3:nx-release-publish 📦 @myorg/pkg-3@0.0.2 === Tarball Contents === 233B README.md 277B package.json 53B src/index.ts 61B src/lib/pkg-3.ts === Tarball Details === name: @myorg/pkg-3 version: 0.0.2 filename: testorg-pkg-3-0.0.2.tgz package size: 531 B unpacked size: 624 B shasum: {shasum} integrity: {integrity} total files: 12 Published to https://registry.npmjs.org with tag "latest" ————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————— NX Successfully ran target nx-release-publish for 3 projects ``` ### npm publishing in GitHub actions A common way to automate publishing NPM packages is via GitHub Actions. 
An example of a publish workflow is as follows: ```yaml // ./.github/workflows/publish.yml name: Publish on: push: tags: - v*.*.* jobs: test: name: Publish runs-on: ubuntu-latest permissions: contents: read id-token: write # needed for provenance data generation timeout-minutes: 10 steps: - name: Checkout repository uses: actions/checkout@v4 with: fetch-depth: 0 filter: tree:0 - name: Install Node uses: actions/setup-node@v4 with: node-version: 20 registry-url: https://registry.npmjs.org/ - name: Install dependencies run: npm install shell: bash - name: Print Environment Info run: npx nx report shell: bash - name: Publish packages run: npx nx release publish shell: bash env: NODE_AUTH_TOKEN: ${{ secrets.NPM_ACCESS_TOKEN }} NPM_CONFIG_PROVENANCE: true ``` This workflow will install node, install npm dependencies, then run `nx release publish` to publish the packages. It will run on every push to the repository that creates a tag that matches the pattern `v*.*.*`. A release process using this workflow is as follows: 1. Run `nx release --skip-publish` locally. This will create a commit with the version and changelog updates, then create a tag for the new version. 2. Push the changes (including the new tag) to the remote repository with `git push && git push --tags`. 3. The publish workflow will automatically trigger and publish the packages to the npm registry. {% aside type="note" title="This template is designed for fixed versioning" %} The example workflow above triggers on a single tag pattern (`v*.*.*`), which works best when all packages share the same version (**fixed versioning strategy**). With fixed versioning, `nx release` creates one tag per release. If you're using **independent versioning** (where each project has its own version), see the [Considerations for Independent Versioning](#considerations-for-independent-versioning) section below. 
{% /aside %} ### Considerations for independent versioning When using independent versioning, `nx release` creates a separate tag for each project being released (e.g., `pkg-1@1.0.0`, `pkg-2@2.1.0`). This introduces some challenges with tag-triggered workflows: #### GitHub tag event limitation GitHub Actions has an important limitation: **workflows triggered by the `push` event will not run if more than 3 tags are created at once**. This means if you run `nx release` and it creates tags for 4 or more projects, the publish workflow will not be triggered. This limitation is documented in [GitHub workflow events documentation](https://docs.github.com/en/actions/reference/events-that-trigger-workflows#create). #### Alternative approaches for independent versioning There are several ways to work around this limitation: **Option 1: Use `workflow_dispatch` with Manual Trigger** Instead of triggering on tags, use a manual workflow dispatch after pushing your release: ```yaml // .github/workflows/publish.yml name: Publish on: workflow_dispatch: inputs: dry-run: description: 'Run in dry-run mode (no actual publishing)' required: false default: 'false' type: boolean jobs: publish: name: Publish runs-on: ubuntu-latest permissions: contents: read id-token: write steps: - name: Checkout repository uses: actions/checkout@v4 with: fetch-depth: 0 - name: Install Node uses: actions/setup-node@v4 with: node-version: 20 registry-url: https://registry.npmjs.org/ - name: Install dependencies run: npm install - name: Publish packages run: npx nx release publish env: NODE_AUTH_TOKEN: ${{ secrets.NPM_ACCESS_TOKEN }} NPM_CONFIG_PROVENANCE: true ``` With this approach: 1. Run `nx release --skip-publish` locally and push the commits and tags. 2. Manually trigger the workflow from the GitHub Actions UI after the push completes. 
**Option 2: Trigger on Push to Main Branch**

Trigger the workflow when the release commit is pushed to your main branch:

```yaml
on:
  push:
    branches:
      - main
    paths:
      - '**/package.json'
```

This approach requires additional logic in your workflow to determine if a release was just made (e.g., by checking the commit message or comparing versions).

**Option 3: Push Tags in Batches**

If you prefer tag-triggered workflows, push tags in smaller batches (3 or fewer at a time) to ensure each push triggers the workflow. This can be automated with a script that pushes tags individually or in small groups.

### Configure the NODE_AUTH_TOKEN

The `NODE_AUTH_TOKEN` environment variable is needed to authenticate with the npm registry. In the above workflow, it is passed into the Publish packages step via a [GitHub Secret](https://docs.github.com/en/actions/reference/encrypted-secrets).

#### Generate a NODE_AUTH_TOKEN for npm

To generate the correct `NODE_AUTH_TOKEN` for the npmjs.com registry specifically, first log in to [https://www.npmjs.com/](https://www.npmjs.com/). Select your profile icon, then navigate to "Access Tokens". Generate a new Granular Access Token. Ensure that the token has read and write access to both the packages you are publishing and their organization (if applicable). Copy the generated token and add it as a secret to your GitHub repository.

#### Add the NODE_AUTH_TOKEN to GitHub secrets

To add the token as a secret to your GitHub repository, navigate to your repository, then select "Settings" > "Secrets and Variables" > "Actions". Add a new Repository Secret with the name `NPM_ACCESS_TOKEN` and the value of the token you generated in the previous step.
Note: The `NPM_ACCESS_TOKEN` name is not important other than that it matches the usage in the workflow: ```yaml - name: Publish packages run: npx nx release publish shell: bash env: NODE_AUTH_TOKEN: ${{ secrets.NPM_ACCESS_TOKEN }} NPM_CONFIG_PROVENANCE: true ``` ### npm provenance To verify your packages with [npm provenance](https://docs.npmjs.com/generating-provenance-statements), set the `NPM_CONFIG_PROVENANCE` environment variable to `true` in the step where `nx release publish` is performed. The workflow will also need the `id-token: write` permission to generate the provenance data: ```yaml jobs: test: name: Publish runs-on: ubuntu-latest permissions: contents: read id-token: write # needed for provenance data generation ``` ```yaml - name: Publish packages run: npx nx release publish shell: bash env: NODE_AUTH_TOKEN: ${{ secrets.NPM_ACCESS_TOKEN }} NPM_CONFIG_PROVENANCE: true ``` ## Publishing Docker Images {% badge variant="caution" text="experimental" /%} Docker support in Nx is currently experimental and may undergo breaking changes without following semantic versioning. {% aside type="note" title="Nx Cloud Agents Compatibility" %} Docker operations in `nx release` are currently supported in standard CI/CD environments like GitHub Actions, GitLab CI, and Jenkins. For Nx Cloud Agents compatibility, please contact [Nx Enterprise support](https://nx.dev/contact/sales) to explore available options for your team. {% /aside %} When using Nx Release with Docker images, the publishing process differs from npm packages. Docker images are built with the `npx nx run-many -t docker:build` command, which is the default for [`preVersionCommand`](/docs/guides/nx-release/build-before-versioning#build-before-docker-versioning) in `nx.json`. You may also run the build command manually before running `nx release`. After the images are built, they are tagged during the versioning phase, then pushed to a registry during the publish phase. 
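The docker versions applied during the versioning phase are calver-style tags such as `2501.01.be49ad6` (two-digit year and month, then day, then short commit SHA). Purely as an illustration of that format, and not Nx's actual implementation, such a tag could be composed like this (the SHA is hard-coded here; a real script would read it from git):

```shell
# Illustration only: compose a calver-style tag in the YYMM.DD.<short sha> shape
short_sha="be49ad6"   # in a real repo: git rev-parse --short HEAD
version="$(date -u +%y%m).$(date -u +%d).${short_sha}"
echo "$version"
```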
### Docker registry authentication

Before publishing Docker images, ensure you're authenticated with your Docker registry:

```yaml
// .github/workflows/publish.yml
- name: Login to Docker Hub
  uses: docker/login-action@v2
  with:
    username: ${{ secrets.DOCKER_USERNAME }}
    password: ${{ secrets.DOCKER_TOKEN }}
- name: Build and tag Docker images
  run: npx nx release version --dockerVersionScheme=production
- name: Publish Docker images
  run: npx nx release publish
```

See the [`nx release version`](/docs/reference/nx-commands#nx-release-version) and [`nx release publish`](/docs/reference/nx-commands#nx-release-publish) CLI references for all available options.

For changelogs, you can run [`nx release changelog`](/docs/reference/nx-commands#nx-release-changelog) locally with the new version from the pipeline. For example, if the new version is `2501.01.be49ad6` you would run `npx nx release changelog 2501.01.be49ad6`. This will create or update the `CHANGELOG.md` files in your projects.

### Using different registries

Configure alternative registries in your `nx.json`:

```jsonc
// nx.json
{
  "release": {
    "docker": {
      "registryUrl": "ghcr.io", // GitHub Container Registry
    },
  },
}
```

### Example GitHub Actions Workflow for Docker

```yaml
// .github/workflows/docker-publish.yml
name: Docker Publish
on:
  push:
    branches: [main]
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: 20
      - name: Install dependencies
        run: npm ci
      - name: Build applications
        run: npx nx run-many -t build
      - name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_TOKEN }}
      - name: Build and tag Docker images
        run: npx nx release version --dockerVersionScheme=production
      - name: Publish Docker images
        run: npx nx release publish
```

---

## Using Nx Release with Rust

This recipe guides you through versioning Rust libraries, generating changelogs, and
publishing Rust crates in a monorepo with Nx Release.

{% github_repository url="https://github.com/JamesHenry/release-js-and-rust" /%}

{% aside type="caution" title="Currently requires legacy versioning" %} In Nx v21, the implementation details of versioning were rewritten to enhance flexibility and allow for better cross-ecosystem support. An automated migration was provided in Nx v21 to update your configuration to the new format when running `nx migrate`. During the lifecycle of Nx v21, you can still opt into the old versioning by setting `release.version.useLegacyVersioning` to `true`, which will keep the original configuration structure and behavior. In Nx v22, the legacy versioning implementation has been removed entirely, so this should only be done temporarily to ease the transition.

Importantly, this recipe currently requires the use of legacy versioning, because the `@monodon/rust` plugin does not yet provide the necessary `VersionActions` implementation to support the new versioning behavior. This will be added in a minor release of Nx v21 and this recipe will be updated accordingly. {% /aside %}

## Initialize Nx release in your workspace

### Install Nx

Ensure that Nx is installed in your monorepo. Check out the [Installation docs](/docs/getting-started/installation) for instructions on creating a new Nx workspace or adding Nx to an existing project.

### Add the @monodon/rust plugin

The [`@monodon/rust` package](https://github.com/Cammisuli/monodon) is required for Nx Release to manage and release Rust crates. Add it if it is not already installed:

```shell
nx add @monodon/rust
```

### Configure projects to release

Nx Release uses Nx's powerful [Project Graph](/docs/features/explore-graph) to understand your projects and their dependencies. If you want to release all of the projects in your workspace, such as when dealing with a series of Rust crates, no configuration is required.
If you have a mixed workspace in which you also have some applications, e2e testing projects or other things you don't want to release, you can configure `nx release` to target only the projects you want to release. Configure which projects to release by adding the `release.projects` property to nx.json. The value is an array of strings, and you can use any of the same specifiers that are supported by `nx run-many`'s [projects filtering](/docs/reference/nx-commands#nx-run-many), such as explicit project names, Nx tags, directories and glob patterns, including negation using the `!` character. For example, to release just the projects in the `crates` directory: ```jsonc // nx.json { "release": { "projects": ["crates/*"], "version": { // Legacy versioning is currently required for the @monodon/rust plugin, see the note above for more details "useLegacyVersioning": true, }, }, } ``` ## Create the first release The first time you release with Nx Release in your monorepo, you will need to use the `--first-release` option. This tells Nx Release not to expect the existence of any git tags, changelog files, or published packages. {% aside type="note" title="Use the --dry-run option" %} The `--dry-run` option is useful for testing your configuration without actually creating a release. It is always recommended to run Nx Release once with `--dry-run` first to ensure everything is configured correctly. {% /aside %} To preview your first release, run: ```shell nx release --first-release --dry-run ``` ### Pick a new version Nx Release will prompt you to pick a version bump for all the crates in the release. By default, all crate versions are kept in sync, so the prompt only needs to be answered one time. If needed, you can [configure Nx to release projects independently](/docs/guides/nx-release/release-projects-independently). 
```text {% frame="terminal" title="nx release --first-release --dry-run" %} NX Running release version for project: pkg-1 pkg-1 🔍 Reading data for crate "pkg-1" from crates/pkg-1/Cargo.toml pkg-1 📄 Resolved the current version as 0.1.0 from crates/pkg-1/Cargo.toml ? What kind of change is this for the 3 matched projects(s)? … ❯ major premajor minor preminor patch prepatch prerelease Custom exact version ``` ### Preview the results After this prompt, the command will finish, showing you the preview of changes that would have been made if the `--dry-run` option was not passed. ```text {% frame="terminal" title="nx release --first-release --dry-run" %} NX Running release version for project: pkg-1 pkg-1 🔍 Reading data for crate "pkg-1" from crates/pkg-1/Cargo.toml pkg-1 📄 Resolved the current version as 0.1.0 from crates/pkg-1/Cargo.toml ✔ What kind of change is this for the 3 matched projects(s)? · patch pkg-1 ✍️ New version 0.1.1 written to crates/pkg-1/Cargo.toml NX Running release version for project: pkg-2 pkg-2 🔍 Reading data for crate "pkg-2" from crates/pkg-2/Cargo.toml pkg-2 📄 Resolved the current version as 0.1.0 from crates/pkg-2/Cargo.toml pkg-2 ✍️ New version 0.1.1 written to crates/pkg-2/Cargo.toml NX Running release version for project: pkg-3 pkg-3 🔍 Reading data for crate "pkg-3" from crates/pkg-3/Cargo.toml pkg-3 📄 Resolved the current version as 0.1.0 from crates/pkg-3/Cargo.toml pkg-3 ✍️ New version 0.1.1 written to crates/pkg-3/Cargo.toml UPDATE crates/pkg-1/Cargo.toml [dry-run] [package] name = "pkg-1" - version = "0.1.0" + version = "0.1.1" edition = "2021" UPDATE crates/pkg-2/Cargo.toml [dry-run] [package] name = "pkg-2" - version = "0.1.0" + version = "0.1.1" edition = "2021" UPDATE crates/pkg-3/Cargo.toml [dry-run] [package] name = "pkg-3" - version = "0.1.0" + version = "0.1.1" edition = "2021" NX Updating Cargo.lock file NX Staging changed files with git NX Previewing an entry in CHANGELOG.md
for v0.1.1 CREATE CHANGELOG.md [dry-run] + ## 0.1.1 (2024-02-29) + + This was a version bump only, there were no code changes. NX Staging changed files with git NX Committing changes with git NX Tagging commit with git NX Skipped publishing packages. NOTE: The "dryRun" flag means no changes were made. ``` ### Run without `--dry-run` If the preview looks good, run the command again without the `--dry-run` option to actually create the release. ```shell nx release --first-release ``` The command will proceed as before, prompting for a version bump and showing a preview of the changes. However, this time, it will prompt you to publish the crates to the remote registry. If you say no, the publishing step will be skipped. If you say yes, the command will publish the crates to https://crates.io. ```text {% frame="terminal" title="nx release --first-release" %} ... ✔ Do you want to publish these versions? (y/N) · true NX Running target nx-release-publish for 3 projects: - pkg-1 - pkg-2 - pkg-3 ————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————— (...cargo publish output here...) ————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————— NX Successfully ran target nx-release-publish for 3 projects ``` ## Manage git operations By default, Nx Release will stage all changes it makes with git. This includes updating `Cargo.toml` files, creating changelog files, and updating the `Cargo.lock` file. After staging the changes, Nx Release will commit the changes and create a git tag for the release. ### Customize the commit message and tag pattern The commit message created by Nx Release defaults to 'chore(release): publish {version}', where `{version}` will be dynamically interpolated with the relevant value based on your actual release, but can be customized with the `release.git.commitMessage` property in nx.json. 
The structure of the git tag defaults to `v{version}`. For example, if the version is `1.2.3`, the tag will be `v1.2.3`. This can be customized by setting the `release.releaseTag.pattern` property (Nx 22+) or `release.releaseTagPattern` property (Nx < 22) in nx.json. For this same example, if you want the commit message to be 'chore(release): 1.2.3' and the tag to be `release/1.2.3`, you would configure nx.json like this: {% tabs syncKey="nx-release-configuration" %} {% tabitem label="Nx 22+" %} ```json // nx.json { "release": { "releaseTag": { "pattern": "release/{version}" }, "git": { "commitMessage": "chore(release): {version}" } } } ``` {% /tabitem %} {% tabitem label="Nx < 22" %} ```json // nx.json { "release": { "releaseTagPattern": "release/{version}", "git": { "commitMessage": "chore(release): {version}" } } } ``` {% /tabitem %} {% /tabs %} When using release groups in which the member projects are versioned together, you can also leverage `{releaseGroupName}` and it will be interpolated appropriately in the commit/tag that gets created for that release group. ## Future releases After the first release, the `--first-release` option will no longer be required. Nx Release will expect to find git tags and changelog files for each package. Future releases will also generate entries in `CHANGELOG.md` based on the changes since the last release. Nx Release will parse the `feat` and `fix` type commits according to the [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/) specification and sort them into appropriate sections of the changelog. An example of these changelogs can be seen on the [Nx releases page](https://github.com/nrwl/nx/releases). --- ## Release Docker Images This guide walks you through setting up Nx Release to version and publish Docker images from your monorepo. {% youtube title="What is Nx?" src="https://www.youtube.com/embed/TOPxKJXUaqw" /%} ## Prerequisites Before starting, ensure you have: 1. 
Docker installed and running locally 2. Run `docker login` to authenticate with your Docker registry (e.g., `docker login docker.io` for Docker Hub) ## Install Nx Docker plugin ```shell nx add @nx/docker ``` This command adds the `@nx/docker` plugin to your `nx.json` so that projects with a `Dockerfile` are automatically configured with Docker targets (e.g. `docker:build` and `docker:run`). ## Create basic Node.js backend {% badge text="optional" /%} If you don't already have a backend application, create one using `@nx/node`: ```shell nx add @nx/node nx g @nx/node:app apps/api --docker --framework=express ``` This generates a Node.js application with a Dockerfile like the following: ```dockerfile # apps/api/Dockerfile FROM docker.io/node:lts-alpine ENV HOST=0.0.0.0 ENV PORT=3000 WORKDIR /app RUN addgroup --system api && \ adduser --system -G api api COPY dist app/ COPY package.json app/ RUN chown -R api:api . # You can remove this install step if you build with the `--bundle` option. # The bundled output will include external dependencies. RUN npm --prefix api --omit=dev -f install CMD [ "node", "app" ] ``` You should be able to run the following commands to compile the application and create a Docker image: ```shell nx build api nx docker:build api ``` ## Set up a new release group Configure Docker applications in a separate release group called `apps` so it does not conflict with any existing release projects. To learn more about release groups, see the [Release Groups](/docs/guides/nx-release/release-groups) guide. {% tabs syncKey="nx-release-configuration" %} {% tabitem label="Nx 22+" %} ```jsonc // nx.json { "release": { "releaseTag": { "pattern": "release/{projectName}/{version}", }, "groups": { "apps": { "projects": ["api"], "projectsRelationship": "independent", "docker": { // This should be true to skip versioning with other tools like NPM or Rust crates.
"skipVersionActions": true, // You can also use a custom registry like `ghcr.io` for GitHub Container Registry. // `docker.io` is the default so you could leave this out for Docker Hub. "registryUrl": "docker.io", // The pre-version command is run before versioning, useful for verifying the Docker image. "groupPreVersionCommand": "echo BEFORE VERSIONING", }, "changelog": { "projectChangelogs": true, }, }, }, }, } ``` {% /tabitem %} {% tabitem label="Nx < 22" %} ```jsonc // nx.json { "release": { "releaseTagPattern": "release/{projectName}/{version}", "groups": { "apps": { "projects": ["api"], "projectsRelationship": "independent", "docker": { // This should be true to skip versioning with other tools like NPM or Rust crates. // Only set this if you are not using {versionActionsVersion} in your docker version scheme "skipVersionActions": true, // You can also use a custom registry like `ghcr.io` for GitHub Container Registry. // `docker.io` is the default so you could leave this out for Docker Hub. "registryUrl": "docker.io", // The pre-version command is run before versioning, useful for verifying the Docker image. "groupPreVersionCommand": "echo BEFORE VERSIONING", }, "changelog": { "projectChangelogs": true, }, }, }, }, } ``` {% /tabitem %} {% /tabs %} The `release.groups.apps.docker.skipVersionActions` option should be set to `true` to skip versioning for other tooling such as NPM or Rust crates. The `release.groups.apps.projectsRelationship` is set to `independent` and `release.groups.apps.changelog` is set to `projectChangelogs` so that each application within the group maintains its own release cadence and changelog. The `release.groups.apps.docker.groupPreVersionCommand` is an optional command that runs before the versioning step, allowing you to perform any pre-version checks such as image verification before continuing the release. 
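The `echo` placeholder used for `groupPreVersionCommand` above can be swapped for a real verification step. As a hedged sketch (the `docker:build` target is provided by the `@nx/docker` plugin configured earlier; the `api` project name is just the example app from this guide):

```jsonc
// nx.json (fragment): a hypothetical pre-version command that builds the
// group's images first, so a broken build fails the release early
"docker": {
  "skipVersionActions": true,
  "registryUrl": "docker.io",
  "groupPreVersionCommand": "nx run-many -t docker:build --projects=api"
}
```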
## Set up app repository Docker images have to be pushed to a repository, which must be configured on each application you want to release by setting `release.docker.repositoryName` in the project's `project.json` or `package.json` file. For example, from the previous `apps/api` Node.js application, you can set the `nx.release.docker.repositoryName` in `package.json`. ```json {% meta="{6-10}" %} // apps/api/package.json { "name": "@acme/api", "version": "0.0.1", "nx": { "release": { "docker": { "repositoryName": "acme/api" } } // ... } } ``` Or if you don't have a `package.json` (e.g. for non-JS projects), set it in `project.json`: ```json {% meta="{6-10}" %} // apps/api/project.json { "name": "api", "root": "apps/api", "projectType": "application", "release": { "docker": { "repositoryName": "acme/api" } } // ... } ``` You should replace `acme` with your organization or username for the Docker registry that you are logged into. {% aside title="Docker Image Reference Override" type="note" %} The `release.docker.registryUrl` option can be used to override the Docker registry URL. This is useful if you want to push to a private registry like GitHub Container Registry. If you need even more control during CI/CD pipelines, for example if you target different registries for different environments, you can use the `NX_DOCKER_IMAGE_REF` environment variable. Note that the value you set this to will need to include the `repositoryName` as well, e.g. `NX_DOCKER_IMAGE_REF=ghcr.io/acme/api:2508.16.1`. Versioning will continue to work as normal. {% /aside %} ## Your first Docker release Dry run your first Docker release with calendar versioning. See the [`nx release`](/docs/reference/nx-commands#nx-release) CLI reference for all available options.
```shell nx release --dockerVersionScheme=production --first-release --dry-run ``` When you are ready, run the release command again without `--dry-run`: ```shell nx release --dockerVersionScheme=production --first-release ``` When prompted to publish the image, choose `yes`, or you can pass the `--yes` flag to skip the prompt. This will: - Build your Docker images - Tag them with a calendar version (e.g., `2501.01.a1b2c3d`) - Update the app's changelog (e.g. `apps/api/CHANGELOG.md`) - Update git tags (you can check with `git --no-pager tag --sort=-version:refname | head -5`) - Push the image to the configured Docker registry (e.g., `docker.io/acme/api:2501.01.a1b2c3d`) ### Understanding calendar versioning There are many different implementations of calendar versioning, but they consist of four main segments: - **Major**: The first number in the version and the most common calendar-based component. - **Minor**: The second number in the version, also usually calendar-based. - **Micro**: The third and usually final number in the version, sometimes referred to as a "patch" or "build" number. - **Modifier**: An optional text tag, such as "alpha", "dev", "hotfix". By default, Nx Release uses the following patterns for calendar versioning: - **Production**: `YYMM.DD.SHA` - **Hotfix**: `YYMM.DD.SHA-hotfix` Where the following tokens are replaced: - `YYMM`: Year and month - `DD`: Day of the month - `SHA`: Short commit hash Note: The timezone is UTC to ensure consistency across different environments. {% aside title="Default Versioning Schemes" type="note" %} Using `SHA` for the micro version may not be the best choice for your workflow. If you are using a CI/CD pipeline, you may want to use a build number instead. See the [Customizing Version Schemes](#customizing-version-schemes) section for more details. {% /aside %} ## Future releases After the first release, you can run `nx release` without the `--first-release` flag.
If you do not specify `--dockerVersionScheme`, then you will be prompted to choose one: - `production` - for regular releases from the `main` or stable branch - `hotfix` - for bug fixes from a hotfix branch These are the default schemes that come with Nx Release. These schemes support workflows where you have a stable branch that developers continuously integrate with, and a hotfix branch reserved for urgent production fixes. ### Using explicit Docker versions If you need to specify an exact Docker version instead of using version schemes, you can use the `--dockerVersion` flag: ```shell nx release --dockerVersion=1.2.3 ``` This will bypass the version scheme logic entirely and tag your Docker images with the exact version you specify. This is useful when you need to: - Align Docker versions with external versioning systems - Override the calendar-based versioning for specific releases - Set custom version tags that don't follow the standard patterns ### Customizing version schemes You can customize Docker version schemes in your `nx.json` to match your deployment workflow. The version patterns support several interpolation tokens: ```json // nx.json { "release": { "docker": { "versionSchemes": { "production": "{currentDate|YYMM.DD}.{env.BUILD_NUMBER}", "staging": "{currentDate|YYMM.DD}-staging.{shortCommitSha}", "ci": "{env.BUILD_NUMBER}-{shortCommitSha}" } } } } ``` The above configuration swaps the `hotfix` scheme for `staging` and adds a `ci` scheme that uses an environment variable. You can customize this list to fit your needs, and you can also change the patterns for each scheme. Version patterns can include environment variables using the `{env.VAR_NAME}` syntax, allowing you to inject CI/CD pipeline information like build numbers or deployment environments directly into your Docker tags. You can also include the semantic version generated during version actions using `{versionActionsVersion}`.
For example: ```json "production": "{projectName}-{versionActionsVersion}" ``` If you use `{versionActionsVersion}`, do not enable `docker.skipVersionActions` (or include the project in `skipVersionActions`) because the placeholder cannot be resolved when version actions are skipped. See the [`docker.versionScheme` documentation](/docs/reference/nx-json#version-scheme-syntax) for more details on how to customize the tag pattern. --- ## Release Groups Nx supports workspaces of any size and scale, and that means that projects can often be worked on in the same Nx workspace that have very different release requirements. Nx release supports the concept of release groups to allow you to configure different subsets of projects in different ways. Importantly, projects in different release groups can still depend on each other, and nx release can automatically handle updating dependencies and dependents across any number of group boundaries. {% aside type="note" title="Default Group" %} Technically, even if you have no nx release configuration at all, you are always dealing with release groups. Behind the scenes, if no release groups are configured, Nx will automatically create an implicit `__default__` release group for you that includes all projects in the workspace that are able to be released. When you use the `"projects"` property in the `"release"` config at the top level, you are really adjusting the projects that are present in that `__default__` group. {% /aside %} ## Understanding release groups Release groups provide a way to organize your projects based on their release requirements. 
Each group can have its own: - **Projects Relationship** - Whether projects within the group are versioned independently or in lock step (fixed) - **Version configuration** - Custom conventional commits, version prefixes, and more - **Changelog configuration** - Different changelog formats and locations - **Release tag** - Various options related to the git tags that influence the release process - **Docker configuration** - Specific Docker versioning schemes ## Creating release groups To create release groups, define them in your `nx.json` file under the `release.groups` property: ```jsonc // nx.json { "release": { "groups": { "backend": { "projects": ["api", "auth-service", "payment-service"], "projectsRelationship": "fixed", }, "frontend": { "projects": ["web-app", "mobile-app"], "projectsRelationship": "independent", }, }, }, } ``` ### Project selection You can specify projects for a group using project matchers that you may already be familiar with from `nx run-many` command filters: - **Explicit project names**: `"projects": ["project-a", "project-b"]` - **Glob patterns**: `"projects": ["packages/shared-*"]` - **Tag references**: `"projects": ["tag:npm-public"]` - **Negation**: `"projects": ["!ignore-me"]` {% aside type="note" title="Project Uniqueness" %} Each project can only belong to one release group. If you try to assign a project to multiple groups, Nx will throw an error during configuration validation. 
{% /aside %} ## Projects relationship The `projectsRelationship` property determines how projects within a group are released: ### Fixed When `projectsRelationship` is set to `"fixed"` (the default): - All projects in the group share the same version number - When one project requires a version bump, all projects in the group are versioned together - Dependencies between projects in the group are always automatically updated - Each project will receive a changelog entry, with a specific (configurable) message for those projects that were only bumped to align with the group version ```jsonc // nx.json { "release": { "groups": { "shared-libraries": { "projects": ["ui-components", "utils", "data-access"], // this is also the default and can be omitted "projectsRelationship": "fixed", // ... other group configuration options ... }, }, }, } ``` ### Independent When `projectsRelationship` is set to `"independent"`: - Each project in the group maintains its own version - Projects are versioned only when they have changes (apart from the influence of ["updateDependents" configuration](#update-dependents), learn more below) - Changelog entries are generated only when there are direct or indirect ("updateDependents") changes to the project ```jsonc // nx.json { "release": { "groups": { "microservices": { "projects": ["user-service", "order-service", "inventory-service"], "projectsRelationship": "independent", // ... other group configuration options ... 
}, }, }, } ``` ## Group-specific configuration Each release group can override the root `"release"` configuration with its own settings: ### Version configuration Customize versioning behavior per group: ```jsonc { "release": { "version": { "conventionalCommits": false, "versionPrefix": "~", "updateDependents": "always", }, "groups": { "npm-packages": { "projects": ["package-*"], "version": { // These properties override the root version //configuration for this specific group "conventionalCommits": true, "versionPrefix": "^", "updateDependents": "auto", }, }, }, }, } ``` ### Changelog configuration Configure changelog generation per group: ```jsonc { "release": { "changelog": { "projectChangelogs": { "file": false, // ... other changelog configuration options ... }, }, "groups": { "public-apis": { "projects": ["api-*"], // overrides at the group level "changelog": { "file": "{projectRoot}/CHANGELOG.md", "createRelease": "github", "renderer": "@my-org/custom-changelog-renderer", }, }, }, }, } ``` ### Release tag patterns Customize how relevant git tags should be discovered and created for each group independently: ```jsonc { "release": { "groups": { "backend": { "projects": ["backend-*"], "releaseTag": { "pattern": "backend-{version}", "requireSemver": true, }, }, "frontend": { "projects": ["frontend-*"], "releaseTag": { "pattern": "frontend-{version}", }, }, }, }, } ``` ## Update dependents For independently versioned projects, regardless of whether they are in the same release group or not, we will have to consider what happens when those projects depend on one another. 
```jsonc // nx.json { "release": { "groups": { "group1": { // project-a depends on project-b, even though it // happens to be in a different release group here "projects": ["project-a"], }, "group2": { // project-a depends on project-b, even though it // happens to be in a different release group here "projects": ["project-b"], }, }, }, } ``` Nx can handle this cascade of updates automatically, across any number of release group boundaries, and this behavior is configurable via the `version.updateDependents` option. See the [Update Dependents](/docs/guides/nx-release/update-dependents) guide for more details. ## More examples ### Mixed relationship groups You can have different relationship types for different groups in the same workspace: ```jsonc { "release": { "groups": { "platform": { "projects": ["core", "common", "shared"], "projectsRelationship": "fixed", }, "applications": { "projects": ["app-*"], "projectsRelationship": "independent", }, }, }, } ``` ### Group-specific pre-version commands Run different build or preparation commands for each group: ```jsonc { "release": { "groups": { "compiled-packages": { "projects": ["lib-*"], "version": { "groupPreVersionCommand": "nx run-many -t build --projects=...", }, }, "documentation": { "projects": ["docs-*"], "version": { "groupPreVersionCommand": "nx run-many -t generate-docs --projects=...", }, }, }, }, } ``` ### Version plans with groups When using [version plans](/docs/guides/nx-release/file-based-versioning-version-plans), you can target specific groups: ```jsonc { "release": { "groups": { "backend": { "projects": ["api-*"], "versionPlans": true, }, "frontend": { "projects": ["ui-*"], // version plans is not enabled here and // it's not set at the root level either }, }, }, } ``` ## Processing order Nx Release processes groups in topological order based on their dependencies: 1. Groups with no dependencies are processed first 2. Groups that depend on already-processed groups are processed next 3. 
Within each group, projects are also processed in topological order. This ensures that: - Dependencies are always versioned before their dependents - Version updates cascade correctly through the `ReleaseGraph` that is constructed behind the scenes NOTE: Circular dependencies are not recommended, but can be tolerated by nx release, as they can sometimes be unavoidable. ## Using filters with groups You can filter which groups or projects to release. See the [`nx release`](/docs/reference/nx-commands#nx-release) CLI reference for all available options. ```bash # Release only a specific group nx release --groups=backend # Release specific independent projects across any groups nx release --projects=api,web-app # Combine with dry-run to preview nx release --groups=frontend --dry-run ``` {% aside type="caution" title="Fixed Groups and Filters" %} When filtering projects in a fixed release group, you must include all projects in that group. You cannot release a subset of projects from a fixed group as they must be versioned together. {% /aside %} ## Next steps - Learn about [conventional commits](/docs/guides/nx-release/automatically-version-with-conventional-commits) for automatic versioning - Explore [version plans](/docs/guides/nx-release/file-based-versioning-version-plans) for file-based versioning - Configure [custom registries](/docs/guides/nx-release/configure-custom-registries) for publishing - Set up [CI/CD integration](/docs/guides/nx-release/publish-in-ci-cd) for automated releases --- ## Release TypeScript/JavaScript Packages to NPM This guide walks you through setting up Nx Release to version, generate changelogs, and publish TypeScript/JavaScript packages to NPM from your monorepo. {% linkcard title="Free Course: Versioning and Releasing NPM packages with Nx" href="https://www.epicweb.dev/tutorials/versioning-and-releasing-npm-packages-with-nx" /%} ## Setting up your NPM release workflow ### Install Nx Ensure that Nx is installed in your monorepo.
Check out the [Installation docs](/docs/getting-started/installation) for instructions on creating a new Nx workspace or adding Nx to an existing project. ### Add the JavaScript plugin The [`@nx/js` package](/docs/technologies/typescript/introduction) is required for Nx Release to manage and release JavaScript packages. Add it if it is not already installed: ```shell nx add @nx/js ``` ### Configure projects to release Nx Release uses Nx's powerful [Project Graph](/docs/features/explore-graph) to understand your projects and their dependencies. If you want to release all of the projects in your workspace, such as when dealing with a series of npm library packages, no configuration is required. If you have a mixed workspace in which you also have some applications, e2e testing projects or other things you don't want to release, you can configure `nx release` to target only the projects you want to release. Configure which projects to release by adding the `release.projects` property to nx.json. The value is an array of strings, and you can use any of the same specifiers that are supported by `nx run-many`'s [projects filtering](/docs/reference/nx-commands#nx-run-many), such as explicit project names, Nx tags, directories and glob patterns, including negation using the `!` character. For example, to release just the projects in the `packages` directory: ```json // nx.json { "release": { "projects": ["packages/*"] } } ``` ## Create the first release The first time you release with Nx Release in your monorepo, you will need to use the `--first-release` option. This tells Nx Release not to expect the existence of any git tags, changelog files, or published packages. {% aside type="note" title="Use the --dry-run option" %} The `--dry-run` option is useful for testing your configuration without actually creating a release. It is always recommended to run Nx Release once with `--dry-run` first to ensure everything is configured correctly.
{% /aside %} To preview your first release, run: ```shell nx release --first-release --dry-run ``` ### Pick a new version Nx Release will prompt you to pick a version bump for all the packages in the release. By default, all package versions are kept in sync, so the prompt only needs to be answered one time. If needed, you can [configure Nx to release projects independently](/docs/guides/nx-release/release-projects-independently). ```text {% frame="terminal" title="nx release --first-release --dry-run" %} ? What kind of change is this for the 3 matched projects(s)? … ❯ major premajor minor preminor patch prepatch prerelease Custom exact version ``` ### Preview the results After this prompt, the command will finish, showing you the preview of changes that would have been made if the `--dry-run` option was not passed. ```text {% frame="terminal" title="nx release --first-release --dry-run" %} NX Running release version for project: pkg-1 pkg-1 📄 Resolved the current version as 0.0.1 from manifest: packages/pkg-1/package.json pkg-1 ❓ Applied semver relative bump "patch", from the prompted specifier, to get new version 0.0.2 pkg-1 ✍️ New version 0.0.2 written to manifest: packages/pkg-1/package.json NX Running release version for project: pkg-2 pkg-2 📄 Resolved the current version as 0.0.1 from manifest: packages/pkg-2/package.json pkg-2 ❓ Applied version 0.0.2 directly, because the project is a member of a fixed release group containing pkg-1 pkg-2 ✍️ New version 0.0.2 written to manifest: packages/pkg-2/package.json NX Running release version for project: pkg-3 pkg-3 📄 Resolved the current version as 0.0.1 from manifest: packages/pkg-3/package.json pkg-3 ❓ Applied version 0.0.2 directly, because the project is a member of a fixed release group containing pkg-1 pkg-3 ✍️ New version 0.0.2 written to manifest: packages/pkg-3/package.json UPDATE packages/pkg-1/package.json [dry-run] "name": "@myorg/pkg-1", - "version": "0.0.1", + "version": "0.0.2", "dependencies": {
"tslib": "^2.3.0", - "@myorg/pkg-2": "0.0.1" + "@myorg/pkg-2": "0.0.2" }, UPDATE packages/pkg-2/package.json [dry-run] "name": "@myorg/pkg-2", - "version": "0.0.1", + "version": "0.0.2", "dependencies": { UPDATE packages/pkg-3/package.json [dry-run] "name": "@myorg/pkg-3", - "version": "0.0.1", + "version": "0.0.2", "dependencies": { NX Updating npm lock file NX Staging changed files with git NOTE: The "dryRun" flag means no changes were made. NX Previewing an entry in CHANGELOG.md for v0.0.2 CREATE CHANGELOG.md [dry-run] + ## 0.0.2 (2024-01-23) + + This was a version bump only, there were no code changes. NX Staging changed files with git NOTE: The "dryRun" flag means no changelogs were actually created. NX Committing changes with git NX Tagging commit with git Skipped publishing packages. ``` ### Run without `--dry-run` If the preview looks good, run the command again without the `--dry-run` option to actually create the release. ```shell nx release --first-release ``` The command will proceed as before, prompting for a version bump and showing a preview of the changes. However, this time, it will prompt you to publish the packages to NPM. If you say no, the publishing step will be skipped. If you say yes, the command will publish the packages to the NPM registry. ```text {% frame="terminal" title="nx release --first-release" %} ... ✔ Do you want to publish these versions? 
(y/N) · true NX Running target nx-release-publish for 3 projects: - pkg-1 - pkg-2 - pkg-3 ————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————— > nx run pkg-1:nx-release-publish 📦 @myorg/pkg-1@0.0.2 === Tarball Contents === 233B README.md 277B package.json 53B src/index.ts 61B src/lib/pkg-1.ts === Tarball Details === name: @myorg/pkg-1 version: 0.0.2 filename: testorg-pkg-1-0.0.2.tgz package size: 531 B unpacked size: 624 B shasum: {shasum} integrity: {integrity} total files: 12 Published to https://registry.npmjs.org with tag "latest" > nx run pkg-2:nx-release-publish 📦 @myorg/pkg-2@0.0.2 === Tarball Contents === 233B README.md 277B package.json 53B src/index.ts 61B src/lib/pkg-2.ts === Tarball Details === name: @myorg/pkg-2 version: 0.0.2 filename: testorg-pkg-2-0.0.2.tgz package size: 531 B unpacked size: 624 B shasum: {shasum} integrity: {integrity} total files: 12 Published to https://registry.npmjs.org with tag "latest" > nx run pkg-3:nx-release-publish 📦 @myorg/pkg-3@0.0.2 === Tarball Contents === 233B README.md 277B package.json 53B src/index.ts 61B src/lib/pkg-3.ts === Tarball Details === name: @myorg/pkg-3 version: 0.0.2 filename: testorg-pkg-3-0.0.2.tgz package size: 531 B unpacked size: 624 B shasum: {shasum} integrity: {integrity} total files: 12 Published to https://registry.npmjs.org with tag "latest" ————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————— NX Successfully ran target nx-release-publish for 3 projects ``` ## Manage git operations By default, Nx Release will stage all changes it makes with git. This includes updating `package.json` files, creating changelog files, and updating the `package-lock.json` file. After staging the changes, Nx Release will commit the changes and create a git tag for the release. 
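Under the hood these are ordinary git operations. As a rough sketch of the stage/commit/tag sequence just described (illustrative only; Nx Release drives git itself, and the repository path and helper function here are hypothetical):

```typescript
import { execFileSync } from 'node:child_process';
import { mkdtempSync, writeFileSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

// Sketch of the stage -> commit -> tag sequence, using plain git commands.
// commitAndTagRelease is a hypothetical helper, not part of the Nx API.
function commitAndTagRelease(repo: string, version: string): void {
  const git = (...args: string[]) =>
    execFileSync('git', [
      '-C', repo,
      // identity flags so the demo commit works in any environment
      '-c', 'user.email=ci@example.com',
      '-c', 'user.name=ci',
      ...args,
    ]);
  git('add', '.'); // stage manifest/changelog changes
  git('commit', '-m', `chore(release): publish ${version}`); // default commit message shape
  git('tag', `v${version}`); // default tag pattern v{version}
}

// Demo against a throwaway repository
const repo = mkdtempSync(join(tmpdir(), 'release-demo-'));
execFileSync('git', ['init', repo]);
writeFileSync(join(repo, 'package.json'), '{ "version": "0.0.2" }');
commitAndTagRelease(repo, '0.0.2');
console.log(execFileSync('git', ['-C', repo, 'tag']).toString().trim());
```

Both the commit message and the tag structure are configurable, as shown in the next section.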
### Customize the commit message and tag pattern The commit message created by Nx Release defaults to `chore(release): publish {version}`, where `{version}` is dynamically interpolated with the relevant value from your actual release. It can be customized with the `release.git.commitMessage` property in `nx.json`. The structure of the git tag defaults to `v{version}`. For example, if the version is `1.2.3`, the tag will be `v1.2.3`. This can be customized by setting the `release.releaseTag.pattern` property (Nx 22+) or the `release.releaseTagPattern` property (Nx < 22) in `nx.json`. Continuing this example, if you want the commit message to be `chore(release): 1.2.3` and the tag to be `release/1.2.3`, you would configure `nx.json` like this: {% tabs syncKey="nx-release-configuration" %} {% tabitem label="Nx 22+" %} ```json // nx.json { "release": { "releaseTag": { "pattern": "release/{version}" }, "git": { "commitMessage": "chore(release): {version}" } } } ``` {% /tabitem %} {% tabitem label="Nx < 22" %} ```json // nx.json { "release": { "releaseTagPattern": "release/{version}", "git": { "commitMessage": "chore(release): {version}" } } } ``` {% /tabitem %} {% /tabs %} When using release groups in which the member projects are versioned together, you can also use `{releaseGroupName}` and it will be interpolated appropriately in the commit message and tag created for that release group. ## Future releases After the first release, the `--first-release` option will no longer be required. Nx Release will expect to find git tags and changelog files for each package. It will also use `npm view` to look up the current version of each package before publishing, ensuring that the package has not already been published and avoiding conflict errors; this means you can run the same publish action multiple times without any negative side effects. Future releases will also generate entries in `CHANGELOG.md` based on the changes since the last release.
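The `{version}`, `{projectName}`, and `{releaseGroupName}` placeholders in the patterns above are simple string substitutions. As a rough illustration (a sketch, not Nx's actual implementation; `interpolatePattern` is a hypothetical helper), resolving such a pattern looks like this:

```typescript
// Illustrative sketch of release tag/commit pattern interpolation.
// The placeholder names match the documentation above.
function interpolatePattern(
  pattern: string,
  values: Record<string, string>
): string {
  // Replace each {placeholder} with its value, leaving unknown ones intact.
  return pattern.replace(/\{(\w+)\}/g, (match: string, key: string) => values[key] ?? match);
}

console.log(interpolatePattern('v{version}', { version: '1.2.3' })); // v1.2.3
console.log(interpolatePattern('release/{version}', { version: '1.2.3' })); // release/1.2.3
```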
Nx Release will parse the `feat` and `fix` type commits according to the [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/) specification and sort them into appropriate sections of the changelog. An example of these changelogs can be seen on the [Nx releases page](https://github.com/nrwl/nx/releases). --- ## Release Projects Independently Nx Release supports releasing projects independently. This is useful when you have a monorepo with projects that are not released on the same schedule. You can also group projects into multiple release groups for increased flexibility and control. Learn more about [release groups](/docs/guides/nx-release/release-groups). ## Configure independent releases To configure independent releases, add the following property to your `nx.json` file: ```json // nx.json { "release": { "projectsRelationship": "independent" } } ``` ## Differences from fixed releases Nx release will behave differently when configured for independent releases. ### Prompt for multiple version bumps When configured for independent releases, Nx Release will prompt for a version bump for each project that is being released. This allows the version of each project to differ over time. ### Create a git tag for each project Since each project can have a different version, Nx Release will create a git tag for each project that is being released. By default, the tag for each project will follow the pattern `{projectName}@{version}`. For example, if the `pkg-1` project is being released with version `1.1.0`, its git tag will be `pkg-1@1.1.0`. This can still be changed with the `release.releaseTag.pattern` property (Nx 22+) or `release.releaseTagPattern` (Nx < 22) in `nx.json`, but be sure to include `{projectName}` in the pattern so that each generated tag is unique. 
For example, to generate the tags `release/pkg-1/1.1.0` and `release/pkg-2/1.2.1` for the `pkg-1` and `pkg-2` projects respectively, you would use the following configuration in nx.json: {% tabs syncKey="nx-release-configuration" %} {% tabitem label="Nx 22+" %} ```json // nx.json { "release": { "releaseTag": { "pattern": "release/{projectName}/{version}" } } } ``` {% /tabitem %} {% tabitem label="Nx < 22" %} ```json // nx.json { "release": { "releaseTagPattern": "release/{projectName}/{version}" } } ``` {% /tabitem %} {% /tabs %} See the [`releaseTag.pattern` documentation](/docs/reference/nx-json#release-tag) for more details on how to customize the tag pattern. ### Different commit message structure Even though Nx Release creates a git tag for each project, it will still create a single commit for the entire release. The commit message will still include all of the projects being released with their corresponding version. For example: ```text chore(release): publish - project: pkg-1 1.1.0 - project: pkg-2 1.2.1 - project: pkg-3 2.5.7 ``` ### Changelogs Nx Release will no longer generate and update a workspace level `CHANGELOG.md` file when configured for independent releases. If you still want changelog generation, you will need to enable project level changelogs. These are similar to the workspace level changelog, but they are generated for each project individually and only contain changes for that specific project. They can be configured with the `release.changelog.projectChangelogs` property in `nx.json`. ```json // nx.json { "release": { "changelog": { "projectChangelogs": true } } } ``` Just like with [fixed releases](/docs/guides/nx-release/release-npm-packages), you can preview changes to the changelog files by running Nx Release with the `--dry-run` option. ## Use the projects filter One of the key benefits of independent releases is the ability to release only a subset of projects. Nx Release supports this with the `--projects` option. 
See the [`nx release`](/docs/reference/nx-commands#nx-release) CLI reference for all available options. The value is an array of strings, and you can use any of the same specifiers that are supported by `nx run-many`'s [projects filtering](/docs/reference/nx-commands#nx-run-many), such as explicit project names, Nx tags, directories and glob patterns, including negation using the `!` character. A few examples: Release only the `pkg-1` and `pkg-2` projects: ```shell nx release --projects=pkg-1,pkg-2 ``` Release all projects in the `server` directory: ```shell nx release --projects=server/* ``` Release all projects except those in the `ui` directory: ```shell nx release --projects='!ui/*' ``` All other projects in the workspace will be ignored and only those that match the filter will be versioned, have their changelogs updated, and published. ## Update dependents For independently versioned projects, we will have to consider what happens when those projects depend on one another. ```json // nx.json { "release": { "projectsRelationship": "independent", // The projects are independently versioned, but project-a depends on // project-b, so we need to consider the side-effects of updating project-b "projects": ["project-a", "project-b"] } } ``` Nx can handle this cascade of updates automatically, across any number of release group boundaries, and this behavior is configurable via the `version.updateDependents` option. See the [Update Dependents](/docs/guides/nx-release/update-dependents) guide for more details. --- ## Update Dependents When versioning independently maintained projects, we will have to consider what happens when those projects depend on one another. For example, `project-a` might be version `1.0.0` and `project-b` might be version `2.0.0`, where `project-a` depends on `project-b`: `project-a -> project-b`. 
**In other words, `project-a` is a dependent of `project-b`, and `project-b` is a dependency of `project-a`.** This means that whenever we update `project-b`, we need to consider the side effects of that change on `project-a`. There will now be a dependency reference, e.g. in a manifest file such as `package.json` for the TypeScript/JavaScript ecosystem, that needs to be updated to reflect the new version of `project-b`. Nx can handle this cascade of updates automatically, across any number of [Release Group](/docs/guides/nx-release/release-groups) boundaries, and this behavior is configurable via the `version.updateDependents` option. ### Update dependents configuration - **`"always"`** (introduced in v22 and now the default): Always update dependents when a dependency is versioned, regardless of which group they belong to or what filters are applied to the release command/programmatic API. - **`"auto"`** (the previous default): Update dependents within the same release group, and only when not filtered out by `--projects` or `--groups`. - **`"never"`**: Never automatically update dependents. When Nx Release detects that a side-effectful bump needs to be made, e.g. in our example to `project-a`, it will update any dependency references in manifest files (e.g. `project-a/package.json`), and bump `project-a`'s own version to the next appropriate patch version. For example, if `project-b` is bumped to version `2.1.0`, `project-a` will be updated to depend on `project-b@2.1.0`, and `project-a` will be bumped to version `1.0.1`.
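To make the cascade concrete, here is a minimal single-level sketch of the behavior (the data shapes and `bumpWithDependents` helper are hypothetical; Nx also handles transitive cascades across release groups, which this sketch does not):

```typescript
// Hypothetical minimal manifest shape, for illustration only.
interface Pkg {
  name: string;
  version: string;
  dependencies: Record<string, string>;
}

function patchBump(version: string): string {
  const [major, minor, patch] = version.split('.').map(Number);
  return `${major}.${minor}.${patch + 1}`;
}

// Bump one package, then patch-bump every package that depends on it
// and update its dependency reference (single-level sketch).
function bumpWithDependents(pkgs: Pkg[], name: string, newVersion: string): void {
  for (const pkg of pkgs) {
    if (pkg.name === name) {
      pkg.version = newVersion;
    } else if (name in pkg.dependencies) {
      pkg.dependencies[name] = newVersion; // update the dependency reference
      pkg.version = patchBump(pkg.version); // side-effectful patch bump
    }
  }
}

const pkgs: Pkg[] = [
  { name: 'project-a', version: '1.0.0', dependencies: { 'project-b': '2.0.0' } },
  { name: 'project-b', version: '2.0.0', dependencies: {} },
];
bumpWithDependents(pkgs, 'project-b', '2.1.0');
// project-a is now 1.0.1 and depends on project-b@2.1.0
```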
```json // Before { "name": "project-a", "version": "1.0.0", "dependencies": { "project-b": "2.0.0" } } ``` ```json // After { "name": "project-a", "version": "1.0.1", // Side-effectful patch of project-a "dependencies": { "project-b": "2.1.0" // New version of project-b } } ``` --- ## Update Your Local Registry Setup to use Nx Release Nx will create a `tools/start-local-registry.ts` script for starting a local registry and publishing packages to it in preparation for running end to end tests. If you have an existing `tools/start-local-registry.ts` script from a previous version of Nx, you should update it to use Nx Release to publish packages to the local registry. This will ensure that newly generated libraries are published appropriately when running end to end tests. ## The previous version The previous version of the `tools/start-local-registry.ts` script used publish targets on each project to publish the packages to the local registry. This is no longer necessary with Nx Release. You can identify the previous version by the `nx run-many` command that publishes the packages: ```typescript /** * This script starts a local registry for e2e testing purposes. * It is meant to be called in jest's globalSetup. */ import { startLocalRegistry } from '@nx/js/plugins/jest/local-registry'; import { execFileSync } from 'child_process'; export default async () => { // local registry target to run const localRegistryTarget = '@demo-plugin-1800/source:local-registry'; // storage folder for the local registry const storage = './tmp/local-registry/storage'; global.stopLocalRegistry = await startLocalRegistry({ localRegistryTarget, storage, verbose: false, }); const nx = require.resolve('nx/bin/nx'); execFileSync( nx, ['run-many', '--targets', 'publish', '--ver', '0.0.0-e2e', '--tag', 'e2e'], { env: process.env, stdio: 'inherit' } ); }; ``` If your script looks like this, you should update it. 
## The updated version The updated version of the `tools/start-local-registry.ts` script uses Nx Release to publish the packages to the local registry. This is done by running `releaseVersion` and `releasePublish` functions from `nx/release`. Your updated script should look like this: ```typescript /** * This script starts a local registry for e2e testing purposes. * It is meant to be called in jest's globalSetup. */ import { startLocalRegistry } from '@nx/js/plugins/jest/local-registry'; import { execFileSync } from 'child_process'; import { releasePublish, releaseVersion } from 'nx/release'; export default async () => { // local registry target to run const localRegistryTarget = '@demo-plugin-1800/source:local-registry'; // storage folder for the local registry const storage = './tmp/local-registry/storage'; global.stopLocalRegistry = await startLocalRegistry({ localRegistryTarget, storage, verbose: false, }); await releaseVersion({ specifier: '0.0.0-e2e', stageChanges: false, gitCommit: false, gitTag: false, firstRelease: true, versionActionsOptionsOverrides: { skipLockFileUpdate: true, }, }); await releasePublish({ tag: 'e2e', firstRelease: true, }); }; ``` --- ## Updating Version References in Manifest Files The versioning stage of Nx Release is customizable and programming language agnostic, but some of its capabilities are dictated by the tooling you are using. This is particularly true when it comes to updating version references in manifest files, such as `package.json`. Nx provides the TypeScript/JavaScript (and therefore `package.json`) functionality out of the box, so that is what will be covered in more detail in this recipe. For other ecosystems, please see the documentation of the respective plugins. An important characteristic of Nx release is that it does not directly manipulate your packages in memory before releasing them. 
This maintains complete transparency between you and the tooling used to publish your packages, such as `npm publish` or `pnpm publish`, which Nx Release invokes automatically during its publishing phase. The relevance of this will become clear for [Scenario 4 below](#scenario-4-i-want-to-update-package-versions-directly-in-my-source-files-but-use-local-dependency-references-via-fileworkspace). {% aside type="note" title="Breaking Changes in Nx v21" %} In Nx v21, the implementation details of versioning were rewritten to enhance flexibility and allow for better cross-ecosystem support. An automated migration was provided in Nx v21 to update your configuration to the new format when running `nx migrate`. The following examples show the Nx v21 and later configuration format; you can view the v20 version of the website to see the legacy format. {% /aside %} ## Scenario 1: I want to update semantic version numbers directly in my source package.json files This is the simplest scenario, and the default behavior of Nx Release. If you have a TypeScript/JavaScript project which lives at e.g. `packages/my-project` with its package.json at the root of the project, you can run `nx release` or use the programmatic API, and it will update the version number and all relevant intra-workspace dependency references in `packages/my-project/package.json` to the new version(s).
For example, with the following project structure: {% filetree %} - packages/ - my-project/ - package.json - my-other-project-in-the-monorepo/ - package.json {% /filetree %} And the following starting point for the package.json sources: ```json // packages/my-project/package.json { "name": "my-project", "version": "0.1.1", "dependencies": { "my-other-project-in-the-monorepo": "0.1.1" } } ``` ```json // packages/my-other-project-in-the-monorepo/package.json { "name": "my-other-project-in-the-monorepo", "version": "0.1.1" } ``` When running `nx release` and applying a patch release, the following changes will be made to the source package.json files: ```json // packages/my-project/package.json { "name": "my-project", "version": "0.1.2", "dependencies": { "my-other-project-in-the-monorepo": "0.1.2" } } ``` ```json // packages/my-other-project-in-the-monorepo/package.json { "name": "my-other-project-in-the-monorepo", "version": "0.1.2" } ``` By default, the changes will be staged and committed unless git operations are disabled. ## Scenario 2: I want to publish from a custom dist directory and update references in both my source and dist package.json files Nx Release has the concept of a "manifest root", which is different from the project root. The manifest root is the directory from which the project is versioned. By default, the manifest root is the project root detected by Nx, as we have seen in Scenario 1 above, but it can be configured independently to one or more locations other than the project root. As of Nx v21, multiple manifest roots can be configured using the `release.version.manifestRootsToUpdate` option, resulting in multiple manifest files (such as `package.json`) being updated at once for a single project during the versioning phase.
If, for example, we want to build our projects to a centralized `dist/` directory in the Nx workspace, and update both the source and dist package.json files when versioning, we can tell Nx Release about it for the versioning and publishing steps by adding the following configuration to the `nx.json` file, or the `project.json` file of relevant projects: ```jsonc // nx.json { "release": { // Ensure that versioning works in both the source and dist directories "version": { // path structures for both the source and dist directories, where {projectRoot} and {projectName} are available placeholders that will be interpolated by Nx "manifestRootsToUpdate": [ "{projectRoot}", // We use the object form of the manifestRootsToUpdate to specify that we want to update the dist package.json files and not preserve the local dependency references (if not using pnpm or bun) { "path": "dist/packages/{projectName}", "preserveLocalDependencyProtocols": false, // (NOT NEEDED WHEN USING pnpm or bun) because we need to ensure our dist package.json files are valid for publishing and the local dependency references such as "workspace:" and "file:" are removed }, ], }, }, "targetDefaults": { // Ensure that publishing works from the dist directory // The nx-release-publish target is added implicitly behind the scenes by Nx Release, and we can therefore configure it in targetDefaults "nx-release-publish": { "options": { // the packageRoot property is specific to the TS/JS nx-release-publish implementation, other ecosystem plugins may have different options "packageRoot": "dist/packages/{projectName}", // path structure for your dist directory, where {projectRoot} and {projectName} are available placeholders that will be interpolated by Nx }, }, }, } ``` ## Scenario 3: I want to publish from a custom dist directory and not update
references in our source package.json files. {% aside type="caution" title="The source control tracked package.json files are no longer the source of truth for the package version" %} Because we are no longer updating the version references in the source package.json files, the source control tracked package.json files are no longer the source of truth for the package version. We need to reference git tags or the latest value in the registry as the source of truth for the package version instead. We will also need to handle intra-workspace dependency references in the source package.json files differently using file/workspace references, which will be covered below. {% /aside %} Because our source package.json files are no longer updated during versioning, we will need to handle intra-workspace dependency references in the source package.json files differently. The way to achieve this is by using local `file:` or `workspace:` references in the source package.json files. For example, using our packages from Scenario 1 above, if we want to reference the `my-other-project-in-the-monorepo` project from `my-project`, we can update the source package.json file as follows: ```jsonc // packages/my-project/package.json { "name": "my-project", // note there is no version number in the source package.json file because it will never be updated "dependencies": { "my-other-project-in-the-monorepo": "workspace:*", // or "file:../my-other-project-in-the-monorepo", depending on your preference and which package manager you are using }, } ``` If we are not using pnpm or bun (see Scenario 4 below), we will need to let Nx Release know that we want to overwrite the workspace reference with the actual version number when publishing, because since Nx v21 it preserves such references by default.
We can do this by setting the `release.version.preserveLocalDependencyProtocols` option to `false` in the `nx.json` file: ```jsonc // nx.json { "release": { // Ensure that versioning works only in the dist directory "version": { "manifestRootsToUpdate": ["dist/packages/{projectName}"], // path structure for your dist directory, where {projectRoot} and {projectName} are available placeholders that will be interpolated by Nx "currentVersionResolver": "git-tag", // or "registry", because we are no longer referencing our source package.json as the source of truth for the current version "preserveLocalDependencyProtocols": false, // (NOT NEEDED WHEN USING pnpm or bun) because we need to ensure our dist package.json files are valid for publishing and the local dependency references are removed }, }, "targetDefaults": { // Ensure that publishing works from the dist directory // The nx-release-publish target is added implicitly behind the scenes by Nx Release, and we can therefore configure it in targetDefaults "nx-release-publish": { "options": { "packageRoot": "dist/packages/{projectName}", // path structure for your dist directory, where {projectRoot} and {projectName} are available placeholders that will be interpolated by Nx }, }, }, } ``` After applying a patch version, our dist package.json will therefore ultimately look like this: ```jsonc // dist/packages/my-project/package.json { "name": "my-project", "version": "0.1.2", // the version number is applied "dependencies": { "my-other-project-in-the-monorepo": "0.1.2", // the dependency reference is updated from the workspace reference to the actual version number (if not using pnpm or bun) }, } ``` This package.json is now valid and ready to be published to the registry with any package manager. 
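The swap performed when `preserveLocalDependencyProtocols` is `false` can be sketched as follows (an illustration only; `resolveLocalProtocols` is a hypothetical helper, and real tooling also handles ranges such as `workspace:^`, which this sketch does not):

```typescript
// Sketch: replace local dependency protocols with concrete versions,
// producing a dependencies map that is valid for the registry.
function resolveLocalProtocols(
  dependencies: Record<string, string>,
  workspaceVersions: Record<string, string>
): Record<string, string> {
  const resolved: Record<string, string> = {};
  for (const [name, spec] of Object.entries(dependencies)) {
    const isLocal = spec.startsWith('workspace:') || spec.startsWith('file:');
    if (isLocal && workspaceVersions[name]) {
      resolved[name] = workspaceVersions[name]; // concrete version for the registry
    } else {
      resolved[name] = spec; // regular registry ranges are left untouched
    }
  }
  return resolved;
}

const deps = { 'my-other-project-in-the-monorepo': 'workspace:*', tslib: '^2.3.0' };
console.log(resolveLocalProtocols(deps, { 'my-other-project-in-the-monorepo': '0.1.2' }));
// tslib is untouched; the workspace reference becomes the concrete version "0.1.2"
```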
## Scenario 4: I want to update package versions directly in my source files, but use local dependency references via file/workspace {% aside type="caution" title="This scenario is currently only fully supported when your package manager is pnpm or bun" %} pnpm and bun are the only package managers that provide a publish command that both supports dynamically swapping the `file:` and `workspace:*` references with the actual version number at publish time, and provides the customization needed for us to wrap it. `yarn npm publish` does support the replacements but is very limited on customization options. {% /aside %} This is a more advanced scenario because it removes the clean separation of concerns between versioning and publishing. The reason for this is that the `file:` and `workspace:*` references simply have to be replaced with actual version numbers before they are written to the registry, otherwise they will break when a user tries to install the package. If versioning does not replace them, publishing needs to. As mentioned at the start of this recipe, Nx Release intentionally does not manipulate your packages in memory during publishing, so this scenario is only supported when your package manager provides publishing functionality which dynamically swaps the local references. **Currently this is only supported by pnpm and bun.** As of Nx v21, by default, `release.version.preserveLocalDependencyProtocols` is set to `true`, which means that `file:` and `workspace:*` references are preserved. 
For example, using this source package.json file, when applying a patch release: ```jsonc // packages/my-project/package.json { "name": "my-project", "version": "0.1.2", "dependencies": { "my-other-project-in-the-monorepo": "workspace:*", }, } ``` Nx release will see this and update the "version" number to `0.1.3`, and leave the `workspace:*` reference alone: ```jsonc // packages/my-project/package.json { "name": "my-project", "version": "0.1.3", // our version number is updated as expected "dependencies": { // our workspace dependency reference is preserved "my-other-project-in-the-monorepo": "workspace:*", }, } ``` Again, this is not in a valid state to be published to the registry, and so the publishing step will need to handle this. **This is only supported by pnpm and bun**, in which case Nx Release invokes `pnpm publish` or `bun publish` instead of `npm publish` behind the scenes during publishing, and you will receive a clear error if you attempt to use such a package.json with npm or yarn. --- ## Tasks & Caching {% index_page_cards path="guides/tasks--caching" /%} --- ## Change Cache Location By default the cache is stored locally in `.nx/cache`. Cache results are stored for a week before they get deleted. You can customize the cache location in the `nx.json` file: ```json // nx.json { "cacheDirectory": "/tmp/mycache" } ``` --- ## Configure Inputs for Task Caching When Nx [computes the hash for a given operation](/docs/concepts/how-caching-works), it takes into account the `inputs` of the target. The `inputs` are a list of file sets, runtime inputs, and environment variables that affect the output of the target. If any of the `inputs` change, the cache is invalidated and the target is re-run. Nx errs on the side of caution when using inputs. Ideally, the "perfect" configuration of inputs will allow Nx to never re-run something when it does not need to. 
In practice though, it is better to play it safe and include more than strictly necessary in the inputs of a task. Forgetting to consider something during computation hash calculation may lead to negative consequences for end users. Start safe and fine-tune your inputs when there are clear opportunities to improve the cache hit rate. For an overview of all the possible [types of inputs](/docs/reference/inputs) and how to reuse sets of inputs as [named inputs](/docs/reference/inputs#named-inputs), see the reference documentation. {% aside type="caution" title="Directory Paths Require Trailing Slash or Glob" %} When specifying a directory as an input, you must use a trailing slash (`/`) or a glob pattern. For example, `{projectRoot}/src/` or `{projectRoot}/src/**/*` will match all files in the `src` directory, but `{projectRoot}/src` (without trailing slash) will not match any files. This differs from `outputs`, which support naked directory paths. {% /aside %} Throughout this recipe, the following project structure of a simple workspace will be used as an example to help understand inputs better. {% graph height="450px" %} ```json { "projects": [ { "name": "myreactapp", "type": "app", "data": { "tags": [] } }, { "name": "shared-ui", "type": "lib", "data": { "tags": [] } } ], "dependencies": { "myreactapp": [ { "source": "myreactapp", "target": "shared-ui", "type": "static" } ], "shared-ui": [] }, "workspaceLayout": { "appsDir": "", "libsDir": "" }, "affectedProjectIds": [], "focus": null, "groupByFolder": false } ``` {% /graph %} ## View the inputs of a task You can view the configuration for a task of a project by adding the `--graph` flag when running the command: ```shell nx build myreactapp --graph ``` This will show the task graph executed by Nx when running the command. Clicking the task will open a tooltip which lists out all of the inputs of the task. 
A button within the tooltip will also reveal more details about the configuration for the project which the task belongs to. Doing so will show a view such as the one below: {% project_details%} ```json { "project": { "name": "myreactapp", "type": "app", "data": { "root": "apps/myreactapp", "targets": { "build": { "options": { "cwd": "apps/myreactapp", "command": "vite build" }, "cache": true, "dependsOn": ["^build"], "inputs": [ "production", "^production", { "externalDependencies": ["vite"] } ], "outputs": ["{workspaceRoot}/dist/apps/myreactapp"], "executor": "nx:run-commands", "configurations": {}, "metadata": { "technologies": ["vite"] } }, "serve": { "options": { "cwd": "apps/myreactapp", "command": "vite serve", "continuous": true }, "executor": "nx:run-commands", "configurations": {}, "metadata": { "technologies": ["vite"] } }, "preview": { "options": { "cwd": "apps/myreactapp", "command": "vite preview" }, "executor": "nx:run-commands", "configurations": {}, "metadata": { "technologies": ["vite"] } }, "serve-static": { "executor": "@nx/web:file-server", "options": { "buildTarget": "build", "continuous": true }, "configurations": {} }, "test": { "options": { "cwd": "apps/myreactapp", "command": "vitest run" }, "cache": true, "inputs": [ "default", "^production", { "externalDependencies": ["vitest"] } ], "outputs": ["{workspaceRoot}/coverage/apps/myreactapp"], "executor": "nx:run-commands", "configurations": {}, "metadata": { "technologies": ["vite"] } }, "lint": { "cache": true, "options": { "cwd": "apps/myreactapp", "command": "eslint ." 
}, "inputs": [ "default", "{workspaceRoot}/.eslintrc.json", "{workspaceRoot}/apps/myreactapp/.eslintrc.json", "{workspaceRoot}/tools/eslint-rules/**/*", { "externalDependencies": ["eslint"] } ], "executor": "nx:run-commands", "configurations": {}, "metadata": { "technologies": ["eslint"] } } }, "name": "myreactapp", "$schema": "../../../node_modules/nx/schemas/project-schema.json", "sourceRoot": "apps/myreactapp/src", "projectType": "application", "tags": [], "implicitDependencies": [], "metadata": { "technologies": ["react"] } } }, "sourceMap": { "root": ["apps/myreactapp/project.json", "nx/core/project-json"], "targets": ["apps/myreactapp/project.json", "nx/core/project-json"], "targets.build": ["apps/myreactapp/vite.config.ts", "@nx/vite/plugin"], "targets.build.command": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.build.options": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.build.cache": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.build.dependsOn": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.build.inputs": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.build.outputs": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.build.options.cwd": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.serve": ["apps/myreactapp/vite.config.ts", "@nx/vite/plugin"], "targets.serve.command": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.serve.options": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.serve.options.cwd": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.preview": ["apps/myreactapp/vite.config.ts", "@nx/vite/plugin"], "targets.preview.command": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.preview.options": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.preview.options.cwd": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], 
"targets.serve-static": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.serve-static.executor": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.serve-static.options": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.serve-static.options.buildTarget": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.test": ["apps/myreactapp/vite.config.ts", "@nx/vite/plugin"], "targets.test.command": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.test.options": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.test.cache": ["apps/myreactapp/vite.config.ts", "@nx/vite/plugin"], "targets.test.inputs": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.test.outputs": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.test.options.cwd": [ "apps/myreactapp/vite.config.ts", "@nx/vite/plugin" ], "targets.lint": ["apps/myreactapp/project.json", "@nx/eslint/plugin"], "targets.lint.command": [ "apps/myreactapp/project.json", "@nx/eslint/plugin" ], "targets.lint.cache": ["apps/myreactapp/project.json", "@nx/eslint/plugin"], "targets.lint.options": [ "apps/myreactapp/project.json", "@nx/eslint/plugin" ], "targets.lint.inputs": [ "apps/myreactapp/project.json", "@nx/eslint/plugin" ], "targets.lint.options.cwd": [ "apps/myreactapp/project.json", "@nx/eslint/plugin" ], "name": ["apps/myreactapp/project.json", "nx/core/project-json"], "$schema": ["apps/myreactapp/project.json", "nx/core/project-json"], "sourceRoot": ["apps/myreactapp/project.json", "nx/core/project-json"], "projectType": ["apps/myreactapp/project.json", "nx/core/project-json"], "tags": ["apps/myreactapp/project.json", "nx/core/project-json"] } } ``` {% /project_details %} Nx Console has a button which will show a preview of this screen when a project level configuration file (`project.json` or `package.json`) is opened in the IDE. 
Read more at [Nx Console Project Details View](/docs/guides/nx-console/console-project-details).

Another way of accessing this information is to run `nx show project myreactapp --web`, which opens the view above in a browser. Use this tool to help understand what inputs are being used by Nx in your workspace.

{% aside title="Note" type="note" %}
If no `inputs` are specified at all, Nx will default to looking at all files of a project and its dependencies. This is a rather cautious approach. It might cause Nx to re-run a task in some cases where the cache could have been used instead, but it will always give you correct output.
{% /aside %}

## Configure inputs

The tasks you run in your workspace will likely already have `inputs` defined. Be sure to [view the existing inputs](#viewing-the-inputs-of-a-task) and start from there. Inputs of a task are configured in the `inputs` array on the target. This can be done in several different places:

- Nx Plugins often [infer inputs for tasks](/docs/concepts/inferred-tasks) which run other tools.
  - In doing so, they will also define some reasonable defaults for the `inputs` of those tasks.
- The `inputs` array in the `targetDefaults` for a set of targets in `nx.json`.
- The `inputs` array for a specific target in the project configuration file.

{% aside title="Copy the existing inputs before modifying inputs for a task" %}
To override the `inputs` of a task, start by copying over the entire array shown when [viewing the project details](#viewing-the-inputs-of-a-task) and then add/modify/remove inputs as needed.
{% /aside %}

As you configure `inputs`, keep the project details screen open and it will refresh as changes are made. Check to make sure that the intended configuration is shown.

### Workspace level inputs

[Target Defaults](/docs/reference/nx-json#target-defaults) defined in `nx.json` apply to a set of targets. Defining `inputs` here one time will apply to a set of similar targets.
```jsonc {% meta="{5}" %} // nx.json { "targetDefaults": { "build": { "inputs": ["production", "^production"], }, }, } ``` The above specifies that all targets with the name `build` will use the `inputs` specified. This configuration will override any `inputs` inferred by Nx Plugins as you have more direct control in your `nx.json` than the behavior of the Nx Plugin. The configuration defined here completely overwrites any `inputs` inferred by Nx Plugins and is not merged in any way. This configuration may be overridden by configuration in project-specific configuration files. ### Project level inputs Defining `inputs` of a target in `project.json` or `package.json` will apply only to tasks of the specific project. {% tabs %} {% tabitem label="project.json" %} ```jsonc {% meta="{6}" %} // apps/myreactapp/project.json { "name": "myreactapp", "targets": { "build": { "inputs": ["production", "^production"], }, }, } ``` {% /tabitem %} {% tabitem label="package.json" %} ```jsonc {% meta="{9}" %} // apps/myreactapp/package.json { "name": "myreactapp", "dependencies": {}, "devDependencies": {}, ... "nx": { "targets": { "build": { "inputs": ["production", "^production"] } ... } } } ``` {% /tabitem %} {% /tabs %} The above specifies that the `build` target of the `myreactapp` project will use the `inputs` specified. This configuration will override any `inputs` inferred by Nx Plugins as well as any `inputs` defined in the [Target Defaults](/docs/reference/nx-json#target-defaults) in the `nx.json` file as this is more specific than those other methods of configuring `inputs`. The configuration defined here completely overwrites any `inputs` inferred by Nx Plugins or in target defaults and is not merged in any way. ## Common inputs ### Test and config files Often, projects include some files with runtime behavior and other files for unit testing. 
When running the `build` task, we do not want Nx to consider test files, so that updating test files does not invalidate the cache for `build` tasks. Plugins which define compile or bundling tasks such as `@nx/webpack/plugin` and `@nx/vite/plugin` will use the following inputs:

```jsonc
"inputs": [
  "production", // All files in a project excluding test files
  "^production" // Inputs of dependencies which may affect behavior of projects which depend on them
]
```

Plugins which define testing tasks such as `@nx/cypress/plugin`, `@nx/playwright/plugin`, `@nx/jest/plugin` and `@nx/vite/plugin` will infer the following inputs for tasks:

```jsonc
"inputs": [
  "default", // All files in a project including test files
  "^production" // Inputs of dependencies which may affect behavior of projects which depend on them
]
```

Given the above configurations, exclude the test and config files from the `production` named input:

```jsonc {% meta="{5-9}" %}
// nx.json
{
  "namedInputs": {
    "default": ["{projectRoot}/**/*", "sharedGlobals"],
    "production": [
      "default",
      "!{projectRoot}/jest.config.ts",
      "!{projectRoot}/**/?(*.)+(spec|test).ts",
    ],
  },
}
```

With the above named inputs, Nx will behave in the following way:

- When only test files are changed, Nx will restore previous compilation results from the cache and re-run the tests for the projects containing the test files
- When any production files are changed, Nx will re-run the tests for the project as well as any projects which depend on it

### Specifying dependency file patterns directly

Instead of defining a named input and referencing it with `^`, you can directly specify file patterns to consider from dependency projects using the `^{projectRoot}` syntax:

```jsonc
// nx.json
{
  "targetDefaults": {
    "build": {
      "inputs": [
        "production",
        "^{projectRoot}/src/**/*.ts", // Only consider .ts source files from dependencies
      ],
    },
  },
}
```

This is useful when you want fine-grained control over which files from dependencies affect a
task's cache without creating a named input. For example, a `build` task might only need to consider `.ts` files from its dependencies rather than all production files. You can also use the object format with the `dependencies` property: ```jsonc { "inputs": [ "production", { "fileset": "{projectRoot}/src/**/*.ts", "dependencies": true }, ], } ``` ### Consider the version of a language for all tasks Many times, the version of the programming language being used will affect the behavior of all tasks for the workspace. A runtime input can be added to the `sharedGlobals` named input to consider it for the hash of every task. For example, to consider the version of Node.js in the hash of every task, add `node --version` as an input. ```jsonc {% meta="{5}" %} // nx.json { "namedInputs": { "default": ["{projectRoot}/**/*", "sharedGlobals"], "sharedGlobals": [{ "runtime": "node --version" }], }, } ``` --- ## Configure Outputs for Task Caching Whenever Nx runs a cacheable task, it will store the results of that task in the cache. When Nx runs the task again, if the [inputs for that task](/docs/guides/tasks--caching/configure-inputs) have not changed, it will restore the results from the cache instead of spending the time to run the task again. ## Types of outputs ### Terminal output The terminal output of a task is replayed whenever a task is pulled from cache. Nx will always cache the terminal output of tasks which are cached. ### Output files Targets can define which files are produced when a task is run. Nx will cache these files so that it can restore them when the task is pulled from cache. 
These output files can be specified in several ways:

```jsonc
"outputs": [
  "{projectRoot}/dist/libs/mylib", // A directory
  "{workspaceRoot}/dist/{projectRoot}", // A directory based on the project's root
  "{workspaceRoot}/dist/{projectName}", // A directory based on the project's name
  "{workspaceRoot}/test-results.xml", // A file
  "{projectRoot}/dist/libs/mylib/**/*.js", // A glob pattern matching a set of files
  "{options.outputPath}", // A path defined in the options of a task
]
```

All outputs explicitly specifying paths must be prefixed with either `{projectRoot}` or `{workspaceRoot}` to distinguish where the path is resolved from. `{workspaceRoot}` may only appear at the beginning of an `output`, but `{projectRoot}` and `{projectName}` can be specified later in the `output` to interpolate the root or name of the project into the output location.

Outputs can also be determined from the `options` of a task via the `{options.[propertyName]}` syntax. This is useful when an option for the task determines the output location and could be modified when the task is run. This path is resolved from the root of the workspace. If an output file or directory does not exist, it will be ignored.

## View outputs of a task

The outputs of a task can be viewed by adding the `--graph` flag to the command:

```shell
nx build myapp --graph
```

This will open the task graph in the browser. Clicking on a task in the graph will open a tooltip with a link to see details about the project. View the project's configuration to see a list of the outputs which are defined for each target.

## Configure outputs

The tasks you run in your workspace will likely already have `outputs` defined. Be sure to [view the existing outputs](#view-outputs-of-a-task) and start from there. Nx Plugins often [infer outputs for tasks](/docs/concepts/inferred-tasks) which run other tools.
Nx Plugins will look at the configuration files and/or command-line arguments of the tools in your workspace to understand the outputs of running those tools. In most cases, this inference will be in line with the outputs of the tool. Nx will reflect changes to the configuration or command-line arguments of your tools without any additional changes. In some cases, Nx plugins may not infer the outputs of a task as you expect; in that case, the outputs can be configured in the `outputs` array on the target. This can be done in several different places:

- The `outputs` array in the `targetDefaults` for a set of targets in `nx.json`.
- The `outputs` array for a specific target in the project configuration file.

{% aside title="Copy the existing outputs before modifying outputs for a task" %}
To override the `outputs` of a task, start by copying over the entire array shown when [viewing the project details](#view-outputs-of-a-task) and then add/modify/remove outputs as needed.
{% /aside %}

As you configure `outputs`, keep the project details screen open and it will refresh as changes are made. Check to make sure that the intended configuration is shown.

### Workspace level outputs

[Target Defaults](/docs/reference/nx-json#target-defaults) defined in `nx.json` apply to a set of targets. Defining `outputs` here one time will apply to a set of similar targets.

```jsonc {% meta="{5}" %}
// nx.json
{
  "targetDefaults": {
    "build": {
      "outputs": ["{projectRoot}/dist"],
    },
  },
}
```

The above specifies that Nx will cache the `dist` directory under all project roots for all targets with the name `build`. This configuration will override any `outputs` inferred by Nx Plugins as you have more direct control in your `nx.json` than the behavior of the Nx Plugin. The configuration defined here completely overwrites any `outputs` inferred by Nx Plugins and is not merged in any way. This configuration may be overwritten by configuration in project-specific configuration files.
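To make the interpolation rules for output patterns concrete, here is a minimal sketch of how a pattern could resolve against a project's details. The `interpolateOutput` function and its simplified token handling are illustrative assumptions, not Nx's actual implementation (which also handles globs, negations, and validation):

```javascript
// Illustrative sketch only; not Nx's real resolution logic.
function interpolateOutput(pattern, { workspaceRoot, projectRoot, projectName, options = {} }) {
  return pattern
    .replace(/^\{workspaceRoot\}/, workspaceRoot) // {workspaceRoot} is only valid at the start
    .replace(/\{projectRoot\}/g, projectRoot)
    .replace(/\{projectName\}/g, projectName)
    .replace(/\{options\.([^}]+)\}/g, (_, prop) => String(options[prop]));
}

console.log(
  interpolateOutput('{workspaceRoot}/dist/{projectRoot}', {
    workspaceRoot: '/repo',
    projectRoot: 'apps/myreactapp',
    projectName: 'myreactapp',
  })
); // -> /repo/dist/apps/myreactapp
```

A pattern such as `{options.outputPath}` would resolve the same way, pulling the value from the task's options; as noted above, that path is then resolved from the workspace root.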
{% aside title="Warning" type="caution" %}
Specifying the same output location for multiple tasks often causes unintended behavior. While sometimes this is intentional, try to ensure that a set of targets will yield unique output locations for the tasks belonging to different projects. Use the `{projectRoot}` and `{projectName}` notation to include unique characteristics of a project in the output.
{% /aside %}

### Project level outputs

Defining `outputs` of a target in `project.json` or `package.json` will apply only to tasks of the specific project.

{% tabs %}
{% tabitem label="project.json" %}

```jsonc {% meta="{6}" %}
// apps/myreactapp/project.json
{
  "name": "myreactapp",
  "targets": {
    "build": {
      "outputs": ["{projectRoot}/dist"],
    },
  },
}
```

{% /tabitem %}
{% tabitem label="package.json" %}

The `package.json` file may include configuration for a specific Nx project. Defining `outputs` of a target here will apply only to tasks of the specific project.

```jsonc {% meta="{10}" %}
// apps/myreactapp/package.json
{
  "name": "myreactapp",
  "dependencies": {},
  "devDependencies": {},
  ...
  "nx": {
    "targets": {
      "build": {
        "outputs": ["{projectRoot}/dist"]
      }
      ...
    }
  }
}
```

{% /tabitem %}
{% /tabs %}

The above specifies that the `build` target of the `myreactapp` project will use the `outputs` specified. This configuration will override any `outputs` inferred by Nx Plugins as well as any `outputs` defined in the [Target Defaults](/docs/reference/nx-json#target-defaults) in the `nx.json` file, as this is more specific than those other methods of configuring `outputs`. The configuration defined here completely overwrites any `outputs` inferred by Nx Plugins or in target defaults and is not merged in any way.

---

## Migrate to Inferred Tasks (Project Crystal)

In this recipe, you'll learn how to migrate an existing Nx workspace from using executors in `project.json` to using [inferred tasks](/docs/concepts/inferred-tasks).
The main benefits of migrating to inferred tasks are:

- reducing the amount of configuration needed in `project.json`
- inferring the correct cache settings based on the tool configuration files
- [splitting tasks (Atomizer)](/docs/features/ci-features/split-e2e-tasks) for plugins that support it

{% youtube src="https://youtu.be/wADNsVItnsM" title="Project Crystal" /%}

For the best experience, we recommend that you [migrate](/docs/features/automate-updating-dependencies) to the latest Nx version before continuing. At a minimum, you should be on Nx version 19.6.

```shell
npx nx migrate latest
```

## Migrate all plugins

You can use the `infer-targets` generator to quickly migrate all available plugins to use inferred tasks. See the sections below for more details on the individual plugins' migration processes.

```shell
npx nx g infer-targets
```

The generator will automatically detect all available `convert-to-inferred` generators and run the ones you choose. If you only want to try it on a single project, pass the `--project` option.

## Migrate a single plugin

Most of the official plugins come with a `convert-to-inferred` generator. This generator will:

- register the inference plugin in the `plugins` section of `nx.json`
- migrate executor options into the tool's configuration files (where applicable)
- clean up `project.json` to remove targets and options that are unnecessary

To get started, run `nx g convert-to-inferred`, and you'll be prompted to choose a plugin to migrate.

```text {% title="npx nx g convert-to-inferred" frame="terminal" %}
? Which generator would you like to use? …
@nx/eslint:convert-to-inferred
@nx/playwright:convert-to-inferred
@nx/vite:convert-to-inferred
None of the above
```

{% aside type="note" title="Third-party plugins" %}
For third-party plugins that provide `convert-to-inferred` generators, you should pick the `None of the above` option and type in the name of the package manually.
Alternatively, you can provide the package explicitly with `nx g <package>:convert-to-inferred`.
{% /aside %}

We recommend that you check that the configurations are correct before continuing to the next plugin. If you only want to try it on a single project, pass the `--project` option.

## Understand the migration process

The `convert-to-inferred` generator removes uses of executors from the corresponding plugin. For example, if `@nx/vite` is migrated, then uses of the [`@nx/vite:build`](/docs/technologies/build-tools/vite/executors#build), [`@nx/vite:dev-server`](/docs/technologies/build-tools/vite/executors#dev-server), [`@nx/vite:preview-server`](/docs/technologies/build-tools/vite/executors#preview-server), and [`@nx/vite:test`](/docs/technologies/build-tools/vite/executors#test) executors will be removed. Target and configuration names are maintained for each project in their `project.json` files. A target may be removed from `project.json` if everything is inferred, that is, when options and configurations are not customized.

To get the full project details (including all inferred tasks), run:

```shell
npx nx show project <project-name>
```

For example, if we migrated the `@nx/vite` plugin for a single app (i.e. `nx g @nx/vite:convert-to-inferred --project demo`), then running `nx show project demo` will show a screen similar to the following.
{% project_details title="Test" height="300px" %}

```json
{
  "project": {
    "name": "demo",
    "data": {
      "root": "apps/demo",
      "projectType": "application",
      "targets": {
        "serve": {
          "executor": "nx:run-commands",
          "options": {
            "command": "vite dev",
            "continuous": true
          }
        },
        "build": {
          "executor": "nx:run-commands",
          "inputs": ["production", "^production"],
          "outputs": ["{projectRoot}/dist"],
          "options": {
            "command": "vite build"
          }
        }
      }
    }
  },
  "sourceMap": {
    "targets": ["apps/demo/vite.config.ts", "@nx/vite"],
    "targets.serve": ["apps/demo/vite.config.ts", "@nx/vite"],
    "targets.build": ["apps/demo/vite.config.ts", "@nx/vite"]
  }
}
```

{% /project_details %}

You'll notice that the `serve` and `build` tasks are running the [Vite CLI](https://vite.dev/guide/cli.html) and there are no references to Nx executors. Since the targets directly invoke the Vite CLI, any options that it accepts can be passed via Nx commands. For example, `nx serve demo --cors --port 8888` enables CORS and sets the port to `8888` using [Vite CLI options](https://vite.dev/guide/cli.html#options).

The same CLI setup applies to other plugins as well.

- `@nx/cypress` calls the [Cypress CLI](https://docs.cypress.io/guides/guides/command-line)
- `@nx/playwright` calls the [Playwright CLI](https://playwright.dev/docs/test-cli)
- `@nx/webpack` calls the [Webpack CLI](https://webpack.js.org/api/cli/)
- etc.

Read the recipe on [passing args to commands](/docs/guides/tasks--caching/pass-args-to-commands) for more information.

### Configuration file changes

There may also be changes to the configuration files used by the underlying tool. The changes come with comments to explain them, and may also provide next steps for you to take. One common change is to add support for different configuration options.
For example, if we have an existing Vite app with the following build target:

```json
// project.json
"build": {
  "executor": "@nx/vite:build",
  "options": {
    "mode": "development"
  },
  "defaultConfiguration": "production",
  "configurations": {
    "development": {},
    "production": {},
    "ci": {}
  }
}
```

Here we have `development`, `production`, and `ci` configurations. Running `nx g @nx/vite:convert-to-inferred` will then result in these lines being added to `vite.config.ts`:

```ts {% meta="{7-16}" %}
// vite.config.ts
/// <reference types="vitest" />
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';
import { nxViteTsPaths } from '@nx/vite/plugins/nx-tsconfig-paths.plugin';

// These options were migrated by @nx/vite:convert-to-inferred from the project.json file.
const configValues = { default: {}, development: {}, production: {}, ci: {} };

// Determine the correct configValue to use based on the configuration
const nxConfiguration = process.env.NX_TASK_TARGET_CONFIGURATION ?? 'default';

const options = {
  ...configValues.default,
  ...(configValues[nxConfiguration] ?? {}),
};

export default defineConfig({
  root: __dirname,
  cacheDir: '../../node_modules/.vite/apps/demo',
  // ...
});
```

The configuration changes ensure that passing `--configuration` still works for the target. Differences in options can be added to the `configValues` object, and the right value is determined using the `NX_TASK_TARGET_CONFIGURATION` [environment variable](/docs/reference/environment-variables). Again, there may be other types of changes, so read the comments to understand them.

### Register the Plugin with Nx

Lastly, you can inspect the `nx.json` file to see a new `plugins` entry.
For `@nx/vite`, there should be an entry like this:

```json
// nx.json
{
  "plugin": "@nx/vite/plugin",
  "options": {
    "buildTargetName": "build",
    "serveTargetName": "serve",
    "previewTargetName": "preview",
    "testTargetName": "test",
    "serveStaticTargetName": "serve-static"
  }
}
```

You may change the target name options to change how Nx adds them to the project. For example, if you use `"serveTargetName": "dev"` then you would run `nx dev demo` rather than `nx serve demo` for your Vite project.

## Verify the migration

The migrations maintain the same targets and configurations for each project, so to verify them you should run the affected targets. For example:

- for `@nx/vite` you should check the `build`, `serve`, and `test` targets
- for `@nx/playwright` you should check the `e2e` targets
- for `@nx/eslint` you should check the `lint` target
- etc.

Remember that the target names are defined in the plugin configuration in `nx.json`. Make sure that the tasks are all passing before migrating another plugin.

## Enable atomizer (task splitting)

These plugins come with the [Atomizer](/docs/features/ci-features/split-e2e-tasks) feature:

- `@nx/cypress`
- `@nx/jest`
- `@nx/gradle`
- `@nx/playwright`

The Atomizer splits potentially slow tasks into separate tasks per file. This feature, along with [task distribution](/docs/features/ci-features/distribute-task-execution), can speed up CI by distributing the split tasks among many agents. To enable Atomizer, make sure that you are [connected to Nx Cloud](/docs/guides/nx-cloud/setup-ci), and that you have distribution enabled in CI. Some plugins require extra configuration to enable Atomizer, so check the [individual plugin documentation page](/docs/plugin-registry) for more details.
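As an illustration of such plugin-specific configuration, plugins that support the Atomizer expose a CI target name in their `nx.json` plugin registration. For `@nx/playwright` the relevant options look roughly like this; treat the exact option names as an assumption and check the plugin's documentation for your version:

```jsonc
// nx.json
{
  "plugins": [
    {
      "plugin": "@nx/playwright/plugin",
      "options": {
        "targetName": "e2e",
        // Split tasks (one per test file) are grouped under this target in CI
        "ciTargetName": "e2e-ci"
      }
    }
  ]
}
```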
{% call_to_action title="Connect to Nx Cloud" icon="nxcloud" description="Enable task distribution and Atomizer" url="/docs/guides/nx-cloud/setup-ci" /%} ## Troubleshooting If you run into any issues during the migration, refer to the [troubleshooting guide](/docs/troubleshooting/troubleshoot-convert-to-inferred). --- ## Defining a Task Pipeline Running a specific task like `build` in a monorepo usually involves running multiple commands. If you want to learn more about the concept of a task pipeline and its importance in a monorepo, have a look at [the What is a Task Pipeline page](/docs/concepts/task-pipeline-configuration). {% youtube src="https://youtu.be/_U4hu6SuBaY?si=rSclPBdRh7P_xZ_f" title="Define a task pipeline" /%} ## Define dependencies between tasks You can define dependencies among tasks by using the `dependsOn` property: ```json // nx.json { ... "targetDefaults": { "build": { "dependsOn": ["^build"] } } } ``` ## Per project vs global Task dependencies can be [defined globally](/docs/reference/nx-json#target-defaults) for all projects in `nx.json` file: ```json // nx.json { ... "targetDefaults": { "build": { "dependsOn": ["^build"] } } } ``` Or they can be [defined per-project](/docs/reference/project-configuration#dependson) in the `project.json` or `package.json` files. If for example you have a `prebuild` step for a given project, you can define that relationship as follows: {% tabs %} {% tabitem label="package.json" %} ```json // apps/myapp/package.json { "name": "myapp", "dependencies": {}, "devDependencies": {}, ... "nx": { "targets": { "build": { "dependsOn": [ "prebuild" ] } } } } ``` {% /tabitem %} {% tabitem label="project.json" %} ```json // apps/myreactapp/project.json { "name": "myreactapp", ... 
  "targets": {
    "prebuild": {
      "command": "echo Prebuild"
    },
    "build": {
      "command": "echo Build",
      "dependsOn": ["prebuild"]
    }
  }
}
```

{% /tabitem %}
{% /tabs %}

## Continuous task dependencies

If a task has a dependency that never exits, then the task will never start. To support this scenario, you can mark the dependency as a [continuous task](/docs/reference/project-configuration#continuous). Labeling a task as continuous tells Nx not to wait for the process to exit; the task will instead be run alongside its dependents.

```json
// apps/myapp/project.json
{
  "targets": {
    "serve": {
      "continuous": true
    }
  }
}
```

The `continuous` option is most useful for running development servers. For example, the `e2e` task depends on a continuous `serve` task that starts the server to be tested against.

## Visualize task dependencies

You can also visualize the actual task graph (alongside the projects) using the [Nx graph](/docs/features/explore-graph). This can be useful for debugging purposes. To view the task graph in your browser, run:

```shell
npx nx graph
```

Then select "Tasks" from the top-left dropdown, choose the target (e.g. `build`, `test`, ...) and either show all tasks or select a specific project you're interested in. Here's an example of the `playwright` Nx plugin `build` target (in the [Nx repo](https://github.com/nrwl/nx)).
![Task graph of the Playwright Nx plugin in the nx repo being rendered in the browser](../../../../assets/guides/running-tasks/task-graph-playwright-nx.webp)

Alternatively, you can use the [Nx Console](/docs/getting-started/editor-setup) extension in VS Code or IntelliJ: right-click on the project and select:

![Selecting "Focus task in Nx Graph" from the context menu in VS Code](../../../../assets/guides/running-tasks/task-graph-context-menu.webp)

The task graph will then be rendered within the IDE:

![Task graph of the Playwright Nx plugin in the nx repo being rendered in VS Code](../../../../assets/guides/running-tasks/task-graph-vscode.webp)

---

## Pass Args to Commands

When you have an [inferred task](/docs/concepts/inferred-tasks) (or an explicitly defined task using the [`nx:run-commands` executor](/docs/guides/tasks--caching/run-commands-executor#2-update)) running a command, there will come a time when you need to pass args to the underlying command being run. This recipe explains how you can achieve that.
## Example project and tasks For this recipe we'll use a project with the following `project.json` file: ```json // apps/my-app/project.json { "sourceRoot": "apps/my-app/src", "projectType": "application", "targets": {} } ``` And the following [final configuration](/docs/reference/project-configuration): {% project_details title="Project Details View" %} ```json { "project": { "name": "my-app", "type": "app", "data": { "name": "my-app", "root": "apps/my-app", "sourceRoot": "apps/my-app/src", "projectType": "application", "targets": { "build": { "options": { "cwd": "apps/my-app", "command": "vite build" }, "cache": true, "dependsOn": ["^build"], "inputs": [ "production", "^production", { "externalDependencies": ["vite"] } ], "outputs": ["{workspaceRoot}/dist/apps/my-app"], "executor": "nx:run-commands", "configurations": {}, "metadata": { "technologies": ["vite"] } }, "serve": { "options": { "cwd": "apps/my-app", "command": "vite serve" }, "executor": "nx:run-commands", "configurations": {}, "metadata": { "technologies": ["vite"] } }, "preview": { "options": { "cwd": "apps/my-app", "command": "vite preview" }, "executor": "nx:run-commands", "configurations": {}, "metadata": { "technologies": ["vite"] } }, "test": { "options": { "cwd": "apps/my-app", "command": "vitest run" }, "cache": true, "inputs": [ "default", "^production", { "externalDependencies": ["vitest"] } ], "outputs": ["{workspaceRoot}/coverage/apps/my-app"], "executor": "nx:run-commands", "configurations": {}, "metadata": { "technologies": ["vite"] } } }, "tags": [], "implicitDependencies": [] } }, "sourceMap": { "root": ["apps/my-app/project.json", "nx/core/project-json"], "targets": ["apps/my-app/project.json", "nx/core/project-json"], "targets.build": ["apps/my-app/vite.config.ts", "@nx/vite/plugin"], "targets.build.command": ["apps/my-app/vite.config.ts", "@nx/vite/plugin"], "targets.build.options": ["apps/my-app/vite.config.ts", "@nx/vite/plugin"], "targets.build.cache": 
["apps/my-app/vite.config.ts", "@nx/vite/plugin"], "targets.build.dependsOn": [ "apps/my-app/vite.config.ts", "@nx/vite/plugin" ], "targets.build.inputs": ["apps/my-app/vite.config.ts", "@nx/vite/plugin"], "targets.build.outputs": ["apps/my-app/vite.config.ts", "@nx/vite/plugin"], "targets.build.options.cwd": [ "apps/my-app/vite.config.ts", "@nx/vite/plugin" ], "targets.serve": ["apps/my-app/vite.config.ts", "@nx/vite/plugin"], "targets.serve.command": ["apps/my-app/vite.config.ts", "@nx/vite/plugin"], "targets.serve.options": ["apps/my-app/vite.config.ts", "@nx/vite/plugin"], "targets.serve.options.cwd": [ "apps/my-app/vite.config.ts", "@nx/vite/plugin" ], "targets.preview": ["apps/my-app/vite.config.ts", "@nx/vite/plugin"], "targets.preview.command": [ "apps/my-app/vite.config.ts", "@nx/vite/plugin" ], "targets.preview.options": [ "apps/my-app/vite.config.ts", "@nx/vite/plugin" ], "targets.preview.options.cwd": [ "apps/my-app/vite.config.ts", "@nx/vite/plugin" ], "targets.test": ["apps/my-app/vite.config.ts", "@nx/vite/plugin"], "targets.test.command": ["apps/my-app/vite.config.ts", "@nx/vite/plugin"], "targets.test.options": ["apps/my-app/vite.config.ts", "@nx/vite/plugin"], "targets.test.cache": ["apps/my-app/vite.config.ts", "@nx/vite/plugin"], "targets.test.inputs": ["apps/my-app/vite.config.ts", "@nx/vite/plugin"], "targets.test.outputs": ["apps/my-app/vite.config.ts", "@nx/vite/plugin"], "targets.test.options.cwd": [ "apps/my-app/vite.config.ts", "@nx/vite/plugin" ], "name": ["apps/my-app/project.json", "nx/core/project-json"], "$schema": ["apps/my-app/project.json", "nx/core/project-json"], "sourceRoot": ["apps/my-app/project.json", "nx/core/project-json"], "projectType": ["apps/my-app/project.json", "nx/core/project-json"], "tags": ["apps/my-app/project.json", "nx/core/project-json"] } } ``` {% /project_details %} We'll focus on the `build` target of the project. 
If you expand the `build` target in the Project Details View above, you'll see that it runs the command `vite build`. In the next sections, we'll see how to provide `--assetsInlineLimit=2048` and `--assetsDir=static/assets` args to that command. ## Pass args in the `project.json` task configuration To statically pass some extra args to a specific project, you can update its `project.json` file. You can do it by either providing the args as individual options or by providing the `args` option: {% tabs %} {% tabitem label="Setting args directly as options" %} {% aside type="note" title="Providing command args as options" %} Support for providing command args as options was added in **Nx v18.1.1**. {% /aside %} ```json {% meta="{6-11}" %} // apps/my-app/project.json { "sourceRoot": "apps/my-app/src", "projectType": "application", "targets": { "build": { "options": { "assetsInlineLimit": 2048, "assetsDir": "static/assets" } } } } ``` {% /tabitem %} {% tabitem label="Setting the \"args\" option" %} ```json {% meta="{6-12}" %} // apps/my-app/project.json { "sourceRoot": "apps/my-app/src", "projectType": "application", "targets": { "build": { "options": { "args": ["--assetsInlineLimit=2048", "--assetsDir=static/assets"] // it also accepts a single string: // "args": "--assetsInlineLimit=2048 --assetsDir=static/assets" } } } } ``` {% /tabitem %} {% /tabs %} {% aside type="caution" title="Precedence" %} Args specified in the `args` option take precedence and will override any arg specified as an option with the same name. So, defining both `"args": ["--assetsDir=static/assets"]` and `"assetsDir": "different/path/to/assets"` will result in Nx running the command with `--assetsDir=static/assets`. {% /aside %} ## Pass args in the `targetDefaults` for the task To provide the same args for all projects in the workspace, you need to update the task configuration in the `nx.json` file. 
Similar to the previous section, you can do it by either providing the args as individual options or by providing the `args` option: {% tabs %} {% tabitem label="Setting args directly as options" %} ```json // nx.json { "targetDefaults": { "build": { "options": { "assetsInlineLimit": 2048, "assetsDir": "static/assets" } } } } ``` {% /tabitem %} {% tabitem label="Setting the \"args\" option" %} ```json // nx.json { "targetDefaults": { "build": { "options": { "args": ["--assetsInlineLimit=2048", "--assetsDir=static/assets"] } } } } ``` {% /tabitem %} {% /tabs %} {% aside type="caution" title="Be careful" %} If multiple targets with the same name run different commands (or use different executors), do not set options in `targetDefaults`. Different commands would accept different options, and the target defaults will apply to all targets with the same name regardless of the command they run. If you were to provide options in `targetDefaults` for them, the commands that don't expect those options could throw an error. {% /aside %} ## Pass args when running the command in the terminal To pass args in a one-off manner when running a task, you can provide them as individual options or via the `--args` option: {% tabs %} {% tabitem label="Providing args directly as options" %} ```shell nx build my-app --assetsInlineLimit=2048 --assetsDir=static/assets ``` {% /tabitem %} {% tabitem label="Providing the \"--args\" option" %} ```shell nx build my-app --args="--assetsInlineLimit=2048 --assetsDir=static/assets" ``` {% /tabitem %} {% /tabs %} {% aside type="caution" title="Conflicting options" %} If you provide an arg with the same name as an Nx CLI option (e.g. `--configuration`) or an `nx:run-commands` option (e.g. `--env`), the arg will be parsed as an option for the Nx CLI or the executor and won't be forwarded to the underlying command. You should provide the arg using the `--args` option in such cases.
You can also provide an arg with the same name to both the Nx CLI and the underlying command. For example, to run the `ci` configuration of a `test` target that runs the command `detox test` and pass the `--configuration` arg to the command, you can run: ```shell nx test mobile-e2e --configuration=ci --args="--configuration=ios.sim.release" ``` {% /aside %} --- ## Reduce Repetitive Configuration Nx can help you dramatically reduce the lines of configuration code that you need to maintain. Let's say you have three libraries in your repository - `lib1`, `lib2` and `lib3`. The folder structure looks like this: {% filetree %} - repo/ - libs/ - lib1/ - tsconfig.lib.json - project.json - lib2/ - tsconfig.lib.json - project.json - lib3/ - tsconfig.lib.json - project.json - nx.json {% /filetree %} ## Initial configuration settings All three libraries have a similar project configuration. Here is what their `project.json` files look like: {% tabs %} {% tabitem label="lib1" %} ```json // libs/lib1/project.json { "name": "lib1", "$schema": "../../node_modules/nx/schemas/project-schema.json", "sourceRoot": "libs/lib1/src", "projectType": "library", "targets": { "build": { "executor": "@nx/js:tsc", "outputs": ["{options.outputPath}"], "options": { "outputPath": "dist/libs/lib1", "main": "libs/lib1/src/index.ts", "tsConfig": "libs/lib1/tsconfig.lib.json", "assets": ["libs/lib1/*.md", "libs/lib1/src/images/*"] } }, "lint": { "executor": "@nx/eslint:lint", "outputs": ["{options.outputFile}"], "options": { "lintFilePatterns": ["libs/lib1/**/*.ts"] } }, "test": { "executor": "@nx/jest:jest", "outputs": ["{workspaceRoot}/coverage/{projectRoot}"], "options": { "jestConfig": "libs/lib1/jest.config.ts", "passWithNoTests": true }, "configurations": { "ci": { "ci": true, "codeCoverage": true } } } }, "tags": [] } ``` {% /tabitem %} {% tabitem label="lib2" %} ```json // libs/lib2/project.json { "name": "lib2", "$schema": "../../node_modules/nx/schemas/project-schema.json", "sourceRoot":
"libs/lib2/src", "projectType": "library", "targets": { "build": { "executor": "@nx/js:tsc", "outputs": ["{options.outputPath}"], "options": { "outputPath": "dist/libs/lib2", "main": "libs/lib2/src/index.ts", "tsConfig": "libs/lib2/tsconfig.lib.json", "assets": ["libs/lib2/*.md"] } }, "lint": { "executor": "@nx/eslint:lint", "outputs": ["{options.outputFile}"], "options": { "lintFilePatterns": ["libs/lib2/**/*.ts"] } }, "test": { "executor": "@nx/jest:jest", "outputs": ["{workspaceRoot}/coverage/{projectRoot}"], "options": { "jestConfig": "libs/lib2/jest.config.ts", "passWithNoTests": true, "testTimeout": 10000 }, "configurations": { "ci": { "ci": true, "codeCoverage": true } } } }, "tags": [] } ``` {% /tabitem %} {% tabitem label="lib3" %} ```json // libs/lib3/project.json { "name": "lib3", "$schema": "../../node_modules/nx/schemas/project-schema.json", "sourceRoot": "libs/lib3/src", "projectType": "library", "targets": { "build": { "executor": "@nx/js:tsc", "outputs": ["{options.outputPath}"], "options": { "outputPath": "dist/libs/lib3", "main": "libs/lib3/src/index.ts", "tsConfig": "libs/lib3/tsconfig.lib.json", "assets": ["libs/lib3/*.md"] } }, "lint": { "executor": "@nx/eslint:lint", "outputs": ["{options.outputFile}"], "options": { "lintFilePatterns": ["libs/lib3/**/*.ts"] } }, "test": { "executor": "@nx/jest:jest", "outputs": ["{workspaceRoot}/coverage/{projectRoot}"], "options": { "jestConfig": "libs/lib3/jest.config.ts", "passWithNoTests": true }, "configurations": { "ci": { "ci": true, "codeCoverage": true } } } }, "tags": [] } ``` {% /tabitem %} {% /tabs %} If you scan through these three files, they look very similar. The only differences aside from the project paths are that `lib1` has different assets defined for the `build` target and `lib2` has a `testTimeout` set for the `test` target. 
## Reduce configuration with targetDefaults Let's use [the `targetDefaults` property](/docs/reference/nx-json#target-defaults) in `nx.json` to reduce some of this duplicate configuration code. ```json // nx.json { "targetDefaults": { "build": { "executor": "@nx/js:tsc", "outputs": ["{options.outputPath}"], "options": { "outputPath": "dist/{projectRoot}", "main": "{projectRoot}/src/index.ts", "tsConfig": "{projectRoot}/tsconfig.lib.json", "assets": ["{projectRoot}/*.md"] } }, "lint": { "executor": "@nx/eslint:lint", "outputs": ["{options.outputFile}"], "options": { "lintFilePatterns": ["{projectRoot}/**/*.ts"] } }, "test": { "executor": "@nx/jest:jest", "outputs": ["{workspaceRoot}/coverage/{projectRoot}"], "options": { "jestConfig": "{projectRoot}/jest.config.ts", "passWithNoTests": true }, "configurations": { "ci": { "ci": true, "codeCoverage": true } } } } } ``` Now the `project.json` files can be reduced to this: {% tabs %} {% tabitem label="lib1" %} ```json // libs/lib1/project.json { "name": "lib1", "$schema": "../../node_modules/nx/schemas/project-schema.json", "sourceRoot": "libs/lib1/src", "projectType": "library", "targets": { "build": { "options": { "assets": ["libs/lib1/*.md", "libs/lib1/src/images/*"] } }, "lint": {}, "test": {} }, "tags": [] } ``` {% /tabitem %} {% tabitem label="lib2" %} ```json // libs/lib2/project.json { "name": "lib2", "$schema": "../../node_modules/nx/schemas/project-schema.json", "sourceRoot": "libs/lib2/src", "projectType": "library", "targets": { "build": {}, "lint": {}, "test": { "options": { "testTimeout": 10000 } } }, "tags": [] } ``` {% /tabitem %} {% tabitem label="lib3" %} ```json // libs/lib3/project.json { "name": "lib3", "$schema": "../../node_modules/nx/schemas/project-schema.json", "sourceRoot": "libs/lib3/src", "projectType": "library", "targets": { "build": {}, "lint": {}, "test": {} }, "tags": [] } ``` {% /tabitem %} {% /tabs %} {% aside type="caution" title="Target defaults" %} This recipe assumes every target 
with the same name uses the same executor. If you have targets with the same name using different executors and you're providing target defaults for executor options, don't place the executor options under a default target using the target name as the key. Instead, separate target default configurations can be added using the executors as the keys, each with their specific configuration. {% /aside %} ## Ramifications This change adds 33 lines of code to `nx.json` and removes 84 lines of code from the `project.json` files. That's a net reduction of 51 lines of code. And you'll get more benefits from this strategy the more projects you have in your repo. Reducing lines of code is nice, but just like using the DRY principle in code, there are other benefits: - You can easily change the default settings for the whole repository in one location. - When looking at a single project, it is clear how it differs from the defaults. {% aside type="caution" title="Don't Overdo It" %} You need to be careful to only put configuration settings in the `targetDefaults` that are actually defaults for the whole repository. If you have to make exceptions for most of the projects in your repository, then that setting probably should not be a default. {% /aside %} --- ## Run Root-Level NPM Scripts with Nx {% youtube src="https://www.youtube.com/embed/PRURABLaS8s" title="Run root-level NPM scripts with Nx" /%} There are often tasks in a codebase that apply to the whole codebase rather than a single project. Nx can run npm scripts directly from the root `package.json`. Let's say your root `package.json` looks like this: ```json // package.json { "name": "myorg", "scripts": { "docs": "node ./generateDocsSite.js" } } ``` We want to be able to run the `docs` script using Nx to get caching and other benefits.
## Setup To make Nx aware of the root `package.json` scripts, add an `"nx": {}` property to the root `package.json` ```json // package.json { "name": "myorg", "nx": {}, "scripts": { "docs": "node ./generateDocsSite.js" } } ``` ## Running a root-level target Once Nx is aware of your root-level scripts, you can run them the same way you would run any other target. Just use the name of your root `package.json` as the project name, or you can omit the project name and Nx will use the project in the current working directory as the default. For our example, you would run: ```text {% title="nx docs" frame="terminal" %} > nx run myorg:docs yarn run v1.22.19 $ node ./generateDocsSite.js Documentation site generated in /docs ———————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————— NX Successfully ran target docs for project myorg (5s) ``` ## Configuring a root-level target You can also configure the `inputs` and `outputs` or task pipelines for root-level targets the same way you would for any other target. Our fully configured example would look like this: ```jsonc // package.json { "name": "myorg", "nx": { // Nx can't infer the project dependency from the docs script, // so we manually create a dependency on the store app "implicitDependencies": ["store"], "targets": { "docs": { // generates docs from source code of all dependencies "inputs": ["^production"], // the docs site is created under /docs "outputs": ["{workspaceRoot}/docs"], }, }, }, "scripts": { "docs": "node ./generateDocsSite.js", }, } ``` To cache the `docs` target, you can set `cache: true` on the `docs` target shown in the `package.json` above. 
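Putting it together, a sketch of the fully configured target with caching enabled might look like this (same hypothetical `generateDocsSite.js` script and `store` dependency as in the example above):

```jsonc
// package.json
{
  "name": "myorg",
  "nx": {
    "implicitDependencies": ["store"],
    "targets": {
      "docs": {
        // enable caching for this root-level target
        "cache": true,
        "inputs": ["^production"],
        "outputs": ["{workspaceRoot}/docs"]
      }
    }
  },
  "scripts": {
    "docs": "node ./generateDocsSite.js"
  }
}
```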
Your output would then look as follows: ```text {% title="nx docs" frame="terminal" %} > nx run myorg:docs [existing outputs match the cache, left as is] yarn run v1.22.19 $ node ./generateDocsSite.js Documentation site generated in /docs ———————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————— NX Successfully ran target docs for project myorg (31ms) Nx read the output from the cache instead of running the command for 1 out of 1 tasks. ``` Read more about [caching task results](/docs/features/cache-task-results) and fine-tuning caching with [task inputs](/docs/guides/tasks--caching/configure-inputs). ## Keep using NPM to run scripts rather than Nx You can keep using `npm run docs` instead of the new `npx nx docs` version and still leverage the caching. To achieve this, you need to wrap your command with `nx exec` so that it can be piped through Nx. ```json // package.json { "name": "myorg", "nx": {}, "scripts": { "docs": "nx exec -- node ./generateDocsSite.js" } } ``` Read more in the [Nx exec docs](/docs/reference/nx-commands#nx-exec). --- ## Running Custom Commands You can easily run any command with the Nx toolchain. The main benefit is making the operation [cacheable](/docs/concepts/how-caching-works) and [distributable](/docs/features/ci-features/distribute-task-execution), as well as being able to use it [with Nx affected commands](/docs/features/ci-features/affected). ## 1. Define the terminal command to be run The command we want to run for each project is: ```shell make hello ``` With this `Makefile` in the root of the project: ```make hello: echo "Hello, world!" ``` ## 2. Update `project.json` For each project for which you want to enable `make`, add a target in its `project.json`: ```jsonc // project.json // ... "targets": { "make": { "command": "make hello" } // ... } ``` For more information (e.g.
how to forward arguments), see the [run-commands api doc](/docs/reference/nx/executors#run-commands). ## 3. Run the command To run the command, use the usual Nx syntax: ```shell nx run my-app:make ``` or ```shell nx make my-app ``` You can also run the `make` command for all projects it has been defined on: ```shell nx run-many -t make ``` Or just on a subset of projects that are [affected by a given change](/docs/features/ci-features/affected): ```shell nx affected -t make ``` --- ## Run Tasks in Parallel If you want to increase the number of processes running tasks to, say, 5 (by default, it is 3), pass the following: ```shell npx nx build myapp --parallel=5 ``` You can also set parallel based on the percentage of the number of logical CPUs. ```shell npx nx build myapp --parallel=50% ``` Note, you can also change the default in `nx.json`, like this: {% tabs %} {% tabitem label="Nx >= 17" %} ```json // nx.json { "parallel": 5 } ``` {% /tabitem %} {% tabitem label="Nx < 17" %} ```json // nx.json { "tasksRunnerOptions": { "default": { "runner": "nx/tasks-runners/default", "options": { "parallel": 5 } } } } ``` {% /tabitem %} {% /tabs %} --- ## Remote Cache Remote caching shares build results across your team and CI so you don't repeat work. You can use Nx Cloud for a fully managed solution or self-host with one of the available plugins. {% aside type="note" title="Nx Cloud: Managed Remote Cache" %} Recommended for everyone. 
- [Fully managed multi-tier remote caching with Nx Replay](/docs/features/ci-features/remote-cache) - [Both secure and fast](https://nx.dev/enterprise/security) - Generous free plan You'll also get access to advanced CI features: - [Automated distribution of tasks across machines with Nx Agents](/docs/features/ci-features/distribute-task-execution) - [Automated splitting of tasks (including e2e tests) with Nx Atomizer](/docs/features/ci-features/split-e2e-tasks) - [Detection and re-running of flaky tasks](/docs/features/ci-features/flaky-tasks) - [Self-healing CI and other AI features](https://nx.dev/ai) [Get Started](https://cloud.nx.app) {% /aside %} {% aside type="note" title="Nx Enterprise" %} Recommended for large organizations. Includes everything from Nx Cloud, plus: - Work hand-in-hand with the Nx team for continual improvement - Run on the Nx Cloud servers in any region or run fully self-contained, on-prem - SOC 2 type 1 and 2 compliant and comes with single-tenant, dedicated EU region hosting as well as on-premise options [Reach out for an Enterprise trial](https://nx.dev/enterprise/trial) {% /aside %} ## Self-hosted cache Great for proofs of concept and small teams. {% aside type="caution" title="Bucket-based caches are vulnerable to poisoning and often prohibited in organizations" %} CREEP (CVE-2025-36852) is a critical vulnerability in bucket-based self-hosted remote caches that allows anyone with PR access to poison production builds. Many organizations are unaware of this security risk. [Learn more](https://nx.dev/blog/creep-vulnerability-build-cache-security) All packages below (along with other bucket-based remote cache implementations) are listed in the CVE and are not allowed in many organizations. {% /aside %} All packages are free but require an activation key. Getting a key is a fully automated, self-service process that happens during package installation.
Install any of the following with `nx add`: | Package | Storage | Install command | | --- | --- | --- | | [`@nx/s3-cache`](/docs/reference/remote-cache-plugins/s3-cache/overview) | Amazon S3 bucket | `nx add @nx/s3-cache` | | [`@nx/gcs-cache`](/docs/reference/remote-cache-plugins/gcs-cache/overview) | Google Cloud Storage | `nx add @nx/gcs-cache` | | [`@nx/azure-cache`](/docs/reference/remote-cache-plugins/azure-cache/overview) | Azure Blob Storage | `nx add @nx/azure-cache` | | [`@nx/shared-fs-cache`](/docs/reference/remote-cache-plugins/shared-fs-cache/overview) | Shared file system directory | `nx add @nx/shared-fs-cache` | The `nx add` command installs the package, configures your workspace, and walks you through generating an activation key. The key is saved to `.nx/key/key.ini` and should be committed to your repository. In CI or public repositories, set the `NX_KEY` environment variable instead. If you don't have a key yet, run `nx register` to generate one. If your existing key is expired or invalid, delete `.nx/key/key.ini` and run `nx register` again. In CI, verify that the `NX_KEY` environment variable is set and matches the key in `.nx/key/key.ini`. > Why require an activation key? It simply helps us know and support our users. If you prefer not to provide this information, you can also [build your own cache server](#build-your-own-caching-server). ## Build your own caching server Starting in Nx version 20.8, you can build your own caching server using the OpenAPI specification below. This allows you to create a custom remote cache server tailored to your specific needs. The server manages all aspects of the remote cache, including storage, retrieval, and authentication. Implementation is up to you, but the server must adhere to the OpenAPI specification below to ensure compatibility with Nx's caching mechanism.
The endpoints transfer tar archives as binary data. Note that while the underlying data format may change in future Nx versions, the OpenAPI specification should remain stable. You can implement your server in any programming language or framework, as long as it adheres to the OpenAPI spec. ### Open API specification ```json // Nx 20.8+ { "openapi": "3.0.0", "info": { "title": "Nx custom remote cache specification.", "description": "Nx is an AI-first monorepo platform that connects everything from your editor to CI. Helping you deliver fast, without breaking things.", "version": "1.0.0" }, "paths": { "/v1/cache/{hash}": { "put": { "description": "Upload a task output", "operationId": "put", "security": [ { "bearerToken": [] } ], "responses": { "200": { "description": "Successfully uploaded the output" }, "401": { "description": "Missing or invalid authentication token.", "content": { "text/plain": { "schema": { "type": "string", "description": "Error message provided to the Nx CLI user" } } } }, "403": { "description": "Access forbidden. (e.g. 
read-only token used to write)", "content": { "text/plain": { "schema": { "type": "string", "description": "Error message provided to the Nx CLI user" } } } }, "409": { "description": "Cannot override an existing record" } }, "parameters": [ { "in": "header", "description": "The file size in bytes", "required": true, "schema": { "type": "number" }, "name": "Content-Length" }, { "name": "hash", "description": "The task hash corresponding to the uploaded task output", "in": "path", "required": true, "schema": { "type": "string" } } ], "requestBody": { "content": { "application/octet-stream": { "schema": { "type": "string", "format": "binary" } } } } }, "get": { "description": "Download a task output", "operationId": "get", "security": [ { "bearerToken": [] } ], "responses": { "200": { "description": "Successfully retrieved cache artifact", "content": { "application/octet-stream": { "schema": { "type": "string", "format": "binary", "description": "An octet stream with the content." } } } }, "403": { "description": "Access forbidden", "content": { "text/plain": { "schema": { "type": "string", "description": "Error message provided to the Nx CLI user" } } } }, "404": { "description": "The record was not found" } }, "parameters": [ { "name": "hash", "in": "path", "required": true, "schema": { "type": "string" } } ] } } }, "components": { "securitySchemes": { "bearerToken": { "type": "http", "description": "Auth mechanism", "scheme": "bearer" } } } } ``` ### Usage notes To use your custom caching server, set the `NX_SELF_HOSTED_REMOTE_CACHE_SERVER` environment variable. The following environment variables also affect behavior: - `NX_SELF_HOSTED_REMOTE_CACHE_ACCESS_TOKEN`: The authentication token to access the cache server. - `NODE_TLS_REJECT_UNAUTHORIZED`: Set to `0` to disable TLS certificate validation. 
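For example, pointing Nx at such a server is a matter of exporting these variables before running tasks (the server URL and token below are placeholders, not real endpoints):

```shell
# Point Nx at your custom cache server (placeholder URL)
export NX_SELF_HOSTED_REMOTE_CACHE_SERVER="https://nx-cache.internal.example.com"
# Token your server's bearer auth expects (placeholder value)
export NX_SELF_HOSTED_REMOTE_CACHE_ACCESS_TOKEN="my-read-write-token"
```

With these set, subsequent `nx` commands in the same shell will read from and write to the custom server.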
### Migrating from custom task runners You might have used Nx's now-deprecated custom task runners API in these scenarios: - To implement custom self-hosted caching: use one of the implementations listed above - To inject custom behavior before and after running tasks: use our new API with dedicated pre and post hooks To learn more about migrating from custom task runners, [please refer to this detailed guide](/docs/reference/deprecated/custom-tasks-runner). ## Looking for conformance or code owners? If you previously used Nx Powerpack for conformance rules or code ownership, those features are now part of [Nx Enterprise](/docs/enterprise). See: - [Conformance](/docs/enterprise/conformance) — write and enforce language-agnostic rules across your workspace - [Owners](/docs/enterprise/owners) — define code ownership at the project level and auto-generate CODEOWNERS files --- ## Skip Task Caching There are times when you might want to bypass the [caching mechanism](/docs/features/cache-task-results), either locally or remotely. ## Skip caching To skip caching for a specific task, use the `--skip-nx-cache` flag. This can be useful when you want to ensure that a task runs fresh, without using cached results. ```shell npx nx build --skip-nx-cache ``` This will avoid using any local or remote cache. If using Nx Cloud, the run will not be recorded either. ## Skip remote caching from Nx Cloud To skip remote caching provided by Nx Cloud, use the `--no-cloud` flag. This ensures that the task does not use cached results from Nx Cloud. ```shell npx nx build --no-cloud ``` It will **still use the local cache if available**. If using Nx Cloud, the run will not be recorded either.
## Reset To clear the cache and metadata about the workspace and shut down the Nx Daemon, you can use the [reset](/docs/reference/nx-commands#nx-reset) command: ```shell npx nx reset ``` --- ## Terminal UI In version 21, Nx provides an interactive UI in the terminal to help you view the results of multiple tasks that are running in parallel. {% youtube src="https://youtu.be/ykaMAh83fPM" title="New Terminal UI for Nx" /%} {% aside type="note" title="Windows Compatibility" %} The initial Nx 21 release disables the Terminal UI on Windows. We are currently working on Windows support, so stay tuned. {% /aside %} ## Enable/disable the terminal UI If your terminal and environment are supported, then the Terminal UI will be enabled by default when you run any tasks with `nx run`/`nx run-many`/`nx affected` in Nx v21 and later. The Terminal UI will not be used in CI environments. To manually control the Terminal UI, you can: - Use the `--no-tui` flag when running commands: `nx run-many -t build --no-tui` - Use the `--tui` flag to explicitly enable it: `nx run-many -t build --tui` - Set `NX_TUI=false` in your environment variables - Set `tui.enabled` to `false` in your `nx.json` configuration file ```json // nx.json { "tui": { "enabled": false } } ``` ## Configure the terminal UI There are also some configuration options that control the way the terminal UI behaves. ### Auto-exit By default, the Terminal UI will automatically exit 3 seconds after all relevant tasks have finished running. You can adjust this behavior in the following ways: - Set `"tui.autoExit"` to a number to change the number of seconds to wait before auto-exiting. - Set `"tui.autoExit"` to `false` to disable auto-exiting and keep the Terminal UI open until you manually exit it with `ctrl+c`. - Set `"tui.autoExit"` to `true` to exit immediately after all tasks have finished running.
```json // nx.json { "tui": { "autoExit": 3 // Equivalent of the default behavior: auto-exit after 3 seconds } } ``` ## Use the terminal UI The terminal UI is entirely controlled through keyboard shortcuts. You can view a list of the available shortcuts by typing `?`: ![Terminal UI Help](../../../../assets/guides/running-tasks/tui-help.png) You can use these commands to hide and show up to 2 tasks at a time, filter the listed tasks and interact with tasks that are prompting for user input. --- ## Workspace Watching {% youtube src="https://youtu.be/0eVplUl1zBE?si=KtmiyRm1AcYc01td" title="Workspace watching" /%} Nx can watch your workspace and execute commands based on project or file changes. Imagine the following project graph with these projects: {% graph height="450px" %} ```json { "projects": [ { "type": "lib", "name": "main-lib", "data": { "tags": [] } }, { "type": "lib", "name": "lib", "data": { "tags": [] } }, { "type": "lib", "name": "lib2", "data": { "tags": [] } }, { "type": "lib", "name": "lib3", "data": { "tags": [] } } ], "groupByFolder": false, "dependencies": { "main-lib": [ { "target": "lib", "source": "main-lib", "type": "direct" }, { "target": "lib2", "source": "main-lib", "type": "direct" }, { "target": "lib3", "source": "main-lib", "type": "direct" } ], "lib": [], "lib2": [], "lib3": [] }, "workspaceLayout": { "appsDir": "apps", "libsDir": "libs" }, "affectedProjectIds": [], "focus": null, "exclude": [] } ``` {% /graph %} Traditionally, if you want to rebuild your projects whenever they change, you would have to set up an ad-hoc watching system to watch each project. Rather than setting up a watch manually, Nx can be used to watch projects and execute a command whenever they change. With the following command, Nx is told to watch all projects, and execute `nx run $NX_PROJECT_NAME:build` for each change.
```shell nx watch --all -- nx run \$NX_PROJECT_NAME:build ``` {% aside type="note" title="Escaping" %} Note the backslash (`\`) before the `$`. This is needed so that your shell doesn't automatically interpolate the variables. There are also some quirks if this command is run with a package manager. [Find out how to run this command with those managers here.](#running-nx-watch-with-package-managers) {% /aside %} {% aside type="note" title="Windows" %} If you're running this command on Windows Powershell (not WSL), the environment variables need to be wrapped in `%`. For example: ```shell nx watch --all -- nx run %NX_PROJECT_NAME%:build ``` {% /aside %} Now every time a package changes, Nx will run the build. If multiple packages change at the same time, Nx will run the callback for each changed project. Then if additional changes happen while a command is in progress, Nx will batch those changes, and execute them once the current command completes. ## Watch environment variables Nx will run the watch callback command with the `NX_PROJECT_NAME` and `NX_FILE_CHANGES` environment variables set. - `NX_PROJECT_NAME` will be the name of the project. - `NX_FILE_CHANGES` will be a space-separated list of the files that changed (i.e., if `file1.txt` and `file2.txt` change, `NX_FILE_CHANGES` will be `file1.txt file2.txt`). This allows you to pass the list of files to other commands that accept this format. ### Running Nx watch with package managers In the examples above, the `nx watch` command was run directly in the terminal. Usually environments aren't set up to include node_module bins automatically in the shell path, so the package manager's run/exec command is used instead. For example, `npx`, `yarn`, `pnpm run`. When running `npx nx watch --all -- echo \$NX_PROJECT_NAME` (or equivalent), the watch command may not execute as expected. For example, the environment variables may appear to be blank. Below are the ways to run the watch with each package manager.
#### pnpm ```shell pnpm nx watch --all -- echo \$NX_PROJECT_NAME ``` #### yarn ```shell yarn nx -- watch --all -- echo \$NX_PROJECT_NAME ``` #### npx ```shell npx -c 'nx watch --all -- echo \$NX_PROJECT_NAME' ``` ## Additional use cases ### Watching for specific projects To watch for specific projects and echo the changed files, run this command: ```shell nx watch --projects=app1,app2 -- echo \$NX_FILE_CHANGES ``` ### Watching for dependent projects To watch for a project and its dependencies, run this command: ```shell nx watch --projects=app1 --includeDependentProjects -- echo \$NX_PROJECT_NAME ``` ### Rebuilding dependent projects while developing an application In a monorepo setup, your application might rely on several libraries that need to be built before they can be used in the application. While the [task pipeline](/docs/guides/tasks--caching/defining-task-pipeline) automatically handles this during builds, you'd want the same behavior during development when serving your application with a dev server. To watch and rebuild the dependent libraries of an application, use the following command: ```shell nx watch --projects=my-app --includeDependentProjects -- nx run-many -t build -p \$NX_PROJECT_NAME --exclude=my-app ``` `--includeDependentProjects` ensures that any changes to projects your application depends on trigger a rebuild, while `--exclude=my-app` skips rebuilding the app itself since it's already being served by the development server. --- ## Tips and Tricks {% index_page_cards path="guides/tips-n-tricks" /%} --- ## Advanced Update Process This guide describes advanced scenarios when it comes to updating Nx and the workspace's dependencies. It starts with a summary of the [standard update process](/docs/features/automate-updating-dependencies) and continues with those advanced use cases. ## Updating to the latest Nx version The following steps are a summary of the [standard update process](/docs/features/automate-updating-dependencies).
For more information on each step, please visit that page. ### Step 1: Updating dependencies and generating migrations First, run the `migrate` command: ```shell nx migrate latest # same as nx migrate nx@latest ``` This performs the following changes: - Updates the versions of the relevant packages in the `package.json` file. - Generates a `migrations.json` if there are pending migrations. ### Step 2: Running migrations The next step in the process involves using the `migrate` command to apply the migrations that were generated in the `migrations.json` file in the previous step. You can do so by running: ```shell nx migrate --run-migrations ``` All changes to your source code will be unstaged and ready for you to review and commit yourself. ### Step 3: Cleaning up After you run all the migrations, you can remove `migrations.json` and commit any outstanding changes. ## Recommendations ### One major version at a time, small steps Migrating Jest, Cypress, ESLint, React, Angular, Next, and more is a difficult task. All the tools change at different rates, and they can conflict with each other. In addition, every workspace is different. Even though our goal is for you to update any version of Nx to a newer version of Nx in a single go, sometimes it doesn't work. The recommended process is to update, at most, one major version at a time. Say you want to migrate from Nx 17.1.0 to Nx 18.2.4. The following steps are more likely to work compared to running `nx migrate 18.2.4` directly. - Run `nx migrate 17.3.2` to update to the latest version in the 17.x branch. - Run `nx migrate --run-migrations`. - Next, run `nx migrate 18.2.4`. - Run `nx migrate --run-migrations`. {% aside type="caution" title="Angular updates" %} If your workspace uses Angular, this becomes a requirement rather than a recommendation. The Angular packages maintain migrations for a single major version at a time.
If you try to update over multiple major versions, only the migrations for the latest major version will be applied. This can lead to issues in your workspace. {% /aside %} ## Managing migration steps When you run into problems running the `nx migrate --run-migrations` command, here are some solutions to break the process down into manageable steps. ### Make changes easier to review by committing after each migration runs Depending on the size of the update (e.g. migrating between major versions is likely to require more significant changes than migrating between feature releases) and the size of the workspace, the overall `nx migrate` process may generate a lot of changes that then need to be reviewed. If manual changes need to be made in addition to those made by `nx migrate`, the associated PR becomes harder to review because automated and manual changes can't be distinguished from each other. If you pass `--create-commits` to the `--run-migrations` command, Nx will automatically create a dedicated commit for each successfully completed migration, for example: ```shell nx migrate --run-migrations --create-commits ``` Your git history will then look something like the following: ```text {% title="nx migrate --run-migrations --create-commits" frame="terminal" %} git log commit 8c862c780106ab8736985c01de1477309a403548 Author: YOUR_GIT_USERNAME Date: Thu Apr 14 18:35:44 2022 +0400 chore: [nx migration] name-of-the-second-migration-which-ran commit eb83bca97927af26aae731a2cf51ad62cc75efa3 Author: YOUR_GIT_USERNAME Date: Thu Apr 14 18:35:44 2022 +0400 chore: [nx migration] name-of-the-first-migration-which-ran etc ``` By default, Nx will apply the prefix of `chore: [nx migration] ` to each commit in order to clearly identify it, but you can also customize this prefix by passing `--commit-prefix` to the command: ```shell nx migrate --run-migrations --create-commits --commit-prefix="chore(core): 
AUTOMATED - " ``` ```text {% title="git log" frame="terminal" %} commit 8c862c780106ab8736985c01de1477309a403548 Author: YOUR_GIT_USERNAME Date: Thu Apr 14 18:35:44 2022 +0400 chore(core): AUTOMATED - name-of-the-second-migration-which-ran commit eb83bca97927af26aae731a2cf51ad62cc75efa3 Author: YOUR_GIT_USERNAME Date: Thu Apr 14 18:35:44 2022 +0400 chore(core): AUTOMATED - name-of-the-first-migration-which-ran etc ``` ### Customizing which migrations run by altering `migrations.json` For small projects, running all the migrations at once often succeeds without any issues. For large projects, more flexibility is sometimes needed, and this is where having the separation between generating the migrations to be run, and actually running them, really shines. All you need to do is amend the JSON file in whatever way makes sense based on your circumstances, for example: - You may have to skip a migration. - You may want to run one migration at a time to address minor issues. - You may want to reorder migrations. - You may want to run the same migration multiple times if the process takes a long time and you had to rebase. Because you can run `nx migrate --run-migrations` as many times as you want, you can achieve all of that by commenting out and reordering items in `migrations.json`. The migration process can take a long time, depending on the number of migrations, so it is useful to commit the migrations file with the partially-updated repo alongside any changes which were created by previously completed migrations. You can even provide a custom location for the migrations file if you wish; simply pass it to the `--run-migrations` option: ```shell nx migrate --run-migrations=migrations.json ``` ## Choosing optional package updates to apply While in most cases you want to be up to date with Nx and the dependencies it manages, sometimes you might need to stay on an older version of such a dependency.
For example, you might want to update Nx to the latest version but keep Angular on **v15.x.x** and not update it to **v16.x.x**. For such scenarios, `nx migrate` allows you to choose what to update using the `--interactive` flag. {% aside type="note" title="Optional package updates" %} You can't choose to skip any arbitrary package update. To ensure that a plugin works well with older versions of a given package, the plugin must support it. Therefore, Nx plugin authors define what package updates are optional. {% /aside %} {% aside type="caution" title="Taking control of package updates" %} While opting out of applying some package updates is supported by Nx, please keep in mind that you are effectively taking control of those package updates and opting out of Nx managing them. This means you'll need to keep up with the version requirements for those packages and those that depend on them. You'll also need to consider more things when updating them at some point [as explained later](#updating-dependencies-that-are-behind-the-versions-nx-manages). {% /aside %} ### Interactively opting out of package updates To opt out of package updates, you need to run the migration in interactive mode: ```shell nx migrate latest --interactive ``` As the migration runs and collects the package updates, you'll be prompted to apply optional package updates, and you can choose what to do based on your needs. The `package.json` will be updated and the `migrations.json` will be generated considering your responses to those prompts. ### Updating dependencies that are behind the versions Nx manages Once you have skipped some optional updates, there'll come a time when you'll want to update those packages. To do so, you'll need to generate the package updates and migrations from the Nx version that contained those skipped updates. Say you skipped updating Angular to **v16.x.x**. 
That package update was meant to happen as part of the `@nx/angular@16.1.0` update, but you decided to skip it at the time. The recommended way to collect the migrations from such an older version is to run the following: ```shell nx migrate latest --from=nx@16.0.0 --exclude-applied-migrations ``` A couple of things are happening there: - The `--from=nx@16.0.0` flag tells the `migrate` command to use the version **16.0.0** as the installed version for the `nx` package and all the first-party Nx plugins. Note we use a version lower than the one where the update was meant to happen. This is to account for the fact that the update is normally targeted to a prerelease version for testing it before the final release. - The `--exclude-applied-migrations` flag tells the `migrate` command not to collect migrations that should have been applied on previous updates. So, the above command will effectively collect any package update and migration meant to run if your workspace had `nx@16.0.0` installed while excluding those that should have been applied before. You can provide a different older version to collect migrations from. {% aside type="caution" title="Automatically excluding previously applied migrations" %} Automatically excluding previously applied migrations doesn't consider migrations manually removed from the `migrations.json` in previous updates. If you've manually removed migrations in the past and want to run them, don't pass the `--exclude-applied-migrations` flag, so that all previous migrations are collected. {% /aside %} ### Identifying the Nx version to migrate from to collect previously skipped updates After running the migrations in interactive mode and opting out of some package updates, a message is printed to the terminal with the command to run later to collect and apply those skipped updates.
For example, if you skipped updating Angular to **v16.0.0**, this is the output you'll see (simplified for brevity): ```shell nx migrate latest --interactive Fetching meta data about packages. It may take a few minutes. ... ✔ Do you want to update to TypeScript v5.0? (Y/n) · false ✔ Do you want to update the Angular version to v16? (Y/n) · false NX The migrate command has run successfully. - package.json has been updated. - migrations.json has been generated. NX Next steps: - Make sure package.json changes make sense and then run 'pnpm install --no-frozen-lockfile', - Run 'pnpm exec nx migrate --run-migrations' - You opted out of some migrations for now. Write the following command down somewhere to apply these migrations later: - nx migrate 16.5.3 --from nx@16.1.0-beta.0 --exclude-applied-migrations - To learn more go to https://nx.dev/recipes/other/advanced-update ``` You can see in the "Next steps" section a suggested command to run to apply the skipped package updates. Make sure to store that information somewhere so you can later remember from which version you need to run the migration to apply the skipped package updates. Please note the suggested command is only based on a particular run of the `nx migrate` command. If you've skipped package updates in previous runs, you'll need to use the oldest version you've stored that you haven't yet run the migration from. If you don't have the command and need to find out from which version to run the migration, you can take a look at the `migrations.json` file of the relevant package. For example, if you skipped updating Angular to **v16.0.0**, you can take a look at the `migrations.json` file of the `@nx/angular` package and you'll find the following: ```jsonc // node_modules/@nx/angular/migrations.json { // ... "packageJsonUpdates": { // ... 
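// Note on the structure below: each key under "packageJsonUpdates" names the
// plugin release that shipped the update; the entry's "version" field is the
// exact (often prerelease) version the update is attached to, and "packages"
// lists the dependency versions it applies.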
"16.1.0": { "version": "16.1.0-beta.1", "x-prompt": "Do you want to update the Angular version to v16?", "requires": { "@angular/core": ">=15.2.0 <16.0.0", }, "packages": { "@angular/core": { "version": "~16.0.0", "alwaysAddToPackageJson": true, }, // ... }, }, // ... }, } ``` You can see the `16.1.0-beta.1` version is the one that contains the update for `@angular/core` to **~16.0.0**. That's the version you need to run the migration from to apply the package update. ## Other advanced capabilities ### Overriding versions Sometimes, you may want to use a different version of a package than what Nx recommends. To do that, specify the package and version: ```shell nx migrate latest --to="jest@22.0.0,cypress@3.4.0" ``` By default, Nx uses currently installed packages to calculate what migrations need to run. To override them, override the version: ```shell nx migrate latest --to="@nx/jest@12.0.0" ``` {% aside type="caution" title="Overriding versions" %} By choosing a version different of what Nx recommend you might use a package version that might have not been tested for a given Nx version. This might lead to unexpected issues. {% /aside %} ### Reverting a failed update Updates are best done on a clean git history so that it can be easily reversed if something fails. We try our best to make sure migrations do not fail but if one does, **please report it** on [GitHub](https://www.github.com/nrwl/nx/issues/new/). If an update fails for any reason, you can revert it as you do any other set of changes: ```shell git reset --hard # Reset any changes git clean -fd # Delete newly added files and directories ``` {% aside type="caution" title="--create-commits" %} If using `--create-commits`, you will need to first retrieve the SHA of the commit before your first automated migration commit in order to jump back to the point before the migrations ran, e.g. 
`git reset --hard YOUR_APPROPRIATE_SHA_HERE` {% /aside %} --- ## Disable Graph Links Created from Analyzing Source Files If you want to disable detecting dependencies from source code and want to only use the dependencies as defined in `package.json` (the same way yarn does), you can add the following configuration to your `nx.json` file: ```json // nx.json { "pluginsConfig": { "@nx/js": { "analyzeSourceFiles": false } } } ``` ## Default The default setting for Nx repos is `"analyzeSourceFiles": true`. The assumption is that if there is a real link in the code between projects, you want to know about it. For Lerna repos, the default value is `false` in order to maintain backward compatibility with the way Lerna has always calculated dependencies. --- ## Configuring Browser Support The official Nx plugins rely on [browserslist](https://github.com/browserslist/browserslist) for configuring application browser support. This affects builds, both production and development, and decides which transformations will be run on the code when built. In general, the more modern your application's browser support is, the smaller the file size, as the code can rely on modern APIs being present and does not have to ship polyfills or shimmed code. By default, applications generated from official Nx generators ship an aggressively modern browser support config, in the form of a `.browserslistrc` file in the root of the application with the following contents. ```text last 1 Chrome version last 1 Firefox version last 2 Edge major versions last 2 Safari major versions last 2 iOS major versions Firefox ESR not IE 9-11 ``` This configuration is used for many tools including babel, autoprefixer, postcss, and more to decide which transforms are necessary on the source code when producing built code to run in the browser.
For additional information regarding the format and rule options, please see: https://github.com/browserslist/browserslist#queries ## Debugging browser support Sometimes broad configurations like `> 0.5%, not IE 11` can lead to surprising results, due to supporting browsers like Opera Mini or Android UC browser. To see what browsers your configuration supports, run `npx browserslist` in the application's directory to get an output of browsers and versions to support. ```text {% title="npx browserslist" frame="terminal" %} and_chr 61 chrome 83 edge 83 edge 81 firefox 78 firefox 68 ie 11 ios_saf 13.4-13.5 ios_saf 13.3 ios_saf 13.2 ios_saf 13.0-13.1 ios_saf 12.2-12.4 ios_saf 12.0-12.1 safari 13.1 safari 13 safari 12.1 safari 12 ``` Alternatively, if your support config is short, you can pass it as a string param on the CLI: ```shell npx browserslist '> 0.5%, not IE 11' ``` --- ## Define Environment Variables Environment variables are global system variables accessible by all the processes running under the Operating System (OS). Environment variables are useful to store system-wide values such as the directories to search for executable programs (PATH), OS version, network information, and custom variables. These env variables are passed at build time and used at the runtime of an app. ## Set environment variables By default, Nx will load any environment variables you place in the following files: 1. `[project-root]/.env.[target-name].[target-configuration-name].local` 2. `[project-root]/.env.[target-name].[target-configuration-name]` 3. `[project-root]/.[target-name].[target-configuration-name].local.env` 4. `[project-root]/.[target-name].[target-configuration-name].env` 5. `[project-root]/.env.[target-configuration-name].local` 6. `[project-root]/.env.[target-configuration-name]` 7. `[project-root]/.[target-configuration-name].local.env` 8. `[project-root]/.[target-configuration-name].env` 9. `[project-root]/.env.[target-name].local` 10. 
`[project-root]/.env.[target-name]` 11. `[project-root]/.[target-name].local.env` 12. `[project-root]/.[target-name].env` 13. `[project-root]/.env.local` 14. `[project-root]/.local.env` 15. `[project-root]/.env` 16. `.env.[target-name].[target-configuration-name].local` 17. `.env.[target-name].[target-configuration-name]` 18. `.[target-name].[target-configuration-name].local.env` 19. `.[target-name].[target-configuration-name].env` 20. `.env.[target-configuration-name].local` 21. `.env.[target-configuration-name]` 22. `.[target-configuration-name].local.env` 23. `.[target-configuration-name].env` 24. `.env.[target-name].local` 25. `.env.[target-name]` 26. `.[target-name].local.env` 27. `.[target-name].env` 28. `.env.local` 29. `.local.env` 30. `.env` Expressed more concisely, the priority order is: - [project-root]/[target-name].[target-configuration-name] - [project-root]/[target-configuration-name] - [project-root]/[target-name] - [project-root]/{general environment variables} - [target-name].[target-configuration-name] - [target-configuration-name] - [target-name] - {general environment variables} {% aside type="caution" title="Order is important" %} Nx will move through the above list, ignoring files it can't find, and loading environment variables into the current process for the ones it can find. If it finds a variable that has already been loaded into the process, it will ignore it. It does this for a few reasons: 1. Developers can't accidentally overwrite important system-level variables (like `NODE_ENV`) 2. Allows developers to create `.env.local` or `.local.env` files for their local environment and override any project defaults set in `.env` 3. Allows developers to create target-specific `.env.[target-name]` or `.[target-name].env` files to overwrite environment variables for specific targets. For instance, you could increase the memory available to Node processes only for build targets by setting `NODE_OPTIONS=--max-old-space-size=4096` in `.build.env` For example: 1. 
`apps/my-app/.env.local` contains `NX_PUBLIC_API_URL=http://localhost:3333` 2. `apps/my-app/.env` contains `NX_PUBLIC_API_URL=https://api.example.com` 3. Nx will first load the variables from `apps/my-app/.env.local` into the process. When it tries to load the variables from `apps/my-app/.env`, it will notice that `NX_PUBLIC_API_URL` already exists, so it will ignore it. We recommend nesting your **app**-specific `env` files in `apps/your-app`, and creating workspace/root level `env` files for workspace-specific settings (like the [Nx Cloud token](/docs/guides/nx-cloud/access-tokens)). {% /aside %} ### Environment variables for atomized targets Atomized targets have their names created dynamically, typically using the file names as a suffix. This makes it difficult to define environment variable files for them. For atomized targets, Nx will instead search for the atomized target's parent target and its non-atomized counterpart. Instead of looking for a file named e.g. `.env.e2e-ci--path/to/atomized/file`, Nx will instead look for: - `.env.e2e-ci` and - `.env.e2e` The order above, combining targets, configurations, and path priority, remains the same: - [project-root]/[parent-target-name].[target-configuration-name] - [project-root]/[non-atomized-target-name].[target-configuration-name] - [project-root]/[target-configuration-name] - [project-root]/[parent-target-name] - [project-root]/[non-atomized-target-name] - [project-root]/{general environment variables} - [target-name].[target-configuration-name] - [target-configuration-name] - [parent-target-name] - [non-atomized-target-name] - {general environment variables} ### Environment variables for configurations Nx will only load environment variable files for a particular configuration if that configuration is defined for a task, even if you specify that configuration name from the command line.
So if there is no `development` configuration defined for the `app`'s `build` task, the following command will use `.env.build` instead of `.env.build.development`: ```shell nx build app --configuration development ``` In order to have Nx actually use the `.env.build.development` environment variables, the `development` configuration needs to be set for the task (even if it is empty). ```jsonc {% meta="{5-7}" %} // apps/app/project.json { "targets": { "build": { // ... "configurations": { "development": {}, }, }, }, } ``` ### Point to custom env files If you want to load variables from `env` files other than the ones listed above: 1. Use the [env-cmd](https://www.npmjs.com/package/env-cmd) package: `env-cmd -f .qa.env nx serve` 2. Use [dotenvx](https://github.com/dotenvx/dotenvx): `dotenvx run --env-file=.qa.env -- nx serve` 3. Use the `envFile` option of the [run-commands](/docs/guides/tasks--caching/run-commands-executor#envfile) builder and execute your command inside of the builder ### Ad-hoc variables You can also define environment variables in an ad-hoc manner using support from your OS and shell. **Unix systems** In Unix systems, we need to set the environment variables before calling a command. Let's say that we want to define an API URL for the application to use: ```shell NX_PUBLIC_API_URL=http://localhost:3333 nx build myapp ``` **Windows (cmd.exe)** ```shell set "NX_PUBLIC_API_URL=http://localhost:3333" && nx build myapp ``` **Windows (Powershell)** ```shell ($env:NX_PUBLIC_API_URL = "http://localhost:3333") -and (nx build myapp) ``` --- ## Optimize Testing with Feature-Based Testing Feature-based testing is a strategy that co-locates tests with the features they verify. This approach helps you test only what's changed, reducing unnecessary test execution and improving feedback loops. ## The problem: monolithic test projects Many projects have a single, large test project that depends on the entire application, such as e2e projects. 
While this setup ensures tests run when any dependency changes, it also means **all tests run even when only one subset of the app changes**. {% aside type="tip" title="Beyond E2E Tests" %} While this guide uses end-to-end (e2e) tests as examples, the same strategy applies to any type of testing—integration tests, component tests, or even large unit test suites. The principles of feature-based testing work wherever you have monolithic test projects that could benefit from being split by feature. {% /aside %} Consider a typical setup where all e2e tests live in a single project at the top of the graph: {% graph height="400px" %} ```json { "projects": [ { "name": "fancy-app-e2e", "type": "app", "data": { "tags": [] } }, { "name": "fancy-app", "type": "app", "data": { "tags": [] } }, { "name": "shared-ui", "type": "lib", "data": { "tags": [] } }, { "name": "feat-cart", "type": "lib", "data": { "tags": [] } }, { "name": "feat-products", "type": "lib", "data": { "tags": [] } } ], "dependencies": { "fancy-app-e2e": [ { "source": "fancy-app-e2e", "target": "fancy-app", "type": "implicit" } ], "fancy-app": [ { "source": "fancy-app", "target": "feat-products", "type": "static" }, { "source": "fancy-app", "target": "feat-cart", "type": "static" } ], "shared-ui": [], "feat-cart": [ { "source": "feat-cart", "target": "shared-ui", "type": "static" } ], "feat-products": [ { "source": "feat-products", "target": "shared-ui", "type": "static" } ] }, "workspaceLayout": { "appsDir": "", "libsDir": "" }, "affectedProjectIds": ["feat-cart", "fancy-app-e2e", "fancy-app"], "groupByFolder": false, "showAffectedWithNodes": true } ``` {% /graph %} In this example, when `feat-cart` changes, **all tests in `fancy-app-e2e` run**, which includes tests for `feat-products` along with other unrelated features. This happens because `fancy-app-e2e` depends on the entire application. 
Since these features have minimal overlap, you can optimize testing by splitting the monolithic test project into smaller, feature-scoped test projects. ## The solution: Feature-scoped testing Instead of keeping all tests in one large project, break them down by feature and co-locate them with the feature libraries they test. This way, only the tests for changed features run. ### How to implement feature-based testing To set up feature-based testing, add test configurations directly to your feature projects. Nx provides plugins to automate and speed up the test configuration for common testing tools. Here are some guides for using each plugin's generators: - [Playwright](/docs/technologies/test-tools/playwright/introduction#add-playwright-e2e-to-an-existing-project) - [Cypress](/docs/technologies/test-tools/cypress/introduction#configure-cypress-for-an-existing-project) - [Vitest](/docs/technologies/build-tools/vite/generators#configuration) - [Jest](/docs/technologies/test-tools/jest/introduction#add-jest-to-a-project) If there isn't a generator for your testing tool of choice, you can manually set up the configuration on each feature project. This includes adding relevant configuration files for the testing framework and adding the test target (e.g. `test`, `e2e`) to the project's `project.json` or `package.json`. Typically these can be copied and slightly modified from the existing top-level monolithic project that is being split apart. With this setup, when you run `nx affected -t e2e`, only the tests for changed features will execute. For example, when `feat-cart` changes, only `feat-cart:e2e` runs and `feat-products:e2e` does not run since it wasn't affected. ## Best practices ### Combining with automated task splitting (Atomizer) Typically, teams enable [Atomizer, also known as task splitting](/docs/features/ci-features/split-e2e-tasks), for a quick win to improve CI times when using [Nx Agents](/docs/features/ci-features/distribute-task-execution).
Combining both strategies yields the best results. Here's how they complement each other: - **Feature-based testing** ensures only relevant feature tests run when code changes - **Atomizer** splits each feature's test suite into individual file-level tasks that can be distributed across multiple CI agents. For example, if `feat-cart` has 10 test files and `feat-products` has 15 test files, when you change the cart feature: 1. Feature-based testing runs only `feat-cart:e2e-ci` (skipping `feat-products:e2e-ci`) 2. Atomizer splits `feat-cart:e2e-ci` into 10 parallel tasks, one per test file 3. These tasks get distributed across your CI agents for faster execution Learn more about [setting up Automated Task Splitting](/docs/features/ci-features/split-e2e-tasks). ### Keep the top-level test project Don't delete your top-level test project, `fancy-app-e2e` in this example. Instead, repurpose it for: - **Smoke tests**: Quick sanity checks that the app starts and critical paths work - **Cross-feature integration tests**: Tests that verify multiple features work together - **End-to-end user journeys**: Tests that span multiple features This gives you a balanced testing strategy: focused feature tests that run frequently, plus comprehensive integration tests when needed. ### Running tests in parallel When running tests from multiple features in parallel, be mindful of shared resources.
Since all feature tests run against the same application instance, avoid conflicts by: - **Using unique test data**: Don't rely on specific database records or application state - **Managing ports**: Configure each test to use different ports, or let the test framework find free ports automatically - For Cypress, use the [`--port` flag](/docs/technologies/test-tools/cypress/executors) to specify or auto-detect ports - For Playwright, the `webServerAddress` can be dynamically assigned - **Isolating state**: Use test-specific user accounts, temporary data, or cleanup between tests ### Running affected tests With feature-based testing, you can leverage Nx affected commands to run only the tests that matter: ```shell # Run all affected tests based on your changes nx affected -t test # Run affected e2e tests nx affected -t e2e ``` This ensures you're only testing what changed, whether locally or in CI. --- ## Identify Dependencies Between Folders As projects grow in size, you often need to split out a particular folder in that project into its own library. In order to do this properly, you need to: 1. Generate a new library to set up all the config files 2. Move the code from the existing folder into the new library 3. Clean up paths that were broken when you moved the code If you're not sure which code you want to split into a new library, this can be a tedious process to repeat multiple times. Here is a technique to use during the exploration phase to help identify which code makes sense to separate into its own library. {% aside type="note" title="Requires Nx 15.3" %} Nx 15.3 introduced nested projects, which are necessary for Nx to be aware of `project.json` files inside of an existing project. {% /aside %} ## Set up 1. Identify the folders that might make sense to separate into their own library 2. 
In each folder, create a `project.json` file with the following contents: ```json // project.json { "name": "[name_of_the_folder]" } ``` ## Analysis Now, run `nx graph` to view the dependencies between the folders. In the full web view (not in the graph below), you can click on the dependency lines to see which specific files are creating those dependencies. Here is a graph that was created when doing this exercise on the [Angular Jump Start](https://github.com/DanWahlin/Angular-JumpStart) repo. To reproduce this graph yourself, download the repo, run `nx init` and then add `project.json` files to the folders under `/src/app`. {% graph height="450px" %} ```json { "hash": "9713539543f19c5299e56715e78c576a40b91056b9cbb4e42118780cfcd22b5e", "projects": [ { "name": "customers", "type": "lib", "data": { "tags": [] } }, { "name": "customer", "type": "lib", "data": { "tags": [] } }, { "name": "orders", "type": "lib", "data": { "tags": [] } }, { "name": "shared", "type": "lib", "data": { "tags": [] } }, { "name": "about", "type": "lib", "data": { "tags": [] } }, { "name": "login", "type": "lib", "data": { "tags": [] } }, { "name": "core", "type": "lib", "data": { "tags": [] } }, { "name": "playground", "type": "app", "data": { "tags": [] } }, { "name": "angular-jumpstart", "type": "app", "data": { "tags": [] } } ], "dependencies": { "customers": [ { "source": "customers", "target": "shared", "type": "static" }, { "source": "customers", "target": "core", "type": "static" } ], "customer": [ { "source": "customer", "target": "shared", "type": "static" }, { "source": "customer", "target": "core", "type": "static" } ], "orders": [ { "source": "orders", "target": "core", "type": "static" }, { "source": "orders", "target": "shared", "type": "static" } ], "shared": [], "about": [], "login": [ { "source": "login", "target": "core", "type": "static" }, { "source": "login", "target": "shared", "type": "static" } ], "core": [ { "source": "core", "target": "shared", "type": "static" }, 
{ "source": "core", "target": "angular-jumpstart", "type": "static" } ], "playground": [ { "source": "playground", "target": "angular-jumpstart", "type": "static" }, { "source": "playground", "target": "core", "type": "static" }, { "source": "playground", "target": "customer", "type": "static" }, { "source": "playground", "target": "customers", "type": "static" }, { "source": "playground", "target": "orders", "type": "static" }, { "source": "playground", "target": "about", "type": "static" }, { "source": "playground", "target": "login", "type": "static" }, { "source": "playground", "target": "shared", "type": "static" } ], "angular-jumpstart": [] }, "workspaceLayout": { "appsDir": "projects", "libsDir": "projects" }, "affectedProjectIds": [], "focus": null, "groupByFolder": false, "exclude": [] } ``` {% /graph %} ## Clean up {% aside type="caution" title="DO NOT COMMIT" %} Do not commit these empty `project.json` files. They remove files from the cache inputs of the parent project without creating new test or build targets in place to cover those files. So testing and building will not be triggered correctly when those files change. {% /aside %} 1. Delete the empty `project.json` files. 2. Make new libraries for any folders that were marked to be extracted into new libraries. --- ## Include All package.json Files as Projects As of Nx 15.0.11, only `package.json` files referenced in the `workspaces` property of the root `package.json` file (`lerna.json` for Lerna repos or `pnpm-workspace.yaml` for pnpm repos) are included in the graph. If you would prefer to add all `package.json` files as projects, add the following configuration to the `nx.json` file: ```json // nx.json { "plugins": ["nx/plugins/package-json"] } ``` --- ## Including Assets in Your Build All the official Nx executors with an `assets` option have the same syntax. There are two ways to identify assets to be copied into the output bundle: 1. Specify assets with a glob string. 
This will copy files over in the same folder structure as the source files. 2. Use the object format to redirect files into different locations in the output bundle. ```jsonc // project.json "build": { "executor": "@nx/js:tsc", // or any other Nx executor that supports the `assets` option "options": { // shortened... "assets": [ // Copies all the markdown files at the root of the project to the root of the output bundle "path-to-my-project/*.md", { "input": "./path-to-my-project/src", // look in the src folder "glob": "**/!(*.ts)", // for any file (in any folder) that is not a TypeScript file "output": "./src" // put those files in the src folder of the output bundle }, { "input": "./path-to-my-project", // look in the project folder "glob": "executors.json", // for the executors.json file "output": "." // put the file in the root of the output bundle } ] } } ``` --- ## Keep Nx Versions in Sync If your Nx plugin versions do not match the version of `nx` in your repository, you may encounter difficulties when debugging errors. To get your Nx plugins back in sync, follow the steps below: 1. Identify all the official Nx plugins that are used in your repo. This includes `nx` and any packages in the `@nx/` organization scope, except for plugins that are still in [nx-labs](https://github.com/nrwl/nx-labs). Also, `nx-cloud` does not need to match the other package versions. 2. Run `nx report` and identify the minimum and maximum version numbers for all the packages that need to be in sync. 3. Run `nx migrate --from=[minimumVersion] --to=[maximumVersion]`. Note that all the official Nx plugin migration generators are designed to be idempotent, meaning that running them multiple times is equivalent to running them once. This allows you to run the migrations for all plugins without being concerned about re-running a migration that was already run. Review the [nx migrate](/docs/features/automate-updating-dependencies) documentation for more options.
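The version check in step 2 can be automated: given the dependency map from `package.json`, collect every first-party Nx package (`nx` itself plus the `@nx/` scope) and report the lowest and highest versions found. A minimal sketch, assuming a naive dotted-version comparison (the helper name is illustrative, not an Nx API):

```typescript
// Sketch: find the min and max versions among first-party Nx packages.
// The package filter and naive semver comparison are illustrative only;
// they ignore prerelease tags and version ranges beyond ^ and ~.
function nxVersionRange(
  deps: Record<string, string>
): { min: string; max: string } | null {
  const isNxPackage = (name: string) =>
    name === 'nx' || name.startsWith('@nx/');
  // Compare dotted versions numerically, segment by segment.
  const cmp = (a: string, b: string): number => {
    const pa = a.split('.').map(Number);
    const pb = b.split('.').map(Number);
    for (let i = 0; i < 3; i++) {
      if ((pa[i] ?? 0) !== (pb[i] ?? 0)) return (pa[i] ?? 0) - (pb[i] ?? 0);
    }
    return 0;
  };
  const versions = Object.entries(deps)
    .filter(([name]) => isNxPackage(name))
    .map(([, version]) => version.replace(/^[~^]/, ''));
  if (versions.length === 0) return null;
  versions.sort(cmp);
  return { min: versions[0], max: versions[versions.length - 1] };
}
```

If `min` and `max` differ, those are the values to feed into `nx migrate --from=[minimumVersion] --to=[maximumVersion]`.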
## Prevention To ensure that the Nx plugin versions do not get out of sync, make sure to: - always run `nx add <plugin>` when installing an official Nx plugin instead of using the package manager to install it - always run `nx migrate <version>` when updating versions instead of manually updating the `package.json` file --- ## Convert from a Standalone Repository to a Monorepo You can always add another app to a standalone repository the same way you would in a monorepo. But at some point, you may want to move the primary app out of the root of your repo because the repo is no longer primarily focused on that one app. There are other apps that are equally important, and you want the folder structure to align with the new reality. {% youtube src="https://youtu.be/ztNpLf2Zl-c?si=u0CfLAx_tpioZ3Vu" title="Graduating your Standalone Nx Repo to a Monorepo" width="100%" /%} ## Run the generator The `convert-to-monorepo` generator will attempt to convert a standalone repo to a monorepo. ```shell nx g convert-to-monorepo ``` If you need to do the conversion manually, you can follow the steps below. ## Manual conversion strategy For this recipe, we'll assume that the root-level app is named `my-app`. The high-level process we'll go through to move the app involves four stages: 1. Create a new app named `temp` under `apps/temp`. 2. Move source and config files from `my-app` into `apps/temp`. 3. Delete the files for `my-app`. 4. Rename `apps/temp` to `apps/my-app`. ## Steps 1. If there is a `tsconfig.json` file in the root, rename it to `tsconfig.old.json`. This step makes sure that a `tsconfig.base.json` file is generated by the app generator in the next step. 2. Create a new app using the appropriate plugin under `apps/temp`: ```shell nx g app apps/temp ``` 3. Move the `/src` (and `/public`, if present) folders to `apps/temp/`, overwriting the folders already there. 4. For each config file in `apps/temp`, copy over the corresponding file from the root of the repo.
It can be difficult to know which files are root-level config files and which files are project-specific config files. Here is a non-exhaustive list of config files to help distinguish between the two. | Type of Config | Files | | --------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | _Root-level_ | `.eslintignore`, `.eslintrc.base.json`, `.gitignore`, `.prettierignore`, `jest.config.ts`, `jest.preset.js`, `.prettierrc`, `nx.json`, `package.json`, `tsconfig.base.json` | | _Project-level_ | `.eslintrc.json`, `index.html`, `project.json`, `jest.config.app.ts`, `tsconfig.app.json`, `tsconfig.json`, `tsconfig.spec.json`, `vite.config.ts`, `webpack.config.js` | {% aside title="jest.config.app.ts" type="note" %} `jest.config.app.ts` in the root should be renamed to `jest.config.ts` when moved to `apps/temp`. Also update the `jestConfig` option in `project.json` to point to `jest.config.ts` instead of `jest.config.app.ts`. {% /aside %} 5. Update the paths of the project-specific config files that were copied into `apps/temp`. Here is a non-exhaustive list of properties that will need to be updated to have the correct path: | Config File(s) | Properties to Check | | ---------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `project.json` | `$schema`, `sourceRoot`, `root`, `outputPath`, `reportsDirectory`, `cypressConfig`, `lintFilePatterns`, `index`, `main`, `tsConfig`, `assets`, `styles`, `jestConfig` | | `tsconfig.json`, `tsconfig.app.json`, `tsconfig.spec.json` | `extends`, `outDir`, `files` | | `.eslintrc.json` | `extends` | | `jest.config.ts` | `preset`, `coverageDirectory` | | `vite.config.ts` | `cacheDir`, `root`, `dir` | 6. 
Double-check that all the tasks defined in the `apps/temp/project.json` file still work. ```shell nx build temp nx test temp nx lint temp ``` 7. Move the `/e2e/src` folder to `/apps/temp-e2e`, overwriting the folder already there. 8. For each config file in `apps/temp-e2e`, copy over the corresponding file from the root of the repo. Update the paths for these files in the same way you did for the `my-app` config files. 9. Update the `/apps/temp-e2e/project.json` `implicitDependencies` to be `temp` instead of `my-app`. 10. Double-check that all the tasks defined in the `apps/temp-e2e/project.json` file still work. ```shell nx lint temp-e2e nx e2e temp-e2e ``` 11. Delete all the project-specific config files in the root and under `e2e`. 12. Once the `project.json` file has been deleted in the root, rename `temp-e2e` to `my-app-e2e` and rename `temp` to `my-app`. ```shell nx g move --projectName=temp-e2e --destination=my-app-e2e nx g move --projectName=temp --destination=my-app ``` 13. Update the `defaultProject` in `nx.json` if needed. 14. Check again that all the tasks still work and that the `nx graph` displays what you expect. --- ## Using Yarn Plug'n'Play with Nx Plug'n'Play (PnP) is an innovative installation strategy for Node that tries to solve the challenges of using `node_modules` for storing installed packages: - slow installation process - slow Node runtime cold start - expensive package diffing when adding new packages - no ability to restrict access and enforce hoisting/nesting Instead of using the `node_modules` folder, `PnP` creates a map from package names to hoisted package versions and creates dependency edges between packages. The packages are kept in `zip` archives, which makes caching and restoring faster. Read more about `PnP` on [Yarn's official docs](https://yarnpkg.com/features/pnp).
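The map-based resolution PnP performs can be sketched in a few lines: instead of walking `node_modules` directories, every lookup is a query against a static registry of package locators. This is a deliberate simplification for illustration; the real `.pnp.cjs` data model and API are richer than this:

```typescript
// Conceptual sketch of PnP-style resolution (simplified; not the real
// .pnp.cjs format). Resolution is a lookup in a static map, and a
// dependency that isn't declared by the issuer simply cannot be resolved.
type PackageLocator = { name: string; reference: string };

interface PackageInfo {
  packageLocation: string; // e.g. a path into a zip archive in the cache
  packageDependencies: Map<string, string>; // name -> resolved reference
}

const packageRegistry = new Map<string, PackageInfo>();
const key = (l: PackageLocator) => `${l.name}@${l.reference}`;

function resolveToUnqualified(request: string, issuer: PackageLocator): string {
  const issuerInfo = packageRegistry.get(key(issuer));
  if (!issuerInfo) throw new Error(`Unknown issuer: ${key(issuer)}`);
  const reference = issuerInfo.packageDependencies.get(request);
  if (!reference) {
    // This is the kind of strict-mode error PnP reports for
    // undeclared (e.g. hoisting-reliant) dependencies.
    throw new Error(
      `${issuer.name} tried to access ${request}, but it isn't declared as a dependency`
    );
  }
  const target = packageRegistry.get(`${request}@${reference}`);
  if (!target) throw new Error(`Missing package data for ${request}`);
  return target.packageLocation;
}

// Example data: "my-app" declares "lodash" but not "left-pad".
packageRegistry.set('my-app@workspace:.', {
  packageLocation: '.',
  packageDependencies: new Map([['lodash', 'npm:4.17.21']]),
});
packageRegistry.set('lodash@npm:4.17.21', {
  packageLocation: '.yarn/cache/lodash-npm-4.17.21.zip/node_modules/lodash/',
  packageDependencies: new Map(),
});
```

This also previews why strict PnP surfaces "tried to access X but it isn't declared" errors: the map only contains edges that packages actually declare.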
## Switching to Yarn v2+ (aka Berry) When you run `create-nx-workspace` with the optional `--pm=yarn` flag, Nx uses the default Yarn version set in your global `.yarnrc.yml` file (usually in the root of your user folder). To check your current Yarn version, run: ```shell yarn --version ``` If the version is in the `1.x.x` range, the workspace will be created with Yarn classic. To migrate an existing Yarn classic workspace to the modern version of Yarn, also known as `Berry`, run: ```shell {% title="~/workspace" frame="terminal" %} yarn set version stable ``` This command will update `.yarnrc.yml` with the path to the local Yarn `bin` of the correct version and will set the `packageManager` property in the root `package.json`: ```jsonc // package.json { // ... "packageManager": "yarn@3.6.1", } ``` ## Switching to PnP Once you are on the modern version of Yarn, you can use the following command to switch to `PnP`: ```shell {% title="~/workspace" frame="terminal" %} yarn config set nodeLinker pnp ``` Your `.yarnrc.yml` will now have `nodeLinker` set to a proper method: ```yml # .yarnrc.yml nodeLinker: pnp ``` Once the config is changed, you need to run the install again: ```shell {% title="~/workspace" frame="terminal" %} yarn install ``` Running install generates a `.pnp.cjs` file that contains a mapping of external packages and removes the packages from `node_modules`. ## Dealing with inaccessible dependencies When using Yarn Berry, Nx will set the `nodeLinker` property to a backward-compatible `node-modules` value by default. This is to ensure your repository works out of the box. Some packages come with broken dependency requirements, so using strict `PnP` can lead to broken builds.
This results in errors similar to the one below: ```shell {% title="yarn nx build my-project" frame="terminal" %} Error: [BABEL]: @babel/plugin-transform-react-jsx tried to access @babel/core (a peer dependency) but it isn't provided by your application; this makes the require call ambiguous and unsound. Required package: @babel/core ``` We can fix the problem above by ensuring the requested peer dependency exists and is available: ```shell {% title="~/workspace" frame="terminal" %} yarn add -D @babel/core ``` Sometimes, simply adding a missing package doesn't work. In those cases, the only thing we can do is contact the author of the package causing the issues and notify them of the missing dependency requirement. The alternative is to switch back to using the `node-modules` node linker. # Extending Nx --- ## Extending Nx {% index_page_cards path="extending-nx" /%} --- ## Compose Executors An executor is just a function, so you can import and invoke it directly, as follows: ```typescript // example-executor.ts import type { ExecutorContext } from '@nx/devkit'; import printAllCaps from 'print-all-caps'; import type { Schema } from './schema'; // this executor's options interface export default async function ( options: Schema, context: ExecutorContext ): Promise<{ success: true }> { // do something before await printAllCaps({ message: 'All caps' }); // do something after return { success: true }; } ``` This only works when you know what executor you want to invoke. Sometimes, however, you need to invoke a target. For instance, the e2e target is often configured like this: ```json // project.json { "e2e": { "executor": "@nx/cypress:cypress", "options": { "cypressConfig": "apps/myapp-e2e/cypress.json", "tsConfig": "apps/myapp-e2e/tsconfig.e2e.json", "devServerTarget": "myapp:serve" } } } ``` In this case we need to invoke the target configured in `devServerTarget`.
We can do it as follows: ```typescript // example-executor.ts async function* startDevServer( opts: CypressExecutorOptions, context: ExecutorContext ) { const { project, target, configuration } = parseTargetString( opts.devServerTarget ); for await (const output of await runExecutor<{ success: boolean; baseUrl?: string; }>( { project, target, configuration }, { watch: opts.watch, }, context )) { if (!output.success && !opts.watch) throw new Error('Could not compile application files'); yield opts.baseUrl || (output.baseUrl as string); } } ``` The `runExecutor` utility will find the target in the configuration, find the executor, construct the options (as if you invoked it in the terminal) and invoke the executor. Note that `runExecutor` always returns an iterable instead of a promise. ## Devkit helper functions | Function | Description | | ------------------------ | -------------------------------------------------------------- | | logger | Wraps `console` to add some formatting | | getPackageManagerCommand | Returns commands for the package manager used in the workspace | | parseTargetString | Parses a target string into `{project, target, configuration}` | | readTargetOptions | Reads and combines options for a given target | | runExecutor | Constructs options and invokes an executor | See more helper functions in the [Devkit API Docs](/docs/reference/devkit) ## Using RxJS observables The Nx devkit only uses language primitives (promises and async iterables). It doesn't use RxJS observables, but you can use them and convert them to a `Promise` or an async iterable. You can convert `Observables` to a `Promise` with `toPromise` (deprecated in RxJS 7+ in favor of `lastValueFrom`). ```typescript import { of } from 'rxjs'; export default async function (opts) { return of({ success: true }).toPromise(); } ``` You can use the [`rxjs-for-await`](https://www.npmjs.com/package/rxjs-for-await) library to convert an `Observable` into an async iterable.
```typescript import { of } from 'rxjs'; import { eachValueFrom } from 'rxjs-for-await'; export default async function (opts) { return eachValueFrom(of({ success: true })); } ``` --- ## Composing Generators Generators are useful individually, but reusing and composing generators allows you to build whole workflows out of simpler building blocks. ## Using Nx devkit generators Nx Devkit generators can be imported and invoked like any JavaScript function. They often return a `Promise`, so they can be used with the `await` keyword to mimic synchronous code. Because this is standard JavaScript, control flow logic can be adjusted with `if` blocks and `for` loops as usual. ```typescript import { Tree } from '@nx/devkit'; import { libraryGenerator } from '@nx/js'; export default async function (tree: Tree, schema: any) { await libraryGenerator( tree, // virtual file system tree { name: schema.name } // options for the generator ); } ``` ## Using jscodeshift codemods Codemods created for use with [`jscodeshift`](https://github.com/facebook/jscodeshift) can be used within Nx Devkit generators using the `visitNotIgnoredFiles` helper function. This way you can compose codemods with other generators while retaining `--dry-run` and Nx Console compatibility.
```typescript import { Tree, visitNotIgnoredFiles } from '@nx/devkit'; import { applyTransform } from 'jscodeshift/src/testUtils'; import arrowFunctionsTransform from './arrow-functions'; // The schema path can be an individual file or a directory export default async function (tree: Tree, schema: { path: string }): Promise<void> { visitNotIgnoredFiles(tree, schema.path, (filePath) => { const input = tree.read(filePath).toString(); const transformOptions = {}; const output = applyTransform( { default: arrowFunctionsTransform, parser: 'ts' }, transformOptions, { source: input, path: filePath } ); tree.write(filePath, output); }); } ``` --- ## Creating an Install Package {% youtube src="https://www.youtube.com/embed/ocllb5KEXZk" title="Build your own CLI" width="100%" /%} Starting a new project should be as seamless as possible. In the JavaScript ecosystem, the idea of bootstrapping a new project with a single command has become a must-have for providing a good DX. So much so that all the major package managers have a dedicated feature already built in: if you publish a package named `create-{x}`, it can be invoked via any of the following: - `npx create-{x}` - `yarn create {x}` - `npm init {x}` - `pnpm init {x}` These packages are used to set up a new project in some form. Customizing your initial project setup is already possible with an [Nx Preset generator](/docs/extending-nx/create-preset). By creating and shipping a generator named `preset` in your Nx plugin, you can then pass it via the `--preset` flag to the `create-nx-workspace` command: ```shell npx create-nx-workspace --preset my-plugin ``` This allows you to take full control over the shape of the generated Nx workspace. You might, however, want to have your own `create-{x}` package, whether that is for marketing purposes, branding, or better discoverability. Starting with Nx 16.5, you can have such a `create-{x}` package generated for you.
## Generating a "Create package" There are a few ways to create a package that works with `create-nx-workspace`'s public API to set up a new workspace that uses your Nx plugin. You can set up a new Nx plugin workspace and immediately pass the `--create-package-name` flag: ```shell npx create-nx-plugin my-plugin --create-package-name create-my-plugin ``` Alternatively, if you already have an existing Nx plugin workspace, you can run the following generator to set up a new create package: ```shell nx g create-package create-my-plugin --project my-plugin ``` ## Customize your create package You'll have two packages that are relevant in your workspace: - The create package (e.g. `create-my-plugin`) - The plugin package (e.g. `my-plugin`) Let's take a look at the code that was scaffolded out for your `create-my-plugin` package: ```typescript // packages/create-my-plugin/bin/index.ts #!/usr/bin/env node import { createWorkspace } from 'create-nx-workspace'; async function main() { const name = process.argv[2]; // TODO: use libraries like yargs or enquirer to set your workspace name if (!name) { throw new Error('Please provide a name for the workspace'); } console.log(`Creating the workspace: ${name}`); // This assumes "my-plugin" and "create-my-plugin" are at the same version // eslint-disable-next-line @typescript-eslint/no-var-requires const presetVersion = require('../package.json').version; // TODO: update below to customize the workspace const { directory } = await createWorkspace(`my-plugin@${presetVersion}`, { name, nxCloud: 'skip', packageManager: 'npm', }); console.log(`Successfully created the workspace: ${directory}.`); } main(); ``` This is a plain Node script at this point, and you can use any dependencies you wish to handle things like prompting or argument parsing. Keeping dependencies small and splitting the command line tool out from the Nx plugin is recommended, and will help keep your CLI feeling fast and snappy.
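The scaffolded script leaves name handling as a TODO. Before reaching for yargs or enquirer, a dependency-free parse can cover the basics. The `--package-manager`/`--pm` flag and its default below are illustrative choices, not part of the generated code:

```typescript
// Minimal argv parsing for a create-{x} bin script. The flag names and
// the npm default are illustrative, not part of the generated scaffold.
interface CreateArgs {
  name: string;
  packageManager: 'npm' | 'yarn' | 'pnpm';
}

function parseCreateArgs(argv: string[]): CreateArgs {
  let name: string | undefined;
  let packageManager: CreateArgs['packageManager'] = 'npm';
  for (let i = 0; i < argv.length; i++) {
    const arg = argv[i];
    if (arg === '--package-manager' || arg === '--pm') {
      const value = argv[++i];
      if (value === 'npm' || value === 'yarn' || value === 'pnpm') {
        packageManager = value;
      } else {
        throw new Error(`Unsupported package manager: ${value}`);
      }
    } else if (!arg.startsWith('-') && name === undefined) {
      name = arg; // first positional argument is the workspace name
    }
  }
  if (!name) {
    throw new Error('Please provide a name for the workspace');
  }
  return { name, packageManager };
}
```

`parseCreateArgs(process.argv.slice(2))` would then feed directly into the `createWorkspace` call in `main()`.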
Note the following code snippet: ```typescript const { directory } = await createWorkspace(`my-plugin@${presetVersion}`, { name, nxCloud: 'skip', packageManager: 'npm', }); ``` This will invoke the `my-plugin` package's `preset` generator, which contains the logic for setting up the workspace. This preset generator will be invoked when running either `npx create-nx-workspace --preset my-plugin` or `npx create-my-plugin`. For more information about customizing your preset, see: [Creating a Preset](/docs/extending-nx/create-preset). ## Testing your create package Because your `create-my-plugin` package will install your plugin package at runtime, both packages must be published in order to run them and see the results. To test your packages without making them publicly available, a `local-registry` target should be present on the root project of your workspace. ```jsonc // project.json { ... "targets": { "local-registry": { "executor": "@nx/js:verdaccio", "options": { "port": 4873, "config": ".verdaccio/config.yml", "storage": "tmp/local-registry/storage" } } } } ``` _(If you don't have such a `local-registry` target, refer to the following [docs page to generate one](/docs/technologies/typescript/generators#setup-verdaccio))_ By running ```shell npx nx local-registry ``` ...a local instance of [Verdaccio](https://verdaccio.org/) will be launched at http://localhost:4873 and the npm, Yarn, and pnpm registries will be configured to point to it. This means that you can safely publish, without hitting npm, and test as if you were an end user of your package. {% aside type="note" title="Registry Cleanup & Reset" %} Note that after terminating the terminal window where the `nx local-registry` command is running (e.g. using `CTRL+c` or `CMD+c`), the registry will be stopped, previously installed packages will be cleaned up, and the npm/yarn/pnpm registries will be restored to their original state, pointing to the real npm servers again.
{% /aside %} Next, you can **publish** your packages to your new local registry. All of the generated packages can use `nx release` to publish whatever is in your `build` output folder, so you can simply run: ```shell npx nx run-many --targets build npx nx release version 1.0.0 npx nx release publish --tag latest ``` Once the packages are published, you should be able to test the behavior of your "create package" as follows: ```shell npx create-my-plugin test-workspace ``` ## Writing and running e2e tests When setting up the workspace, you should also have gotten a `my-plugin-e2e` package. This package contains the e2e tests for your plugin, and can be run with the following command: ```shell npx nx e2e my-plugin-e2e ``` Have a look at some of the example tests that were generated for you. When running these tests, - the local registry will be started automatically - a new version of the packages will be deployed - then your test commands will be run (usually triggering processes that set up the workspace, just like the user would type into a command line interface) - once the test commands have finished, the local registry will be stopped again and cleaned up ## Publishing your create package Your plugin and create package will both need to be published to npm to be usable. Publishing your packages is exactly the same as described [previously](#testing-your-create-package), except that you don't run the `local-registry` task, so the `publish` task will publish to the real npm servers. ## Further reading - [Blog post: Create your own create-react-app CLI](https://nx.dev/blog/create-your-own-create-react-app-cli) --- ## Create a Custom Plugin Preset When you create a new Nx workspace, you run the command: [`npx create-nx-workspace`](/docs/reference/create-nx-workspace). This command accepts a `--preset` option, for example: `npx create-nx-workspace --preset=react-standalone`.
This preset option points to a special generator (a generator is a function that wraps an entire code generation script) that Nx calls when `create-nx-workspace` runs, and it generates your initial workspace. {% youtube src="https://www.youtube.com/embed/yGUrF0-uqaU" title="Develop a Nx Preset for your Nx Plugin" /%} ## What is a preset? At its core, a preset is a special [generator](/docs/features/generate-code) that is shipped as part of an Nx plugin package. All first-party Nx presets are built into Nx itself, but you can [create your own plugin](/docs/extending-nx/intro) and create a generator with the magic name `preset`. Once you've [published your plugin](/docs/extending-nx/tooling-plugin) on npm, you can run the `create-nx-workspace` command with the preset option set to the name of your published package. To use a concrete example, let's look at the [`qwik-nx`](https://www.npmjs.com/package/qwik-nx) Nx community plugin. It includes a [preset generator](https://github.com/qwikifiers/qwik-nx/tree/main/packages/qwik-nx/src/generators/preset) that you can use to create a new Nx workspace with Qwik support.
```shell npx create-nx-workspace --preset=qwik-nx ``` ## Create a new Nx plugin If you **don't** have an existing plugin, you can create one by running: ```shell npx create-nx-plugin my-org --pluginName my-plugin ``` ## Creating a "Preset" generator To create our preset inside of our plugin, we can run: ```shell nx generate @nx/plugin:generator packages/happynrwl/src/generators/preset ``` {% aside type="caution" title="Double check" %} The word `preset` is required as the name of this generator. {% /aside %} You should have a similar structure to this: {% filetree %} - happynrwl/ - e2e/ - jest.config.js - jest.preset.js - nx.json - package-lock.json - package.json - packages/ - happynrwl/ - src/ - executors/ - generators/ - happynrwl/ - preset/ <-- Here - index.ts - tools/ - tsconfig.base.json {% /filetree %} After the command is finished, the preset generator is created under the folder named **preset**. The **generator.ts** provides an entry point to the generator. This file contains a function that is called to perform manipulations on a tree that represents the file system. The **schema.json** provides a description of the generator, available options, validation information, and default values. Here is the sample generator function, which you can customize to meet your needs. ```typescript // generator.ts import { addProjectConfiguration, formatFiles, Tree } from '@nx/devkit'; import { PresetGeneratorSchema } from './schema'; // normalizeOptions and addFiles are local helpers in the generated preset folder export default async function (tree: Tree, options: PresetGeneratorSchema) { const normalizedOptions = normalizeOptions(tree, options); addProjectConfiguration(tree, normalizedOptions.projectName, { root: normalizedOptions.projectRoot, projectType: 'application', sourceRoot: `${normalizedOptions.projectRoot}/src`, targets: { exec: { executor: 'nx:run-commands', options: { command: `node ${normalizedOptions.projectRoot}/src/index.js`, }, }, }, tags: normalizedOptions.parsedTags, }); addFiles(tree, normalizedOptions); await formatFiles(tree); } ``` To get an in-depth guide on customizing/running or debugging your generator see [local generators](/docs/extending-nx/local-generators).
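The sample generator calls a `normalizeOptions` helper that the snippet doesn't show. A typical implementation derives the project name, root, and tags from the raw schema options. The fields below (`projectName`, `projectRoot`, `parsedTags`) follow the convention used by generated plugin code, but this sketch is an assumption, not the generated helper; the real one also receives the `tree` for reading workspace layout, which is omitted here:

```typescript
// A sketch of a typical normalizeOptions helper for a preset generator.
// The derived fields are conventions, not a fixed Nx API; the generated
// helper also takes the Tree, which this simplified version omits.
interface PresetGeneratorSchema {
  name: string;
  tags?: string; // comma-separated, e.g. "scope:shared,type:util"
}

interface NormalizedOptions extends PresetGeneratorSchema {
  projectName: string;
  projectRoot: string;
  parsedTags: string[];
}

function normalizeOptions(options: PresetGeneratorSchema): NormalizedOptions {
  // Lowercase and dasherize the provided name, e.g. "My App" -> "my-app".
  const projectName = options.name
    .trim()
    .toLowerCase()
    .replace(/[\s_]+/g, '-');
  return {
    ...options,
    projectName,
    projectRoot: projectName, // presets often generate at the workspace root
    parsedTags: options.tags
      ? options.tags.split(',').map((t) => t.trim())
      : [],
  };
}
```

The normalized values then feed `addProjectConfiguration` and the file templates, as in the generator above.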
## Usage Before you are able to use your newly created preset, you must package and publish it to a registry. After you have published your plugin to a registry, you can use your preset when creating a new workspace: ```shell npx create-nx-workspace my-workspace --preset=my-plugin-name ``` --- ## Create a Sync Generator Sync generators ensure that your file system is in the correct state before a task is run or the CI process is started. From a technical perspective, a sync generator is no different from any other generator, but it has some additional performance considerations and needs to be registered in a particular way. {% aside type="caution" title="Disable the Nx Daemon during development" %} When developing an Nx sync generator, disable the [Nx Daemon](/docs/concepts/nx-daemon) by setting `NX_DAEMON=false`. The daemon caches your plugin code, so changes to your plugin won't be reflected until the daemon restarts. {% /aside %} ## Create a new sync generator You can create a new sync generator by hand or use the built-in generator that Nx provides via the `@nx/plugin` package. ### Step 1: Add @nx/plugin Make sure you have `@nx/plugin` installed or add it to your workspace: ```shell nx add @nx/plugin ``` ### Step 2: Create a local plugin Create a new local plugin where we can add our new sync generator. You can also add it to an existing local plugin if you already have one. In that case, you can skip this step. ```shell nx g @nx/plugin:plugin tools/my-plugin ``` ### Step 3: Scaffold a new sync generator Create a sync generator the same way you would [create any generator](/docs/extending-nx/local-generators). ```shell nx g @nx/plugin:generator --path=tools/my-plugin/src/generators/my-sync-generator ``` ## Implement a global sync generator Global sync generators are executed when the `nx sync` or `nx sync:check` command is explicitly run by a user or in a script.
They are not associated with an individual task or project and typically update root-level configuration files. A sync generator should be able to run without any required options, so update the schema accordingly: ```jsonc // tools/my-plugin/src/generators/my-sync-generator/schema.json { "$schema": "https://json-schema.org/schema", "$id": "MySyncGenerator", "title": "", "type": "object", "properties": {}, "required": [], } ``` Also update the TypeScript interface to match: ```ts // tools/my-plugin/src/generators/my-sync-generator/schema.d.ts export interface MySyncGeneratorSchema {} ``` Sync generators can optionally return an `outOfSyncMessage` to display to users when the sync generator needs to be run. ```ts // tools/my-plugin/src/generators/my-sync-generator/my-sync-generator.ts import { Tree } from '@nx/devkit'; import type { SyncGeneratorResult } from 'nx/src/utils/sync-generators'; export async function mySyncGenerator( tree: Tree ): Promise<SyncGeneratorResult> { if ( !tree.exists('/legal-message.txt') || tree.read('/legal-message.txt').toString() !== 'This is an important legal message.' ) { tree.write('/legal-message.txt', 'This is an important legal message.'); } return { outOfSyncMessage: 'The legal-message.txt file needs to be created', }; } export default mySyncGenerator; ``` ### Register a global sync generator Global sync generators are registered in the `nx.json` file like this: ```jsonc // nx.json { "sync": { "globalGenerators": ["@myorg/my-plugin:my-sync-generator"], }, } ``` {% aside type="caution" title="Verify the name of your plugin" %} You might have to adjust the name of your plugin based on your specific workspace scope. Verify the name in `tools/my-plugin/package.json`. If your package.json has a different name, adjust the `nx.json` configuration accordingly. {% /aside %} Now `my-sync-generator` will be executed any time the `nx sync` command is run.
## Implement a task sync generator that uses the project graph Task sync generators are run before a particular task and are used to ensure that the files are in the correct state for the task to be run. The primary use case for this is to set up configuration files based on the project graph. To read from the project graph, use the [`createProjectGraphAsync`](/docs/reference/devkit/createProjectGraphAsync) function from the `@nx/devkit` package. Create a generator in the same way as a global sync generator and then read the project graph like this: ```ts // tools/my-plugin/src/generators/my-sync-generator/my-sync-generator.ts import { Tree, createProjectGraphAsync, joinPathFragments } from '@nx/devkit'; import type { SyncGeneratorResult } from 'nx/src/utils/sync-generators'; export async function mySyncGenerator( tree: Tree ): Promise<SyncGeneratorResult> { const projectGraph = await createProjectGraphAsync(); Object.values(projectGraph.nodes).forEach((project) => { tree.write( joinPathFragments(project.data.root, 'license.txt'), `${project.name} uses the Acme Corp license.` ); }); return { outOfSyncMessage: 'Some projects are missing a license.txt file.', }; } export default mySyncGenerator; ``` ### Register a task sync generator To register a generator as a sync generator for a particular task, add the generator to the `syncGenerators` property of the task configuration. {% aside type="note" title="Important: Package.json Configuration" %} For projects using [inferred targets](/docs/concepts/inferred-tasks) (no project.json file), the sync generators must be registered inside the `nx` property in package.json, not at the root level. {% /aside %} {% tabs %} {% tabitem label="package.json" %} ```jsonc // apps/my-app/package.json { "name": "my-app", ...
"nx": { "targets": { "build": { "syncGenerators": ["my-plugin:my-sync-generator"] } } } } ``` {% /tabitem %} {% tabitem label="project.json" %} ```jsonc // apps/my-app/project.json { "targets": { "build": { "syncGenerators": ["my-plugin:my-sync-generator"], }, }, } ``` {% /tabitem %} {% /tabs %} {% aside type="caution" title="Verify the name of your plugin" %} You might have to adjust the name of your plugin based on your specific workspace scope. Verify the name in `tools/my-plugin/package.json`. If the name there is `@myorg/my-plugin` you have to register it as: ```jsonc { "syncGenerators": ["@myorg/my-plugin:my-sync-generator"], } ``` {% /aside %} With this configuration in place, running `nx build my-app` will first run `my-sync-generator` and then run the `build` task. The `my-sync-generator` and any other task or global sync generators will be run when `nx sync` or `nx sync:check` is run. ## Performance and DX considerations Task sync generators will block the execution of the task while they are running and both global and task sync generators will block the CI pipeline until the `nx sync:check` command finishes. Because of this, make sure to keep in mind the following performance tips: - Make the generator idempotent. Running the generator multiple times in a row should have the same impact as running the generator a single time. - Only write to the file system when a file is actually changed. Avoid reformatting files that have not been actually modified. Nx will identify the workspace as out of sync if there's any file change after the sync generator is run. - Make sure to provide an informative `outOfSyncMessage` so that developers know what to do to unblock their tasks. Do whatever you can to make your sync generators as fast and user-friendly as possible, because users will be running them over and over again without even realizing it. 
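The "only write when a file actually changed" tip above can be wrapped in a tiny helper so every sync generator gets it for free. The `WritableTree` interface below mirrors the subset of `@nx/devkit`'s `Tree` that is needed (`exists`/`read`/`write`); the helper itself is an illustrative pattern, not a devkit API:

```typescript
// Idempotent write helper for sync generators: skip the write when the
// file already has the desired content, so a clean workspace stays clean.
// WritableTree mirrors a subset of @nx/devkit's Tree; the helper is a
// pattern sketch, not a devkit API.
interface WritableTree {
  exists(path: string): boolean;
  read(path: string): Buffer | null;
  write(path: string, content: string): void;
}

function writeIfChanged(
  tree: WritableTree,
  path: string,
  content: string
): boolean {
  if (tree.exists(path) && tree.read(path)?.toString() === content) {
    return false; // nothing to do; the workspace is already in sync
  }
  tree.write(path, content);
  return true; // a change was made; `nx sync:check` would report out of sync
}
```

Since Nx flags the workspace as out of sync whenever any file changes after the generator runs, routing all writes through a guard like this keeps repeated runs idempotent.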
--- ## CreateNodes API Compatibility This is a reference for knowing how Nx versions and the `createNodes`/`createNodesV2` APIs interact. If you plan on supporting multiple Nx versions with a custom plugin, then it's important to know which APIs to use. ## Which createNodes version does Nx call? The following table shows which export Nx will call based on the Nx version: | Nx Version | Calls `createNodes` | Calls `createNodesV2` | Nx Call Preference | | ------------- | ------------------- | --------------------- | ---------------------------- | | 17.x - 19.1.x | Yes | No | Only v1 supported | | 19.2.x - 20.x | Yes (fallback) | Yes (preferred) | Prefers v2, falls back to v1 | | 21.x | No | Yes | Only v2 supported | | 22.x+ | Yes (v2 signature) | Yes | Both use v2 signature | ## Which Nx versions does my plugin support? > Note this is the same information as above, but presented as a lookup table for plugin authors. If you're a plugin author, this table shows which Nx versions your plugin will support based on which exports you provide: | Plugin Exports | Nx 17-19.1 | Nx 19.2-20 | Nx 21-21.x | Nx 22+ | | ----------------------------------------- | ---------------- | ---------------- | ---------------- | ---------------- | | Only `createNodes` (v1) | ✅ Supported | ✅ Supported | ❌ Not Supported | ❌ Not Supported | | Only `createNodesV2` | ❌ Not Supported | ✅ Supported | ✅ Supported | ✅ Supported | | Both `createNodes` (v1) & `createNodesV2` | ✅ Supported | ✅ Supported | ✅ Supported | ✅ Supported | | Both with v2 signature (Nx 22+) | ❌ Not Supported | ❌ Not Supported | ✅ Supported | ✅ Supported | ## Recommended implementation pattern ### Plugin Support for Nx 21 and later For plugins targeting **Nx 21 and later**, the recommended pattern is to export both `createNodes` and `createNodesV2` using the same v2 implementation: ```typescript // my-plugin/index.ts import { CreateNodesV2, CreateNodesContextV2, createNodesFromFiles, } from '@nx/devkit'; export interface 
MyPluginOptions {
  // your options
}

// Export createNodes with v2 signature
export const createNodes: CreateNodesV2 = [
  '**/some-config.json',
  async (configFiles, options, context) => {
    return await createNodesFromFiles(
      (configFile, options, context) =>
        createNodesInternal(configFile, options, context),
      configFiles,
      options,
      context
    );
  },
];

// Re-export as createNodesV2
export const createNodesV2 = createNodes;

async function createNodesInternal(
  configFilePath: string,
  options: MyPluginOptions,
  context: CreateNodesContextV2
) {
  // Your plugin logic here
  return {
    projects: {
      // ...
    },
  };
}
```

This pattern ensures your plugin works with both Nx 21 and Nx 22+.

### Plugin support for Nx 17 through Nx 20

If you need to support Nx versions 17-20, you'll need to provide separate implementations. In Nx 22 the types for v1 of the `createNodes` API are removed; you can inline the types to maintain type safety.

```typescript
// my-plugin/index.ts
import {
  CreateNodesV2,
  CreateNodesContextV2,
  CreateNodesResult,
  createNodesFromFiles,
} from '@nx/devkit';

// inlined types for backwards compat to v1 of createNodes
// removed in Nx 22
export interface OldCreateNodesContext extends CreateNodesContextV2 {
  /**
   * The subset of configuration files which match the createNodes pattern
   */
  readonly configFiles: readonly string[];
}

type OldCreateNodes<T = unknown> = readonly [
  projectFilePattern: string,
  createNodesFunction: OldCreateNodesFunction<T>,
];

export type OldCreateNodesFunction<T = unknown> = (
  projectConfigurationFile: string,
  options: T | undefined,
  context: OldCreateNodesContext
) => CreateNodesResult | Promise<CreateNodesResult>;

export interface MyPluginOptions {
  // your options
}

// V1 API for Nx 17-20
export const createNodes: OldCreateNodes = [
  '**/my-config.json',
  (configFile, options, context: OldCreateNodesContext) => {
    // V1 implementation - processes one file at a time
    return createNodesInternal(configFile, options, context);
  },
];

// V2 API for Nx 19.2+
export const createNodesV2: CreateNodesV2 = [
'**/my-config.json',
  async (configFiles, options, context: CreateNodesContextV2) => {
    return await createNodesFromFiles(
      (configFile, options, context) =>
        createNodesInternal(configFile, options, context),
      configFiles,
      options,
      context
    );
  },
];

function createNodesInternal(
  configFilePath: string,
  options: MyPluginOptions,
  context: OldCreateNodesContext | CreateNodesContextV2
) {
  // Shared logic that works with both APIs
  return {
    projects: {
      // ...
    },
  };
}
```

## Future deprecation timeline

Nx is standardizing on the v2 API. Here's the planned timeline:

- **Nx 22**: Both `createNodes` and `createNodesV2` can be exported with v2 signature. `createNodes` re-exported as `createNodesV2`.
- **Nx 23**: The `createNodesV2` export will be marked as deprecated in TypeScript types. Use `createNodes` with v2 signature instead.

## Related documentation

- [Extending the Project Graph](/docs/extending-nx/project-graph-plugins) - Learn how to create project graph plugins
- [Integrate a New Tool with a Tooling Plugin](/docs/extending-nx/tooling-plugin) - Tutorial for creating a complete plugin
- [CreateNodesV2 API Reference](/docs/reference/devkit/CreateNodesV2) - Detailed API documentation

---

## Creating Files with a Generator

Generators provide an API for managing files within your workspace. You can use generators to do things such as create, update, move, and delete files. Files with static or dynamic content can also be created. The generator below shows you how to generate a library, and then scaffold out additional files within the newly created library. First, you define a folder to store your static or dynamic templates used to generate files. This is commonly done in a `files` folder.

```text
happynrwl/
├── apps/
├── libs/
│   └── my-plugin
│       └── src
│           └── generators
│               └── my-generator/
│                   ├── files
│                   │   └── NOTES.md
│                   ├── index.ts
│                   └── schema.json
├── nx.json
├── package.json
└── tsconfig.base.json
```

The files can use EJS syntax to substitute variables and logic.
See the [EJS Docs](https://ejs.co/) for more information about how to write these template files. Example NOTES.md:

```markdown
Hello, my name is <%= name %>!
```

Next, update the `index.ts` file for the generator, and generate the new files.

```typescript
// index.ts
import {
  Tree,
  formatFiles,
  installPackagesTask,
  generateFiles,
  joinPathFragments,
  readProjectConfiguration,
} from '@nx/devkit';
import { libraryGenerator } from '@nx/js';

export default async function (tree: Tree, schema: any) {
  await libraryGenerator(tree, {
    name: schema.name,
    directory: `libs/${schema.name}`,
  });
  const libraryRoot = readProjectConfiguration(tree, schema.name).root;
  generateFiles(
    tree, // the virtual file system
    joinPathFragments(__dirname, './files'), // path to the file templates
    libraryRoot, // destination path of the files
    schema // config object to replace variables in file templates
  );
  await formatFiles(tree);
  return () => {
    installPackagesTask(tree);
  };
}
```

The exported function first creates the library, then creates the additional files in the new library's folder. Next, run the generator:

{% aside type="caution" title="Always do a dry-run" %} Use the `-d` or `--dry-run` flag to see your changes without applying them. This will let you see what the command will do to your workspace. {% /aside %}

```shell
nx generate my-generator mylib
```

The following information will be displayed.

```text {% title="nx generate my-generator mylib" %}
CREATE libs/mylib/README.md
CREATE libs/mylib/.babelrc
CREATE libs/mylib/src/index.ts
CREATE libs/mylib/src/lib/mylib.spec.ts
CREATE libs/mylib/src/lib/mylib.ts
CREATE libs/mylib/tsconfig.json
CREATE libs/mylib/tsconfig.lib.json
UPDATE tsconfig.base.json
UPDATE nx.json
CREATE libs/mylib/.eslintrc.json
CREATE libs/mylib/jest.config.ts
CREATE libs/mylib/tsconfig.spec.json
UPDATE jest.config.ts
CREATE libs/mylib/NOTES.md
```

`libs/mylib/NOTES.md` will contain the content with substituted variables:

```markdown
Hello, my name is mylib!
```

## Dynamic file names

If you want the generated file or folder name to contain variable values, use `__variable__`. So `NOTES-for-__name__.md` would be resolved to `NOTES-for-mylib.md` in the above example.

## Overwrite mode

By default, generators overwrite files when they already exist. You can customize this behavior with an optional argument to `generateFiles` that can take one of three values:

- `OverwriteStrategy.Overwrite` (default): all generated files are created and overwrite existing target files if any.
- `OverwriteStrategy.KeepExisting`: generated files are created only when the target file does not exist. Existing target files are kept as is.
- `OverwriteStrategy.ThrowIfExisting`: if a target file already exists, an exception is thrown. Suitable when a pristine target environment is expected.

## EJS syntax quickstart

The [EJS syntax](https://ejs.co/) can do much more than replace variable names with values. Here are some common techniques.

1. Pass a function into the template:

```typescript
// template file
This is my <%= uppercase(name) %>
```

```typescript
// typescript file
function uppercase(val: string) {
  return val.toUpperCase();
}

// later
generateFiles(tree, join(__dirname, './files'), libraryRoot, {
  uppercase,
  name: schema.name,
});
```

2. Use javascript for control flow in the template:

```typescript
<% if(shortVersion) { %>
This is the short version.
<% } else {
   for(let x = 0; x < numRepetitions; x++) { %>
This text will be repeated <%= numRepetitions %> times.
<%   } // end for loop
} // end else block %>
```

```typescript
// typescript file
generateFiles(tree, join(__dirname, './files'), libraryRoot, {
  shortVersion: false,
  numRepetitions: 3,
});
```

---

## Extending Nx with Plugins

Nx core functionality focuses on task running and understanding your project and task graph. Nx plugins leverage that functionality to enforce best practices, seamlessly integrate tooling and allow developers to get up and running quickly.
As your repository grows, you'll discover more reasons to create your own plugin - You can help encourage your coworkers to consistently follow best practices by creating [code generators](/docs/features/generate-code) that are custom built for your repository. - You can remove duplicate configuration and ensure accurate caching settings by writing your own [inferred tasks](/docs/concepts/inferred-tasks). - For organizations with multiple monorepos, you can encourage consistency across repositories by providing a repository [preset](/docs/extending-nx/create-preset) and writing [migrations](/docs/extending-nx/migration-generators) that will help keep every project in sync. - You can write a plugin that integrates a tool or framework into Nx and then [share your plugin](/docs/extending-nx/publish-plugin) with the broader community. ## Create your own plugin Get started developing your own plugin with a few terminal commands: {% tabs %} {% tabitem label="Create a plugin in a new workspace" %} ```shell npx create-nx-plugin my-plugin ``` {% /tabitem %} {% tabitem label="Add a plugin to an existing workspace" %} ```shell npx nx add @nx/plugin npx nx g plugin tools/my-plugin ``` {% /tabitem %} {% /tabs %} ## Learn by doing You can follow along with one of the step by step tutorials below that is focused on a particular use-case. 
These tutorials expect you to already have the following skills:

- [Run tasks](/docs/features/run-tasks) with Nx and configure Nx to [infer tasks for you](/docs/concepts/inferred-tasks)
- [Use code generators](/docs/features/generate-code)
- Understand the [project graph](/docs/features/explore-graph)
- Write [TypeScript](https://www.typescriptlang.org/) code

{% cardgrid %} {% linkcard title="Enforce Best Practices in Your Repository" href="/docs/extending-nx/organization-specific-plugin" /%} {% linkcard title="Integrate a Tool Into an Nx Repository" href="/docs/extending-nx/tooling-plugin" /%} {% /cardgrid %}

## Create your first code generator

Wire up a new generator with this terminal command:

```shell
npx nx g generator my-plugin/src/generators/library-with-readme
```

### Understand the generator functionality

This command will register the generator in the plugin's `generators.json` file and create some default generator code in the `library-with-readme` folder. The `libraryWithReadmeGenerator` function in the `generator.ts` file is where the generator functionality is defined.

```typescript
// my-plugin/src/generators/library-with-readme/generator.ts
export async function libraryWithReadmeGenerator(
  tree: Tree,
  options: LibraryWithReadmeGeneratorSchema
) {
  const projectRoot = `libs/${options.name}`;
  addProjectConfiguration(tree, options.name, {
    root: projectRoot,
    projectType: 'library',
    sourceRoot: `${projectRoot}/src`,
    targets: {},
  });
  generateFiles(tree, path.join(__dirname, 'files'), projectRoot, options);
  await formatFiles(tree);
}
```

This generator calls the following functions:

- `addProjectConfiguration` - Create a new project configured for TypeScript code.
- `generateFiles` - Create files in the new project based on the template files in the `files` folder.
- `formatFiles` - Format the newly created files with Prettier.

You can find more helper functions in the [Nx Devkit reference documentation](/docs/reference/devkit).
### Create a README template file

We can remove the generated `index.ts.template` file and add our own `README.md.template` file in the `files` folder.

```typescript
// my-plugin/src/generators/library-with-readme/files/README.md.template
# <%= name %>

This was generated by the `library-with-readme` generator!
```

The template files that are used in the `generateFiles` function can inject variables and functionality using the EJS syntax. Our `README` template will replace `<%= name %>` with the name specified in the generator. Read more about the EJS syntax in the [creating files with a generator recipe](/docs/extending-nx/creating-files).

### Run your generator

You can test your generator in dry-run mode with the following command:

```shell
npx nx g my-plugin:library-with-readme mylib --dry-run
```

If you're happy with the files that are generated, you can actually run the generator by removing the `--dry-run` flag.

## Next steps

- [Browse the plugin registry](/docs/plugin-registry) to find one that suits your needs.
- [Sign up for Nx Enterprise](https://nx.dev/enterprise) to get dedicated support from Nx team members.
- [Collaborate on the Nx Discord](https://go.nx.dev/community) to work with other plugin authors.

---

## Write a Simple Executor

Creating executors for your workspace standardizes scripts that are run during your development/building/deploying tasks in order to provide guidance in the terminal with `--help` and when invoking with [Nx Console](/docs/getting-started/editor-setup). This guide shows you how to create, run, and customize executors within your Nx workspace. The examples use the trivial use-case of an `echo` command.

## Creating an executor

If you don't already have a plugin, use Nx to generate one:

```shell
nx add @nx/plugin
nx g @nx/plugin:plugin tools/my-plugin
```

Use the Nx CLI to generate the initial files needed for your executor.
```shell
nx generate @nx/plugin:executor tools/my-plugin/src/executors/echo
```

After the command is finished, the executor is created in the plugin `executors` folder.

{% filetree %}

- happynrwl/
  - apps/
  - tools/
    - my-plugin/
      - src/
        - executors/
          - echo/
            - executor.spec.ts
            - executor.ts
            - schema.d.ts
            - schema.json
  - nx.json
  - package.json
  - tsconfig.base.json

{% /filetree %}

### schema.json

This file describes the options being sent to the executor (very similar to the `schema.json` file of generators). Setting the `cli` property to `nx` indicates that you're using the Nx Devkit to make this executor.

```json
// schema.json
{
  "$schema": "https://json-schema.org/schema",
  "cli": "nx",
  "type": "object",
  "properties": {
    "textToEcho": {
      "type": "string",
      "description": "Text To Echo"
    }
  }
}
```

This example describes a single option for the executor that is a `string` called `textToEcho`. When using this executor, specify a `textToEcho` property inside the options. In our `executor.ts` file, we're creating an `Options` interface that matches the json object being described here.

### executor.ts

The `executor.ts` contains the actual code for your executor. Your executor's implementation must export a function that takes an options object and returns a `Promise<{ success: boolean }>`.
```typescript
// executor.ts
import type { ExecutorContext } from '@nx/devkit';
import { exec } from 'child_process';
import { promisify } from 'util';

export interface EchoExecutorOptions {
  textToEcho: string;
}

export default async function echoExecutor(
  options: EchoExecutorOptions,
  context: ExecutorContext
): Promise<{ success: boolean }> {
  console.info(`Executing "echo"...`);
  console.info(`Options: ${JSON.stringify(options, null, 2)}`);

  const { stdout, stderr } = await promisify(exec)(
    `echo ${options.textToEcho}`
  );
  console.log(stdout);
  console.error(stderr);

  const success = !stderr;
  return { success };
}
```

## Running your executor

Our last step is to add this executor to a given project's `targets` object in your project's `project.json` file:

```jsonc {% meta="{6-11}" %}
// project.json
{
  //...
  "targets": {
    // ...
    "echo": {
      "executor": "@myorg/my-plugin:echo",
      "options": {
        "textToEcho": "Hello World",
      },
    },
  },
}
```

{% aside type="caution" title="Use the package.json name" %} When referencing your plugin, you must use the name defined in `tools/my-plugin/package.json`. In this example, we're using `@myorg/my-plugin` as the plugin name. If your package.json has a different name, adjust the command accordingly. {% /aside %}

Finally, you run the executor via the CLI as follows:

```shell
nx run my-project:echo
```

To which we'll see the console output:

```text {% title="nx run my-project:echo" frame="terminal" %}
Executing "echo"...
Options: {
  "textToEcho": "Hello World"
}
Hello World
```

{% aside type="caution" title="TsConfig" %} Nx uses the paths from `tsconfig.base.json` when running plugins locally, but uses the recommended tsconfig for node 16 for other compiler options. See https://github.com/tsconfig/bases/blob/main/bases/node16.json {% /aside %}

## Using node child process

[Node.js `childProcess`](https://nodejs.org/api/child_process.html) is often useful in executors.
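For long-running commands where you want to stream output as it is produced, a sketch using `spawn` might look like this. The helper name is illustrative, not part of the Nx API; only Node's `child_process` module is assumed:

```typescript
import { spawn } from 'child_process';

// Illustrative helper: run a command, stream its output to the current
// terminal, and resolve with a success flag when the process exits.
export function runCommand(
  command: string,
  args: string[] = []
): Promise<{ success: boolean }> {
  return new Promise((resolve) => {
    const child = spawn(command, args, { stdio: 'inherit' });
    child.on('error', () => resolve({ success: false }));
    child.on('close', (code) => resolve({ success: code === 0 }));
  });
}
```

An executor implementation can `await runCommand(...)` and return its result directly, matching the `Promise<{ success: boolean }>` contract described above.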
Part of the power of the executor API is the ability to compose executors via existing targets. This way you can combine other executors from your workspace into one, which could be helpful when the process you're scripting is a combination of other existing executors provided by the CLI or other custom executors in your workspace. Here's an example of this (from a hypothetical project) that serves an api (project name: "api") in watch mode, then serves a frontend app (project name: "web-client") in watch mode:

```typescript
// executor.ts
import { ExecutorContext, runExecutor } from '@nx/devkit';

export interface MultipleExecutorOptions {}

export default async function multipleExecutor(
  options: MultipleExecutorOptions,
  context: ExecutorContext
): Promise<{ success: boolean }> {
  const result = await Promise.race([
    runExecutor({ project: 'api', target: 'serve' }, { watch: true }, context),
    runExecutor(
      { project: 'web-client', target: 'serve' },
      { watch: true },
      context
    ),
  ]);
  for await (const res of result) {
    if (!res.success) return res;
  }

  return { success: true };
}
```

For other ideas on how to create your own executors, you can always check out Nx's own open-source executors as well!

{% github_repository url="https://github.com/nrwl/nx/blob/master/packages/cypress/src/executors/cypress/cypress.impl.ts" /%}

## Using custom hashers

For most executors, the default hashing in Nx makes sense. The output of the executor is dependent on the files in the project that it is being run for, or that project's dependencies, and nothing else. Changing a miscellaneous file at the workspace root will not affect that executor, and changing *any* file inside of the project may affect the executor. When dealing with targets which only depend on a small subset of the files in a project, or may depend on arbitrary data that is not stored within the project, the default hasher may not make sense anymore.
In these cases, the target will either experience more frequent cache misses than necessary or not be able to be cached. Executors can provide a custom hasher that Nx uses when determining if a target run should be a cache hit, or if it must be run. When generating an executor for a plugin, you can use `nx g @nx/plugin:executor packages/my-plugin/src/executors/my-executor --includeHasher` to automatically add a custom hasher. If you want to add a custom hasher manually, create a new file beside your executor's implementation. We will use `hasher.ts` as an example here. You'll also need to update `executors.json`, so that it resembles something like this: ```json // executors.json { "executors": { "echo": { "implementation": "./src/executors/my-executor/executor", "hasher": "./src/executors/my-executor/hasher", "schema": "./src/executors/my-executor/schema.json" } } } ``` This would allow you to write a custom function in `hasher.ts`, which Nx would use to calculate the target's hash. As an example, consider the below hasher which mimics the behavior of Nx default hashing algorithm. ```typescript // hasher.ts import { CustomHasher, Task, HasherContext } from '@nx/devkit'; export const mimicNxHasher: CustomHasher = async ( task: Task, context: HasherContext ) => { return context.hasher.hashTask(task); }; export default mimicNxHasher; ``` The hash function can do anything it wants, but it is important to remember that the hasher replaces the hashing done normally by Nx. If you change the hasher, Nx may return cache hits when you do not anticipate it. Imagine the below custom hasher: ```typescript // hasher.ts import { CustomHasher, Task, HasherContext } from '@nx/devkit'; export const badHasher: CustomHasher = async ( task: Task, context: HasherContext ) => { return { value: 'my-static-hash', }; }; export default badHasher; ``` This hasher would never return a different hash, so every run of a task that consumes the executor would be a cache hit. 
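At the other extreme, a hasher for a target that also depends on data outside the project — say, an environment variable — could fold that extra input into the default hash. The sketch below only assumes Node's `crypto` module; `defaultHashValue` is a stand-in for the value returned by `context.hasher.hashTask(task)`:

```typescript
import { createHash } from 'crypto';

// Illustrative: combine the default task hash with an external input so
// that changes to that input invalidate the cache.
export function combineHashWithEnv(
  defaultHashValue: string,
  envValue: string
): { value: string } {
  return {
    value: createHash('sha256')
      .update(defaultHashValue)
      .update('\0') // separator so ('ab','c') and ('a','bc') hash differently
      .update(envValue)
      .digest('hex'),
  };
}
```

A custom hasher could return `combineHashWithEnv((await context.hasher.hashTask(task)).value, process.env.DEPLOY_TARGET ?? '')` so that switching deploy targets never produces a stale cache hit.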
It is important that anything that would change the result of your executor's implementation is accounted for in the hasher.

---

## Local Generators

Local plugin generators provide a way to automate many tasks you regularly perform as part of your development workflow. Whether it is scaffolding out components, features, or ensuring libraries are generated and structured in a certain way, generators help you standardize these tasks in a consistent and predictable manner. Nx provides tooling around creating and running custom generators from within your workspace. This guide shows you how to create, run, and customize generators within your Nx workspace.

{% youtube src="https://www.youtube.com/embed/myqfGDWC2go" title="Scaffold new Pkgs in a PNPM Workspaces Monorepo" caption="Demoes how to use Nx generators in a PNPM workspace to automate the creation of libraries" /%}

## Creating a generator

If you don't already have a local plugin, use Nx to generate one:

```shell
nx add @nx/plugin
nx g @nx/plugin:plugin tools/my-plugin
```

Use the Nx CLI to generate the initial files needed for your generator.

```shell
nx generate @nx/plugin:generator tools/my-plugin/src/generators/my-generator
```

After the command is finished, the generator is created in the plugin `generators` folder.

{% filetree %}

- happynrwl/
  - apps/
  - tools/
    - my-plugin/
      - src/
        - generators/
          - my-generator/
            - generator.spec.ts
            - generator.ts
            - schema.d.ts
            - schema.json
  - nx.json
  - package.json
  - tsconfig.base.json

{% /filetree %}

The `generator.ts` provides an entry point to the generator. The file contains a function that is called to perform manipulations on a tree that represents the file system. The `schema.json` provides a description of the generator, available options, validation information, and default values. The initial generator function creates a library.
```typescript
// generator.ts
import { Tree, formatFiles, installPackagesTask } from '@nx/devkit';
import { libraryGenerator } from '@nx/js';

export default async function (tree: Tree, schema: any) {
  await libraryGenerator(tree, { name: schema.name });
  await formatFiles(tree);
  return () => {
    installPackagesTask(tree);
  };
}
```

To invoke other generators, import the entry point function and run it against the tree. `async/await` can be used to make code with Promises read like procedural code. The generator function may return a callback function that is executed after changes to the file system have been applied.

In the `schema.json` file for your generator, the `name` is provided as a default option. The `cli` property is set to `nx` to signal that this is a generator that uses `@nx/devkit` and not `@angular-devkit`.

```json
// schema.json
{
  "cli": "nx",
  "id": "test",
  "type": "object",
  "properties": {
    "name": {
      "type": "string",
      "description": "Library name",
      "$default": {
        "$source": "argv",
        "index": 0
      }
    }
  },
  "required": ["name"]
}
```

The `$default` object is used to read arguments from the command line that are passed to the generator. The first argument passed to this generator is used as the `name` property.

## Running a generator

To run a generator, invoke the `nx generate` command with the name of the generator.

```shell
nx generate @myorg/my-plugin:my-generator mylib
```

{% aside type="caution" title="Use the package.json name" %} When referencing your plugin, you must use the name defined in `tools/my-plugin/package.json`. In this example, we're using `@myorg/my-plugin` as the plugin name. If your package.json has a different name, adjust the command accordingly. {% /aside %}

{% aside type="caution" title="TsConfig" %} Nx uses the paths from `tsconfig.base.json` when running plugins locally, but uses the recommended tsconfig for node 16 for other compiler options.
See https://github.com/tsconfig/bases/blob/main/bases/node16.json {% /aside %} ## Debugging generators ### With visual studio code 1. Open the Command Palette and choose `Debug: Create JavaScript Debug Terminal`. This will open a terminal with debugging enabled. 2. Set breakpoints in your code 3. Run `nx g my-generator` in the debug terminal. ![vscode-schematics-debug](../../../assets/nx-console/vscode-schematics-debug.png) ## Generator schema properties Beyond the standard [JSON Schema](https://json-schema.org/) properties like `type`, `description`, `enum`, and `default`, Nx recognizes several custom properties in your `schema.json` that control CLI prompting behavior and how [Nx Console](/docs/getting-started/editor-setup) renders the generator form. ### `$default` Provides a dynamic default value from a runtime source. Used to map positional CLI arguments and other context to schema properties. ```json // schema.json { "properties": { "name": { "type": "string", "$default": { "$source": "argv", "index": 0 } } } } ``` | Source | Description | | ----------------------------------- | ------------------------------------------------------------------------------------------- | | `{ "$source": "argv", "index": 0 }` | Uses the positional CLI argument at the given index | | `{ "$source": "projectName" }` | Uses the current project name. Also triggers project autocomplete in the CLI and Nx Console | | `{ "$source": "workingDirectory" }` | Uses the current working directory relative to the workspace root | | `{ "$source": "unparsed" }` | Collects any extra arguments not matched by other schema properties | ### `x-prompt` Defines an interactive prompt shown when the option is not provided on the command line. Can be a simple string or a structured object for more control. 
```json // schema.json { "properties": { "style": { "type": "string", "description": "The file extension to be used for style files.", "x-prompt": { "message": "Which stylesheet format would you like to use?", "type": "list", "items": [ { "value": "css", "label": "CSS" }, { "value": "scss", "label": "SASS (.scss)" }, { "value": "less", "label": "LESS" } ] } } } } ``` **Short form:** `"x-prompt": "What name would you like to use?"` — displays a simple text prompt. **Long form object properties:** | Property | Type | Description | | ------------- | ------------------------------------------------ | ----------------------------------------------------- | | `message` | `string` | The prompt text displayed to the user | | `type` | `string` | Prompt type: `"input"`, `"list"`, or `"confirmation"` | | `multiselect` | `boolean` | Allow selecting multiple items (for `"list"` type) | | `items` | `(string \| { label: string, value: string })[]` | Choices for `"list"` type prompts | In Nx Console, the `message` is shown as a tooltip on the field and `items` labels are shown as option descriptions. ### `x-priority` Controls the visibility and ordering of an option in the Nx Console Generate form. ```json // schema.json { "properties": { "name": { "type": "string", "x-priority": "important" }, "skipFormat": { "type": "boolean", "x-priority": "internal" } } } ``` | Value | Effect | | ------------- | ------------------------------------------------------------- | | `"important"` | Field appears near the top of the form, after required fields | | `"internal"` | Field is hidden from the form by default | Options in the Nx Console form are sorted: **required** > **important** > **regular** > **deprecated** > **internal**. ### `x-deprecated` Marks an option as deprecated. Deprecated options are sorted to the bottom of the form in Nx Console and display a warning. ```json // schema.json { "properties": { "oldOption": { "type": "string", "x-deprecated": "Use 'newOption' instead." 
} } } ``` The value can be `true` (boolean) or a string with the deprecation reason/migration guidance. ### `x-dropdown` Tells both the CLI and Nx Console to present a dropdown populated with workspace data. ```json // schema.json { "properties": { "projectName": { "type": "string", "x-dropdown": "projects" } } } ``` Currently only `"projects"` is supported, which shows all projects in the workspace. {% aside type="note" title="Automatic project autocomplete" %} The CLI and Nx Console also automatically provide project autocomplete for any property named `project` or `projectName`, or that has `$default` set to `{ "$source": "projectName" }` — even without `x-dropdown`. {% /aside %} ### `x-hint` Displays a hint popover next to the field label in the Nx Console Generate form. Use this for brief contextual guidance that doesn't belong in the main `description`. ```json // schema.json { "properties": { "name": { "type": "string", "x-hint": "You can provide a nested path like my-dir/my-lib" } } } ``` ### `x-completion-type` and `x-completion-glob` These properties are used by Nx Console's language server to provide autocomplete suggestions when editing configuration files like `project.json` or `nx.json`. 
```json
// schema.json
{
  "properties": {
    "tsConfig": {
      "type": "string",
      "x-completion-type": "file",
      "x-completion-glob": "tsconfig*.json"
    }
  }
}
```

| `x-completion-type` value | Description |
| ------------------------- | ------------------------------------------------------------------------- |
| `"file"` | Autocomplete with file paths (optionally filtered by `x-completion-glob`) |
| `"directory"` | Autocomplete with directory paths |
| `"projects"` | Autocomplete with workspace project names |
| `"targets"` | Autocomplete with available target names |
| `"targetsWithDeps"` | Autocomplete targets, including `^target` syntax for dependencies |
| `"tags"` | Autocomplete with project tags |
| `"projectTarget"` | Autocomplete with `project:target` format |

## Generator utilities

The [`@nx/devkit` package](/docs/reference/devkit) provides many utility functions that can be used in generators to help with modifying files, reading and updating configuration files, and working with an Abstract Syntax Tree (AST).

---

## Migration Generators

When your plugin is being used in other repos, it is helpful to provide migration generators to automatically update configuration files when your plugin makes a breaking change. A migration generator is a normal generator that is triggered when a developer runs the `nx migrate` command.

## Create a migration generator

For this example, we'll create a new migration generator that updates repos to use `newExecutorName` instead of `oldExecutorName` in their targets. This migration will be applied when they run `nx migrate` to move up past version `2.0.1` of our plugin.

### 1.
Generate a migration ```shell nx generate @nx/plugin:migration libs/pluginName/src/migrations/change-executor-name \ --name='Change Executor Name' \ --packageVersion=2.0.1 \ --project=pluginName \ --description='Changes the executor name from oldExecutorName to newExecutorName' ``` This command will update the following files: ```json // package.json { "nx-migrations": { "migrations": "./migrations.json" } } ``` ```json // migrations.json { "generators": { "change-executor-name": { "version": "2.0.1", "description": "Changes the executor name from oldExecutorName to newExecutorName", "cli": "nx", "implementation": "./src/migrations/change-executor-name/change-executor-name" } } } ``` And it creates a blank generator under: `libs/pluginName/src/migrations/change-executor-name/change-executor-name.ts` ### 2. write the generator code ```ts // change-executor-name.ts import { getProjects, Tree, updateProjectConfiguration } from '@nx/devkit'; export function changeExecutorNameToNewName(tree: Tree) { const projects = getProjects(tree); for (const [name, project] of projects) { if ( project.targets?.build?.executor === '@myorg/pluginName:oldExecutorName' ) { project.targets.build.executor = '@myorg/pluginName:newExecutorName'; updateProjectConfiguration(tree, name, project); } } } export default changeExecutorNameToNewName; ``` ## Update package.json dependencies If you just need to change dependency versions, you can add some configuration options to the `migrations.json` file without making a full generator. ```json // migrations.json { "packageJsonUpdates": { // this can be any name "12.10.0": { // this is version at which the change will be applied "version": "12.10.0-beta.2", "packages": { // the name of the dependency to update "@testing-library/react": { // the version to set the dependency to "version": "11.2.6", // When true, the dependency will be added if it isn't there. When false, the dependency is skipped if it isn't already present. 
"alwaysAddToPackageJson": false } } } } } ``` --- ## Modifying Files with a Generator Modifying existing files is an order of magnitude harder than creating new files, so care should be taken when trying to automate this process. When the situation merits it, automating a process can lead to tremendous benefits across the organization. Here are some approaches listed from simplest to most complex. ## Compose existing generators If you can compose together existing generators to modify the files you need, you should take that approach. See [Composing Generators](/docs/extending-nx/composing-generators) for more information. ## Modify JSON files JSON files are fairly simple to modify, given their predictable structure. The following example adds a `package.json` script that issues a friendly greeting. ```typescript import { updateJson } from '@nx/devkit'; export default async function (tree: Tree, schema: any) { updateJson(tree, 'package.json', (pkgJson) => { // if scripts is undefined, set it to an empty object pkgJson.scripts = pkgJson.scripts ?? {}; // add greet script pkgJson.scripts.greet = 'echo "Hello!"'; // return modified JSON object return pkgJson; }); } ``` ## String replace For files that are not as predictable as JSON files (like `.ts`, `.md` or `.css` files), modifying the contents can get tricky. One approach is to do a find and replace on the string contents of the file. Let's say we want to replace any instance of `thomasEdison` with `nikolaTesla` in the `index.ts` file. ```typescript export default async function (tree: Tree, schema: any) { const filePath = `path/to/index.ts`; const contents = tree.read(filePath).toString(); const newContents = contents.replace('thomasEdison', 'nikolaTesla'); tree.write(filePath, newContents); } ``` This works, but only replaces the first instance of `thomasEdison`. To replace them all, you need to use regular expressions. (Regular expressions also give you a lot more flexibility in how you search for a string.) 
```typescript
export default async function (tree: Tree, schema: any) {
  const filePath = `path/to/index.ts`;
  const contents = tree.read(filePath).toString();
  const newContents = contents.replace(/thomasEdison/g, 'nikolaTesla');
  tree.write(filePath, newContents);
}
```

## AST manipulation

ASTs (Abstract Syntax Trees) allow you to understand exactly the code you're modifying. Replacing a string value can accidentally modify text found in a comment rather than changing the name of a variable. We'll write a generator that replaces all instances of the type `Array<something>` with `something[]`. To help accomplish this, we'll use the `@phenomnomnominal/tsquery` npm package and the [TSQuery Playground](https://tsquery-playground.firebaseapp.com/) site. TSQuery allows you to query and modify ASTs with a syntax similar to CSS selectors. The playground allows you to easily examine the AST for a given snippet of code.

First, go to [TSQuery Playground](https://tsquery-playground.firebaseapp.com/) and paste in a snippet of code that contains the input and desired output of our generator.

```typescript
// input
const arr: Array<string> = [];

// desired output
const arr: string[] = [];
```

When you place the cursor on the `Array` text, the right hand panel highlights the corresponding node of the AST. The AST node we're looking for looks like this:

```typescript
{ // TypeReference
  typeName: { // Identifier
    escapedText: "Array"
  },
  typeArguments: [/* this is where the generic type parameter is specified */]
}
```

Second, we need to choose a selector to target this node. Just like with CSS selectors, there is an art to choosing a selector that is specific enough to target the correct nodes, but not overly tied to a certain structure. For our simple example, we can use `TypeReference` to select the parent node and check to see if it has a `typeName` of `Array` before we perform the replacement. We'll then use the `typeArguments` to get the text inside the `<>` characters.
The finished code looks like this:

```typescript
import { readProjectConfiguration, Tree } from '@nx/devkit';
import { tsquery } from '@phenomnomnominal/tsquery';
import { TypeReferenceNode } from 'typescript';

/**
 * Run the callback on all files inside the specified path
 */
function visitAllFiles(
  tree: Tree,
  path: string,
  callback: (filePath: string) => void
) {
  tree.children(path).forEach((fileName) => {
    const filePath = `${path}/${fileName}`;
    if (!tree.isFile(filePath)) {
      visitAllFiles(tree, filePath, callback);
    } else {
      callback(filePath);
    }
  });
}

export default function (tree: Tree, schema: any) {
  const sourceRoot = readProjectConfiguration(tree, schema.name).sourceRoot;
  visitAllFiles(tree, sourceRoot, (filePath) => {
    const fileEntry = tree.read(filePath);
    const contents = fileEntry.toString();

    // Check each `TypeReference` node to see if we need to replace it
    const newContents = tsquery.replace(contents, 'TypeReference', (node) => {
      const trNode = node as TypeReferenceNode;
      // Guard against a bare `Array` that has no type arguments
      if (
        trNode.typeName.getText() === 'Array' &&
        trNode.typeArguments?.length
      ) {
        const typeArgument = trNode.typeArguments[0];
        return `${typeArgument.getText()}[]`;
      }
      // returning undefined leaves the node unchanged
    });

    // only write the file if something has changed
    if (newContents !== contents) {
      tree.write(filePath, newContents);
    }
  });
}
```

---

## Enforce Organizational Best Practices with a Local Plugin

Every repository has a unique set of conventions and best practices that developers need to learn in order to write code that integrates well with the rest of the code base. It is important to document those best practices, but developers don't always read the documentation, and even when they have, they don't consistently follow it every time they perform a task. Nx allows you to encode these best practices in code generators that have been tailored to your specific repository.
We will create a generator that helps enforce the following best practices:

- Every project in this repository should use Vitest for unit tests.
- Every project in this repository should be tagged with a `scope:*` tag that is chosen from the list of available scopes.
- Projects should be placed in folders that match the scope that they are assigned.
- Vitest should clear mocks before running tests.

## Get started

Let's first create a new workspace with the `create-nx-workspace` command:

```shell
npx create-nx-workspace myorg --preset=react-monorepo --ci=github
```

Then, install the `@nx/plugin` package and generate a plugin:

```shell
npx nx add @nx/plugin
npx nx g @nx/plugin:plugin tools/recommended
```

This will create a `recommended` project that contains all your plugin code.

## Create a customized library generator

To create a new generator run:

```shell
npx nx generate @nx/plugin:generator tools/recommended/src/generators/library
```

The new generator is located in `tools/recommended/src/generators/library`. The `generator.ts` file contains the code that runs the generator. We can delete the `files` directory since we won't be using it and update the `generator.ts` file with the following code:

```ts
// tools/recommended/src/generators/library/generator.ts
import { Tree } from '@nx/devkit';
import { Linter } from '@nx/eslint';
import { libraryGenerator as reactLibraryGenerator } from '@nx/react';
import { LibraryGeneratorSchema } from './schema';

export async function libraryGenerator(
  tree: Tree,
  options: LibraryGeneratorSchema
) {
  const callbackAfterFilesUpdated = await reactLibraryGenerator(tree, {
    ...options,
    linter: 'eslint',
    style: 'css',
    unitTestRunner: 'vitest',
  });
  return callbackAfterFilesUpdated;
}

export default libraryGenerator;
```

Notice how this generator is calling the `@nx/react` plugin's `library` generator with a predetermined list of options. This helps developers to always create projects with the recommended settings.
We're returning the `callbackAfterFilesUpdated` function because the `@nx/react:library` generator sometimes needs to install packages from NPM after the file system has been updated by the generator. You can provide your own callback function instead, if you have tasks that rely on actual files being present. To try out the generator in dry-run mode, use the following command: ```shell npx nx g @myorg/recommended:library test-library --dry-run ``` {% aside type="caution" title="Use the package.json name" %} When referencing your plugin, you must use the name defined in `tools/recommended/package.json`. In this example, we're using `@myorg/recommended` as the plugin name. If your package.json has a different name, adjust the command accordingly. {% /aside %} Remove the `--dry-run` flag to actually create a new project. ### Add generator options The `schema.d.ts` file contains all the options that the generator supports. By default, it includes only a `name` option. Let's add a directory option to pass on to the `@nx/react` generator. {% tabs syncKey="schema-file" %} {% tabitem label="schema.d.ts" %} ```ts // tools/recommended/src/generators/library/schema.d.ts export interface LibraryGeneratorSchema { name: string; directory?: string; } ``` {% /tabitem %} {% tabitem label="schema.json" %} ```json // tools/recommended/src/generators/library/schema.json { "$schema": "https://json-schema.org/schema", "$id": "Library", "title": "", "type": "object", "properties": { "name": { "type": "string", "description": "", "$default": { "$source": "argv", "index": 0 }, "x-prompt": "What name would you like to use?" }, "directory": { "type": "string", "description": "" } }, "required": ["name"] } ``` {% /tabitem %} {% /tabs %} {% aside type="note" title="More details" %} The `schema.d.ts` file is used for type checking inside the implementation file. It should match the properties in `schema.json`. 
{% /aside %}

The schema files not only provide structure to the CLI, but also allow [Nx Console](/docs/getting-started/editor-setup) to show an accurate UI for the generator.

![Nx Console UI for the library generator](../../../assets/nx-console/generator-options-ui.png)

Notice how we made the `directory` option optional in both the JSON and type files. If we call the generator without passing a directory, the project will be created in a directory with the same name as the project. We can test the changes to the generator with the following command:

```shell
npx nx g @myorg/recommended:library test-library --directory=nested/directory/test-library --dry-run
```

### Choose a scope

It can be helpful to tag a library with a scope that matches the application it should be associated with. With these tags in place, you can [set up rules](/docs/features/enforce-module-boundaries) for how projects can depend on each other. For our repository, let's say the scopes can be `store`, `api` or `shared` and the default directory structure should match the chosen scope. We can update the generator to encourage developers to maintain this structure.

{% tabs syncKey="schema-file" %}
{% tabitem label="schema.d.ts" %}

```ts
// tools/recommended/src/generators/library/schema.d.ts
export interface LibraryGeneratorSchema {
  name: string;
  scope: string;
  directory?: string;
}
```

{% /tabitem %}
{% tabitem label="schema.json" %}

```json
// tools/recommended/src/generators/library/schema.json
{
  "$schema": "https://json-schema.org/schema",
  "$id": "Library",
  "title": "",
  "type": "object",
  "properties": {
    "name": {
      "type": "string",
      "description": "",
      "$default": {
        "$source": "argv",
        "index": 0
      },
      "x-prompt": "What name would you like to use?"
}, "scope": { "type": "string", "description": "The scope of your library.", "enum": ["api", "store", "shared"], "x-prompt": { "message": "What is the scope of this library?", "type": "list", "items": [ { "value": "store", "label": "store" }, { "value": "api", "label": "api" }, { "value": "shared", "label": "shared" } ] } }, "directory": { "type": "string", "description": "" } }, "required": ["name"] } ``` {% /tabitem %} {% tabitem label="generator.ts" %} ```ts // tools/recommended/src/generators/library/generator.ts import { Tree } from '@nx/devkit'; import { Linter } from '@nx/eslint'; import { libraryGenerator as reactLibraryGenerator } from '@nx/react'; import { LibraryGeneratorSchema } from './schema'; export async function libraryGenerator( tree: Tree, options: LibraryGeneratorSchema ) { const callbackAfterFilesUpdated = await reactLibraryGenerator(tree, { ...options, tags: `scope:${options.scope}`, directory: options.directory || `${options.scope}/${options.name}`, linter: 'eslint', style: 'css', unitTestRunner: 'vitest', }); return callbackAfterFilesUpdated; } export default libraryGenerator; ``` {% /tabitem %} {% /tabs %} We can check that the scope logic is being applied correctly by running the generator again and specifying a scope. ```shell npx nx g @myorg/recommended:library test-library --scope=shared --dry-run ``` This should create the `test-library` in the `shared` folder. ## Configure tasks You can also use your Nx plugin to configure how your tasks are run. Usually, organization focused plugins configure tasks by modifying the configuration files for each project. If you have developed your own tooling scripts for your organization, you may want to create an executor or infer tasks, but that process is covered in more detail in the tooling plugin tutorial. Let's update our library generator to set the `clearMocks` property to `true` in the `vitest` configuration. 
First, we'll run the `reactLibraryGenerator` and then modify the created files.

```ts
// tools/recommended/src/generators/library/generator.ts
import { formatFiles, Tree, runTasksInSerial } from '@nx/devkit';
import { Linter } from '@nx/eslint';
import { libraryGenerator as reactLibraryGenerator } from '@nx/react';
import { LibraryGeneratorSchema } from './schema';

export async function libraryGenerator(
  tree: Tree,
  options: LibraryGeneratorSchema
) {
  const directory = options.directory || `${options.scope}/${options.name}`;
  const tasks = [];
  tasks.push(
    await reactLibraryGenerator(tree, {
      ...options,
      tags: `scope:${options.scope}`,
      directory,
      linter: 'eslint',
      style: 'css',
      unitTestRunner: 'vitest',
    })
  );
  updateViteConfiguration(tree, directory);
  await formatFiles(tree);
  return runTasksInSerial(...tasks);
}

function updateViteConfiguration(tree: Tree, directory: string) {
  // Read the vite configuration file
  let viteConfiguration =
    tree.read(`${directory}/vite.config.ts`)?.toString() || '';

  // Modify the configuration
  // This is done with a naive search and replace, but could be done in a more robust way using AST nodes.
  viteConfiguration = viteConfiguration.replace(
    `globals: true,`,
    `globals: true,\n    clearMocks: true,`
  );

  // Write the modified configuration back to the file
  tree.write(`${directory}/vite.config.ts`, viteConfiguration);
}

export default libraryGenerator;
```

We updated the generator to use some new helper functions from the Nx devkit. Here are a few functions you may find useful. See the [full API reference](/docs/reference/devkit) for all the options.

- [`runTasksInSerial`](/docs/reference/devkit/runTasksInSerial) - Allows you to collect many callbacks and return them all at the end of the generator.
- [`formatFiles`](/docs/reference/devkit/formatFiles) - Run Prettier on the repository
- [`readProjectConfiguration`](/docs/reference/devkit/readProjectConfiguration) - Get the calculated project configuration for a single project
- [`updateNxJson`](/docs/reference/devkit/updateNxJson) - Update the `nx.json` file

Now let's check to make sure that the `clearMocks` property is set correctly by the generator. First, we'll commit our changes so far. Then, we'll run the generator without the `--dry-run` flag so we can inspect the file contents.

```shell
git add .
git commit -am "library generator"
npx nx g @myorg/recommended:library store-test --scope=store
```

## Next steps

Now that we have a working library generator, here are some more topics you may want to investigate.

- [Generate files](/docs/extending-nx/creating-files) from EJS templates
- [Modify files](/docs/extending-nx/modifying-files) with string replacement or AST transformations

## Encourage adoption

Once you have a set of generators in place in your organization's plugin, the rest of the work is all communication. Let your developers know that the plugin is available and encourage them to use it. These are the most important points to communicate to your developers:

- Whenever there are multiple plugins that provide a generator with the same name, use the `recommended` version
- If there are repetitive or error-prone processes that they identify, ask the plugin team to write a generator for that process

Now you can go through all the README files in the repository and replace any multi-step instructions with a single line calling a generator.

---

## Extending the Project Graph

The Project Graph is the representation of the source code in your repo. Projects can have files associated with them. Projects can have dependencies on each other. One of the best features of Nx is the ability to construct the project graph automatically by analyzing your source code.
Currently, this works best within the JavaScript ecosystem, but it can be extended to other languages and technologies using plugins. Project graph plugins can add new nodes or dependencies to the project graph, allowing you to extend it with new projects and dependencies. The API is defined by two exported members, which are described below:

- [createNodesV2](#adding-new-nodes-to-the-project-graph): This tuple allows a plugin to tell Nx information about projects that are identified by a given file.
- [createDependencies](#adding-new-dependencies-to-the-project-graph): This function allows a plugin to tell Nx about dependencies between projects.

{% aside type="caution" title="Disable the Nx Daemon during development" %}
When developing project graph plugins, disable the [Nx Daemon](/docs/concepts/nx-daemon) by setting `NX_DAEMON=false`. The daemon caches your plugin code, so changes to your plugin won't be reflected until the daemon restarts.
{% /aside %}

## Adding plugins to workspace

You can register a plugin by adding it to the plugins array in `nx.json`:

```jsonc
// nx.json
{
  ...,
  "plugins": ["awesome-plugin"]
}
```

## Adding new nodes to the project graph

You can add nodes to the project graph with [`createNodesV2`](/docs/reference/devkit/CreateNodesV2). This is the API that Nx uses under the hood to identify Nx projects coming from a `project.json` file or a `package.json` that's listed in a package manager's workspaces section.

{% aside type="note" title="CreateNodes API Versions" %}
Nx has evolved its plugin API over time. Different Nx versions call different `createNodes` exports (`createNodes` vs `createNodesV2`). If you need to support multiple Nx versions see the [CreateNodes Compatibility Guide](/docs/extending-nx/createnodes-compatibility).
{% /aside %}

### Identifying projects

Looking at the tuple, you can see that the first element is a file pattern.
This is a glob pattern that Nx will use to find files in your workspace. The second element is a function that will be called for each file that matches the pattern. The function will be called with the path to the file and a context object. Your plugin can then return a set of projects and external nodes.

If a plugin identifies a project that is already in the project graph, it will be merged with the information that is already present. The built-in plugins that identify projects from `package.json` files and `project.json` files are run after any plugins listed in `nx.json`, and as such will overwrite any configuration that was identified by them. In practice, this means that if a project has both a `project.json` and a file that your plugin identified, the settings the plugin identified will be overwritten by the `project.json`'s contents.

Project nodes in the graph are considered to be the same if the project has the same root. If multiple plugins identify a project with the same root, the project will be merged. In doing so, the name that is already present in the graph is kept, and the properties below are shallowly merged. Any other properties are overwritten.

- `targets`
- `tags`
- `implicitDependencies`
- `generators`

Note: This is a shallow merge, so if you have a target with the same name in both plugins, the target from the second plugin will overwrite the target from the first plugin. Options, configurations, or any other properties within the target will be overwritten, **_not_** merged.

#### Example (adding projects)

A simplified version of Nx's built-in `project.json` plugin is shown below, which adds a new project to the project graph for each `project.json` file it finds.
This should be exported from the entry point of your plugin, which is listed in `nx.json`.

```typescript
// my-plugin/index.ts
import {
  CreateNodesContextV2,
  CreateNodesV2,
  createNodesFromFiles,
  readJsonFile,
} from '@nx/devkit';
import { dirname } from 'path';

export interface MyPluginOptions {}

export const createNodesV2: CreateNodesV2<MyPluginOptions> = [
  '**/project.json',
  async (configFiles, options, context) => {
    return await createNodesFromFiles(
      (configFile, options, context) =>
        createNodesInternal(configFile, options, context),
      configFiles,
      options,
      context
    );
  },
];

async function createNodesInternal(
  configFilePath: string,
  options: MyPluginOptions,
  context: CreateNodesContextV2
) {
  const projectConfiguration = readJsonFile(configFilePath);
  const root = dirname(configFilePath);

  // Project configuration to be merged into the rest of the Nx configuration
  return {
    projects: {
      [root]: projectConfiguration,
    },
  };
}
```

{% aside type="caution" title="Dynamic target configurations can't be migrated" %}
If you create targets for a project within a plugin's code, the Nx migration generators can not find that target configuration to update it. There are two ways to account for this:

1. Only create dynamic targets using executors that you own. This way you can update the configuration in both places when needed.
2. If you create a dynamic target for an executor you don't own, only define the `executor` property and instruct your users to define their options in the `targetDefaults` property of `nx.json`.
{% /aside %}

#### Example (extending projects / adding inferred targets)

When writing a plugin to add support for some tooling, it may need to add a target to an existing project. For example, the `@nx/jest` plugin adds a target to the project for running Jest tests. This is done by checking for the presence of a Jest configuration file, and if it is present, adding a target to the project.
Most of Nx's first-party plugins are written to add a target to a given project based on the configuration files present for that project. The below example shows how a plugin could add a target to a project based on the presence of a `tsconfig.json` file.

```typescript
// my-plugin/index.ts
import {
  CreateNodesContextV2,
  CreateNodesV2,
  createNodesFromFiles,
} from '@nx/devkit';
import { existsSync } from 'fs';
import { dirname, join } from 'path';

export interface MyPluginOptions {}

export const createNodesV2: CreateNodesV2<MyPluginOptions> = [
  '**/tsconfig.json',
  async (configFiles, options, context) => {
    return await createNodesFromFiles(
      (configFile, options, context) =>
        createNodesInternal(configFile, options, context),
      configFiles,
      options,
      context
    );
  },
];

async function createNodesInternal(
  configFilePath: string,
  options: MyPluginOptions,
  context: CreateNodesContextV2
) {
  const projectRoot = dirname(configFilePath);
  const isProject =
    existsSync(join(projectRoot, 'project.json')) ||
    existsSync(join(projectRoot, 'package.json'));
  if (!isProject) {
    return {};
  }

  return {
    projects: {
      [projectRoot]: {
        targets: {
          build: {
            command: `tsc -p ${configFilePath}`,
          },
        },
      },
    },
  };
}
```

By checking for the presence of a `project.json` or `package.json` file, the plugin can be more confident that the project it is modifying is an existing Nx project. When extending an existing project, it's important to consider how Nx will merge the returned project configurations. In general, plugins are run in the order they are listed in `nx.json`, and then Nx built-in plugins are run last. Plugins overwrite information that was identified by plugins that ran before them if a merge is not possible. Nx considers two identified projects to be the same if and only if they have the same root. If two projects are identified with the same name, but different roots, there will be an error.
The logic for merging project declarations is as follows: - `name`, `sourceRoot`, `projectType`, and any other top level properties which are a literal (e.g. not an array or object) are overwritten. - `tags` are merged and deduplicated. - `implicitDependencies` are merged, with dependencies from later plugins being appended to the end - `targets` are merged, with special logic for the targets inside of them: - If the targets are deemed compatible (They use the same executor / command, or one of the two declarations does not specify an executor / command): - The `executor` or `command` remains the same - The `options` object is merged with the later plugin's options overwriting the earlier plugin's options. This is a shallow merge, so if a property is an object, the later plugin's object will overwrite the earlier plugin's object rather than merging the two. - The `configurations` object is merged, with the later plugin's configurations overwriting the earlier plugin's configurations. The options for each configuration are merged in the same way as the top level options. - `inputs` and `outputs` overwrite the earlier plugin's inputs and outputs. - If the targets are not deemed compatible, the later plugin's target will overwrite the earlier plugin's target. - `generators` are merged. If both project configurations specify the same generator, those generators are merged. - `namedInputs` are merged. If both project configurations specify the same named input, the later plugin's named input will overwrite the earlier plugin's named input. This is what allows overriding a named input from a plugin that ran earlier (e.g. in project.json). ### Adding external nodes Additionally, plugins can add external nodes to the project graph. External nodes are nodes that are not part of the workspace, but are still part of the project graph. This is useful for things like npm packages, or other external dependencies that are not part of the workspace. 
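The target-merging rules listed above can be sketched in plain TypeScript. This is an illustrative sketch of the described semantics, not Nx's actual implementation; the simplified `TargetConfiguration` shape and the `mergeTarget` helper are assumptions made for this example only:

```typescript
// Hypothetical sketch of the target-merge semantics described above.
// Not Nx's real code: `TargetConfiguration` is heavily simplified here.
interface TargetConfiguration {
  executor?: string;
  command?: string;
  options?: Record<string, unknown>;
}

function mergeTarget(
  earlier: TargetConfiguration,
  later: TargetConfiguration
): TargetConfiguration {
  // Targets are compatible when they use the same executor/command,
  // or when one of the two declarations does not specify one.
  const compatible =
    (!earlier.executor ||
      !later.executor ||
      earlier.executor === later.executor) &&
    (!earlier.command || !later.command || earlier.command === later.command);

  if (!compatible) {
    // Incompatible: the later plugin's target overwrites the earlier one.
    return later;
  }

  return {
    executor: later.executor ?? earlier.executor,
    command: later.command ?? earlier.command,
    // Shallow merge: the later plugin's options win key-by-key,
    // and nested objects are replaced rather than deep-merged.
    options: { ...earlier.options, ...later.options },
  };
}

// Two compatible declarations of the same target:
const merged = mergeTarget(
  { executor: 'nx:run-commands', options: { cwd: 'apps/a', color: true } },
  { executor: 'nx:run-commands', options: { cwd: 'apps/b' } }
);
console.log(merged.options); // { cwd: 'apps/b', color: true }
```

Note how `options` is merged only one level deep: a nested object supplied by the later plugin would replace the earlier one wholesale rather than being merged into it.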
External nodes are identified by a unique name, and if plugins identify an external node with the same name, the external node will be **_overwritten_**. This is different from projects, where the properties are merged, but is handled this way as it should not be as common and there are fewer useful properties to merge.

## Adding new dependencies to the project graph

It's more common for plugins to create new dependencies. First-party code contained in the workspace is added to the project graph automatically. Whether a project contains TypeScript or, say, Java, it is added to the graph in the same way. However, Nx does not know how to analyze Java sources, and that's what plugins can do. The shape of the [`createDependencies`](/docs/reference/devkit/CreateDependencies) function is as follows:

```typescript
export type CreateDependencies<T> = (
  opts: T,
  context: CreateDependenciesContext
) => CandidateDependency[] | Promise<CandidateDependency[]>;
```

In the `createDependencies` function, you can analyze the files in the workspace and return a list of dependencies. It's up to the plugin to determine how to analyze the files. This should also be exported from the plugin's entry point, as listed in `nx.json`.

Within the `CreateDependenciesContext`, you have access to the graph's external nodes, the configuration of each project in the workspace, the `nx.json` configuration from the workspace, all files in the workspace, and files that have changed since the last invocation. It's important to utilize the `filesToProcess` parameter, as this will allow Nx to only reanalyze files that have changed since the last invocation, and reuse the information from the previous invocation for files that haven't changed.

`@nx/devkit` exports a function called `validateDependency` which can be used to validate a dependency. This function takes in a `CandidateDependency` and the `CreateDependenciesContext` and throws an error if the dependency is invalid.
This function is called when the returned dependencies are merged with the existing project graph, but it may be useful to call within your plugin to validate dependencies before returning them when debugging.

The dependencies can be of three types:

- Implicit
- Static
- Dynamic

### Implicit dependencies

An implicit dependency is not associated with any file, and can be created as follows:

```typescript
{
  source: 'existing-project',
  target: 'new-project',
  dependencyType: DependencyType.implicit,
}
```

Because an implicit dependency is not associated with any file, Nx doesn't know when it might change, so it will be recomputed every time.

### Static dependencies

Nx knows what files have changed since the last invocation. Only those files will be present in the provided `filesToProcess`. You can associate a dependency with a particular file (e.g., if that file contains an import).

```typescript
{
  source: 'existing-project',
  target: 'new-project',
  sourceFile: 'libs/existing-project/src/index.ts',
  dependencyType: DependencyType.static,
}
```

If a file hasn't changed since the last invocation, it doesn't need to be reanalyzed. Nx knows what dependencies are associated with what files, so it will reuse this information for the files that haven't changed.

### Dynamic dependencies

Dynamic dependencies are a special type of explicit dependency. In contrast to standard `explicit` dependencies, they are only imported at runtime under specific conditions. A typical example would be lazy-loaded routes. Having a separation between these two allows us to identify situations where a static import breaks lazy loading.
```typescript
{
  source: 'existing-project',
  target: 'new-project',
  sourceFile: 'libs/existing-project/src/index.ts',
  dependencyType: DependencyType.dynamic,
}
```

### Example

{% aside type="note" title="More details" %}
Even though the plugin is written in JavaScript, resolving dependencies of different languages will probably be more easily written in their native language. Therefore, a common approach is to spawn a new process and communicate via IPC or `stdout`.
{% /aside %}

A small plugin that recognizes dependencies to projects in the current workspace which are referenced in another project's `package.json` file may look like so:

```typescript
// my-plugin/index.ts
import {
  CreateDependencies,
  DependencyType,
  validateDependency,
} from '@nx/devkit';
import { existsSync, readFileSync } from 'fs';
import { join } from 'path';

export const createDependencies: CreateDependencies = (opts, ctx) => {
  const packageJsonProjectMap = new Map<string, string>();
  const nxProjects = Object.values(ctx.projectsConfigurations);
  const results = [];
  // Map the name inside each project's package.json to its Nx project name
  for (const project of nxProjects) {
    const maybePackageJsonPath = join(project.root, 'package.json');
    if (existsSync(maybePackageJsonPath)) {
      const json = JSON.parse(readFileSync(maybePackageJsonPath, 'utf-8'));
      packageJsonProjectMap.set(json.name, project.name);
    }
  }
  // Check each project's dependencies for references to workspace projects
  for (const project of nxProjects) {
    const maybePackageJsonPath = join(project.root, 'package.json');
    if (existsSync(maybePackageJsonPath)) {
      const json = JSON.parse(readFileSync(maybePackageJsonPath, 'utf-8'));
      const deps = Object.keys(json.dependencies ?? {});
      for (const dep of deps) {
        if (packageJsonProjectMap.has(dep)) {
          const newDependency = {
            source: project.name,
            target: packageJsonProjectMap.get(dep),
            sourceFile: maybePackageJsonPath,
            dependencyType: DependencyType.static,
          };
          validateDependency(newDependency, ctx);
          results.push(newDependency);
        }
      }
    }
  }
  return results;
};
```

Breaking down this example, we can see that it follows this flow:

1. Initializes an array to hold dependencies it locates
2. Builds a map of all projects in the workspace, mapping the name inside their package.json to their Nx project name.
3. Looks at the package.json files within the workspace and:
4.
Checks if the dependency is another project
5. Builds a dependency from this information
6. Validates the dependency
7. Pushes it into the located dependency array
8. Returns the located dependencies

## Accepting plugin options

When looking at `createNodesV2` and `createDependencies`, you may notice a parameter called `options`. This is the first parameter of `createDependencies`, or the second parameter of the function in `createNodesV2`. By default, it's typed as `unknown`. This is because it belongs to the plugin author. The `CreateNodes`, `CreateDependencies`, and `NxPluginV2` types all accept a generic parameter that allows you to specify the type of the options. The options are read from `nx.json` when your plugin is specified as an object rather than just its module name. As an example, the below `nx.json` file specifies a plugin called `my-plugin` and passes it an option called `tagName`.

```json
// nx.json
{
  "plugins": [
    {
      "plugin": "my-plugin",
      "options": {
        "tagName": "plugin:my-plugin"
      }
    }
  ]
}
```

`my-plugin` could then consume these options to add a tag to each project it detected:

```typescript
// my-plugin/index.ts
import { CreateNodesV2, createNodesFromFiles } from '@nx/devkit';
import { dirname } from 'path';

type MyPluginOptions = { tagName: string };

export const createNodesV2: CreateNodesV2<MyPluginOptions> = [
  '**/tsconfig.json',
  async (configFiles, options, context) => {
    return await createNodesFromFiles(
      (configFile, options, context) => {
        const root = dirname(configFile);
        return {
          projects: {
            [root]: {
              tags: options.tagName ? [options.tagName] : [],
            },
          },
        };
      },
      configFiles,
      options,
      context
    );
  },
];
```

This functionality is available in Nx 17 or higher.

## Visualizing the project graph

You can then visualize the project graph as described [here](/docs/features/explore-graph). However, there is a cache that Nx uses to avoid recalculating the project graph as much as possible.
As you develop your project graph plugin, it might be a good idea to set the following environment variable to disable the project graph cache: `NX_CACHE_PROJECT_GRAPH=false`.

---

## Publish Your Nx Plugin

In order to use your plugin in other workspaces or share it with the community, you will need to publish it to an npm registry. To publish your plugin, follow these steps:

1. Run `nx nx-release-publish nx-cfonts`
2. Follow the prompts from npm.
3. That's it!

After that, you can install your plugin like any other Nx plugin:

```shell
nx add nx-cfonts
```

## List your Nx plugin

Nx provides a utility (`nx list`) that lists both core and community plugins. You can submit your plugin to be added to this list, but it needs to meet a few criteria first:

- Run some kind of automated e2e tests in your repository
- Include `@nx/devkit` as a `dependency` in the plugin's `package.json`
- List a `repository.url` in the plugin's `package.json`

```jsonc
// package.json
{
  "repository": {
    "type": "git",
    "url": "https://github.com/nrwl/nx.git",
    "directory": "packages/web",
  },
}
```

{% aside type="caution" title="Unmaintained Plugins" %}
We reserve the right to remove unmaintained plugins from the registry. If the plugins become maintained again, they can be resubmitted to the registry.
{% /aside %}

Once those criteria are met, you can submit your plugin by following the steps below:

- Fork the [Nx repo](https://github.com/nrwl/nx/fork) (if you haven't already)
- Update the [`astro-docs/src/content/approved-community-plugins.json` file](https://github.com/nrwl/nx/blob/master/astro-docs/src/content/approved-community-plugins.json) with a new entry for your plugin that includes name, url and description
- Use the following commit message template: `chore(core): nx plugin submission [PLUGIN_NAME]`
- Push your changes, and run `pnpm submit-plugin`

> The `pnpm submit-plugin` command automatically opens the GitHub pull request process with the correct template.
We will then verify the plugin, offer suggestions or merge the pull request! --- ## Hook into the Task Running Lifecycle Nx plugins can hook into the task running lifecycle to execute custom logic before and after tasks are run. This is useful for implementing custom analytics, environment validation, or any other pre/post processing that should happen when running tasks. {% aside type="note" title="New API for deprecated custom task runners" %} These task execution hooks are the new API that replaces the deprecated Custom Tasks Runners. This feature is available since Nx 20.4+. For information about migrating from Custom Tasks Runners to these hooks, see [Deprecating Custom Tasks Runner](/docs/reference/deprecated/custom-tasks-runner). {% /aside %} ## Task execution hooks Nx provides two hooks that plugins can register: 1. `preTasksExecution`: Runs before any tasks are executed 2. `postTasksExecution`: Runs after all tasks are executed These hooks allow you to extend Nx functionality without affecting task execution or violating any invariants. 
## Creating task execution hooks To implement task execution hooks, create a plugin and export the `preTasksExecution` and/or `postTasksExecution` functions: ```typescript // some-example-hooks.ts // Example plugin with both pre and post execution hooks // context contains workspaceRoot and nx.json configuration export async function preTasksExecution(options: any, context) { // Run custom logic before tasks are executed console.log('About to run tasks!'); // You can modify environment variables if (process.env.QA_ENV) { process.env.NX_SKIP_NX_CACHE = 'true'; } // You can validate the environment if (!isEnvironmentValid()) { throw new Error('Environment is not set up correctly'); } } // context contains workspaceRoot, nx.json configuration, and task results export async function postTasksExecution(options: any, context) { // Run custom logic after tasks are executed console.log('All tasks have completed!'); // You can access task results for analytics if (options.reportAnalytics) { await fetch(process.env.ANALYTICS_API, { method: 'POST', headers: { 'Content-Type': 'application/json', }, body: JSON.stringify(context.taskResults), }); } } function isEnvironmentValid() { // Implement your validation logic return true; } ``` ## Configuring your plugin Configure your plugin in `nx.json` by adding it to the `plugins` array: ```json // nx.json { "plugins": [ { "plugin": "my-nx-plugin", "options": { "reportAnalytics": true } } ] } ``` The options you specify in the configuration will be passed to your hook functions. ## Maintaining state across command invocations By default, every plugin initiates a long-running process, allowing you to maintain state across command invocations. This is particularly useful for gathering advanced analytics or providing cumulative feedback. 
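To illustrate, module-level state in the plugin file survives across hook calls for as long as that process is alive. The sketch below keeps a running tally of how many times `postTasksExecution` has fired and which tasks have run; the counter and map names are illustrative, not part of the Nx API, and the context type is narrowed to just the field the sketch uses:

```typescript
// Module-level state: persists between command invocations because the
// plugin is kept alive in a long-running process.
export let invocationCount = 0;
export const cumulativeTaskRuns: Record<string, number> = {};

// Hook shape follows the examples above; `context.taskResults` maps
// task ids to their results.
export async function postTasksExecution(
  options: unknown,
  context: { taskResults: Record<string, unknown> }
) {
  invocationCount++;
  for (const taskId of Object.keys(context.taskResults)) {
    cumulativeTaskRuns[taskId] = (cumulativeTaskRuns[taskId] ?? 0) + 1;
  }
  console.log(`postTasksExecution has run ${invocationCount} time(s)`);
}
```

Note that this state resets whenever the plugin's process is restarted, so anything that must survive restarts should be written to disk or an external store instead.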
## Conditional execution You can implement conditional logic in your hooks to control when they run: ```typescript // some-example-hooks.ts export async function preTasksExecution(options, context) { // Only run for specific environments if (process.env.RUNNER !== 'production') return; // Your pre-execution logic } export async function postTasksExecution(options, context) { // Only run for specific task types const hasAngularTasks = Object.keys(context.taskResults).some((taskId) => taskId.includes('angular') ); if (!hasAngularTasks) return; // Your post-execution logic } ``` ## Using `context.argv` to determine the command The context object includes an `argv` property that contains the original CLI arguments used to invoke Nx. This allows you to distinguish between different execution modes and apply conditional logic. ### Context properties Both `preTasksExecution` and `postTasksExecution` hooks receive a context object with the following properties: ```typescript type PreTaskExecutionContext = { id: string; workspaceRoot: string; nxJsonConfiguration: NxJsonConfiguration; argv: string[]; // Original CLI arguments }; type PostTasksExecutionContext = { id: string; workspaceRoot: string; nxJsonConfiguration: NxJsonConfiguration; taskResults: TaskResults; argv: string[]; // Original CLI arguments startTime: number; endTime: number; }; ``` ### Example: Detecting command types ```typescript export async function postTasksExecution(options, context) { // Check if running affected command if (context.argv.includes('affected')) { console.log('✅ Ran affected tasks'); } else if (context.argv.includes('run-many')) { console.log('✅ Ran tasks for multiple projects'); } else { console.log('✅ Ran task for specific project'); } } ``` ### Example: conditional analytics based on command ```typescript function isAffectedCommand(argv) { return argv.includes('affected'); } function getTargetName(argv) { const targetIndex = argv.findIndex( (arg) => arg === '-t' || arg === '--target' 
); return targetIndex !== -1 ? argv[targetIndex + 1] : undefined; } export async function postTasksExecution(options, context) { const isAffected = isAffectedCommand(context.argv); const target = getTargetName(context.argv); // Send analytics with command context await sendAnalytics({ executionId: context.id, commandType: isAffected ? 'affected' : 'direct', target: target, taskCount: Object.keys(context.taskResults).length, duration: context.endTime - context.startTime, }); } ``` ### Common command patterns - Direct execution: `nx build my-app` → `argv: ['node', '/path/to/nx', 'build', 'my-app']` - Affected: `nx affected -t build test` → `argv: ['node', '/path/to/nx', 'affected', '-t', 'build', 'test']` - Run many: `nx run-many -t build -p app1 app2` → `argv: ['node', '/path/to/nx', 'run-many', '-t', 'build', '-p', 'app1', 'app2']` ### Best practices for parsing argv When working with `context.argv`, parse it defensively: ```typescript // Bad - assumes structure const target = context.argv[2]; // Good - searches for the flag const targetIndex = context.argv.findIndex( (arg) => arg === '-t' || arg === '--target' ); const target = targetIndex !== -1 ? context.argv[targetIndex + 1] : undefined; ``` ## Best practices 1. **Keep hooks fast**: Hooks should execute quickly to avoid slowing down the task execution process 2. **Handle errors gracefully**: Ensure your hooks don't crash the entire execution pipeline 3. **Use environment variables** for configuration that needs to persist across tasks 4. **Leverage context data**: Use the context object to access relevant information about the workspace and task results 5. **Provide clear errors**: If throwing errors, make sure they are descriptive and actionable 6. **Parse argv defensively**: Don't assume the position of arguments; search for specific flags instead --- ## Integrate a New Tool with a Tooling Plugin Nx Plugins can be used to easily integrate a tool or framework into an Nx repository. 
If there is no plugin available for your favorite tool or framework, you can write your own. We'll create a plugin that helps to integrate the _Astro_ framework. `Astro` is a JavaScript web framework optimized for building fast, content-driven websites. We'll call our plugin `nx-astro`. To create a plugin in a brand new repository, use the `create-nx-plugin` command:

```shell
npx create-nx-plugin nx-astro
```

Skip the `create-*` package prompt, since we won't be creating a preset.

## Understand tooling configuration files

When integrating your tool into an Nx repository, you first need to have a clear understanding of how your tool works. Pay special attention to all the possible formats for configuration files, so that your plugin can process any valid configuration options. For our `nx-astro` plugin, we'll read information from the `astro.config.mjs` or `astro.config.ts` file. We'll mainly be interested in the `srcDir`, `publicDir` and `outDir` properties specified in the `defineConfig` object. `srcDir` and `publicDir` define input files that are used in the build process and `outDir` defines where the build output will be created.

```js
// astro.config.mjs
import { defineConfig } from 'astro/config';

// https://astro.build/config
export default defineConfig({
  srcDir: './src',
  publicDir: './public',
  outDir: './dist',
});
```

## Create an inferred task

The easiest way for people to integrate your tool into their repository is for them to use inferred tasks. When leveraging inferred tasks, all your users need to do is install your plugin and add the tool's configuration file to their projects. Your plugin will take care of registering tasks with Nx and setting up the correct caching settings. Once the inferred task logic is written, we want to be able to automatically create a task for any project that has an `astro.config.*` file defined in the root of the project.
We'll name the task based on our plugin configuration in the `nx.json` file: ```json // nx.json { "plugins": [ { "plugin": "nx-astro", "options": { "buildTargetName": "build", "devTargetName": "dev" } } ] } ``` If the `astro.config.mjs` for a project looks like our example in the previous section, then the inferred configuration for the `build` task should look like this: ```json { "command": "astro build", "cache": true, "inputs": [ "{projectRoot}/astro.config.mjs", "{projectRoot}/src/**/*", "{projectRoot}/public/**/*", { "externalDependencies": ["astro"] } ], "outputs": ["{projectRoot}/dist"] } ``` To create an inferred task, we need to export a `createNodesV2` function from the plugin's `index.ts` file. The entire file is shown below with inline comments to explain what is happening in each section. {% aside type="note" title="Supporting Multiple Nx Versions" %} Different Nx versions call different `createNodes` exports. If you need to support Nx versions before 21 see the [CreateNodes Compatibility Guide](/docs/extending-nx/createnodes-compatibility). 
{% /aside %}

```ts
// src/index.ts
import {
  CreateNodesContextV2,
  CreateNodesV2,
  TargetConfiguration,
  createNodesFromFiles,
  joinPathFragments,
} from '@nx/devkit';
import { readdirSync, readFileSync } from 'fs';
import { dirname, join, resolve } from 'path';

// Expected format of the plugin options defined in nx.json
export interface AstroPluginOptions {
  buildTargetName?: string;
  devTargetName?: string;
}

// File glob to find all the configuration files for this plugin
const astroConfigGlob = '**/astro.config.{mjs,ts}';

// Entry function that Nx calls to modify the graph
export const createNodesV2: CreateNodesV2<AstroPluginOptions> = [
  astroConfigGlob,
  async (configFiles, options, context) => {
    return await createNodesFromFiles(
      (configFile, options, context) =>
        createNodesInternal(configFile, options, context),
      configFiles,
      options,
      context
    );
  },
];

async function createNodesInternal(
  configFilePath: string,
  options: AstroPluginOptions,
  context: CreateNodesContextV2
) {
  const projectRoot = dirname(configFilePath);

  // Do not create a project if package.json or project.json isn't there.
  const siblingFiles = readdirSync(join(context.workspaceRoot, projectRoot));
  if (
    !siblingFiles.includes('package.json') &&
    !siblingFiles.includes('project.json')
  ) {
    return {};
  }

  // Contents of the astro config file
  const astroConfigContent = readFileSync(
    resolve(context.workspaceRoot, configFilePath)
  ).toString();

  // Read config values using Regex.
// There are better ways to read config values, but this works for the tutorial function getConfigValue(propertyName: string, defaultValue: string) { const result = new RegExp(`${propertyName}: '(.*)'`).exec( astroConfigContent ); if (result && result[1]) { return result[1]; } return defaultValue; } const srcDir = getConfigValue('srcDir', './src'); const publicDir = getConfigValue('publicDir', './public'); const outDir = getConfigValue('outDir', './dist'); // Inferred task final output const buildTarget: TargetConfiguration = { command: `astro build`, options: { cwd: projectRoot }, cache: true, inputs: [ '{projectRoot}/astro.config.mjs', joinPathFragments('{projectRoot}', srcDir, '**', '*'), joinPathFragments('{projectRoot}', publicDir, '**', '*'), { externalDependencies: ['astro'], }, ], outputs: [`{projectRoot}/${outDir}`], }; const devTarget: TargetConfiguration = { command: `astro dev`, options: { cwd: projectRoot }, }; // Project configuration to be merged into the rest of the Nx configuration return { projects: { [projectRoot]: { targets: { [options.buildTargetName]: buildTarget, [options.devTargetName]: devTarget, }, }, }, }; } ``` We'll test out this inferred task a little later in the tutorial. Inferred tasks work well for getting users started using your tool quickly, but you can also provide users with [executors](/docs/extending-nx/local-executors), which are another way of encapsulating a task script for easy use in an Nx workspace. Without inferred tasks, executors must be explicitly configured for each task. ## Create an init generator You'll want to create generators to automate the common coding tasks for developers that use your tool. The most obvious coding task is the initial setup of the plugin. We'll create an `init` generator to automatically register the `nx-astro` plugin and start inferring tasks. 
If you create a generator named `init`, Nx will automatically run that generator when someone installs your plugin with the `nx add nx-astro` command. This generator should provide a good default set up for using your plugin. In our case, we need to register the plugin in the `nx.json` file. To create the generator run the following command: ```shell npx nx g generator src/generators/init ``` Then we can edit the `generator.ts` file to define the generator functionality: ```ts // src/generators/init/generator.ts import { formatFiles, readNxJson, Tree, updateNxJson } from '@nx/devkit'; import { InitGeneratorSchema } from './schema'; export async function initGenerator(tree: Tree, options: InitGeneratorSchema) { const nxJson = readNxJson(tree) || {}; const hasPlugin = nxJson.plugins?.some((p) => typeof p === 'string' ? p === 'nx-astro' : p.plugin === 'nx-astro' ); if (!hasPlugin) { if (!nxJson.plugins) { nxJson.plugins = []; } nxJson.plugins = [ ...nxJson.plugins, { plugin: 'nx-astro', options: { buildTargetName: 'build', devTargetName: 'dev', }, }, ]; } updateNxJson(tree, nxJson); await formatFiles(tree); } export default initGenerator; ``` This will automatically add the plugin configuration to the `nx.json` file if the plugin is not already registered. We need to remove the generated `name` option from the generator schema files so that the `init` generator can be executed without passing any arguments. {% tabs syncKey="schema-file" %} {% tabitem label="schema.d.ts" %} ```ts // src/generators/init/schema.d.ts export interface InitGeneratorSchema {} ``` {% /tabitem %} {% tabitem label="schema.json" %} ```json // src/generators/init/schema.json { "$schema": "https://json-schema.org/schema", "$id": "Init", "title": "", "type": "object", "properties": {}, "required": [] } ``` {% /tabitem %} {% /tabs %} ## Create an application generator Let's make one more generator to automatically create a simple Astro application. 
First we'll create the generator: ```shell npx nx g generator src/generators/application ``` Then we'll update the `generator.ts` file to define the generator functionality: ```ts // src/generators/application/generator.ts import { addProjectConfiguration, formatFiles, generateFiles, Tree, } from '@nx/devkit'; import * as path from 'path'; import { ApplicationGeneratorSchema } from './schema'; export async function applicationGenerator( tree: Tree, options: ApplicationGeneratorSchema ) { const projectRoot = `${options.name}`; addProjectConfiguration(tree, options.name, { root: projectRoot, projectType: 'application', sourceRoot: `${projectRoot}/src`, targets: {}, }); generateFiles(tree, path.join(__dirname, 'files'), projectRoot, options); await formatFiles(tree); } export default applicationGenerator; ``` The `generateFiles` function will use the template files in the `files` folder to add files to the generated project. {% tabs %} {% tabitem label="package.json__templ__" %} ```json // src/generators/application/files/package.json__templ__ { "name": "<%= name %>", "dependencies": {} } ``` {% /tabitem %} {% tabitem label="astro.config.mjs" %} ```js // src/generators/application/files/astro.config.mjs import { defineConfig } from 'astro/config'; // https://astro.build/config export default defineConfig({}); ``` {% /tabitem %} {% tabitem label="index.astro" %} ```astro // src/generators/application/files/src/pages/index.astro --- // Welcome to Astro! Everything between these triple-dash code fences // is your "component frontmatter". It never runs in the browser. console.log('This runs in your terminal, not the browser!'); ---

<h1>Hello, World!</h1>

```

{% /tabitem %}
{% tabitem label="robots.txt" %}

```txt
# src/generators/application/files/public/robots.txt
# Example: Allow all bots to scan and index your site.
# Full syntax: https://developers.google.com/search/docs/advanced/robots/create-robots-txt
User-agent: *
Allow: /
```

{% /tabitem %}
{% /tabs %}

The generator options in the schema files can be left unchanged.

## Test your plugin

The plugin is generated with a default e2e test (`e2e/src/nx-astro.spec.ts`) that:

1. Launches a local npm registry with Verdaccio
2. Publishes the current version of the `nx-astro` plugin to the local registry
3. Creates an empty Nx workspace
4. Installs `nx-astro` in the Nx workspace

Let's update the e2e tests to make sure that the inferred tasks are working correctly. We'll update the `beforeAll` function to use `nx add` to add the `nx-astro` plugin and call our `application` generator.

```ts
// e2e/src/nx-astro.spec.ts
beforeAll(() => {
  projectDirectory = createTestProject();

  // The plugin has been built and published to a local registry in the jest globalSetup
  // Install the plugin built with the latest source code into the test repo
  execSync('npx nx add nx-astro@e2e', {
    cwd: projectDirectory,
    stdio: 'inherit',
    env: process.env,
  });

  execSync('npx nx g nx-astro:application my-lib', {
    cwd: projectDirectory,
    stdio: 'inherit',
    env: process.env,
  });
});
```

Now we can add a new test that verifies the inferred task configuration:

```ts
// e2e/src/nx-astro.spec.ts
it('should infer tasks', () => {
  const projectDetails = JSON.parse(
    execSync('nx show project my-lib --json', {
      cwd: projectDirectory,
    }).toString()
  );
  expect(projectDetails).toMatchObject({
    name: 'my-lib',
    root: 'my-lib',
    sourceRoot: 'my-lib/src',
    targets: {
      build: {
        cache: true,
        executor: 'nx:run-commands',
        inputs: [
          '{projectRoot}/astro.config.mjs',
          '{projectRoot}/src/**/*',
          '{projectRoot}/public/**/*',
          {
            externalDependencies: ['astro'],
          },
        ],
        options: {
          command: 'astro build',
          cwd: 'my-lib',
        },
        outputs:
['{projectRoot}/./dist'],
      },
      dev: {
        executor: 'nx:run-commands',
        options: {
          command: 'astro dev',
          cwd: 'my-lib',
        },
      },
    },
  });
});
```

## Next steps

Now that you have a working plugin, here are a few other topics you may want to investigate:

- [Publish your Nx plugin](/docs/extending-nx/publish-plugin) to npm and the Nx plugin registry
- [Write migration generators](/docs/extending-nx/migration-generators) to automatically account for breaking changes
- [Create a preset](/docs/extending-nx/create-preset) to scaffold out an entire new repository

# Technologies

---

## Technologies

Guides and API references for all supported technologies.

{% index_page_cards path="technologies" /%}

---

## Angular

{% index_page_cards path="technologies/angular" /%}

---

## Angular with Rsbuild

{% index_page_cards path="technologies/angular/angular-rsbuild" /%}

---

## createServer - @nx/angular-rsbuild/ssr

```ts
import { createServer } from '@nx/angular-rsbuild/ssr';
```

The `createServer` function is used to set up the Angular `CommonEngine` using an `express` server. It takes the bootstrap function as an argument, which is the function that bootstraps the Angular server application. This is usually `main.server.ts`. It returns `RsbuildAngularServer` which contains the server instance to allow further modifications as well as the listen method to start the server.
```ts function createServer(bootstrap: any): RsbuildAngularServer; ``` --- ## Examples {% tabs %} {% tabitem label="Standard Express Server Usage" %} The following example shows how to create a standard express server: ```ts // myapp/src/server.ts import { createServer } from '@nx/angular-rsbuild/ssr'; import bootstrap from './main.server'; const server = createServer(bootstrap); /** Add your custom server logic here * * For example, you can add a custom static file server: * * server.app.use('/static', express.static(staticFolder)); * * Or add additional api routes: * * server.app.get('/api/hello', (req, res) => { * res.send('Hello World!'); * }); * * Or add additional middleware: * * server.app.use((req, res, next) => { * res.send('Hello World!'); * }); */ server.listen(); ``` {% /tabitem %} {% /tabs %} --- ## RsbuildAngularServer ```ts export interface RsbuildAngularServer { app: express.Express; listen: (port?: number) => void; } ``` --- ### `app` `express.Express` The express application instance. ### `listen` `(port?: number) => void` Starts the express application on the specified port. If no port is provided, the default port (4000) is used. --- ## Angular Rsbuild Plugin for Nx The `@nx/angular-rsbuild` package provides configuration utilities for building Angular applications with [Rsbuild](https://rsbuild.dev). Rsbuild is built on top of Rspack and offers a streamlined development experience with fast builds and hot module replacement. ## Requirements The `@nx/angular-rsbuild` plugin supports the following package versions. | Package | Supported Versions | | --------------- | ----------------------------------------------------------------------------------------- | | `@rsbuild/core` | ^1.1.0 | | `@angular/core` | See [Angular version matrix](/docs/technologies/angular/guides/angular-nx-version-matrix) | [Nx generators](/docs/features/generate-code) install the latest supported versions automatically when scaffolding new projects. 
## Usage

```ts
import { createConfig } from '@nx/angular-rsbuild';
```

The `createConfig` function is used to create an Rsbuild configuration object set up for Angular applications. It takes an optional `RsbuildConfig` object as an argument, which allows for customization of the Rsbuild configuration.

```ts
async function createConfig(
  defaultOptions: {
    options: PluginAngularOptions;
    rsbuildConfigOverrides?: Partial<RsbuildConfig>;
  },
  configurations: Record<
    string,
    {
      options: Partial<PluginAngularOptions>;
      rsbuildConfigOverrides?: Partial<RsbuildConfig>;
    }
  > = {},
  configEnvVar = 'NGRS_CONFIG'
);
```

---

## Examples

{% tabs %}
{% tabitem label="Server-Side Rendering (SSR)" %}

The following example shows how to create a configuration for an SSR application:

```ts
// myapp/rsbuild.config.ts
import { createConfig } from '@nx/angular-rsbuild';

export default createConfig({
  options: {
    browser: './src/main.ts',
    server: './src/main.server.ts',
    ssrEntry: './src/server.ts',
  },
});
```

{% /tabitem %}
{% tabitem label="Client-Side Rendering (CSR)" %}

The following example shows how to create a configuration for a CSR application:

```ts
// myapp/rsbuild.config.ts
import { createConfig } from '@nx/angular-rsbuild';

export default createConfig({
  options: {
    browser: './src/main.ts',
  },
});
```

{% /tabitem %}
{% tabitem label="Modify Rsbuild Configuration" %}

The following example shows how to modify the base Rsbuild configuration:

```ts
// myapp/rsbuild.config.ts
import { createConfig } from '@nx/angular-rsbuild';

export default createConfig({
  options: {
    browser: './src/main.ts',
    server: './src/main.server.ts',
    ssrEntry: './src/server.ts',
  },
  rsbuildConfigOverrides: {
    mode: 'development',
  },
});
```

{% /tabitem %}
{% tabitem label="File Replacements" %}

The following example shows how to use file replacements:

```ts
// myapp/rsbuild.config.ts
import { createConfig } from '@nx/angular-rsbuild';

export default createConfig({
  options: {
    browser: './src/main.ts',
    server: './src/main.server.ts',
    ssrEntry: './src/server.ts',
    fileReplacements: [
      {
        replace: './src/environments/environment.ts',
        with: './src/environments/environment.prod.ts',
      },
    ],
  },
});
```

{% /tabitem %}
{% /tabs %}

---

## PluginAngularOptions

The `PluginAngularOptions` object is an object that contains the following properties:

```ts
export interface PluginAngularOptions extends PluginUnsupportedOptions {
  aot?: boolean;
  assets?: AssetElement[];
  browser?: string;
  commonChunk?: boolean;
  devServer?: DevServerOptions;
  extractLicenses?: boolean;
  fileReplacements?: FileReplacement[];
  index?: IndexElement;
  inlineStyleLanguage?: InlineStyleLanguage;
  namedChunks?: boolean;
  optimization?: boolean | OptimizationOptions;
  outputHashing?: OutputHashing;
  outputPath?:
    | string
    | (Required<Pick<OutputPath, 'base'>> & Partial<OutputPath>);
  polyfills?: string[];
  root?: string;
  scripts?: ScriptOrStyleEntry[];
  server?: string;
  skipTypeChecking?: boolean;
  sourceMap?: boolean | Partial<SourceMap>;
  ssr?:
    | boolean
    | {
        entry: string;
        experimentalPlatform?: 'node' | 'neutral';
      };
  stylePreprocessorOptions?: StylePreprocessorOptions;
  styles?: ScriptOrStyleEntry[];
  tsConfig?: string;
  useTsProjectReferences?: boolean;
  vendorChunk?: boolean;
}

export interface DevServerOptions extends DevServerUnsupportedOptions {
  port?: number;
  ssl?: boolean;
  sslKey?: string;
  sslCert?: string;
  proxyConfig?: string;
}

export interface OptimizationOptions {
  scripts?: boolean;
  styles?: boolean;
  fonts?: boolean;
}

export type OutputHashing = 'none' | 'all' | 'media' | 'bundles';
export type HashFormat = {
  chunk: string;
  extract: string;
  file: string;
  script: string;
};

export interface OutputPath {
  base: string;
  browser: string;
  server: string;
  media: string;
}

export type AssetExpandedDefinition = {
  glob: string;
  input: string;
  ignore?: string[];
  output?: string;
};
export type AssetElement = AssetExpandedDefinition | string;
export type NormalizedAssetElement = AssetExpandedDefinition & {
  output: string;
};
export type ScriptOrStyleEntry =
  | string
  | {
      input: string;
      bundleName?: string;
      inject?: boolean;
    };
export type GlobalEntry = {
  name: string;
  files: string[];
  initial: boolean;
};
export type IndexExpandedDefinition = {
  input: string;
  output?: string;
  preloadInitial?: boolean;
};
export type IndexElement = IndexExpandedDefinition | string | false;
export type IndexHtmlTransform = (content: string) => Promise<string>;
export type NormalizedIndexElement =
  | (IndexExpandedDefinition & {
      insertionOrder: [string, boolean][];
      transformer: IndexHtmlTransform | undefined;
    })
  | false;

export interface SourceMap {
  scripts: boolean;
  styles: boolean;
  hidden: boolean;
  vendor: boolean;
}

export type InlineStyleExtension = 'css' | 'scss' | 'sass' | 'less';

export interface FileReplacement {
  replace: string;
  with: string;
}

export interface StylePreprocessorOptions {
  includePaths?: string[];
  sass?: Sass;
}

export interface Sass {
  fatalDeprecations?: DeprecationOrId[];
  futureDeprecations?: DeprecationOrId[];
  silenceDeprecations?: DeprecationOrId[];
}
```

---

### aot

`boolean` `default: true`

Enables or disables Ahead-of-Time compilation for Angular applications.

### assets

`AssetElement[]`

Array of static assets to include in the build output. Can be either a string path or an object with glob patterns.

### browser

`string`

The entry point file for the browser bundle (e.g., 'src/main.ts').

### commonChunk

`boolean` `default: true`

Controls whether to create a separate bundle containing shared code between multiple chunks.

### devServer

`DevServerOptions`

Options for the development server including port, SSL settings, and proxy configuration.

### extractLicenses

`boolean` `default: false`

When true, extracts all license information from dependencies into a separate file.

### fileReplacements

`FileReplacement[]`

List of files to be replaced during the build process, typically used for environment-specific configurations.

### index

`IndexElement`

Configuration for the index.html file. Can be a string path, an object with specific settings, or false to disable.
### inlineStyleLanguage

`InlineStyleLanguage`

Specifies the default language to use for inline styles in components.

### namedChunks

`boolean` `default: true`

When true, generates named chunks instead of numerical IDs.

### optimization

`boolean | OptimizationOptions` `default: true`

Controls build optimization settings for scripts, styles, and fonts.

### outputHashing

`OutputHashing` `default: 'none'`

Defines the hashing strategy for output files. Can be 'none', 'all', 'media', or 'bundles'.

### outputPath

`string | OutputPath`

Specifies the output directory for built files. Can be a string or an object defining paths for browser, server, and media files.

### polyfills

`string[]`

Array of polyfill files to include in the build.

### root

`string`

The root directory of the project where the rsbuild.config.ts file is located.

### scripts

`ScriptOrStyleEntry[]`

Array of global scripts to include in the build, with options for bundling and injection.

### server

`string`

The entry point file for the server bundle in SSR applications.

### skipTypeChecking

`boolean` `default: false`

When true, skips TypeScript type checking during the build process.

### sourceMap

`boolean | Partial<SourceMap>` `default: true`

Controls generation of source maps for debugging. Can be a boolean or a detailed configuration object.

### ssr

`boolean | { entry: string; experimentalPlatform?: 'node' | 'neutral' }`

Configuration for Server-Side Rendering. Can be a boolean or an object with specific SSR settings.

### stylePreprocessorOptions

`StylePreprocessorOptions`

Options for style preprocessors, including include paths and Sass-specific configurations.

### styles

`ScriptOrStyleEntry[]`

Array of global styles to include in the build, with options for bundling and injection.

### tsConfig

`string`

Path to the TypeScript configuration file.

### useTsProjectReferences

`boolean` `default: false`

Enables usage of TypeScript project references.
### vendorChunk

`boolean` `default: true`

When true, creates a separate bundle for vendor (third-party) code.

---

## Angular with Rspack

{% index_page_cards path="technologies/angular/angular-rspack" /%}

---

## Angular Rspack Compiler

{% index_page_cards path="technologies/angular/angular-rspack-compiler" /%}

---

## Angular Rspack Compiler

Compilation utilities for Angular with Rspack and Rsbuild.

---

## createConfig - @nx/angular-rspack

```ts
import { createConfig } from '@nx/angular-rspack';
```

The `createConfig` function creates an Rspack configuration object set up for Angular applications. It takes the default options (with optional `rspackConfigOverrides` to customize the underlying Rspack `Configuration`), an optional record of named configurations, and the name of the environment variable used to select between them.

```ts
async function createConfig(
  defaultOptions: {
    options: AngularRspackPluginOptions;
    rspackConfigOverrides?: Partial<Configuration>;
  },
  configurations: Record<
    string,
    {
      options: Partial<AngularRspackPluginOptions>;
      rspackConfigOverrides?: Partial<Configuration>;
    }
  > = {},
  configEnvVar = 'NGRS_CONFIG'
);
```

---

## Examples

{% tabs %}
{% tabitem label="Server-Side Rendering (SSR)" %}

The following example shows how to create a configuration for an SSR application:

```ts
// myapp/rspack.config.ts
import { createConfig } from '@nx/angular-rspack';

export default createConfig({
  options: {
    browser: './src/main.ts',
    server: './src/main.server.ts',
    ssrEntry: './src/server.ts',
  },
});
```

{% /tabitem %}
{% tabitem label="Client-Side Rendering (CSR)" %}

The following example shows how to create a configuration for a CSR application:

```ts
// myapp/rspack.config.ts
import { createConfig } from '@nx/angular-rspack';

export default createConfig({
  options: {
    browser: './src/main.ts',
  },
});
```

{% /tabitem %}
{% tabitem label="Modify Rspack Configuration" %}

The following example shows how to modify the base Rspack configuration:

```ts
// myapp/rspack.config.ts
import { createConfig } from '@nx/angular-rspack';

export default createConfig({
  options: {
    browser: './src/main.ts',
    server: './src/main.server.ts',
    ssrEntry: './src/server.ts',
  },
  rspackConfigOverrides: {
    mode: 'development',
  },
});
```

{% /tabitem %}
{% tabitem label="File Replacements" %}

The following example shows how to use file replacements:

```ts
// myapp/rspack.config.ts
import { createConfig } from '@nx/angular-rspack';

export default createConfig({
  options: {
    browser: './src/main.ts',
    server: './src/main.server.ts',
    ssrEntry: './src/server.ts',
    fileReplacements: [
      {
        replace: './src/environments/environment.ts',
        with: './src/environments/environment.prod.ts',
      },
    ],
  },
});
```

{% /tabitem %}
{% /tabs %}

---

## AngularRspackPluginOptions

The `AngularRspackPluginOptions` interface contains the following properties:

```ts
export interface AngularRspackPluginOptions extends PluginUnsupportedOptions {
  aot?: boolean;
  assets?: AssetElement[];
  browser?: string;
  commonChunk?: boolean;
  devServer?: DevServerOptions;
  extractLicenses?: boolean;
  fileReplacements?: FileReplacement[];
  index?: IndexElement;
  inlineStyleLanguage?: InlineStyleLanguage;
  namedChunks?: boolean;
  optimization?: boolean | OptimizationOptions;
  outputHashing?: OutputHashing;
  outputPath?:
    | string
    | (Required<Pick<OutputPath, 'base'>> & Partial<Omit<OutputPath, 'base'>>);
  polyfills?: string[];
  root?: string;
  scripts?: ScriptOrStyleEntry[];
  server?: string;
  skipTypeChecking?: boolean;
  sourceMap?: boolean | Partial<SourceMap>;
  ssr?:
    | boolean
    | {
        entry: string;
        experimentalPlatform?: 'node' | 'neutral';
      };
  stylePreprocessorOptions?: StylePreprocessorOptions;
  styles?: ScriptOrStyleEntry[];
  tsConfig?: string;
  useTsProjectReferences?: boolean;
  vendorChunk?: boolean;
}

export interface DevServerOptions extends DevServerUnsupportedOptions {
  port?: number;
  ssl?: boolean;
  sslKey?: string;
  sslCert?: string;
  proxyConfig?: string;
}

export interface OptimizationOptions {
  scripts?: boolean;
  styles?: boolean;
  fonts?: boolean;
}

export type OutputHashing = 'none' | 'all' | 'media' | 'bundles';
export type HashFormat = {
  chunk: string;
  extract: string;
  file: string;
  script: string;
};

export interface
OutputPath {
  base: string;
  browser: string;
  server: string;
  media: string;
}

export type AssetExpandedDefinition = {
  glob: string;
  input: string;
  ignore?: string[];
  output?: string;
};
export type AssetElement = AssetExpandedDefinition | string;
export type NormalizedAssetElement = AssetExpandedDefinition & {
  output: string;
};
export type ScriptOrStyleEntry =
  | string
  | {
      input: string;
      bundleName?: string;
      inject?: boolean;
    };
export type GlobalEntry = {
  name: string;
  files: string[];
  initial: boolean;
};
export type IndexExpandedDefinition = {
  input: string;
  output?: string;
  preloadInitial?: boolean;
};
export type IndexElement = IndexExpandedDefinition | string | false;
export type IndexHtmlTransform = (content: string) => Promise<string>;
export type NormalizedIndexElement =
  | (IndexExpandedDefinition & {
      insertionOrder: [string, boolean][];
      transformer: IndexHtmlTransform | undefined;
    })
  | false;

export interface SourceMap {
  scripts: boolean;
  styles: boolean;
  hidden: boolean;
  vendor: boolean;
}

export type InlineStyleExtension = 'css' | 'scss' | 'sass' | 'less';

export interface FileReplacement {
  replace: string;
  with: string;
}

export interface StylePreprocessorOptions {
  includePaths?: string[];
  sass?: Sass;
}

export interface Sass {
  fatalDeprecations?: DeprecationOrId[];
  futureDeprecations?: DeprecationOrId[];
  silenceDeprecations?: DeprecationOrId[];
}
```

---

### aot

`boolean` `default: true`

Enables or disables Ahead-of-Time compilation for Angular applications.

### assets

`AssetElement[]`

Array of static assets to include in the build output. Can be either a string path or an object with glob patterns.

### browser

`string`

The entry point file for the browser bundle (e.g., 'src/main.ts').

### commonChunk

`boolean` `default: true`

Controls whether to create a separate bundle containing shared code between multiple chunks.

### devServer

`DevServerOptions`

Configuration options for the development server, including port, SSL settings, and proxy configuration.
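As a sketch, the `devServer` options just described could be provided through `createConfig` like this (the port, certificate paths, and proxy file are hypothetical values, not defaults):

```typescript
// rspack.config.ts (illustrative sketch; values are hypothetical)
import { createConfig } from '@nx/angular-rspack';

export default createConfig({
  options: {
    browser: './src/main.ts',
    devServer: {
      port: 8080,
      ssl: true,
      sslKey: './secrets/server.key',
      sslCert: './secrets/server.crt',
      proxyConfig: './proxy.conf.json',
    },
  },
});
```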
### extractLicenses

`boolean` `default: false`

When true, extracts all license information from dependencies into a separate file.

### fileReplacements

`FileReplacement[]`

List of files to be replaced during the build process, typically used for environment-specific configurations.

### index

`IndexElement`

Configuration for the index.html file. Can be a string path, an object with specific settings, or false to disable.

### inlineStyleLanguage

`InlineStyleLanguage`

Specifies the default language to use for inline styles in components.

### namedChunks

`boolean` `default: true`

When true, generates named chunks instead of numerical IDs.

### optimization

`boolean | OptimizationOptions` `default: true`

Controls build optimization settings for scripts, styles, and fonts.

### outputHashing

`OutputHashing` `default: 'none'`

Defines the hashing strategy for output files. Can be 'none', 'all', 'media', or 'bundles'.

### outputPath

`string | OutputPath`

Specifies the output directory for built files. Can be a string or an object defining paths for browser, server, and media files.

### polyfills

`string[]`

Array of polyfill files to include in the build.

### root

`string`

The root directory of the project where the rspack.config.ts file is located.

### scripts

`ScriptOrStyleEntry[]`

Array of global scripts to include in the build, with options for bundling and injection.

### server

`string`

The entry point file for the server bundle in SSR applications.

### skipTypeChecking

`boolean` `default: false`

When true, skips TypeScript type checking during the build process.

### sourceMap

`boolean | Partial<SourceMap>` `default: true`

Controls generation of source maps for debugging. Can be a boolean or a detailed configuration object.

### ssr

`boolean | { entry: string; experimentalPlatform?: 'node' | 'neutral' }`

Configuration for Server-Side Rendering. Can be a boolean or an object with specific SSR settings.
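To make the two `ssr` shapes concrete, here is a sketch of the boolean and object forms from the interface above. Note that the examples earlier on this page use an `ssrEntry` option instead; which one applies may depend on your plugin version, so treat this as illustrative:

```typescript
// rspack.config.ts (illustrative sketch of the `ssr` option shapes)
import { createConfig } from '@nx/angular-rspack';

export default createConfig({
  options: {
    browser: './src/main.ts',
    server: './src/main.server.ts',
    // object form: point at the server entry and optionally pick the platform
    ssr: {
      entry: './src/server.ts',
      experimentalPlatform: 'node',
    },
    // boolean form: `ssr: true` enables SSR with default settings
  },
});
```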
### stylePreprocessorOptions

`StylePreprocessorOptions`

Options for style preprocessors, including include paths and Sass-specific configurations.

### styles

`ScriptOrStyleEntry[]`

Array of global styles to include in the build, with options for bundling and injection.

### tsConfig

`string`

Path to the TypeScript configuration file.

### useTsProjectReferences

`boolean` `default: false`

Enables usage of TypeScript project references.

### vendorChunk

`boolean` `default: true`

When true, creates a separate bundle for vendor (third-party) code.

---

## createServer - @nx/angular-rspack/ssr

```ts
import { createServer } from '@nx/angular-rspack/ssr';
```

The `createServer` function is used to set up the Angular `CommonEngine` using an `express` server. It takes the bootstrap function as an argument, which is the function that bootstraps the Angular server application, usually exported from `main.server.ts`. It returns an `RspackAngularServer`, which contains the server instance to allow further modifications, as well as a `listen` method to start the server.
```ts
function createServer(
  bootstrap: any,
  opts?: RspackAngularServerOptions
): RspackAngularServer;
```

---

## Examples

{% tabs %}
{% tabitem label="Standard Express Server Usage" %}

The following example shows how to create a standard express server:

```ts
// myapp/src/server.ts
import { createServer } from '@nx/angular-rspack/ssr';
import bootstrap from './main.server';

const server = createServer(bootstrap);

/** Add your custom server logic here
 *
 * For example, you can add a custom static file server:
 *
 * server.app.use('/static', express.static(staticFolder));
 *
 * Or add additional api routes:
 *
 * server.app.get('/api/hello', (req, res) => {
 *   res.send('Hello World!');
 * });
 *
 * Or add additional middleware:
 *
 * server.app.use((req, res, next) => {
 *   res.send('Hello World!');
 * });
 */

server.listen();
```

{% /tabitem %}
{% /tabs %}

---

## RspackAngularServer

```ts
export interface RspackAngularServer {
  app: express.Express;
  listen: (port?: number) => void;
}
```

---

### `app`

`express.Express`

The express application instance.

### `listen`

`(port?: number) => void`

Starts the express application on the specified port. If no port is provided, the default port (4000) is used.

---

## RspackAngularServerOptions

```ts
export interface RspackAngularServerOptions {
  serverDistFolder?: string;
  browserDistFolder?: string;
  indexHtml?: string;
}
```

---

### `serverDistFolder`

`string`

The folder where the server bundle is located. Defaults to the `dist/server` folder.

### `browserDistFolder`

`string`

The folder where the browser bundle is located. Defaults to the `dist/browser` folder.

### `indexHtml`

`string`

The path to the index.html file. Defaults to the `index.html` file in the `browserDistFolder`.

---

## Guides

{% index_page_cards path="technologies/angular/angular-rspack/guides" /%}

---

## Getting Started - Angular Rspack

This guide walks you through setting up a new Angular Rspack application in Nx.
By the end of this guide, you will have a new Angular application with Rspack configured. There are two paths you can follow to get started with Angular Rspack in Nx: 1. Create a new Nx Workspace with preconfigured Angular Rspack application 2. Add an existing Angular Rspack application to an Nx Workspace ## Create a new Nx Workspace with preconfigured Angular Rspack application To create a new Nx Workspace with a preconfigured Angular Rspack application, run the following command: {% tabs %} {% tabitem label="Client-Side Rendering (CSR)" %} ```shell frame="terminal" title="npx create-nx-workspace myorg" meta="{ranges:''}" // filename: ~/ NX Let's create a new workspace [https://nx.dev/getting-started/intro] ✔ Which stack do you want to use? · angular ✔ Integrated monorepo, or standalone project? · integrated ✔ Application name · myorg ✔ Which bundler would you like to use? · rspack ✔ Default stylesheet format · css ✔ Do you want to enable Server-Side Rendering (SSR)? · No ✔ Which unit test runner would you like to use? · vitest ✔ Test runner to use for end to end (E2E) tests · playwright NX Creating your v20.8.0 workspace. ``` {% /tabitem %} {% tabitem label="Server-Side Rendering (SSR)" %} ```shell frame="terminal" title="npx create-nx-workspace myorg" meta="{ranges:''}" // filename: ~/ NX Let's create a new workspace [https://nx.dev/getting-started/intro] ✔ Which stack do you want to use? · angular ✔ Integrated monorepo, or standalone project? · integrated ✔ Application name · myorg ✔ Which bundler would you like to use? · rspack ✔ Default stylesheet format · css ✔ Do you want to enable Server-Side Rendering (SSR)? · Yes ✔ Which unit test runner would you like to use? · vitest ✔ Test runner to use for end to end (E2E) tests · playwright NX Creating your v20.8.0 workspace. ``` {% /tabitem %} {% /tabs %} This command will create a new Nx Workspace with an Angular Rspack application in the `myorg` directory. 
## Add an existing Angular Rspack application to an Nx Workspace

To add an existing Angular Rspack application to an Nx Workspace, run the following command:

{% aside type="tip" title="Minimum Nx Version" %}
The minimum Nx version required to add an Angular Rspack application to an Nx Workspace is `20.6.1`. If you are using an older version of Nx, run `npx nx migrate` to migrate your workspace to the latest version. You can learn more about Nx migrations [here](/docs/features/automate-updating-dependencies).
{% /aside %}

{% tabs %}
{% tabitem label="Client-Side Rendering (CSR)" %}

```shell
npx nx g @nx/angular:app myapp --bundler=rspack
```

{% /tabitem %}
{% tabitem label="Server-Side Rendering (SSR)" %}

```shell
npx nx g @nx/angular:app myapp --bundler=rspack --ssr
```

{% /tabitem %}
{% /tabs %}

This command will add an Angular Rspack application to the `myapp` directory.

## Working with the Angular Rspack application

After generating the application, you will notice the following:

- A `rspack.config.ts` file in the root of the project
- The `project.json` file does not have a `targets.build` or `targets.serve` target

The `rspack.config.ts` file is the configuration file for Rspack. It contains the configuration options for Rspack, and for Angular Rspack a helper [createConfig](/docs/technologies/angular/angular-rspack/create-config) function maps the options you would expect to set in the `project.json` or `angular.json` files to the Rspack configuration.

The `project.json` file does not contain the `targets.build` or `targets.serve` targets because Angular Rspack uses Nx [Inferred Tasks](/docs/concepts/inferred-tasks) to build and serve your application with Rspack.

## Building your Angular Rspack application

To build your Angular Rspack application, run the following command:

```shell
npx nx build myapp
```

This command will build your Angular Rspack application and output the results to the `dist/browser` directory.
If you are using the Angular Rspack application with Server-Side Rendering (SSR), the `dist/server` directory will contain the server files. The same `nx build` command will build both the client and server files.

## Serving your Angular Rspack application

To serve your Angular Rspack application, run the following command:

```shell
npx nx serve myapp
```

This command will serve your Angular Rspack application.

For Client-Side Rendering (CSR) applications, the default port is `4200`. You can visit the application by navigating to `http://localhost:4200` in your browser.

For Server-Side Rendering (SSR) applications, the default port is `4000`. You can visit the application by navigating to `http://localhost:4000` in your browser.

HMR is enabled by default, so you can make changes to your application and see them in real time.

---

## Handling Configurations

Configurations are handled slightly differently compared to the Angular CLI. Rsbuild and Rspack use `mode` instead of configurations to handle different environments by default. This means a different mechanism is needed to support the multiple build configurations you may have, matching the behavior of the Angular CLI's configuration handling.

The [`createConfig`](/docs/technologies/angular/angular-rspack/create-config) function helps you to handle this. It uses the `NGRS_CONFIG` environment variable to determine which configuration to use. The default configuration is `production`.

{% aside type="tip" title="Roll your own" %}
You can handle configurations yourself if you prefer; all you need is some manner of detecting the environment and then merging the options passed to `createConfig`.
{% /aside %}

## Using `createConfig` for configurations

The `createConfig` function takes two arguments: the first is the default options, and the second is an object of configurations.
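Conceptually, the selection-and-merge behavior can be sketched in plain TypeScript. This is an illustrative simplification, not the actual `createConfig` implementation; the `resolveOptions` helper and the option shapes are hypothetical:

```typescript
// Illustrative sketch of NGRS_CONFIG-style configuration selection.
// `resolveOptions` and the option shapes below are hypothetical simplifications.
interface BuildOptions {
  browser?: string;
  fileReplacements?: { replace: string; with: string }[];
}

function resolveOptions(
  defaults: BuildOptions,
  configurations: Record<string, BuildOptions>,
  // in createConfig, this name comes from the NGRS_CONFIG environment
  // variable and defaults to 'production'
  configName = 'production'
): BuildOptions {
  // fall back to the defaults when the named configuration does not exist
  const overrides = configurations[configName] ?? {};
  return { ...defaults, ...overrides };
}

const options = resolveOptions(
  { browser: './src/main.ts' },
  {
    production: {
      fileReplacements: [
        {
          replace: './src/environments/environment.ts',
          with: './src/environments/environment.prod.ts',
        },
      ],
    },
  },
  'production'
);
```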
The configurations object is keyed by the name of the configuration, and the value is an object with the `options` and the `rspackConfigOverrides` (or `rsbuildConfigOverrides`) to be used for that configuration.

```ts
// myapp/rspack.config.ts
import { createConfig } from '@nx/angular-rspack';

export default createConfig(
  {
    options: {
      browser: './src/main.ts',
      server: './src/main.server.ts',
      ssrEntry: './src/server.ts',
    },
    rspackConfigOverrides: {
      mode: 'development',
    },
  },
  {
    production: {
      options: {
        fileReplacements: [
          {
            replace: './src/environments/environment.ts',
            with: './src/environments/environment.prod.ts',
          },
        ],
      },
    },
  }
);
```

The above example shows how to handle the `production` configuration. The `options` are the same as the default options but with the `fileReplacements` property added, and the `rspackConfigOverrides` are the same as the default `rspackConfigOverrides`.

The `NGRS_CONFIG` environment variable is used to determine which configuration to use. If the environment variable is not set, the `production` configuration is used by default. If a production configuration is not provided, the default configuration is used.

To run the build with the `production` configuration:

```shell
NGRS_CONFIG=production npx nx build myapp
```

---

## Build-time Internationalization (i18n) with Angular Rspack

Angular Rspack supports Angular [build-time i18n](https://angular.dev/guide/i18n) out of the box. This guide will walk you through how to use it.

You can follow the steps completely, just make sure to place any changes to `angular.json` in your project's `project.json` file. Some of these changes may also need to be made to the `rspack.config` file. The steps below indicate where to make these changes.

The process of building an Angular Rspack application with i18n is similar to building an Angular application with i18n and reuses most of the same steps and configuration.

## Prerequisites

- `@angular/localize` must be installed in your project.
- You must have an `i18n` configuration in your `project.json` file.

It is assumed you have an `i18n` property in your `project.json` file that looks like this:

```json meta="{3-10}"
{
  "name": "my-app",
  "i18n": {
    "sourceLocale": "en-GB",
    "locales": {
      "fr": {
        "translation": "src/locale/messages.fr.xlf"
      }
    }
  },
  "targets": {
    "extract-i18n": {}
  }
}
```

{% aside type="note" title="Extracting i18n messages" %}
The `extract-i18n` target found in Angular projects will still be used to extract the i18n messages into XLIFF (or chosen format) files. You simply need to run `nx extract-i18n my-app` to extract the messages.
{% /aside %}

## Step 1: Configure the Rspack Configuration

To enable i18n, you need to add the following configuration to your `rspack.config` file:

```js meta="{7,15}"
export default createConfig(
  {
    options: {
      root: __dirname,
      polyfills: [
        'zone.js',
        '@angular/localize/init',
      ],
      ...,
    },
  },
  {
    production: {
      options: {
        localize: true,
      },
    },
  }
);
```

## Step 2: Run the build

After configuring the Rspack configuration, you can run the build with the following command:

```shell
npx nx build my-app
```

It will output bundles in the `dist` directory with the following structure:

{% filetree %}
- dist
  - browser
    - [localeCode]
      - main.js
      - main.js.map
      - index.html
      - styles.css
{% /filetree %}

---

## Migrate Angular with Webpack to Rspack

Until recently, Angular used Webpack to build applications. Based on this, some third-party builders emerged to allow users to use a custom Webpack configuration when building their Angular applications. When Angular switched to Esbuild, it became difficult for applications that used custom Webpack configurations to migrate to the new build system.

This guide will help you migrate from Angular with Webpack to Rspack. By migrating to Rspack, you can gain an immediate build performance benefit while maintaining your existing Webpack build toolchain.
## Step 1: Initialize Nx

{% aside type="caution" title="Optional Step" %}
If you are already using Nx with version 20.6.0 or greater, you can skip this step.
{% /aside %}

Nx provides a generator to convert existing Angular Webpack projects to use `@nx/angular-rspack`. This generator is available from Nx `20.6.0`.

At the root of your project, run the following command:

```shell
npx nx@latest init
```

## Step 2: Run the convert-to-rspack generator

With Nx now initialized, the `@nx/angular` plugin will have been installed, providing access to a series of generators and executors that aid Angular development. In particular, the `@nx/angular:convert-to-rspack` generator will convert an Angular project to use Rspack. To use it, run the following command:

```shell
npx nx g convert-to-rspack
```

## Step 3: Run your application

After running the generator, you can run tasks such as `build` and `serve` to build and serve your application.

```shell
npx nx build yourProjectName
npx nx serve yourProjectName
```

---

## Angular Rspack Plugin for Nx

## Requirements

The `@angular-rspack/nx` plugin supports the following package versions.

| Package         | Supported Versions                                                                        |
| --------------- | ----------------------------------------------------------------------------------------- |
| `@rspack/core`  | >=1.3.5 <1.7.0                                                                            |
| `@angular/core` | See [Angular version matrix](/docs/technologies/angular/guides/angular-nx-version-matrix) |

[Nx generators](/docs/features/generate-code) install the latest supported versions automatically when scaffolding new projects.

Angular compilation has always been a black box hidden behind layers of abstraction and configuration, exposed via the Angular Builders from the Angular CLI packages. Originally, the underlying tool that bundled the Angular application was [Webpack](https://webpack.js.org). This was great, as teams were able to extend their builds by leveraging the vast Webpack ecosystem and the plugins that are available.
Over time, it became clear that the inherent slowness of Webpack build speeds was becoming more and more of an issue for Angular developers.

The Angular Team decided to address this build speed issue by building out a new build pipeline that used [Esbuild](https://esbuild.github.io/). This succeeded in reducing the build times for Angular applications; however, it made one crucial mistake. It left behind the existing Angular applications that relied on the Webpack ecosystem, with either a difficult migration path or none at all.

---

## Rspack

The solution to this problem was to create a new build pipeline that could use the existing Webpack ecosystem and plugins while also providing faster builds for Angular applications. This is where [Rspack](https://rspack.dev) comes into play.

Rspack is a high-performance JavaScript bundler written in Rust. It offers strong compatibility with the Webpack ecosystem, and can serve as a near drop-in replacement for Webpack with significantly faster build speeds. Because it supports the existing Webpack ecosystem, it provides an answer for teams that maintain Angular applications using Webpack and want to migrate to a faster build pipeline. This makes it a great solution for teams that want to migrate to a faster build pipeline, but still want the ability to extend their builds and use [Module Federation](https://module-federation.io).

{% aside type="caution" title="Angular Rspack Status" %}
Please note that Angular Rspack support is still experimental and is not yet considered production ready. We are actively working on improving the experience and stability of Angular Rspack, and we will continue to update this page as we make progress.
{% /aside %}

## Known limitations and missing features

The following are known limitations and missing features of Angular Rspack:

- Server Routing is not supported - still experimental in Angular currently.
- App Engine APIs are not supported - still experimental in Angular currently.
If you have any other missing features or limitations, please [let us know](https://github.com/nrwl/angular-rspack/issues/new).

## Benchmarks

![Benchmarks](../../../../../assets/guides/angular-rspack/bundler-build-times.png)

Below is a table of benchmarks for different bundlers, tested on an application with ~800 lazy-loaded routes and ~10 components per route, totaling ~8000 components.

**System Info**

- MacBook Pro (macOS 15.3.1)
- Processor: M2 Max
- Memory: 96 GB

| Build/Bundler | Prod SSR (s) | Prod (s) | Dev (s) |
| ------------- | ------------ | -------- | ------- |
| Webpack       | 198.614      | 154.339  | 159.436 |
| esbuild       | 23.701       | 19.569   | 15.358  |
| Rspack        | 30.589       | 19.269   | 19.940  |

You can find the benchmarks and run them yourself: [https://github.com/nrwl/ng-bundler-benchmark](https://github.com/nrwl/ng-bundler-benchmark)

---

## @nx/angular Executors

The @nx/angular plugin provides various executors to help you create and configure Angular projects within your Nx workspace. Below is a complete reference for all available executors and their options.

### `application`

Builds an Angular application using [esbuild](https://esbuild.github.io/) with integrated SSR and prerendering capabilities.

This executor is a drop-in replacement for the `@angular-devkit/build-angular:application` builder provided by the Angular CLI. It builds an Angular application using [esbuild](https://esbuild.github.io/) with integrated SSR and prerendering capabilities.

In addition to the features provided by the Angular CLI builder, the `@nx/angular:application` executor also supports the following:

- Providing esbuild plugins
- Providing a function to transform the application's `index.html` file
- Incremental builds

:::tip[Dev Server]
The [`@nx/angular:dev-server` executor](/nx-api/angular/executors/dev-server) is required to serve your application when using the `@nx/angular:application` executor to build it.
It is a drop-in replacement for the Angular CLI's `@angular-devkit/build-angular:dev-server` builder and ensures the application is correctly served with Vite when using the `@nx/angular:application` executor.
:::

### Examples

###### Providing esbuild plugins

The executor accepts a `plugins` option that allows you to provide esbuild plugins that will be used when building your application. It allows providing a path to a plugin file or an object with a `path` and `options` property to provide options to the plugin.

```json title="apps/my-app/project.json" {8-16}
{
  ...
  "targets": {
    "build": {
      "executor": "@nx/angular:application",
      "options": {
        ...
        "plugins": [
          "apps/my-app/plugins/plugin1.js",
          {
            "path": "apps/my-app/plugins/plugin2.js",
            "options": {
              "someOption": "some value"
            }
          }
        ]
      }
    }
    ...
  }
}
```

```js title="apps/my-app/plugins/plugin1.js"
const plugin1 = {
  name: 'plugin1',
  setup(build) {
    const options = build.initialOptions;
    options.define.PLUGIN1_TEXT = '"Value was provided at build time"';
  },
};

module.exports = plugin1;
```

```js title="apps/my-app/plugins/plugin2.js"
function plugin2({ someOption }) {
  return {
    name: 'plugin2',
    setup(build) {
      const options = build.initialOptions;
      options.define.PLUGIN2_TEXT = JSON.stringify(someOption);
    },
  };
}

module.exports = plugin2;
```

Additionally, we need to inform TypeScript of the defined variables to prevent type-checking errors during the build. We can achieve this by creating or updating a type definition file included in the TypeScript build process (e.g. `src/types.d.ts`) with the following content:

```ts title="apps/my-app/src/types.d.ts"
declare const PLUGIN1_TEXT: string;
declare const PLUGIN2_TEXT: string;
```

###### Transforming the 'index.html' file

The executor accepts an `indexHtmlTransformer` option to provide a path to a file with a default export for a function that receives the application's `index.html` file contents and outputs the updated contents.

```json title="apps/my-app/project.json" {8}
{
  ...
"targets": { "build": { "executor": "@nx/angular:application", "options": { ... "indexHtmlTransformer": "apps/my-app/index-html.transformer.ts" } } ... } } ``` ```ts title="apps/my-app/index-html.transformer.ts" export default function (indexContent: string) { return indexContent.replace( 'my-app', 'my-app (transformed)' ); } ``` #### Options | Option | Type | Description | Default | |--------|------|-------------|---------| | `outputPath` | string [**required**] | Specify the output path relative to workspace root. | | | `tsConfig` | string [**required**] | The full path for the TypeScript configuration file, relative to the current workspace. | | | `allowedCommonJsDependencies` | array | A list of CommonJS or AMD packages that are allowed to be used without a build time warning. Use `'*'` to allow all. | `[]` | | `aot` | boolean | Build using Ahead of Time compilation. | `true` | | `appShell` | boolean | Generates an application shell during build time. | | | `assets` | array | List of static application assets. | `[]` | | `baseHref` | string | Base url for the application being built. | | | `browser` | string | The full path for the browser entry point to the application, relative to the current workspace. | | | `budgets` | array | Budget thresholds to ensure parts of your application stay within boundaries which you set. | `[]` | | `buildLibsFromSource` | boolean | Read buildable libraries from source instead of building them separately. | `true` | | `clearScreen` | boolean | Automatically clear the terminal screen during rebuilds. | `false` | | `conditions` | array | Custom package resolution conditions used to resolve conditional exports/imports. Defaults to ['module', 'development'/'production']. The following special conditions are always present if the requirements are satisfied: 'default', 'import', 'require', 'browser', 'node'. _Note: this is only supported in Angular versions >= 20.0.0_. 
| | | `crossOrigin` | string | Define the crossorigin attribute setting of elements that provide CORS support. | `"none"` | | `define` | object | Defines global identifiers that will be replaced with a specified constant value when found in any JavaScript or TypeScript code including libraries. The value will be used directly. String values must be put in quotes. Identifiers within Angular metadata such as Component Decorators will not be replaced. | | | `deleteOutputPath` | boolean | Delete the output path before building. | `true` | | `deployUrl` | string | Customize the base path for the URLs of resources in 'index.html' and component stylesheets. This option is only necessary for specific deployment scenarios, such as with Angular Elements or when utilizing different CDN locations. | | | `externalDependencies` | array | Exclude the listed external dependencies from being bundled into the bundle. Instead, the created bundle relies on these dependencies to be available during runtime. | `[]` | | `extractLicenses` | boolean | Extract all licenses in a separate file. | `true` | | `fileReplacements` | array | Replace compilation source files with other compilation source files in the build. | `[]` | | `i18nDuplicateTranslation` | string | How to handle duplicate translations for i18n. | `"warning"` | | `i18nMissingTranslation` | string | How to handle missing translations for i18n. | `"warning"` | | `index` | string | Configures the generation of the application's HTML index. | | | `indexHtmlTransformer` | string | Path to a file exposing a default function to transform the `index.html` file. | | | `inlineStyleLanguage` | string | The stylesheet language to use for the application's inline component styles. | `"css"` | | `loader` | object | Defines the type of loader to use with a specified file extension when used with a JavaScript `import`. 
`text` inlines the content as a string; `binary` inlines the content as a Uint8Array; `file` emits the file and provides the runtime location of the file; `dataurl` inlines the content as a data URL with best guess of MIME type; `base64` inlines the content as a Base64-encoded string; `empty` considers the content to be empty and does not include it in bundles. _Note: `dataurl` and `base64` are only supported in Angular versions >= 20.1.0_. | | | `localize` | string | Translate the bundles in one or more locales. | | | `namedChunks` | boolean | Use file name for lazy loaded chunks. | `false` | | `optimization` | string | Enables optimization of the build output, including minification of scripts and styles, tree-shaking, dead-code elimination, and inlining of critical CSS and fonts. For more information, see https://angular.dev/reference/configs/workspace-config#optimization-configuration. | `true` | | `outputHashing` | string | Define the output filename cache-busting hashing mode. | `"none"` | | `outputMode` | string | Defines the build output target. 'static': Generates a static site for deployment on any static hosting service. 'server': Produces an application designed for deployment on a server that supports server-side rendering (SSR). | | | `plugins` | array | A list of ESBuild plugins. | | | `poll` | number | Enable and define the file watching poll time period in milliseconds. | | | `polyfills` | array | A list of polyfills to include in the build. Can be a full path for a file, relative to the current workspace or module specifier. Example: 'zone.js'. | `[]` | | `prerender` | string | Prerender (SSG) pages of your application during build time. | | | `preserveSymlinks` | boolean | Do not use the real path when resolving modules. If unset, it will default to `true` if the NodeJS option `--preserve-symlinks` is set. | | | `progress` | boolean | Log progress to the console while building. 
| `true` | | `scripts` | array | Global scripts to be included in the build. | `[]` | | `security` | object | Security features to protect against XSS and other common attacks | | | `server` | string | The full path for the server entry point to the application, relative to the current workspace. | | | `serviceWorker` | string | Generates a service worker configuration. | `false` | | `sourceMap` | string | Output source maps for scripts and styles. For more information, see https://angular.dev/reference/configs/workspace-config#source-map-configuration. | `false` | | `ssr` | string | Server side render (SSR) pages of your application during runtime. | `false` | | `statsJson` | boolean | Generates a 'stats.json' file which can be analyzed with https://esbuild.github.io/analyze/. | `false` | | `stylePreprocessorOptions` | object | Options to pass to style preprocessors. | | | `styles` | array | Global styles to be included in the build. | `[]` | | `subresourceIntegrity` | boolean | Enables the use of subresource integrity validation. | `false` | | `verbose` | boolean | Adds more details to output logging. | `false` | | `watch` | boolean | Run build when files change. | `false` | | `webWorkerTsConfig` | string | TypeScript configuration for Web Worker modules. | | ### `browser-esbuild` Builds an Angular application using [esbuild](https://esbuild.github.io/). This executor is a drop-in replacement for the `@angular-devkit/build-angular:browser-esbuild` builder provided by the Angular CLI. It builds an Angular application using esbuild. In addition to the features provided by the Angular CLI builder, the `@nx/angular:browser-esbuild` executor also supports the following: - Providing esbuild plugins - Incremental builds :::tip[Dev Server] The [`@nx/angular:dev-server` executor](/nx-api/angular/executors/dev-server) is required to serve your application when using the `@nx/angular:browser-esbuild` to build it. 
It is a drop-in replacement for the Angular CLI's `@angular-devkit/build-angular:dev-server` builder and ensures the application is correctly served with Vite when using the `@nx/angular:browser-esbuild` executor. ::: ### Examples ###### Providing esbuild plugins The executor accepts a `plugins` option that allows you to provide esbuild plugins that will be used when building your application. It allows providing a path to a plugin file or an object with a `path` and `options` property to provide options to the plugin. ```json title="apps/my-app/project.json" {8-16} { ... "targets": { "build": { "executor": "@nx/angular:browser-esbuild", "options": { ... "plugins": [ "apps/my-app/plugins/plugin1.js", { "path": "apps/my-app/plugins/plugin2.js", "options": { "someOption": "some value" } } ] } } ... } } ``` ```js title="apps/my-app/plugins/plugin1.js" const plugin1 = { name: 'plugin1', setup(build) { const options = build.initialOptions; options.define.PLUGIN1_TEXT = '"Value was provided at build time"'; }, }; module.exports = plugin1; ``` ```js title="apps/my-app/plugins/plugin2.js" function plugin2({ someOption }) { return { name: 'plugin2', setup(build) { const options = build.initialOptions; options.define.PLUGIN2_TEXT = JSON.stringify(someOption); }, }; } module.exports = plugin2; ``` Additionally, we need to inform TypeScript of the defined variables to prevent type-checking errors during the build. We can achieve this by creating or updating a type definition file included in the TypeScript build process (e.g. `src/types.d.ts`) with the following content: ```ts title="apps/my-app/src/types.d.ts" declare const PLUGIN1_TEXT: string; declare const PLUGIN2_TEXT: string; ``` #### Options | Option | Type | Description | Default | |--------|------|-------------|---------| | `index` | string [**required**] | Configures the generation of the application's HTML index. 
| | | `main` | string [**required**] | The full path for the main entry point to the app, relative to the current workspace. | | | `outputPath` | string [**required**] | The full path for the new output directory, relative to the current workspace. | | | `tsConfig` | string [**required**] | The full path for the TypeScript configuration file, relative to the current workspace. | | | `allowedCommonJsDependencies` | array | A list of CommonJS or AMD packages that are allowed to be used without a build time warning. Use `'*'` to allow all. | `[]` | | `aot` | boolean | Build using Ahead of Time compilation. | `true` | | `assets` | array | List of static application assets. | `[]` | | `baseHref` | string | Base url for the application being built. | | | `budgets` | array | Budget thresholds to ensure parts of your application stay within boundaries which you set. | `[]` | | `buildLibsFromSource` | boolean | Read buildable libraries from source instead of building them separately. | `true` | | `buildOptimizer` | boolean | Enables advanced build optimizations when using the 'aot' option. | `true` | | `commonChunk` | boolean | Generate a separate bundle containing code used across multiple bundles. | `true` | | `crossOrigin` | string | Define the crossorigin attribute setting of elements that provide CORS support. | `"none"` | | `deleteOutputPath` | boolean | Delete the output path before building. | `true` | | `deployUrl` | string | Customize the base path for the URLs of resources in 'index.html' and component stylesheets. This option is only necessary for specific deployment scenarios, such as with Angular Elements or when utilizing different CDN locations. | | | `externalDependencies` | array | Exclude the listed external dependencies from being bundled into the bundle. Instead, the created bundle relies on these dependencies to be available during runtime. | `[]` | | `extractLicenses` | boolean | Extract all licenses in a separate file. 
| `true` | | `fileReplacements` | array | Replace compilation source files with other compilation source files in the build. | `[]` | | `i18nDuplicateTranslation` | string | How to handle duplicate translations for i18n. | `"warning"` | | `i18nMissingTranslation` | string | How to handle missing translations for i18n. | `"warning"` | | `inlineStyleLanguage` | string | The stylesheet language to use for the application's inline component styles. | `"css"` | | `localize` | string | Translate the bundles in one or more locales. | | | `namedChunks` | boolean | Use file name for lazy loaded chunks. | `false` | | `ngswConfigPath` | string | Path to ngsw-config.json. | | | `optimization` | string | Enables optimization of the build output, including minification of scripts and styles, tree-shaking, dead-code elimination, and inlining of critical CSS and fonts. For more information, see https://angular.dev/reference/configs/workspace-config#optimization-configuration. | `true` | | `outputHashing` | string | Define the output filename cache-busting hashing mode. | `"none"` | | `plugins` | array | A list of ESBuild plugins. | | | `poll` | number | Enable and define the file watching poll time period in milliseconds. | | | `polyfills` | string | Polyfills to be included in the build. | | | `preserveSymlinks` | boolean | Do not use the real path when resolving modules. If unset, it will default to `true` if the NodeJS option `--preserve-symlinks` is set. | | | `progress` | boolean | Log progress to the console while building. | `true` | | `resourcesOutputPath` | string | The path where style resources will be placed, relative to outputPath. | | | `scripts` | array | Global scripts to be included in the build. | `[]` | | `serviceWorker` | boolean | Generates a service worker config for production builds. | `false` | | `sourceMap` | string | Output source maps for scripts and styles. 
For more information, see https://angular.dev/reference/configs/workspace-config#source-map-configuration. | `false` | | `statsJson` | boolean | Generates a 'stats.json' file which can be analyzed using tools such as 'webpack-bundle-analyzer'. | `false` | | `stylePreprocessorOptions` | object | Options to pass to style preprocessors. | | | `styles` | array | Global styles to be included in the build. | `[]` | | `subresourceIntegrity` | boolean | Enables the use of subresource integrity validation. | `false` | | `vendorChunk` | boolean | Generate a separate bundle containing only vendor libraries. This option should only be used for development to reduce the incremental compilation time. | `false` | | `verbose` | boolean | Adds more details to output logging. | `false` | | `watch` | boolean | Run build when files change. | `false` | | `webWorkerTsConfig` | string | TypeScript configuration for Web Worker modules. | | ### `delegate-build` Delegates the build to a different target while supporting incremental builds. ### Examples ###### Basic Usage Delegate the build of the project to a different target. ```json { "prod-build": { "executor": "@nx/angular:delegate-build", "options": { "buildTarget": "app:build:production", "outputPath": "dist/apps/app/production", "tsConfig": "apps/app/tsconfig.json", "watch": false } } } ``` ###### Watch for build changes Delegate the build of the project to a different target. ```json { "prod-build": { "executor": "@nx/angular:delegate-build", "options": { "buildTarget": "app:build:production", "outputPath": "dist/apps/app/production", "tsConfig": "apps/app/tsconfig.json", "watch": true } } } ``` #### Options | Option | Type | Description | Default | |--------|------|-------------|---------| | `buildTarget` | string [**required**] | Build target used for building the application after its dependencies have been built. | | | `outputPath` | string [**required**] | The full path for the output directory, relative to the workspace root. 
| | | `tsConfig` | string [**required**] | The full path for the TypeScript configuration file, relative to the workspace root. | | | `watch` | boolean | Whether to run a build when any file changes. | `false` | ### `extract-i18n` Extracts i18n messages from source code. #### Options | Option | Type | Description | Default | |--------|------|-------------|---------| | `buildTarget` | string [**required**] | A builder target to extract i18n messages in the format of `project:target[:configuration]`. You can also pass in more than one configuration name as a comma-separated list. Example: `project:target:production,staging`. | | | `format` | string | Output format for the generated file. | `"xlf"` | | `i18nDuplicateTranslation` | string | How to handle duplicate translations. _Note: this is only available in Angular 20.0.0 and above._ | | | `outFile` | string | Name of the file to output. | | | `outputPath` | string | Path where output will be placed. | | | `progress` | boolean | Log progress to the console. | `true` | ### `module-federation-dev-server` Serves host [Module Federation](https://module-federation.io/) applications ([webpack](https://webpack.js.org/)-based), allowing you to specify which remote applications should be served with the host. ### Examples ###### Basic Usage The Module Federation Dev Server will serve a host application, find the remote applications associated with the host, and serve them statically as well. 
See an example setup below: ```json { "serve": { "executor": "@nx/angular:module-federation-dev-server", "configurations": { "production": { "buildTarget": "host:build:production" }, "development": { "buildTarget": "host:build:development" } }, "defaultConfiguration": "development", "options": { "port": 4200, "publicHost": "http://localhost:4200" } } } ``` ###### Serve host with remotes that can be live reloaded The Module Federation Dev Server will serve a host application, find the remote applications associated with the host, and serve a selected set of them with live reloading enabled. See an example setup below: ```json { "serve-with-hmr-remotes": { "executor": "@nx/angular:module-federation-dev-server", "configurations": { "production": { "buildTarget": "host:build:production" }, "development": { "buildTarget": "host:build:development" } }, "defaultConfiguration": "development", "options": { "port": 4200, "publicHost": "http://localhost:4200", "devRemotes": [ "remote1", { "remoteName": "remote2", "configuration": "development" } ] } } } ``` #### Options | Option | Type | Description | Default | |--------|------|-------------|---------| | `buildTarget` | string [**required**] | A build target to serve in the format of `project:target[:configuration]`. | | | `allowedHosts` | array | List of hosts that are allowed to access the dev server. | `[]` | | `buildLibsFromSource` | boolean | Read buildable libraries from source instead of building them separately. If not set, it will take the value specified in the `buildTarget` options, or it will default to `true` if it's also not set in the `buildTarget` options. | | | `devRemotes` | array | List of remote applications to run in development mode (i.e. using serve target). | | | `disableHostCheck` | boolean | Don't verify connected clients are part of allowed hosts. | `false` | | `headers` | object | Custom HTTP headers to be added to all responses. 
| | | `hmr` | boolean | Enable hot module replacement. | `false` | | `host` | string | Host to listen on. | `"localhost"` | | `isInitialHost` | boolean | Whether the host that is running this executor is the first in the project tree to do so. | `true` | | `liveReload` | boolean | Whether to reload the page on change, using live-reload. | `true` | | `open` | boolean | Opens the url in default browser. | `false` | | `parallel` | number | Max number of parallel processes for building static remotes | | | `pathToManifestFile` | string | Path to a Module Federation manifest file (e.g. `my/path/to/module-federation.manifest.json`) containing the dynamic remote applications relative to the workspace root. | | | `poll` | number | Enable and define the file watching poll time period in milliseconds. | | | `port` | number | Port to listen on. | `4200` | | `proxyConfig` | string | Proxy configuration file. For more information, see https://angular.dev/tools/cli/serve#proxying-to-a-backend-server. | | | `publicHost` | string | The URL that the browser client (or live-reload client, if enabled) should use to connect to the development server. Use for a complex dev server setup, such as one with reverse proxies. | | | `servePath` | string | The pathname where the app will be served. | | | `skipRemotes` | array | List of remote applications to not automatically serve, either statically or in development mode. This will not remove the remotes from the `module-federation.config` file, and therefore the application may still try to fetch these remotes. This option is useful if you have other means for serving the `remote` application(s). **NOTE:** Remotes that are not in the workspace will be skipped automatically. | | | `ssl` | boolean | Serve using HTTPS. | `false` | | `sslCert` | string | SSL certificate to use for serving HTTPS. | | | `sslKey` | string | SSL key to use for serving HTTPS. 
| | | `static` | boolean | Whether to use a static file server instead of the webpack-dev-server. This should be used for remote applications that are also host applications. | | | `staticRemotesPort` | number | The port at which to serve the file-server for the static remotes. | | | `verbose` | boolean | Adds more details to output logging. | | | `watch` | boolean | Rebuild on change. | `true` | ### `module-federation-dev-ssr` The `module-federation-dev-ssr` executor is reserved exclusively for use with host SSR Module Federation applications. It allows the user to specify which remote applications should be served with the host. #### Options | Option | Type | Description | Default | |--------|------|-------------|---------| | `browserTarget` | string [**required**] | Browser target to build. | | | `serverTarget` | string [**required**] | Server target to build. | | | `devRemotes` | array | List of remote applications to run in development mode (i.e. using serve target). | | | `host` | string | Host to listen on. | `"localhost"` | | `inspect` | boolean | Launch the development server in inspector mode and listen on address and port '127.0.0.1:9229'. | `false` | | `isInitialHost` | boolean | Whether the host that is running this executor is the first in the project tree to do so. | `true` | | `open` | boolean | Opens the URL in the default browser. | `false` | | `parallel` | number | Max number of parallel processes for building static remotes. | | | `pathToManifestFile` | string | Path to a Module Federation manifest file (e.g. `my/path/to/module-federation.manifest.json`) containing the dynamic remote applications relative to the workspace root. | | | `port` | number | Port to start the development server at. Default is 4200. Pass 0 to get a dynamically assigned port. | `4200` | | `progress` | boolean | Log progress to the console while building. | | | `proxyConfig` | string | Proxy configuration file. 
| | | `publicHost` | string | The URL that the browser client should use to connect to the development server. Use for a complex dev server setup, such as one with reverse proxies. | | | `skipRemotes` | array | List of remote applications to not automatically serve, either statically or in development mode. | | | `ssl` | boolean | Serve using HTTPS. | `false` | | `sslCert` | string | SSL certificate to use for serving HTTPS. | | | `sslKey` | string | SSL key to use for serving HTTPS. | | | `staticRemotesPort` | number | The port at which to serve the file-server for the static remotes. | | | `verbose` | boolean | Adds more details to output logging. | `false` | ### `ng-packagr-lite` Builds an Angular library with support for incremental builds. This executor is meant to be used with buildable libraries in an incremental build scenario. It is similar to the `@nx/angular:package` executor but it only produces ESM2022 bundles. #### Options | Option | Type | Description | Default | |--------|------|-------------|---------| | `poll` | number | Enable and define the file watching poll time period in milliseconds. | | | `project` | string | The file path for the ng-packagr configuration file, relative to the workspace root. | | | `tsConfig` | string | The full path for the TypeScript configuration file, relative to the workspace root. | | | `watch` | boolean | Whether to run a build when any file changes. | `false` | ### `package` Builds and packages an Angular library producing an output following the Angular Package Format (APF) to be distributed as an NPM package. This executor is a drop-in replacement for the `@angular-devkit/build-angular:ng-packagr` and `@angular/build:ng-packagr` builders, with additional support for incremental builds. #### Options | Option | Type | Description | Default | |--------|------|-------------|---------| | `poll` | number | Enable and define the file watching poll time period in milliseconds. 
| | | `project` | string | The file path for the ng-packagr configuration file, relative to the workspace root. | | | `tsConfig` | string | The full path for the TypeScript configuration file, relative to the workspace root. | | | `watch` | boolean | Whether to run a build when any file changes. | `false` | ### `unit-test` Run application unit tests. _Note: this is only supported in Angular versions >= 21.0.0_. #### Options | Option | Type | Description | Default | |--------|------|-------------|---------| | `browsers` | array | Specifies the browsers to use for test execution. When not specified, tests are run in a Node.js environment using jsdom. For both Vitest and Karma, browser names ending with 'Headless' (e.g., 'ChromeHeadless') will enable headless mode. | | | `browserViewport` | string | Specifies the browser viewport dimensions for browser-based tests in the format `widthxheight`. | | | `buildTarget` | string | Specifies the build target to use for the unit test build in the format `project:target[:configuration]`. This defaults to the `build` target of the current project with the `development` configuration. You can also pass a comma-separated list of configurations. Example: `project:target:production,staging`. | | | `coverage` | boolean | Enables coverage reporting for tests. | `false` | | `coverageExclude` | array | Specifies glob patterns of files to exclude from the coverage report. | | | `coverageInclude` | array | Specifies glob patterns of files to include in the coverage report. | | | `coverageReporters` | array | Specifies the reporters to use for coverage results. Each reporter can be a string representing its name, or a tuple containing the name and an options object. Built-in reporters include 'html', 'lcov', 'lcovonly', 'text', 'text-summary', 'cobertura', 'json', and 'json-summary'. | | | `coverageThresholds` | object | Specifies minimum coverage thresholds that must be met. If thresholds are not met, the builder will exit with an error. 
| | | `coverageWatermarks` | object | Specifies coverage watermarks for the HTML reporter. These determine the color coding for high, medium, and low coverage. | | | `debug` | boolean | Enables debugging mode for tests, allowing the use of the Node Inspector. | `false` | | `dumpVirtualFiles` | boolean | Dumps build output files to the `.angular/cache` directory for debugging purposes. | `false` | | `exclude` | array | Specifies glob patterns of files to exclude from testing, relative to the project root. | | | `filter` | string | Specifies a regular expression pattern to match against test suite and test names. Only tests with a name matching the pattern will be executed. For example, `^App` will run only tests in suites beginning with 'App'. | | | `headless` | boolean | Forces all configured browsers to run in headless mode. When using the Vitest runner, this option is ignored if no browsers are configured. The Karma runner does not support this option. _Note: this is only supported in Angular versions >= 21.2.0_. | | | `include` | array | Specifies glob patterns of files to include for testing, relative to the project root. This option also has special handling for directory paths (includes all test files within) and file paths (includes the corresponding test file if one exists). | `["**/*.spec.ts","**/*.test.ts"]` | | `indexHtmlTransformer` | string | Path to a file exposing a default function to transform the `index.html` file. | | | `listTests` | boolean | Lists all discovered test files and exits the process without building or executing the tests. | `false` | | `outputFile` | string | Specifies a file path for the test report, applying only to the first reporter. To configure output files for multiple reporters, use the tuple format `['reporter-name', { outputFile: '...' }]` within the `reporters` option. When not provided, output is written to the console. | | | `plugins` | array | A list of ESBuild plugins. 
| | | `progress` | boolean | Shows build progress information in the console. Defaults to the `progress` setting of the specified `buildTarget`. | | | `providersFile` | string | Specifies the path to a TypeScript file that provides an array of Angular providers for the test environment. The file must contain a default export of the provider array. | | | `reporters` | array | Specifies the reporters to use during test execution. Each reporter can be a string representing its name, or a tuple containing the name and an options object. Built-in reporters include 'default', 'verbose', 'dots', 'json', 'junit', 'tap', 'tap-flat', and 'html'. You can also provide a path to a custom reporter. | | | `runner` | string | Specifies the test runner to use for test execution. | `"vitest"` | | `runnerConfig` | string or boolean | Specifies the configuration file for the selected test runner. If a string is provided, it will be used as the path to the configuration file. If `true`, the builder will search for a default configuration file (e.g., `vitest.config.ts` or `karma.conf.js`). If `false`, no external configuration file will be used. For Vitest, this enables advanced options and the use of custom plugins. Please note that while the file is loaded, the Angular team does not provide direct support for its specific contents or any third-party plugins used within it. | `false` | | `setupFiles` | array | A list of paths to global setup files that are executed before the test files. The application's polyfills and the Angular TestBed are always initialized before these files. | | | `tsConfig` | string | The path to the TypeScript configuration file, relative to the workspace root. Defaults to `tsconfig.spec.json` in the project root if it exists. If not specified and the default does not exist, the `tsConfig` from the specified `buildTarget` will be used. | | | `ui` | boolean | Enables the Vitest UI for interactive test execution. 
This option is only available for the Vitest runner. | | | `watch` | boolean | Enables watch mode, which re-runs tests when source files change. Defaults to `true` in TTY environments and `false` otherwise. | | --- ## @nx/angular Generators The @nx/angular plugin provides various generators to help you create and configure angular projects within your Nx workspace. Below is a complete reference for all available generators and their options. ## `application` Creates an Angular application. ### Examples ###### Simple Application Create an application named `my-app`: ```bash nx g @nx/angular:application apps/my-app ``` ###### Specify style extension Create an application named `my-app` in the `my-dir` directory and use `scss` for styles: ```bash nx g @nx/angular:app my-dir/my-app --style=scss ``` ###### Single File Components application Create an application with Single File Components (inline styles and inline templates): ```bash nx g @nx/angular:app apps/my-app --inlineStyle --inlineTemplate ``` ###### Set custom prefix and tags Set the prefix to apply to generated selectors and add tags to the application (used for linting). ```bash nx g @nx/angular:app apps/my-app --prefix=admin --tags=scope:admin,type:ui ``` **Usage:** ```bash nx generate @nx/angular:application [options] ``` **Aliases:** `app` **Arguments:** ```bash nx generate @nx/angular:application [options] ``` #### Options | Option | Type | Description | Default | |--------|------|-------------|---------| | `--addTailwind` | boolean | Whether to configure Tailwind CSS for the application. | `false` | | `--backendProject` | string | Backend project that provides data to this application. This sets up `proxy.config.json`. | | | `--bundler` | string | Bundler to use to build the application. | `"esbuild"` | | `--e2eTestRunner` | string | Test runner to use for end to end (E2E) tests. | `"playwright"` | | `--inlineStyle` | boolean | Specifies if the style will be in the ts file. 
| `false` | | `--inlineTemplate` | boolean | Specifies if the template will be in the ts file. | `false` | | `--linter` | string | The tool to use for running lint checks. | `"eslint"` | | `--minimal` | boolean | Generate an Angular app with a minimal setup. | `false` | | `--name` | string | The name of the application. | | | `--port` | number | The port at which the remote application should be served. | | | `--prefix` | string | The prefix to apply to generated selectors. | `"app"` | | `--rootProject` | boolean | Create an application at the root of the workspace. | `false` | | `--routing` | boolean | Enable routing for the application. | `true` | | `--serverRouting` | boolean | Creates a server application using the Server Routing and App Engine APIs for applications using the `application` builder (Developer Preview). _Note: this is only supported in Angular versions 19.x.x_. From Angular 20 onwards, SSR will always enable server routing when using the `application` builder. | | | `--setParserOptionsProject` | boolean | Whether or not to configure the ESLint `parserOptions.project` option. We do not do this by default for lint performance reasons. | `false` | | `--skipFormat` | boolean | Skip formatting files. | `false` | | `--skipPackageJson` | boolean | Do not add dependencies to `package.json`. | `false` | | `--skipTests` | boolean | Skip creating spec files. | `false` | | `--ssr` | boolean | Creates an application with Server-Side Rendering (SSR) and Static Site Generation (SSG/Prerendering) enabled. | `false` | | `--standalone` | boolean | Generate an application that is set up to use standalone components. | `true` | | `--strict` | boolean | Create an application with stricter type checking and build optimization options. | `true` | | `--style` | string | The file extension to be used for style files. | `"css"` | | `--tags` | string | Add tags to the application (used for linting). | | | `--unitTestRunner` | string | Test runner to use for unit tests. 
`vitest-angular` uses the `@angular/build:unit-test` executor (requires Angular v21+ and the `esbuild` bundler). `vitest-analog` uses AnalogJS-based setup with `@nx/vitest`. It defaults to `vitest-angular` when using the `esbuild` bundler for Angular versions >= 21.0.0, `vitest-analog` when using other bundlers on Angular >= 21.0.0, otherwise `jest`. | | | `--viewEncapsulation` | string | Specifies the view encapsulation strategy. | | | `--zoneless` | boolean | Generate an application that does not use `zone.js`. It defaults to `true`. _Note: this is only supported in Angular versions >= 21.0.0_ | | ## `component` Creates a new Angular component. ### Examples ###### Simple Component Generate a component named `Card` at `apps/my-app/src/lib/card/card.ts`: ```bash nx g @nx/angular:component apps/my-app/src/lib/card/card.ts ``` ###### Without Providing the File Extension Generate a component named `Card` at `apps/my-app/src/lib/card/card.ts`: ```bash nx g @nx/angular:component apps/my-app/src/lib/card/card ``` ###### With Different Symbol Name Generate a component named `Custom` at `apps/my-app/src/lib/card/card.ts`: ```bash nx g @nx/angular:component apps/my-app/src/lib/card/card --name=custom ``` ###### With a Component Type Generate a component named `CardComponent` at `apps/my-app/src/lib/card/card.component.ts`: ```bash nx g @nx/angular:component apps/my-app/src/lib/card/card --type=component ``` ###### Single File Component Create a component named `Card` with inline styles and inline template: ```bash nx g @nx/angular:component apps/my-app/src/lib/card/card --inlineStyle --inlineTemplate ``` ###### Component with OnPush Change Detection Strategy Create a component named `Card` with `OnPush` Change Detection Strategy: ```bash nx g @nx/angular:component apps/my-app/src/lib/card/card --changeDetection=OnPush ``` **Usage:** ```bash nx generate @nx/angular:component [options] ``` **Aliases:** `c` **Arguments:** ```bash nx generate @nx/angular:component [options] ``` 
#### Options | Option | Type | Description | Default | |--------|------|-------------|---------| | `--changeDetection` | string | The change detection strategy to use in the new component. | `"Default"` | | `--displayBlock` | boolean | Specifies if the style will contain `:host { display: block; }`. | `false` | | `--export` | boolean | Specifies if the component should be exported in the declaring `NgModule`. Additionally, if the project is a library, the component will be exported from the project's entry point (normally `index.ts`) if the module it belongs to is also exported or if the component is standalone. | `false` | | `--exportDefault` | boolean | Use default export for the component instead of a named export. | `false` | | `--inlineStyle` | boolean | Include styles inline in the component.ts file. Only CSS styles can be included inline. By default, an external styles file is created and referenced in the component.ts file. | `false` | | `--inlineTemplate` | boolean | Include template inline in the component.ts file. By default, an external template file is created and referenced in the component.ts file. | `false` | | `--module` | string | The filename or path to the NgModule that will declare this component. | | | `--name` | string | The component symbol name. Defaults to the last segment of the file path. | | | `--ngHtml` | boolean | Generate component template files with an '.ng.html' file extension instead of '.html'. | `false` | | `--prefix` | string | The prefix to apply to the generated component selector. | | | `--selector` | string | The HTML selector to use for this component. | | | `--skipFormat` | boolean | Skip formatting files. | `false` | | `--skipImport` | boolean | Do not import this component into the owning NgModule. | `false` | | `--skipSelector` | boolean | Specifies if the component should have a selector or not. | `false` | | `--skipTests` | boolean | Do not create `spec.ts` test files for the new component. 
| `false` | | `--standalone` | boolean | Whether the generated component is standalone. | `true` | | `--style` | string | The file extension or preprocessor to use for style files, or `none` to skip generating the style file. | `"css"` | | `--type` | string | Append a custom type to the component's filename. It defaults to 'component' for Angular versions below v20. For Angular v20 and above, no type is appended unless specified. | | | `--viewEncapsulation` | string | The view encapsulation strategy to use in the new component. | | ## `component-test` Create a `*.cy.ts` file for Cypress component testing for an Angular component. ### Examples :::caution[Can I use component testing?] Angular component testing with Nx requires **Cypress version 10.7.0** and up. You can migrate to v11 via the [migrate-to-cypress-11 generator](/nx-api/cypress/generators/migrate-to-cypress-11). ::: This generator is used to create a Cypress component test file for a given Angular component. ```shell nx g @nx/angular:component-test --project=my-cool-angular-project --componentName=CoolBtnComponent --componentDir=src/cool-btn --componentFileName=cool-btn.component ``` Test files are generated with the `.cy.ts` suffix. This prevents collisions with any existing `.spec.` files in the project. It's currently expected that the generated `.cy.ts` file will live side by side with the component. It is also assumed the project is already set up for component testing. If it isn't, then you can run the [cypress-component-project generator](/nx-api/angular/generators/cypress-component-configuration) to set up the project for component testing. **Usage:** ```bash nx generate @nx/angular:component-test [options] ``` #### Options | Option | Type | Description | Default | |--------|------|-------------|---------| | `--componentDir` | string [**required**] | Relative path to the folder that contains the component from the project root.
| | | `--componentFileName` | string [**required**] | File name that contains the component without the `.ts` extension. | | | `--componentName` | string [**required**] | Class name of the component to create a test for. | | | `--project` | string [**required**] | The name of the project where the component is located. | | | `--skipFormat` | boolean | Skip formatting files. | `false` | | `--skipPackageJson` | boolean | Do not add dependencies to `package.json`. | `false` | ## `convert-to-application-executor` Converts a project or all projects using one of the `@angular-devkit/build-angular:browser`, `@angular-devkit/build-angular:browser-esbuild`, `@nx/angular:browser` and `@nx/angular:browser-esbuild` executors to use the `@nx/angular:application` executor or the `@angular-devkit/build-angular:application` builder. If the converted target is using one of the `@nx/angular` executors, the `@nx/angular:application` executor will be used. Otherwise, the `@angular-devkit/build-angular:application` builder will be used. **Usage:** ```bash nx generate @nx/angular:convert-to-application-executor [options] ``` **Arguments:** ```bash nx generate @nx/angular:convert-to-application-executor [options] ``` #### Options | Option | Type | Description | Default | |--------|------|-------------|---------| | `--skipFormat` | boolean | Skip formatting files. | `false` | ## `convert-to-rspack` Converts an Angular application that uses webpack to use Rspack. **Usage:** ```bash nx generate @nx/angular:convert-to-rspack [options] ``` **Arguments:** ```bash nx generate @nx/angular:convert-to-rspack [options] ``` #### Options | Option | Type | Description | Default | |--------|------|-------------|---------| | `--skipFormat` | boolean | Skip formatting files. | `false` | | `--skipInstall` | boolean | Skip installing dependencies. | `false` | ## `convert-to-with-mf` Converts an old micro frontend configuration to use the new withModuleFederation helper.
It will run successfully if the following conditions are met: - Is either a host or remote application - Shared npm package configurations have not been modified - Name used to identify the Micro Frontend application matches the project name {% callout type="warning" title="Overrides" %}This generator will overwrite your webpack config. If you have additional custom configuration in your config file, it will be lost!{% /callout %} **Usage:** ```bash nx generate @nx/angular:convert-to-with-mf [options] ``` **Arguments:** ```bash nx generate @nx/angular:convert-to-with-mf [options] ``` #### Options | Option | Type | Description | Default | |--------|------|-------------|---------| | `--skipFormat` | boolean | Skip formatting files. | `false` | ## `cypress-component-configuration` Add a Cypress component testing configuration to an existing project. Cypress v10.7.0 or higher is required. :::caution[Can I use component testing?] Angular component testing with Nx requires **Cypress version 10.7.0** and up. You can migrate to v11 via the [migrate-to-cypress-11 generator](/nx-api/cypress/generators/migrate-to-cypress-11). This generator is for Cypress-based component testing. If you want to test components via Storybook with Cypress, then check out the [storybook-configuration generator docs](/nx-api/angular/generators/storybook-configuration). However, this functionality is deprecated, and will be removed in Nx version 18. ::: This generator is designed to get your Angular project up and running with Cypress Component Testing. ```shell nx g @nx/angular:cypress-component-configuration --project=my-cool-angular-project ``` Running this generator adds the required files to the specified project with a preconfigured `cypress.config.ts` designed for Nx workspaces.
```ts title="cypress.config.ts" import { defineConfig } from 'cypress'; import { nxComponentTestingPreset } from '@nx/angular/plugins/component-testing'; export default defineConfig({ component: nxComponentTestingPreset(__filename), }); ``` Here is an example of how to add custom options to the configuration: ```ts title="cypress.config.ts" import { defineConfig } from 'cypress'; import { nxComponentTestingPreset } from '@nx/angular/plugins/component-testing'; export default defineConfig({ component: { ...nxComponentTestingPreset(__filename), // extra options here }, }); ``` ### Specifying a Build Target Component testing requires a _build target_ to correctly run the component test dev server. This option can be manually specified with `--build-target=some-angular-app:build`, but Nx will infer this usage from the [project graph](/concepts/mental-model#the-project-graph) if one isn't provided. For Angular projects, the build target needs to be using the `@nx/angular:webpack-browser` or `@angular-devkit/build-angular:browser` executor. The generator will throw an error if a build target can't be found and suggest passing one in manually. Letting Nx infer the build target by default: ```shell nx g @nx/angular:cypress-component-configuration --project=my-cool-angular-project ``` Manually specifying the build target: ```shell nx g @nx/angular:cypress-component-configuration --project=my-cool-angular-project --build-target=some-angular-app:build --generate-tests ``` :::note[Build Target with Configuration] If you want to use a build target with a specific configuration, e.g. `my-app:build:production`, then manually providing `--build-target=my-app:build:production` is the best way to do that. ::: ### Auto Generating Tests You can optionally use the `--generate-tests` flag to generate a test file for each component in your project.
```shell nx g @nx/angular:cypress-component-configuration --project=my-cool-angular-project --generate-tests ``` ### Running Component Tests A new `component-test` target will be added to the specified project to run your component tests. ```shell nx component-test my-cool-angular-project ``` Here is an example of the project configuration that is generated. The `--build-target` option is added as the `devServerTarget` which can be changed as needed. ```json title="project.json" { "targets": { "component-test": { "executor": "@nx/cypress:cypress", "options": { "cypressConfig": "/cypress.config.ts", "testingType": "component", "devServerTarget": "some-angular-app:build", "skipServe": true } } } } ``` ### What is bundled When the project being tested is a dependent of the specified `--build-target`, then **assets, scripts, and styles** are applied to the component being tested. You can determine if the project is dependent by using the [project graph](/features/explore-graph). If there is no link between the two projects, then the **assets, scripts, and styles** won't be included in the build; therefore, they will not be applied to the component. To have a link between projects, you can import from the project being tested into the specified `--build-target` project, or set the `--build-target` project to [implicitly depend](/reference/project-configuration#implicitdependencies) on the project being tested. Nx also supports [React component testing](/nx-api/react/generators/cypress-component-configuration). **Usage:** ```bash nx generate @nx/angular:cypress-component-configuration [options] ``` #### Options | Option | Type | Description | Default | |--------|------|-------------|---------| | `--project` | string [**required**] | The name of the project to add Cypress component testing configuration to. | | | `--buildTarget` | string | A build target used to configure Cypress component testing in the format of `project:target[:configuration]`.
The build target should be an Angular app. If not provided, we will try to infer it from your project's usage. | | | `--generateTests` | boolean | Generate default component tests for existing components in the project. | `false` | | `--skipFormat` | boolean | Skip formatting files. | `false` | | `--skipPackageJson` | boolean | Do not add dependencies to `package.json`. | `false` | ## `directive` Creates a new Angular directive. **Usage:** ```bash nx generate @nx/angular:directive [options] ``` **Aliases:** `d` **Arguments:** ```bash nx generate @nx/angular:directive [options] ``` #### Options | Option | Type | Description | Default | |--------|------|-------------|---------| | `--export` | boolean | The declaring NgModule exports this directive. | `false` | | `--module` | string | The filename of the declaring NgModule. | | | `--name` | string | The directive symbol name. Defaults to the last segment of the file path. | | | `--prefix` | string | A prefix to apply to generated selectors. | | | `--selector` | string | The HTML selector to use for this directive. | | | `--skipFormat` | boolean | Skip formatting of files. | `false` | | `--skipImport` | boolean | Do not import this directive into the owning NgModule. | `false` | | `--skipTests` | boolean | Do not create "spec.ts" test files for the new class. | `false` | | `--standalone` | boolean | Whether the generated directive is standalone. | `true` | | `--type` | string | Append a custom type to the directive's filename. It defaults to 'directive' for Angular versions below v20. For Angular v20 and above, no type is appended unless specified. | | ## `federate-module` Create a federated module, which is exposed by a Producer (remote) and can be subsequently loaded by a Consumer (host).
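### Examples ###### Basic Usage A sketch of a typical invocation, following the option reference below; the `products` module name and the `shop` Producer (remote) are illustrative placeholders, and the Producer is assumed to exist (or to be created via `--remoteDirectory`): ```bash nx g @nx/angular:federate-module --name=products --remote=shop ```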
**Usage:** ```bash nx generate @nx/angular:federate-module [options] ``` **Arguments:** ```bash nx generate @nx/angular:federate-module [options] ``` #### Options | Option | Type | Description | Default | |--------|------|-------------|---------| | `--name` | string [**required**] | The name of the module. | | | `--remote` | string [**required**] | The name of the Producer (remote). | | | `--e2eTestRunner` | string | Test runner to use for end to end (e2e) tests of the Producer (remote) if it needs to be created. | `"cypress"` | | `--host` | string | The Consumer (host) application for this Producer (remote). | | | `--remoteDirectory` | string | The directory of the new Producer (remote) application if one needs to be created. | | | `--skipFormat` | boolean | Skip formatting files. | `false` | | `--standalone` | boolean | Whether to generate the Producer (remote) application with standalone components if it needs to be created. | `true` | | `--style` | string | The file extension to be used for style files for the Producer (remote) if one needs to be created. | `"css"` | | `--unitTestRunner` | string | Test runner to use for unit tests of the Producer (remote) if it needs to be created. `vitest-analog` uses AnalogJS-based setup with `@nx/vitest`. It defaults to `vitest-analog` for Angular versions >= 21.0.0, otherwise `jest`. | | ## `host` Create an Angular Consumer (Host) Module Federation Application. **Usage:** ```bash nx generate @nx/angular:host [options] ``` **Aliases:** `consumer` **Arguments:** ```bash nx generate @nx/angular:host [options] ``` #### Options | Option | Type | Description | Default | |--------|------|-------------|---------| | `--addTailwind` | boolean | Whether to configure Tailwind CSS for the application. | `false` | | `--backendProject` | string | Backend project that provides data to this application. This sets up `proxy.config.json`. | | | `--bundler` | string | The bundler to use for the host application. 
| `"webpack"` | | `--dynamic` | boolean | Should the host application use dynamic federation? | `false` | | `--e2eTestRunner` | string | Test runner to use for end to end (E2E) tests. | `"playwright"` | | `--inlineStyle` | boolean | Specifies if the style will be in the ts file. | `false` | | `--inlineTemplate` | boolean | Specifies if the template will be in the ts file. | `false` | | `--linter` | string | The tool to use for running lint checks. | `"eslint"` | | `--name` | string | The name to give to the Consumer (host) Angular application. | | | `--prefix` | string | The prefix to apply to generated selectors. | | | `--remotes` | array | The names of the Producers (remote) applications to add to the Consumer (host). | | | `--setParserOptionsProject` | boolean | Whether or not to configure the ESLint `parserOptions.project` option. We do not do this by default for lint performance reasons. | `false` | | `--skipFormat` | boolean | Skip formatting files. | `false` | | `--skipPackageJson` | boolean | Do not add dependencies to `package.json`. | `false` | | `--skipPostInstall` | boolean | Do not add or append `ngcc` to the `postinstall` script in `package.json`. | `false` | | `--skipTests` | boolean | Skip creating spec files. | `false` | | `--ssr` | boolean | Whether to configure SSR for the Consumer (host) application | `false` | | `--standalone` | boolean | Whether to generate a Consumer (host) application that uses standalone components. | `true` | | `--strict` | boolean | Create an application with stricter type checking and build optimization options. | `true` | | `--style` | string | The file extension to be used for style files. | `"css"` | | `--tags` | string | Add tags to the application (used for linting). | | | `--typescriptConfiguration` | boolean | Whether the module federation configuration and webpack configuration files should use TS. | `true` | | `--unitTestRunner` | string | Test runner to use for unit tests. 
`vitest-analog` uses AnalogJS-based setup with `@nx/vitest`. It defaults to `vitest-analog` for Angular versions >= 21.0.0, otherwise `jest`. | | | `--viewEncapsulation` | string | Specifies the view encapsulation strategy. | | | `--zoneless` | boolean | Generate an application that does not use `zone.js`. It defaults to `true`. _Note: this is only supported in Angular versions >= 21.0.0_ | | ## `library` Creates an Angular library. ### Examples ###### Simple Library Creates the `my-ui-lib` library with a `ui` tag: ```bash nx g @nx/angular:library libs/my-ui-lib --tags=ui ``` ###### Publishable Library Creates the `my-lib` library that can be built producing an output following the Angular Package Format (APF) to be distributed as an NPM package: ```bash nx g @nx/angular:library libs/my-lib --publishable --import-path=@my-org/my-lib ``` ###### Buildable Library Creates the `my-lib` library with support for incremental builds: ```bash nx g @nx/angular:library libs/my-lib --buildable ``` ###### Nested Folder & Import Creates the `my-lib` library in the `nested` directory and sets the import path to `@myorg/nested/my-lib`: ```bash nx g @nx/angular:library libs/nested/my-lib --importPath=@myorg/nested/my-lib ``` **Usage:** ```bash nx generate @nx/angular:library [options] ``` **Aliases:** `lib` **Arguments:** ```bash nx generate @nx/angular:library [options] ``` #### Options | Option | Type | Description | Default | |--------|------|-------------|---------| | `--addModuleSpec` | boolean | Add a module spec file. | `false` | | `--addTailwind` | boolean | Whether to configure Tailwind CSS for the application. It can only be used with buildable and publishable libraries. Non-buildable libraries will use the application's Tailwind configuration. | `false` | | `--buildable` | boolean | Generate a buildable library. | `false` | | `--changeDetection` | string | The change detection strategy to use in the new component.
Disclaimer: This option is only valid when `--standalone` is set to `true`. | `"Default"` | | `--compilationMode` | string | Specifies the compilation mode to use. If not specified, it will default to `partial` for publishable libraries and to `full` for buildable libraries. The `full` value can not be used for publishable libraries. | | | `--displayBlock` | boolean | Specifies if the component generated style will contain `:host { display: block; }`. Disclaimer: This option is only valid when `--standalone` is set to `true`. | `false` | | `--flat` | boolean | Ensure the generated standalone component is not placed in a subdirectory. Disclaimer: This option is only valid when `--standalone` is set to `true`. | `false` | | `--importPath` | string | The library name used to import it, like `@myorg/my-awesome-lib`. Must be a valid npm name. | | | `--inlineStyle` | boolean | Include styles inline in the component.ts file. Only CSS styles can be included inline. By default, an external styles file is created and referenced in the component.ts file. Disclaimer: This option is only valid when `--standalone` is set to `true`. | `false` | | `--inlineTemplate` | boolean | Include template inline in the component.ts file. By default, an external template file is created and referenced in the component.ts file. Disclaimer: This option is only valid when `--standalone` is set to `true`. | `false` | | `--lazy` | boolean | Add `RouterModule.forChild` when set to true, and a simple array of routes when set to false. | `false` | | `--linter` | string | The tool to use for running lint checks. | `"eslint"` | | `--name` | string | The name of the library. | | | `--parent` | string | Path to the parent route configuration using `loadChildren` or `children`, depending on what `lazy` is set to. | | | `--prefix` | string | The prefix to apply to generated selectors. | | | `--publishable` | boolean | Generate a publishable library. 
| `false` | | `--routing` | boolean | Add router configuration. See `lazy` for more information. | `false` | | `--selector` | string | The HTML selector to use for this component. Disclaimer: This option is only valid when `--standalone` is set to `true`. | | | `--setParserOptionsProject` | boolean | Whether or not to configure the ESLint `parserOptions.project` option. We do not do this by default for lint performance reasons. | `false` | | `--skipFormat` | boolean | Skip formatting files. | `false` | | `--skipModule` | boolean | Whether to skip the creation of a default module when generating the library. | `false` | | `--skipPackageJson` | boolean | Do not add dependencies to `package.json`. | `false` | | `--skipSelector` | boolean | Specifies if the component should have a selector or not. Disclaimer: This option is only valid when `--standalone` is set to `true`. | `false` | | `--skipTests` | boolean | Do not create `spec.ts` test files for the new component. Disclaimer: This option is only valid when `--standalone` is set to `true`. | `false` | | `--skipTsConfig` | boolean | Do not update `tsconfig.json` for development experience. | `false` | | `--standalone` | boolean | Generate a library that uses a standalone component instead of a module as the entry point. | `true` | | `--strict` | boolean | Create a library with stricter type checking and build optimization options. | `true` | | `--style` | string | The file extension or preprocessor to use for style files, or `none` to skip generating the style file. Disclaimer: This option is only valid when `--standalone` is set to `true`. | `"css"` | | `--tags` | string | Add tags to the library (used for linting). | | | `--unitTestRunner` | string | Test runner to use for unit tests. `vitest-angular` uses the `@nx/angular:unit-test` executor (requires Angular v21+ and a buildable/publishable library). `vitest-analog` uses AnalogJS-based setup with `@nx/vitest`. 
It defaults to `vitest-angular` for buildable/publishable libraries on Angular >= 21.0.0, `vitest-analog` for non-buildable libraries on Angular >= 21.0.0, otherwise `jest`. | | | `--viewEncapsulation` | string | The view encapsulation strategy to use in the new component. Disclaimer: This option is only valid when `--standalone` is set to `true`. | | ## `library-secondary-entry-point` Creates a secondary entry point for an Angular publishable library. ### Examples ###### Basic Usage Create a secondary entry point named `button` in the `ui` library. ```bash nx g @nx/angular:library-secondary-entry-point --library=ui --name=button ``` ###### Skip generating module Create a secondary entry point named `button` in the `ui` library but skip creating an NgModule. ```bash nx g @nx/angular:library-secondary-entry-point --library=ui --name=button --skipModule ``` **Usage:** ```bash nx generate @nx/angular:library-secondary-entry-point [options] ``` **Aliases:** `secondary-entry-point`, `entry-point` **Arguments:** ```bash nx generate @nx/angular:library-secondary-entry-point [options] ``` #### Options | Option | Type | Description | Default | |--------|------|-------------|---------| | `--library` | string [**required**] | The name of the library to create the secondary entry point for. | | | `--skipFormat` | boolean | Skip formatting files. | `false` | | `--skipModule` | boolean | Skip generating a module for the secondary entry point. | `false` | ## `move` Move an Angular project to another folder in the workspace. **Usage:** ```bash nx generate @nx/angular:move [options] ``` **Aliases:** `mv` **Arguments:** ```bash nx generate @nx/angular:move [options] ``` #### Options | Option | Type | Description | Default | |--------|------|-------------|---------| | `--projectName` | string [**required**] | The name of the Angular project to move. | | | `--importPath` | string | The new import path to use in the `tsconfig.base.json`.
| | | `--newProjectName` | string | The new name of the project after the move. | | | `--skipFormat` | boolean | Skip formatting files. | `false` | | `--updateImportPath` | boolean | Update the import path to reflect the new location. | `true` | ## `ngrx` Adds NgRx support to an application or library. **Usage:** ```bash nx generate @nx/angular:ngrx [options] ``` **Arguments:** ```bash nx generate @nx/angular:ngrx [options] ``` #### Options | Option | Type | Description | Default | |--------|------|-------------|---------| | `--barrels` | boolean | Use barrels to re-export actions, state and selectors. | `false` | | `--directory` | string | The name of the folder used to contain/group the generated NgRx files. | `"+state"` | | `--facade` | boolean | Create a Facade class for the feature. | `false` | | `--minimal` | boolean | Only register the root state management setup or feature state. | `true` | | `--module` | string | The path to the `NgModule` where the feature state will be registered. The host directory will create/use the new state directory. | | | `--parent` | string | The path to the file where the state will be registered. For NgModule usage, this will be your `app-module.ts` for your root state, or your Feature Module for feature state. For Standalone API usage, this will be your `app.config.ts` file for your root state, or the Routes definition file for your feature state. The host directory will create/use the new state directory. | | | `--root` | boolean | Set up root or feature state management with NgRx. | `false` | | `--route` | string | The route that the Standalone NgRx Providers should be added to. | `"''"` | | `--skipFormat` | boolean | Skip formatting files. | `false` | | `--skipImport` | boolean | Generate NgRx feature files without registering the feature in the NgModule. | `false` | | `--skipPackageJson` | boolean | Do not update the `package.json` with NgRx dependencies.
| `false` | ## `ngrx-feature-store` Add an NgRx Feature Store to an application or library. **Usage:** ```bash nx generate @nx/angular:ngrx-feature-store [options] ``` **Arguments:** ```bash nx generate @nx/angular:ngrx-feature-store [options] ``` #### Options | Option | Type | Description | Default | |--------|------|-------------|---------| | `--barrels` | boolean | Use barrels to re-export actions, state and selectors. | `false` | | `--directory` | string | The name of the folder used to contain/group the generated NgRx files. | `"+state"` | | `--facade` | boolean | Create a Facade class for the feature. | `false` | | `--minimal` | boolean | Only register the feature state. | `false` | | `--parent` | string | The path to the file where the state will be registered. For NgModule usage, this will be your Feature Module. For Standalone API usage, this will be your Routes definition file for your feature state. The host directory will create/use the new state directory. | | | `--route` | string | The route that the Standalone NgRx Providers should be added to. | `"''"` | | `--skipFormat` | boolean | Skip formatting files. | `false` | | `--skipImport` | boolean | Generate NgRx feature files without registering the feature in the NgModule. | `false` | | `--skipPackageJson` | boolean | Do not update the `package.json` with NgRx dependencies. | `false` | ## `ngrx-root-store` Adds NgRx support to an application. **Usage:** ```bash nx generate @nx/angular:ngrx-root-store [options] ``` **Arguments:** ```bash nx generate @nx/angular:ngrx-root-store [options] ``` #### Options | Option | Type | Description | Default | |--------|------|-------------|---------| | `--addDevTools` | boolean | Instrument the Store Devtools. | `false` | | `--directory` | string | The name of the folder used to contain/group the generated NgRx files. | `"+state"` | | `--facade` | boolean | Create a Facade class for the feature.
| `false` | | `--minimal` | boolean | Only register the root state management setup or also generate a global feature state. | `true` | | `--name` | string | Name of the NgRx state, such as `products` or `users`. Recommended to use the plural form of the name. | | | `--route` | string | The route that the Standalone NgRx Providers should be added to. | `"''"` | | `--skipFormat` | boolean | Skip formatting files. | `false` | | `--skipImport` | boolean | Generate NgRx feature files without registering the feature in the NgModule. | `false` | | `--skipPackageJson` | boolean | Do not update the `package.json` with NgRx dependencies. | `false` | ## `pipe` Creates an Angular pipe. **Usage:** ```bash nx generate @nx/angular:pipe [options] ``` **Aliases:** `p` **Arguments:** ```bash nx generate @nx/angular:pipe [options] ``` #### Options | Option | Type | Description | Default | |--------|------|-------------|---------| | `--export` | boolean | The declaring NgModule exports this pipe. | `false` | | `--module` | string | The filename of the declaring NgModule. | | | `--name` | string | The pipe symbol name. Defaults to the last segment of the file path. | | | `--skipFormat` | boolean | Skip formatting of files. | `false` | | `--skipImport` | boolean | Do not import this pipe into the owning NgModule. | `false` | | `--skipTests` | boolean | Do not create "spec.ts" test files for the new pipe. | `false` | | `--standalone` | boolean | Whether the generated pipe is standalone. | `true` | | `--typeSeparator` | string | The separator character to use before the type within the generated file's name. For example, if you set the option to `.`, the file will be named `example.pipe.ts`. It defaults to '-' for Angular v20+. For versions below v20, it defaults to '.'. | | ## `remote` Create an Angular Producer (Remote) Module Federation Application. 
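### Examples ###### Basic Usage A sketch of a typical invocation; the `apps/shop` directory and the `dashboard` Consumer (host) are illustrative placeholders, and the host application is assumed to already exist: ```bash nx g @nx/angular:remote apps/shop --host=dashboard ```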
**Usage:** ```bash nx generate @nx/angular:remote [options] ``` **Aliases:** `producer` **Arguments:** ```bash nx generate @nx/angular:remote [options] ``` #### Options | Option | Type | Description | Default | |--------|------|-------------|---------| | `--addTailwind` | boolean | Whether to configure Tailwind CSS for the application. | `false` | | `--backendProject` | string | Backend project that provides data to this application. This sets up `proxy.config.json`. | | | `--bundler` | string | The bundler to use for the remote application. | `"webpack"` | | `--e2eTestRunner` | string | Test runner to use for end to end (E2E) tests. | `"playwright"` | | `--host` | string | The name of the Consumer (host) app to attach this Producer (remote) app to. | | | `--inlineStyle` | boolean | Specifies if the style will be in the ts file. | `false` | | `--inlineTemplate` | boolean | Specifies if the template will be in the ts file. | `false` | | `--linter` | string | The tool to use for running lint checks. | `"eslint"` | | `--name` | string | The name to give to the Producer (remote) Angular app. | | | `--port` | number | The port on which this app should be served. | | | `--prefix` | string | The prefix to apply to generated selectors. | | | `--setParserOptionsProject` | boolean | Whether or not to configure the ESLint `parserOptions.project` option. We do not do this by default for lint performance reasons. | `false` | | `--skipFormat` | boolean | Skip formatting files. | `false` | | `--skipPackageJson` | boolean | Do not add dependencies to `package.json`. | `false` | | `--skipTests` | boolean | Skip creating spec files. | `false` | | `--ssr` | boolean | Whether to configure SSR for the Producer (remote) application to be consumed by a Consumer (host) application using SSR. | `false` | | `--standalone` | boolean | Whether to generate a Producer (remote) application with standalone components. 
| `true` | | `--strict` | boolean | Create an application with stricter type checking and build optimization options. | `true` | | `--style` | string | The file extension to be used for style files. | `"css"` | | `--tags` | string | Add tags to the application (used for linting). | | | `--typescriptConfiguration` | boolean | Whether the module federation configuration and webpack configuration files should use TS. | `true` | | `--unitTestRunner` | string | Test runner to use for unit tests. `vitest-analog` uses AnalogJS-based setup with `@nx/vitest`. It defaults to `vitest-analog` for Angular versions >= 21.0.0, otherwise `jest`. | | | `--viewEncapsulation` | string | Specifies the view encapsulation strategy. | | | `--zoneless` | boolean | Generate an application that does not use `zone.js`. It defaults to `true`. _Note: this is only supported in Angular versions >= 21.0.0_ | | ## `scam` Creates a new Angular SCAM. **Usage:** ```bash nx generate @nx/angular:scam [options] ``` **Arguments:** ```bash nx generate @nx/angular:scam [options] ``` #### Options | Option | Type | Description | Default | |--------|------|-------------|---------| | `--changeDetection` | string | The change detection strategy to use in the new component. | `"Default"` | | `--displayBlock` | boolean | Specifies if the style will contain `:host { display: block; }`. | `false` | | `--export` | boolean | Specifies if the SCAM should be exported from the project's entry point (normally `index.ts`). It only applies to libraries. | `true` | | `--inlineScam` | boolean | Create the `NgModule` in the same file as the component. | `true` | | `--inlineStyle` | boolean | Include styles inline in the `component.ts` file. Only CSS styles can be included inline. By default, an external styles file is created and referenced in the `component.ts` file. | `false` | | `--inlineTemplate` | boolean | Include template inline in the `component.ts` file. 
By default, an external template file is created and referenced in the `component.ts` file. | `false` | | `--name` | string | The component symbol name. Defaults to the last segment of the file path. | | | `--prefix` | string | The prefix to apply to the generated component selector. | | | `--selector` | string | The `HTML` selector to use for this component. | | | `--skipFormat` | boolean | Skip formatting files. | `false` | | `--skipSelector` | boolean | Specifies if the component should have a selector or not. | `false` | | `--skipTests` | boolean | Do not create `spec.ts` test files for the new component. | `false` | | `--style` | string | The file extension or preprocessor to use for style files, or 'none' to skip generating the style file. | `"css"` | | `--type` | string | Append a custom type to the component's filename. It defaults to 'component' for Angular versions below v20. For Angular v20 and above, no type is appended unless specified. | | | `--viewEncapsulation` | string | The view encapsulation strategy to use in the new component. | | ## `scam-directive` Creates a new, generic Angular directive definition in the given or default project. **Usage:** ```bash nx generate @nx/angular:scam-directive [options] ``` **Arguments:** ```bash nx generate @nx/angular:scam-directive [options] ``` #### Options | Option | Type | Description | Default | |--------|------|-------------|---------| | `--export` | boolean | Specifies if the SCAM should be exported from the project's entry point (normally `index.ts`). It only applies to libraries. | `true` | | `--inlineScam` | boolean | Create the `NgModule` in the same file as the Directive. | `true` | | `--name` | string | The directive symbol name. Defaults to the last segment of the file path. | | | `--prefix` | string | The prefix to apply to the generated directive selector. | | | `--selector` | string | The `HTML` selector to use for this directive. | | | `--skipFormat` | boolean | Skip formatting files. 
| `false` | | `--skipTests` | boolean | Do not create `spec.ts` test files for the new directive. | `false` | | `--type` | string | Append a custom type to the directive's filename. It defaults to 'directive' for Angular versions below v20. For Angular v20 and above, no type is appended unless specified. | | ## `scam-pipe` Creates a new, generic Angular pipe definition in the given or default project. **Usage:** ```bash nx generate @nx/angular:scam-pipe [options] ``` **Arguments:** ```bash nx generate @nx/angular:scam-pipe [options] ``` #### Options | Option | Type | Description | Default | |--------|------|-------------|---------| | `--export` | boolean | Specifies if the SCAM should be exported from the project's entry point (normally `index.ts`). It only applies to libraries. | `true` | | `--inlineScam` | boolean | Create the NgModule in the same file as the Pipe. | `true` | | `--name` | string | The pipe symbol name. Defaults to the last segment of the file path. | | | `--skipFormat` | boolean | Skip formatting files. | `false` | | `--skipTests` | boolean | Do not create `spec.ts` test files for the new pipe. | `false` | | `--typeSeparator` | string | The separator character to use before the type within the generated file's name. For example, if you set the option to `.`, the file will be named `example.pipe.ts`. It defaults to '-' for Angular v20+. For versions below v20, it defaults to '.'. | | ## `scam-to-standalone` Convert an Inline SCAM to a Standalone Component. ### Examples ###### Basic Usage This generator allows you to convert an Inline SCAM to a Standalone Component. It's important that the SCAM you wish to convert has its NgModule within the same file for the generator to be able to correctly convert the component to Standalone. 
```bash nx g @nx/angular:scam-to-standalone --component=libs/mylib/src/lib/myscam/myscam.ts --project=mylib ``` **Usage:** ```bash nx generate @nx/angular:scam-to-standalone [options] ``` **Arguments:** ```bash nx generate @nx/angular:scam-to-standalone [options] ``` #### Options | Option | Type | Description | Default | |--------|------|-------------|---------| | `--project` | string | The project containing the SCAM. | | | `--skipFormat` | boolean | Skip formatting the workspace after the generator completes. | | ## `setup-mf` Create Module Federation configuration files for a given Angular application. ### Examples The `setup-mf` generator is used to add Module Federation support to existing applications. ###### Convert to Host To convert an existing application to a host application, run the following: ```bash nx g setup-mf myapp --mfType=host --routing=true ``` ###### Convert to Remote To convert an existing application to a remote application, run the following: ```bash nx g setup-mf myapp --mfType=remote --routing=true ``` ###### Convert to Remote and attach to a host application To convert an existing application to a remote application and attach it to an existing host application named `myhostapp`, run the following: ```bash nx g setup-mf myapp --mfType=remote --routing=true --host=myhostapp ``` ###### Convert to Host and attach to existing remote applications To convert an existing application to a host application and attach existing remote applications named `remote1` and `remote2`, run the following: ```bash nx g setup-mf myapp --mfType=host --routing=true --remotes=remote1,remote2 ``` **Usage:** ```bash nx generate @nx/angular:setup-mf [options] ``` **Arguments:** ```bash nx generate @nx/angular:setup-mf [options] ``` #### Options | Option | Type | Description | Default | |--------|------|-------------|---------| | `--mfType` | string [**required**] | Type of application to generate the Module Federation configuration for. 
| `"remote"` | | `--e2eProjectName` | string | The project name of the associated E2E project for the application. This is only required for Cypress E2E projects that do not follow the naming convention `-e2e`. | | | `--federationType` | string | Use either Static or Dynamic Module Federation pattern for the application. | `"static"` | | `--host` | string | The name of the host application that the remote application will be consumed by. | | | `--port` | number | The port at which the remote application should be served. | | | `--prefix` | string | The prefix to use for any generated component. | | | `--remotes` | array | A list of remote application names that the Consumer (host) application should consume. | | | `--routing` | boolean | Generate a routing setup to allow a Consumer (host) application to route to the Producer (remote) application. | | | `--setParserOptionsProject` | boolean | Whether or not to configure the ESLint `parserOptions.project` option. We do not do this by default for lint performance reasons. | `false` | | `--skipE2E` | boolean | Do not set up E2E related config. | `false` | | `--skipFormat` | boolean | Skip formatting the workspace after the generator completes. | | | `--skipPackageJson` | boolean | Do not add dependencies to `package.json`. | `false` | | `--standalone` | boolean | Whether the application is a standalone application. | `true` | | `--typescriptConfiguration` | boolean | Whether the module federation configuration and webpack configuration files should use TS. | `true` | ## `setup-ssr` Create the additional configuration required to enable SSR via Angular Universal for an Angular application. **Usage:** ```bash nx generate @nx/angular:setup-ssr [options] ``` **Arguments:** ```bash nx generate @nx/angular:setup-ssr [options] ``` #### Options | Option | Type | Description | Default | |--------|------|-------------|---------| | `--hydration` | boolean | Set up Hydration for the SSR application. 
| `true` | | `--main` | string | The name of the main entry-point file. | `"main.server.ts"` | | `--rootModuleClassName` | string | The name of the root module class. | `"AppServerModule"` | | `--rootModuleFileName` | string | The name of the root module file | `"app.server.module.ts"` | | `--serverFileName` | string | The name of the Express server file. | `"server.ts"` | | `--serverPort` | number | The port for the Express server. | `4000` | | `--serverRouting` | boolean | Creates a server application using the Server Routing and App Engine APIs for application using the `application` builder (Developer Preview). _Note: this is only supported in Angular versions 19.x.x_. From Angular 20 onwards, SSR will always enable server routing when using the `application` builder. | | | `--skipFormat` | boolean | Skip formatting the workspace after the generator completes. | | | `--skipPackageJson` | boolean | Do not add dependencies to `package.json`. | `false` | | `--standalone` | boolean | Use Standalone Components to bootstrap SSR. | | ## `setup-tailwind` Adds the Tailwind CSS configuration files for a given Angular project and installs, if needed, the packages required for Tailwind CSS to work. ### Examples The `setup-tailwind` generator can be used to add [Tailwind](https://tailwindcss.com) configuration to apps and publishable libraries. ###### Standard Setup To generate a standard Tailwind setup, just run the following command. ```bash nx g @nx/angular:setup-tailwind myapp ``` ###### Specifying the Styles Entrypoint To specify the styles file that should be used as the entrypoint for Tailwind, simply pass the `--stylesEntryPoint` flag, relative to workspace root. 
```bash nx g @nx/angular:setup-tailwind myapp --stylesEntryPoint=apps/myapp/src/styles.css ``` **Usage:** ```bash nx generate @nx/angular:setup-tailwind [options] ``` **Arguments:** ```bash nx generate @nx/angular:setup-tailwind [options] ``` #### Options | Option | Type | Description | Default | |--------|------|-------------|---------| | `--buildTarget` | string | The name of the target used to build the project. This option only applies to buildable/publishable libraries. | `"build"` | | `--skipFormat` | boolean | Skips formatting the workspace after the generator completes. | | | `--skipPackageJson` | boolean | Do not add dependencies to `package.json`. | `false` | | `--stylesEntryPoint` | string | Path to the styles entry point relative to the workspace root. If not provided the generator will do its best to find it and it will error if it can't. This option only applies to applications. | | ## `stories` Creates Storybook stories/specs for all Angular components declared in a project. This generator will generate stories for all your components in your project. The stories will be generated using [Component Story Format 3 (CSF3)](https://storybook.js.org/blog/storybook-csf3-is-here/). ```bash nx g @nx/angular:stories project-name ``` You can read more about how this generator works, in the [Storybook for Angular overview page](/recipes/storybook/overview-angular#auto-generate-stories). When running this generator, you will be prompted to provide the following: - The `name` of the project you want to generate the configuration for. - Whether you want to set up [Storybook interaction tests](https://storybook.js.org/docs/angular/writing-tests/interaction-testing) (`interactionTests`). If you choose `yes`, a `play` function will be added to your stories, and all the necessary dependencies will be installed. 
You can read more about this in the [Nx Storybook interaction tests documentation page](/recipes/storybook/storybook-interaction-tests#setup-storybook-interaction-tests). You must provide a `name` for the generator to work. By default, this generator will also set up [Storybook interaction tests](https://storybook.js.org/docs/angular/writing-tests/interaction-testing). If you don't want to set up Storybook interaction tests, you can pass the `--interactionTests=false` option, but it's not recommended. There are a number of other options available. Let's take a look at some examples. ### Examples #### Ignore certain paths when generating stories ```bash nx g @nx/angular:stories ui --ignorePaths=libs/ui/src/not-stories/**,**/**/src/**/*.other.* ``` This will generate stories for all the components in the `ui` project, except for the ones in the `libs/ui/src/not-stories` directory, and also excluding components whose file names match the pattern `*.other.*`. This is useful if you have a project that contains components that are not meant to be used in isolation, but rather as part of a larger component. By default, Nx will ignore the following paths: ```text *.stories.ts, *.stories.tsx, *.stories.js, *.stories.jsx, *.stories.mdx ``` but you can change this behaviour easily, as explained above. **Usage:** ```bash nx generate @nx/angular:stories [options] ``` **Arguments:** ```bash nx generate @nx/angular:stories [options] ``` #### Options | Option | Type | Description | Default | |--------|------|-------------|---------| | `--ignorePaths` | array | Paths to ignore when looking for components. | `["*.stories.ts,*.stories.tsx,*.stories.js,*.stories.jsx,*.stories.mdx"]` | | `--interactionTests` | boolean | Set up Storybook interaction tests. | `true` | | `--skipFormat` | boolean | Skip formatting files. | `false` | ## `storybook-configuration` Adds Storybook configuration to a project to be able to use and create stories. 
This generator will set up Storybook for your **Angular** project. By default, Storybook v10 is used. ```bash nx g @nx/angular:storybook-configuration project-name ``` You can read more about how this generator works, in the [Storybook for Angular overview page](/recipes/storybook/overview-angular#generate-storybook-configuration-for-an-angular-project). When running this generator, you will be prompted to provide the following: - The `name` of the project you want to generate the configuration for. - Whether you want to set up [Storybook interaction tests](https://storybook.js.org/docs/angular/writing-tests/interaction-testing) (`interactionTests`). If you choose `yes`, a `play` function will be added to your stories, and all the necessary dependencies will be installed. Also, a `test-storybook` target will be generated in your project's `project.json`, with a command to invoke the [Storybook `test-runner`](https://storybook.js.org/docs/angular/writing-tests/test-runner). You can read more about this in the [Nx Storybook interaction tests documentation page](/recipes/storybook/storybook-interaction-tests#setup-storybook-interaction-tests). - Whether you want to `generateStories` for the components in your project. If you choose `yes`, a `.stories.ts` file will be generated next to each of your components in your project. You must provide a `name` for the generator to work. By default, this generator will also set up [Storybook interaction tests](https://storybook.js.org/docs/angular/writing-tests/interaction-testing). If you don't want to set up Storybook interaction tests, you can pass the `--interactionTests=false` option, but it's not recommended. There are a number of other options available. Let's take a look at some examples. 
### Examples #### Generate Storybook configuration ```bash nx g @nx/angular:storybook-configuration ui ``` This will generate Storybook configuration for the `ui` project using TypeScript for the Storybook configuration files (the files inside the `.storybook` directory, e.g. `.storybook/main.ts`). #### Ignore certain paths when generating stories ```bash nx g @nx/angular:storybook-configuration ui --generateStories=true --ignorePaths=libs/ui/src/not-stories/**,**/**/src/**/*.other.*,apps/my-app/**/*.something.ts ``` This will generate a Storybook configuration for the `ui` project and generate stories for all components in the `libs/ui/src/lib` directory, except for the ones in the `libs/ui/src/not-stories` directory, and the ones in the `apps/my-app` directory that end with `.something.ts`, and also excluding components whose file names match the pattern `*.other.*`. This is useful if you have a project that contains components that are not meant to be used in isolation, but rather as part of a larger component. By default, Nx will ignore the following paths: ```text *.stories.ts, *.stories.tsx, *.stories.js, *.stories.jsx, *.stories.mdx ``` but you can change this behaviour easily, as explained above. #### Generate Storybook configuration using JavaScript ```bash nx g @nx/angular:storybook-configuration ui --tsConfiguration=false ``` By default, our generator generates TypeScript Storybook configuration files. You can choose to use JavaScript for the Storybook configuration files of your project (the files inside the `.storybook` directory, e.g. `.storybook/main.js`). **Usage:** ```bash nx generate @nx/angular:storybook-configuration [options] ``` **Arguments:** ```bash nx generate @nx/angular:storybook-configuration [options] ``` #### Options | Option | Type | Description | Default | |--------|------|-------------|---------| | `--configureStaticServe` | boolean | Specifies whether to configure a static file server target for serving storybook. 
Helpful for speeding up CI build/test times. | `true` | | `--generateStories` | boolean | Specifies whether to automatically generate `*.stories.ts` files for components declared in this project or not. | `true` | | `--ignorePaths` | array | Paths to ignore when looking for components. | `["*.stories.ts,*.stories.tsx,*.stories.js,*.stories.jsx,*.stories.mdx"]` | | `--interactionTests` | boolean | Set up Storybook interaction tests. | `true` | | `--linter` | string | The tool to use for running lint checks. | `"eslint"` | | `--skipFormat` | boolean | Skip formatting files. | `false` | | `--tsConfiguration` | boolean | Configure your project with TypeScript. Generate main.ts and preview.ts files, instead of main.js and preview.js. | `true` | ## `web-worker` Creates a new, generic web worker definition in the given or default project. ### Examples ###### Simple Usage The basic usage of the `web-worker` generator is defined below. You must provide a name for the web worker and the project to assign it to. ```bash nx g @nx/angular:web-worker myWebWorker --project=myapp ``` **Usage:** ```bash nx generate @nx/angular:web-worker [options] ``` **Arguments:** ```bash nx generate @nx/angular:web-worker [options] ``` #### Options | Option | Type | Description | Default | |--------|------|-------------|---------| | `--project` | string [**required**] | The name of the project. | | | `--path` | string | The path at which to create the worker file, relative to the current workspace. | | | `--skipFormat` | boolean | Skip formatting files. | `false` | | `--snippet` | boolean | Add a worker creation snippet in a sibling file of the same name. 
| `true` | ## Getting Help You can get help for any generator by adding the `--help` flag: ```bash nx generate @nx/angular:<generator> --help ``` --- ## Guides {% index_page_cards path="technologies/angular/guides" /%} --- ## Nx and Angular Versions The latest version of Nx supports the [actively supported versions of Angular (current and LTS versions)](https://angular.dev/reference/releases#actively-supported-versions). Workspaces on any of those versions should use the latest version of Nx to benefit from all the new features and fixes. {% aside type="note" title="Older Nx and Angular versions" %} Support for multiple versions of Angular in the latest version of Nx was added in **v15.7.0**, starting with Angular v14 and v15. If your workspace is on an older version of Angular, or you can't update to the latest version of Nx for some reason, see the next section to determine which version of Nx to use. {% /aside %} ## Nx and Angular version compatibility matrix Below is a reference table that matches versions of Angular to the versions of Nx that are compatible with them. The table shows the version of Angular, the recommended version of Nx to use, and the range of Nx versions that support that version of Angular. The recommended version is usually the latest minor version of Nx in the range, because it includes bug fixes added since the first release in the range. 
| Angular Version | **Nx Version _(recommended)_** | Nx Version _(range)_ | | --------------- | ------------------------------ | ---------------------------------------- | | ~21.2.0 | **latest** | >=22.6.0 <=latest | | ~21.1.0 | **latest** | >=22.4.0 <=latest | | ~21.0.0 | **latest** | >=22.3.0 <=latest | | ~20.3.0 | **latest** | >=21.6.1 <=latest | | ~20.2.0 | **latest** | >=21.5.1 <=latest | | ~20.1.0 | **latest** | >=21.3.0 <=latest | | ~20.0.0 | **latest** | >=21.2.0 <=latest | | ~19.2.0 | **latest** | >=20.5.0 <=latest | | ~19.1.0 | **latest** | >=20.4.0 <=latest | | ~19.0.0 | **latest** | >=20.2.0 <=latest | | ~18.2.0 | **~22.2.0** | >=19.6.0 <22.3.0 | | ~18.1.0 | **~22.2.0** | >=19.5.0 <22.3.0 | | ~18.0.0 | **~22.2.0** | >=19.1.0 <22.3.0 | | ~17.3.0 | **~21.1.0** | >=18.2.0 <21.2.0 | | ~17.2.0 | **~21.1.0** | >=18.1.1 <21.2.0 | | ~17.1.0 | **~21.1.0** | >=17.3.0 <21.2.0 | | ~17.0.0 | **~21.1.0** | >=17.1.0 <21.2.0 | | ~16.2.0 | **~20.1.0** | >=16.7.0 <20.2.0 | | ~16.1.0 | **~20.1.0** | >=16.4.0 <20.2.0 | | ~16.0.0 | **~20.1.0** | >=16.1.0 <20.2.0 | | ~15.2.0 | **~19.0.0** | >=15.8.0 <19.1.0 | | ~15.1.0 | **~19.0.0** | >=15.5.0 <19.1.0 | | ~15.0.0 | **~19.0.0** | >=15.2.0 <=15.4.8 \|\| >=15.7.0 <19.1.0 | | ~14.2.0 | **~17.0.0** | >=14.6.0 <=15.1.1 \|\| >=15.7.0 <17.1.0 | | ~14.1.0 | **~17.0.0** | >=14.5.0 <=14.5.10 \|\| >=15.7.0 <17.1.0 | | ~14.0.0 | **~17.0.0** | >=14.2.1 <=14.4.3 \|\| >=15.7.0 <17.1.0 | | ^13.0.0 | **14.1.9** | >=13.2.0 <=14.1.9 | | ^12.0.0 | **13.1.4** | >=12.3.0 <=13.1.4 | | ^11.0.0 | **12.2.0** | >=11.0.0 <=12.2.0 | | ^10.0.0 | **10.4.15** | >=9.7.0 <=10.4.15 | | ^9.0.0 | **9.6.0** | >=8.12.4 <=9.6.0 | | ^8.0.0 | **8.12.2** | >=8.7.0 <=8.12.2 | Additionally, you can check the supported versions of Node and Typescript for the version of Angular you are using in the [Angular docs](https://angular.dev/reference/versions#actively-supported-versions). 
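As a reading aid for the matrix above, the range check it encodes can be sketched in plain TypeScript. The types, helper, and the two encoded rows below are illustrative only; consult the full table for real upgrade decisions:

```typescript
// Sketch: encode a few rows of the compatibility table and check whether
// a given Nx version (major.minor, simplified) falls in the supported
// range for an Angular line. Not an official Nx utility.
interface CompatRow {
  angular: string; // e.g. '~18.2.0'
  nxMin: [number, number]; // inclusive lower bound [major, minor]
  nxMaxExclusive: [number, number]; // exclusive upper bound [major, minor]
}

// A hypothetical subset of the table above.
const compat: CompatRow[] = [
  { angular: '~18.2.0', nxMin: [19, 6], nxMaxExclusive: [22, 3] },
  { angular: '~17.3.0', nxMin: [18, 2], nxMaxExclusive: [21, 2] },
];

function isNxCompatible(angular: string, nxMajor: number, nxMinor: number): boolean {
  const row = compat.find((r) => r.angular === angular);
  if (!row) return false;
  const atLeast =
    nxMajor > row.nxMin[0] || (nxMajor === row.nxMin[0] && nxMinor >= row.nxMin[1]);
  const below =
    nxMajor < row.nxMaxExclusive[0] ||
    (nxMajor === row.nxMaxExclusive[0] && nxMinor < row.nxMaxExclusive[1]);
  return atLeast && below;
}
```

For example, Nx 20.0 falls inside the `>=19.6.0 <22.3.0` range for Angular ~18.2.0, while Nx 22.3 does not.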
## Nx and Angular Rspack version compatibility matrix Below is a reference table that matches versions of [Angular Rspack](/docs/technologies/angular/angular-rspack/introduction) to the versions of Angular (lower than 20.2.0) and Nx that is compatible with it. {% aside type="note" title="Newer Angular versions" %} The table below only shows the version of Angular Rspack that is compatible with Angular versions lower than 20.2.0. Starting with Angular 20.2.0, the Angular Rspack version to install is aligned with the Nx version, so refer to [the table above](#nx-and-angular-version-compatibility-matrix). {% /aside %} | Angular Version | Angular Rspack Version | Nx Version | | --------------- | ---------------------- | ------------------- | | ~20.1.0 | **~21.2.0** | >= 21.1.0 <= 21.5.0 | | ~20.0.0 | **~21.1.0** | >= 21.1.0 <= 21.5.0 | | ~19.2.0 | **~20.8.0** | >= 20.8.1 <= 21.1.0 | | ~19.2.0 | **~20.7.0** | >= 20.8.1 <= 21.1.0 | | ~19.2.0 | **~20.6.0** | >= 20.6.0 <= 21.1.0 | --- ## Advanced Angular Micro Frontends with Dynamic Module Federation Dynamic Module Federation is a technique that allows an application to determine the location of its remote applications at runtime. It helps to achieve the use case of **"Build once, deploy everywhere"**. "Build once, deploy everywhere" is the concept of being able to create a single build artifact of your application and deploy it to multiple environments such as staging and production. The difficulty in achieving this with a Micro Frontend Architecture using Static Module Federation is that our Remote applications will have a different location (or URL) in each environment. Previously, to account for this, we would have had to specify the deployed location of the Remote applications and rebuild the application for the target environment. Walk through how the concept of "Build once, deploy everywhere" can be easily achieved in a Micro Frontend Architecture that uses Dynamic Module Federation. 
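The runtime lookup at the heart of Dynamic Module Federation can be sketched framework-free. The manifest shape, URLs, and helper below are illustrative assumptions for this sketch, not Nx's actual API:

```typescript
// A deployed manifest maps remote names to environment-specific URLs.
// In a real app this would be fetched once at startup (for example from
// an assets/module-federation.manifest.json file — hypothetical path).
type RemoteManifest = Record<string, string>;

const stagingManifest: RemoteManifest = {
  login: 'https://staging.example.com/login',
  todo: 'https://staging.example.com/todo',
};

// Resolve the remoteEntry URL for a named remote at runtime, so the
// same build artifact works in staging and production unchanged.
function resolveRemoteUrl(manifest: RemoteManifest, remoteName: string): string {
  const url = manifest[remoteName];
  if (!url) {
    throw new Error(`Unknown remote: ${remoteName}`);
  }
  return `${url}/remoteEntry.mjs`;
}
```

Only the manifest differs between environments; the application bundle itself never changes, which is exactly the "build once, deploy everywhere" property.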
## Aim The aim of this guide is three-fold. We want to be able to: - Set up a Micro Frontend with Static Module Federation - Transform an existing Static Module Federation setup to use Dynamic Federation - Generate a new Micro Frontend application that uses Dynamic Federation ## What we'll build To achieve the aims, we will do the following: - Create a Static Federation Micro Frontend Architecture - Change the **Dashboard** application to use Dynamic Federation - Generate a new **Employee Dashboard** application that will use Dynamic Federation - It should use the existing **Login** application. - It should use a new **Todo** application. ## Final code Here's the source code of the final result for this guide. {% github_repository url="https://github.com/Coly010/nx-ng-dyn-fed" /%} ## First steps ### Create an Nx workspace To start with, we need to create a new Nx Workspace and add the Nx Angular plugin. We can do this easily with: {% tabs %} {% tabitem label="npm" %} ```text {% title="npx create-nx-workspace@latest ng-mf --preset=apps" frame="terminal" %} NX Let's create a new workspace [https://nx.dev/getting-started/intro] ✔ Which CI provider would you like to use? · skip ✔ Would you like remote caching to make your build faster? · skip ``` Next run: ```shell cd ng-mf npx nx add @nx/angular ``` {% /tabitem %} {% tabitem label="yarn" %} ```text {% title="yarn create nx-workspace ng-mf --preset=apps" frame="terminal" %} NX Let's create a new workspace [https://nx.dev/getting-started/intro] ✔ Which CI provider would you like to use? · skip ✔ Would you like remote caching to make your build faster? · skip ``` Next run: ```shell cd ng-mf yarn nx add @nx/angular ``` {% /tabitem %} {% tabitem label="pnpm" %} ```text {% title="pnpx create-nx-workspace@latest ng-mf --preset=apps" frame="terminal" %} NX Let's create a new workspace [https://nx.dev/getting-started/intro] ✔ Which CI provider would you like to use? 
· skip ✔ Would you like remote caching to make your build faster? · skip ``` Next run: ```shell cd ng-mf pnpx nx add @nx/angular ``` {% /tabitem %} {% /tabs %} ### Creating our applications We need to generate two applications that support Module Federation. We'll start with the **Admin Dashboard** application which will act as a host application for the Micro-Frontends (_MFEs_): ```shell nx g @nx/angular:host apps/dashboard --prefix=ng-mf ``` {% aside type="note" title="Running nx commands" %} The terminal examples will show `nx` being run as if it is installed globally. If you have not installed Nx globally (not required), you can use your package manager to run the `nx` local binary: - NPM: `npx nx ...` - Yarn: `yarn nx ...` - PNPM: `pnpm nx ...` {% /aside %} The `host` generator will create and modify the files needed to set up the Angular application. Now, let's generate the **Login** application as a remote application that will be consumed by the **Dashboard** host application. ```shell nx g @nx/angular:remote apps/login --prefix=ng-mf --host=dashboard ``` Note how we provided the `--host=dashboard` option. This tells the generator that this remote application will be consumed by the **Dashboard** application. The generator performed the following changes to automatically link these two applications together: - Added the remote to the `apps/dashboard/module-federation.config.ts` file - Added a TypeScript path mapping to the root tsconfig file - Added a new route to the `apps/dashboard/src/app/app.routes.ts` file ## What was generated? Let's take a closer look after generating each application. 
For both applications, the generators did the following: - Created the standard Angular application files - Added a `module-federation.config.ts` file - Added a `webpack.config.ts` and `webpack.prod.config.ts` - Added a `src/bootstrap.ts` file - Moved the code that is normally in `src/main.ts` to `src/bootstrap.ts` - Changed `src/main.ts` to dynamically import `src/bootstrap.ts` _(this is required for the Module Federation to load versions of shared libraries correctly)_ - Updated the `build` target in the `project.json` to use the `@nx/angular:webpack-browser` executor _(this is required to support passing a custom Webpack configuration to the Angular compiler)_ - Updated the `serve` target to use `@nx/angular:dev-server` _(this is required as we first need Webpack to build the application with our custom Webpack configuration)_ The key differences reside within the configuration of the Module Federation Plugin within each application's `module-federation.config.ts`. We can see the following in the **Login** micro frontend configuration: ```ts // apps/login/module-federation.config.ts import { ModuleFederationConfig } from '@nx/module-federation'; const config: ModuleFederationConfig = { name: 'login', exposes: { './Routes': 'apps/login/src/app/remote-entry/entry.routes.ts', }, }; export default config; ``` Taking a look at each property of the configuration in turn: - `name` is the name that Webpack assigns to the remote application. It **must** match the name of the project. - `exposes` is the list of source files that the remote application exposes to consuming shell applications for their own use. 
This config is then used in the `webpack.config.ts` file:

```ts
// apps/login/webpack.config.ts
import { withModuleFederation } from '@nx/module-federation/angular';
import config from './module-federation.config';

export default withModuleFederation(config, { dts: false });
```

We can see the following in the **Dashboard** micro frontend configuration:

```ts
// apps/dashboard/module-federation.config.ts
import { ModuleFederationConfig } from '@nx/module-federation';

const config: ModuleFederationConfig = {
  name: 'dashboard',
  remotes: ['login'],
};

export default config;
```

The key difference to note in the **Dashboard** configuration is the `remotes` array. This is where you list the remote applications you want to consume in your host application. Each entry is a name you can reference in your code — in this case `login` — and Nx will work out where the remote is served.

Now that we have our applications generated, let's move on to building out some functionality for each.

## Adding functionality

We'll start by building the **Login** application, which will consist of a login form and some very basic (and intentionally insecure) authorization logic.

### User library

Let's create a user data-access library that will be shared between the host application and the remote application. It will be used to determine whether there is an authenticated user, as well as to provide the logic for authenticating the user.

```shell
nx g @nx/angular:lib libs/shared/data-access-user
```

This will scaffold a new library for us to use. We also need an Angular service to hold the authentication state:

```shell
nx g @nx/angular:service user-auth --project=data-access-user
```

This will create the `libs/shared/data-access-user/src/lib/user-auth.ts` file.
Change its contents to match:

```ts
// libs/shared/data-access-user/src/lib/user-auth.ts
import { Injectable } from '@angular/core';
import { BehaviorSubject } from 'rxjs';

@Injectable({ providedIn: 'root' })
export class UserAuth {
  private isUserLoggedIn = new BehaviorSubject(false);
  isUserLoggedIn$ = this.isUserLoggedIn.asObservable();

  checkCredentials(username: string, password: string) {
    if (username === 'demo' && password === 'demo') {
      this.isUserLoggedIn.next(true);
    }
  }

  logout() {
    this.isUserLoggedIn.next(false);
  }
}
```

Now, export the service in the library's entry point file:

```ts
// libs/shared/data-access-user/src/index.ts
...
export * from './lib/user-auth';
```

### Login application

Let's set up the `entry.ts` file in the **Login** application so that it renders a login form. We'll import `FormsModule` and inject our `UserAuth` service so that we can sign the user in:

```ts
// apps/login/src/app/remote-entry/entry.ts
import { Component, inject } from '@angular/core';
import { CommonModule } from '@angular/common';
import { FormsModule } from '@angular/forms';
import { UserAuth } from '@ng-mf/data-access-user';

@Component({
  standalone: true,
  imports: [CommonModule, FormsModule],
  selector: 'ng-mf-login-entry',
  template: `
    <div class="login-app">
      <form class="login-form" (ngSubmit)="login()">
        <label>
          Username:
          <input type="text" name="username" [(ngModel)]="username" />
        </label>
        <label>
          Password:
          <input type="password" name="password" [(ngModel)]="password" />
        </label>
        <button type="submit">Login</button>
      </form>
      <div *ngIf="isLoggedIn$ | async">User is logged in!</div>
    </div>
  `,
  styles: [
    `
      .login-app {
        width: 30vw;
        border: 2px dashed black;
        padding: 8px;
        margin: 0 auto;
      }
      .login-form {
        display: flex;
        align-items: center;
        flex-direction: column;
        margin: 0 auto;
        padding: 8px;
      }
      label {
        display: block;
      }
    `,
  ],
})
export class RemoteEntry {
  private userAuth = inject(UserAuth);

  username = '';
  password = '';
  isLoggedIn$ = this.userAuth.isUserLoggedIn$;

  login() {
    this.userAuth.checkCredentials(this.username, this.password);
  }
}
```

{% aside type="note" title="More details" %}
This could be improved with things like error handling, but for the purposes of this tutorial, we'll keep it simple.
{% /aside %}

Now let's serve the application and view it in a browser to check that the form renders correctly.

```shell
nx run login:serve
```

Navigate a browser to `http://localhost:4201` and you should see the login form rendered. If you type in the correct username and password _(demo, demo)_, you can also see that the user gets authenticated. Perfect! Our **Login** application is complete.

### Dashboard application

Now let's update the **Dashboard** application. We'll hide some content if the user is not authenticated and present them with the **Login** application where they can log in. For this to work, the state within `UserAuth` must be shared across both applications. Usually, with Module Federation in Webpack, you have to specify the packages to share between all the applications in your Micro Frontend solution. However, by taking advantage of the Nx project graph, Nx will automatically find and share the dependencies of your applications.

{% aside type="note" title="Single version policy" %}
This helps to enforce a single version policy and reduces the risk of [Micro Frontend Anarchy](https://www.thoughtworks.com/radar/techniques/micro-frontend-anarchy).
{% /aside %}

Start by deleting the `app.html`, `app.css`, and `nx-welcome.ts` files from the **Dashboard** application. They will not be needed for this tutorial.

Next, let's add our logic to the `app.ts` file. Change it to match the following:

```ts
// apps/dashboard/src/app/app.ts
import { CommonModule } from '@angular/common';
import { Component, inject, OnInit } from '@angular/core';
import { Router, RouterModule } from '@angular/router';
import { UserAuth } from '@ng-mf/data-access-user';
import { distinctUntilChanged } from 'rxjs/operators';

@Component({
  standalone: true,
  imports: [CommonModule, RouterModule],
  selector: 'ng-mf-root',
  template: `
    <div class="dashboard-nav">Admin Dashboard</div>
    <div *ngIf="isLoggedIn$ | async; else signIn">
      You are authenticated so you can see this content.
    </div>
    <ng-template #signIn><router-outlet></router-outlet></ng-template>
  `,
})
export class App implements OnInit {
  private router = inject(Router);
  private userAuth = inject(UserAuth);

  isLoggedIn$ = this.userAuth.isUserLoggedIn$;

  ngOnInit() {
    this.isLoggedIn$
      .pipe(distinctUntilChanged())
      .subscribe(async (loggedIn) => {
        // Queue the navigation after initialNavigation blocking is completed
        setTimeout(() => {
          if (!loggedIn) {
            this.router.navigateByUrl('login');
          } else {
            this.router.navigateByUrl('');
          }
        });
      });
  }
}
```

Finally, make sure the application routes are correctly set up:

```ts
// apps/dashboard/src/app/app.routes.ts
import { Route } from '@angular/router';
import { App } from './app';

export const appRoutes: Route[] = [
  {
    path: 'login',
    loadChildren: () => import('login/Routes').then((m) => m.remoteRoutes),
  },
  {
    path: '',
    component: App,
  },
];
```

We can now run both the **Dashboard** and **Login** applications:

```shell
nx serve dashboard --devRemotes=login
```

Navigating to `http://localhost:4200` should show the **Dashboard** application with the **Login** application embedded within it. If you log in, you should see the content change to show that you are authenticated.

This concludes the setup required for a Micro Frontend approach using Static Module Federation.

{% aside type="caution" title="Do not fret!" %}
When serving Module Federation apps locally in dev mode, there'll be an error output to the console: `import.meta cannot be used outside of a module`. You'll see the error originates from the `styles.js` script. It's a known error output and, as far as our testing has shown, it doesn't cause any breakages. It happens because the Angular compiler attaches the `styles.js` file to the `index.html` in a `