FRONTEND · 2026-03-25 · 10 min read

Vitest: A Complete Guide to the Next-Generation Testing Framework


A comprehensive guide to Vitest, the fast Vite-based testing framework. Covers everything from the basics to practical usage, including Jest-compatible APIs, configuration, mocking, and coverage reporting.

髙木 晃宏

Representative / Engineer

If you've ever started writing tests for a frontend project and gotten stuck just choosing a framework, you're not alone. Jest has been the go-to choice for years, but in Vite-based projects, maintaining separate transpile configurations quickly becomes a burden. That's the problem Vitest was built to solve — a testing framework developed by the official Vite team. This article walks through Vitest from the ground up, covering core concepts through practical usage.

What Is Vitest — and How Does It Differ from Jest?

Vitest is a testing framework built on top of the Vite ecosystem. Since its debut in 2022, it has grown rapidly in adoption, earning high satisfaction scores in the testing tools category of the State of JS 2024 survey.

Its defining feature is shared configuration with Vite. With Jest, you needed separate transpile setup via babel.config.js or ts-jest. With Vitest, your vite.config.ts configuration applies directly to your tests as well — no more maintaining aliases and plugins in two places. Having lived through that migration firsthand, the relief is greater than you'd expect.

The API is designed with Jest compatibility in mind, so familiar functions like describe, it, and expect work the same way. That said, it's not a perfect drop-in replacement — there are some differences in matchers and configuration worth knowing about. Native ESM support means import/export syntax works without any special handling, which is a real advantage for modern projects.

Performance Differences

The speed gap between Jest and Vitest becomes more pronounced as your project scales. Jest transpiles each test file individually, so startup time grows as the number of TypeScript and JSX files increases. Vitest leverages Vite's on-demand transpilation and module caching, which means the initial run may be comparable, but re-run speed in watch mode is where Vitest clearly pulls ahead.

On one project migrated from Jest to Vitest, total test suite execution time dropped by around 40%. Projects that rely heavily on path aliases like @/components benefit doubly — not only does moduleNameMapper configuration disappear, but transpilation overhead decreases as well.
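As a concrete illustration of that shared configuration — the `@` → `src` mapping below is an assumed convention, not taken from any specific project — a single alias entry in vite.config.ts covers both the application build and the tests:

```typescript
// vite.config.ts — sketch: an alias declared once here applies to both the
// application build and Vitest, so no separate moduleNameMapper is needed.
// The '@' → src mapping is an assumption for illustration.
import path from 'node:path'
import { defineConfig } from 'vitest/config'

export default defineConfig({
  resolve: {
    alias: {
      '@': path.resolve(__dirname, 'src'),
    },
  },
  test: {
    environment: 'jsdom',
  },
})
```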

Specific API Differences from Jest

"Jest-compatible" doesn't mean identical. Here are the key differences to know before migrating:

  • jest.fn() → vi.fn(): The namespace shifts from jest to vi. Same goes for jest.mock → vi.mock.
  • done callback: Older Jest tests sometimes used a done callback for async tests. Vitest encourages async/await instead.
  • Snapshot serializers: Jest's snapshotSerializers config works in Vitest too, but it belongs under the test property in vite.config.ts.
  • jest.setTimeout: In Vitest, use vi.setConfig({ testTimeout: 10000 }) globally or pass { timeout: 10000 } as a per-test option.

Most test code will work with minimal changes, but knowing these differences in advance makes migration troubleshooting much smoother.
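As a quick sketch of what the rename looks like in practice (the `./api` module path and the timeout value are illustrative, not from a real project):

```typescript
// Jest:
//   jest.mock('./api')
//   const spy = jest.fn()
//   jest.setTimeout(10000)

// Vitest equivalent:
import { vi } from 'vitest'

vi.mock('./api')
const spy = vi.fn()
vi.setConfig({ testTimeout: 10000 })
```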

Setup and Basic Usage

Getting started is straightforward. For any Vite project, you just install the package and add a few lines of configuration.

```shell
npm install -D vitest
```

Add the test configuration to vite.config.ts:

```typescript
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    globals: true,
    environment: 'jsdom',
  },
})
```

Setting globals: true lets you use describe and it without explicit imports. Early on I missed this setting and wrote import { describe, it, expect } from 'vitest' everywhere — enabling globals cuts down on boilerplate significantly.

If you use globals: true, also configure TypeScript types so your editor provides proper autocompletion. Add this to tsconfig.json:

```json
{
  "compilerOptions": {
    "types": ["vitest/globals"]
  }
}
```

Without this, you'll see red underlines under describe and expect in your editor. Tests still run fine, but the friction adds up — it's worth configuring this upfront.

Here's what a basic test file looks like:

```typescript
import { formatPrice } from './formatPrice'

describe('formatPrice', () => {
  it('returns a number formatted with comma separators', () => {
    expect(formatPrice(1000)).toBe('1,000')
  })

  it('returns "0" for zero', () => {
    expect(formatPrice(0)).toBe('0')
  })

  it('applies comma formatting to negative values', () => {
    expect(formatPrice(-1500)).toBe('-1,500')
  })
})
```
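The article never shows formatPrice itself. A minimal implementation the tests above could target — purely an assumption for illustration — might look like:

```typescript
// Hypothetical formatPrice implementation matching the assertions above.
// toLocaleString('en-US') inserts comma thousands separators and keeps the
// minus sign for negative values.
export function formatPrice(value: number): string {
  return value.toLocaleString('en-US')
}
```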

By default, Vitest picks up files named *.test.ts or *.spec.ts. For file organization, you can co-locate tests next to source files or group them under a __tests__ directory — pick whichever fits your project's scale and preferences.

Choosing a Test Environment — jsdom, happy-dom, or node

Vitest lets you specify the runtime environment via the environment option. The main choices are:

  • node: The default. Best for utility functions and API logic that doesn't touch the DOM.
  • jsdom: The most widely used option, emulating browser DOM in Node.js. Standard for React component rendering tests.
  • happy-dom: A lighter-weight alternative to jsdom gaining traction for its speed. Faster in practice, though some Web APIs are only partially implemented or missing entirely.
```typescript
export default defineConfig({
  test: {
    environment: 'happy-dom',
  },
})
```

On one project with heavy DOM-based component testing, switching from jsdom to happy-dom cut test execution time by about 20%. However, I did run into cases where APIs like window.getComputedStyle didn't behave as expected, so choose based on what your tests actually need.

Vitest also lets you override the environment per file using a magic comment at the top:

```typescript
// @vitest-environment jsdom
import { render } from '@testing-library/react'

describe('Button', () => {
  it('fires a click event', () => {
    // runs in jsdom environment
  })
})
```

Being able to run utility tests in node and component tests in jsdom on a file-by-file basis is a genuinely useful level of flexibility.

Mocking and Spies — Controlling External Dependencies

Mocking is unavoidable in testing. Vitest provides Jest-equivalent functionality through the vi object.

```typescript
import { vi } from 'vitest'

// Mock a function
const mockFn = vi.fn()
mockFn.mockReturnValue(42)

// Mock an entire module
vi.mock('./api', () => ({
  fetchUser: vi.fn().mockResolvedValue({ name: 'Taro' }),
}))

// Mock timers
vi.useFakeTimers()
vi.advanceTimersByTime(1000)
```

vi.spyOn lets you monitor method calls on existing objects while preserving the original implementation. When testing API fetches, you'll face a choice between injecting a mock function with vi.fn() or intercepting at the network layer with MSW (Mock Service Worker). After going back and forth on this, I've settled on vi.mock for unit tests and MSW for integration tests. There's no universal right answer — it comes down to test granularity.
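As a sketch of the spy side — the logger object here is illustrative, not from the article:

```typescript
import { vi } from 'vitest'

const logger = {
  warn: (msg: string) => console.warn(msg),
}

// spyOn wraps the existing method: calls are recorded,
// but the original implementation still runs.
const spy = vi.spyOn(logger, 'warn')

logger.warn('disk space low')

expect(spy).toHaveBeenCalledWith('disk space low')

// Restore the original method when done
spy.mockRestore()
```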

One important nuance: vi.mock is hoisted to the top of the file. If you need to vary mock behavior across tests, use vi.hoisted or override the return value with mockReturnValue inside each test.

Practical Use of vi.hoisted

vi.hoisted exists precisely to let you safely reference variables inside vi.mock callbacks. Since vi.mock is hoisted to the top of the file, its factory can't access variables declared in normal module scope — vi.hoisted runs its initializer early enough that the mock factory can use the result.

```typescript
const { mockFetchUser } = vi.hoisted(() => ({
  mockFetchUser: vi.fn(),
}))

vi.mock('./api', () => ({
  fetchUser: mockFetchUser,
}))

describe('UserProfile', () => {
  it('displays the username', async () => {
    mockFetchUser.mockResolvedValue({ name: 'Taro' })
    // test body
  })

  it('shows fallback UI on error', async () => {
    mockFetchUser.mockRejectedValue(new Error('Network Error'))
    // test body
  })
})
```

This pattern lets you vary the mock return value per test case, making it easy to cover both happy and error paths in a single file. Before discovering vi.hoisted, I used to cram all the logic into the vi.mock callback — this approach is much cleaner.

Mocking Dates and Handling Timezones

Tests involving date/time logic are a common source of unexpected failures. vi.useFakeTimers lets you freeze the system clock, and combined with setSystemTime, you can reproduce specific moments in time.

```typescript
describe('isBusinessHour', () => {
  beforeEach(() => {
    vi.useFakeTimers()
  })

  afterEach(() => {
    vi.useRealTimers()
  })

  it('returns true on weekdays between 9am and 5pm', () => {
    // Monday, April 7, 2025 at 10:00 JST
    vi.setSystemTime(new Date('2025-04-07T01:00:00Z'))
    expect(isBusinessHour()).toBe(true)
  })

  it('returns false on weekends', () => {
    // Saturday, April 5, 2025 at 10:00 JST
    vi.setSystemTime(new Date('2025-04-05T01:00:00Z'))
    expect(isBusinessHour()).toBe(false)
  })
})
```

Forgetting vi.useRealTimers() in afterEach will leak the fake timer into subsequent tests. Tests that pass individually but fail when run as a suite are often caused by exactly this kind of missing cleanup.
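For completeness, a hypothetical isBusinessHour that the fake-timer tests above would exercise — weekdays 9:00–17:00 in JST, computed without depending on the host timezone. This implementation is an assumption, not from the article:

```typescript
// Hypothetical implementation assumed by the tests above.
// Shift the instant by +9 hours, then read UTC fields to get JST wall-clock time.
export function isBusinessHour(now: Date = new Date()): boolean {
  const jst = new Date(now.getTime() + 9 * 60 * 60 * 1000)
  const day = jst.getUTCDay()   // 0 = Sunday … 6 = Saturday
  const hour = jst.getUTCHours()
  return day >= 1 && day <= 5 && hour >= 9 && hour < 17
}
```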

Testing React Components

Vitest pairs well with Testing Library for React component tests. Here's the basic setup using @testing-library/react and a jsdom environment.

```shell
npm install -D @testing-library/react @testing-library/jest-dom @testing-library/user-event
```

Create a setup file to register the custom matchers:

```typescript
// vitest.setup.ts
import '@testing-library/jest-dom/vitest'
```

```typescript
// vite.config.ts
export default defineConfig({
  test: {
    globals: true,
    environment: 'jsdom',
    setupFiles: ['./vitest.setup.ts'],
  },
})
```

Here's an example component test:

```tsx
import { render, screen } from '@testing-library/react'
import userEvent from '@testing-library/user-event'
import { Counter } from './Counter'

describe('Counter', () => {
  it('displays an initial count of 0', () => {
    render(<Counter />)
    expect(screen.getByText('Count: 0')).toBeInTheDocument()
  })

  it('increments the count on button click', async () => {
    const user = userEvent.setup()
    render(<Counter />)
    await user.click(screen.getByRole('button', { name: 'Increment' }))
    expect(screen.getByText('Count: 1')).toBeInTheDocument()
  })

  it('disables the button when the max is reached', async () => {
    const user = userEvent.setup()
    render(<Counter max={2} />)
    await user.click(screen.getByRole('button', { name: 'Increment' }))
    await user.click(screen.getByRole('button', { name: 'Increment' }))
    expect(screen.getByRole('button', { name: 'Increment' })).toBeDisabled()
  })
})
```

Importing from @testing-library/jest-dom/vitest (rather than @testing-library/jest-dom) activates DOM-specific matchers like toBeInTheDocument and toBeDisabled with more accurate type definitions in Vitest.

For simulating user interactions, prefer userEvent over fireEvent. userEvent more closely replicates real user behavior — focus transitions, keyboard events, and so on — which means your tests stay closer to how things actually work in a browser.

Coverage Reporting and CI Integration

Beyond writing tests, measuring coverage gives you an objective view of how much of your code is exercised. Vitest supports two coverage providers: @vitest/coverage-v8 and @vitest/coverage-istanbul.

```shell
npm install -D @vitest/coverage-v8
```
```typescript
// vite.config.ts
export default defineConfig({
  test: {
    coverage: {
      provider: 'v8',
      reporter: ['text', 'html', 'json-summary'],
      include: ['src/**/*.ts'],
      exclude: ['src/**/*.test.ts', 'src/**/*.d.ts'],
    },
  },
})
```

Running npx vitest run --coverage prints a summary to the terminal and generates an HTML report in the coverage directory. In CI environments like GitHub Actions, the json-summary format is easy to pipe into coverage badge workflows.

For CI runs, use vitest run (no watch mode) and consider --reporter=junit to output JUnit-format reports that most CI platforms can parse. One mistake I made early on was forgetting to configure exclude, which caused test files themselves to be counted toward coverage — leading to inflated and inaccurate numbers. It's a small detail worth getting right from the start.

A Practical GitHub Actions Workflow

Here's a concrete CI setup automating test runs and coverage reporting with GitHub Actions:

```yaml
name: Test

on:
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: 'npm'
      - run: npm ci
      - run: npx vitest run --coverage --reporter=junit --outputFile=test-results.xml
      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: coverage-report
          path: coverage/
```

The if: always() condition ensures the coverage report is saved as an artifact even when tests fail — which is exactly when you most want to inspect it.

Setting Coverage Thresholds

For team projects, you can configure CI to fail if coverage drops below a minimum threshold:

```typescript
export default defineConfig({
  test: {
    coverage: {
      provider: 'v8',
      thresholds: {
        statements: 80,
        branches: 75,
        functions: 80,
        lines: 80,
      },
    },
  },
})
```

That said, coverage numbers are a means, not an end. The goal isn't to hit 80% by writing assertion-free tests — it's to identify untested code paths. Treat coverage as a diagnostic tool, not a scorecard.

Watch Mode and DX — Features That Improve the Development Experience

One of Vitest's strongest qualities as a developer experience tool is the speed of its default watch mode. By leveraging Vite's module graph, it re-runs only the tests related to changed files — keeping the feedback loop tight even in large projects.

The browser-based UI dashboard launched with vitest --ui is also worth trying. It gives you a visual overview of test status, execution results, and coverage, with the ability to filter and run individual tests. If terminal output feels hard to scan, this is a natural alternative.

Inline snapshots (toMatchInlineSnapshot) are another useful feature carried over from Jest. Unlike regular snapshots stored in separate files, the expected value is written directly in the test code — so you can see what the test expects without leaving the file.

```typescript
it('formats user information', () => {
  expect(formatUser({ name: 'Taro', age: 30 })).toMatchInlineSnapshot(`
    "Taro (age 30)"
  `)
})
```

Filtering and Controlling Test Execution

Needing to run only a subset of tests is a common scenario during development. Vitest provides several ways to do this.

The simplest is filtering by file name pattern from the command line:

```shell
# Run a specific file
npx vitest src/utils/format.test.ts

# Filter by test name
npx vitest --testNamePattern="comma separator"
```

To temporarily focus on a specific test within a file, use it.only or describe.only:

```typescript
describe('formatPrice', () => {
  it.only('only this test runs', () => {
    expect(formatPrice(1000)).toBe('1,000')
  })

  it('this test is skipped', () => {
    expect(formatPrice(0)).toBe('0')
  })
})
```

To skip tests whose implementation isn't ready yet, use it.skip or it.todo. The it.todo form lets you declare a test name without a body — a useful way to document cases you intend to write:

```typescript
it.todo('handles values with decimal points')
it.todo('prepends currency symbol')
```

I use it.todo as a note-taking mechanism during code review — when I notice a test case that should exist, I add a todo entry so it doesn't get forgotten.

VSCode Extension Integration

Vitest provides an official VSCode extension that integrates with the Test Explorer. You can run and debug tests directly from the sidebar without switching to the terminal.

The inline run buttons next to individual test cases in an open test file are especially convenient. And because it supports breakpoint-based step debugging, tracking down a failing test becomes much more efficient than adding console logs.

Parallel Execution and Performance Tuning

Vitest runs test files in parallel by default, which reduces overall suite execution time. Understanding how this works matters, because parallel execution can also introduce flakiness in certain scenarios.

Test files run in parallel with each other, but test cases within a single file run serially by default. To parallelize tests within a file, use concurrent:

```typescript
describe('async operations', () => {
  it.concurrent('fetches user list from API', async () => {
    const users = await fetchUsers()
    expect(users).toHaveLength(3)
  })

  it.concurrent('fetches product list from API', async () => {
    const products = await fetchProducts()
    expect(products).toHaveLength(5)
  })
})
```

However, tests sharing global state — database operations, filesystem reads and writes — can collide when run concurrently. For those cases, use the --sequence option or describe.sequential to enforce serial execution:

```typescript
describe.sequential('database CRUD operations', () => {
  it('creates a record', async () => { /* ... */ })
  it('reads the created record', async () => { /* ... */ })
  it('updates the record', async () => { /* ... */ })
  it('deletes the record', async () => { /* ... */ })
})
```

Order-dependent tests are a design smell, but integration tests sometimes make them unavoidable. It's reassuring that Vitest handles these cases flexibly.

Summary: When to Choose Vitest

Vitest is the most natural testing choice for Vite-based projects, with three clear strengths: native ESM support, fast watch mode, and a Jest-compatible API. Migrating from Jest is usually gradual and manageable given how small the API surface differences are.

That said, if you're not using Vite, or if you rely heavily on Jest-specific ecosystem features — custom matchers, specialized transformers — you'll need to weigh the migration cost against the benefits.

As a rough decision guide:

  • Good cases for adopting Vitest: Vite-based projects, new projects starting from scratch, codebases actively migrating to ESM
  • Cases to migrate carefully: Large webpack-centric projects, heavy dependence on Jest custom transformers, test suites in the thousands of files where a bulk migration carries real risk

If you're struggling with test environment setup or strategy, feel free to reach out via the aduce contact page. We're happy to help you find the right testing approach for your project's specific situation.