BACKEND · 2026-03-17 · 13 min read

What Are Claude Code Skills? How Custom Commands Can Dramatically Boost Your Development Efficiency


A practical guide to creating custom commands with Claude Code Skills, complete with real-world examples. Learn how to set up Skills and supercharge your development workflow.

髙木 晃宏

CEO / Engineer

Have you ever caught yourself giving Claude the exact same instructions over and over again? Claude Code Skills is designed to solve exactly that problem — it lets you define those repetitive instructions as custom commands, transforming your development workflow in the process.

In this article, I'll walk through everything from the fundamentals of Claude Code Skills to hands-on setup steps you can apply to real projects, along with key considerations for a smooth rollout — all grounded in my own experience using it day-to-day.

What Are Claude Code Skills and How Do They Work?

Claude Code Skills is a feature that lets you register reusable prompt templates as slash commands for Claude Code, Anthropic's coding agent. Instead of typing out the same lengthy instructions, you can invoke project-specific operations with short commands like /commit or /review-pr.

Under the hood, a Skill is just a Markdown file placed inside the .claude/skills directory. The filename becomes the command name, and the file body contains your instructions to Claude Code written in plain Markdown. A frontmatter section holds metadata, and the body holds the prompt itself — a clean, simple structure.

Honestly, I nearly missed this feature entirely at first. But once I started using it, I couldn't go back. The ability to standardize code review practices and commit message conventions as shared Skills has been particularly valuable for team development.

Tackling the "Tribal Knowledge" Problem with Skills

Every development team has unwritten rules — knowledge that lives in people's heads rather than documentation. "Include the issue number in commit messages." "Mirror the source file's directory structure in test files." "Attach a screenshot to every PR description." These rules get passed along verbally or in chat, and every time a new person joins, someone has to explain them all over again.

With Skills, you can codify that tribal knowledge as commands and store them directly in the repository. From day one, new team members can follow team conventions without needing a manual walkthrough. On my team, the time spent on onboarding dropped noticeably once we had this in place.

Understanding the Skills File Structure

Let's take a closer look at how a Skills file is structured. The frontmatter supports the following metadata fields:

```markdown
---
name: skill-name
description: A concise description of what this Skill does
---
```

`name` is what appears as the slash command. If omitted, the filename is used instead. `description` shows up in the Skills list and helps other team members quickly understand what the Skill is for.

The body contains your instructions to Claude Code in free-form Markdown. Numbered lists are the most common way to lay out steps, but bullet points and plain prose work too. The key is clarity — instructions that Claude Code can interpret without ambiguity.

In practice, I've found that vague Skills produce inconsistent results. "Please refactor this appropriately" leads to unpredictable behavior. Something like "If a function exceeds 30 lines, separate its responsibilities" gives you stable, reproducible outcomes.

How to Create a Custom Command

Let's walk through actually creating a Skill. Start by creating the .claude/skills directory in your project root, then add a Markdown file inside it.

```markdown
---
name: test-and-fix
description: Run tests and automatically fix any failures
---
1. Run the test suite.
2. If any tests fail, analyze the root cause.
3. Propose a fix and, after confirmation, apply the changes.
```

That's all it takes — /test-and-fix is now a usable command. If you're not sure what granularity to aim for, a good starting heuristic is: if you've given the same instruction three or more times in a week, it's a candidate for a Skill.

You can also accept arguments using the $ARGUMENTS placeholder, which lets you call /my-skill some-args to pass dynamic values. Looking back, I wasted time creating near-duplicate Skills before I discovered how to leverage arguments properly.
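As a minimal sketch, here's a hypothetical Skill using the placeholder (the name and instructions are illustrative, not from a real project):

```markdown
---
name: gen-docs
description: Add doc comments to a given file
---
Please add doc comments to "$ARGUMENTS", following the
documentation style used elsewhere in the project.
```

Calling /gen-docs src/utils/date.ts substitutes the path wherever $ARGUMENTS appears in the body.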

Building Your First Skill Step by Step

Let's create a more practical example: a Skill that automates all the setup steps needed when starting a new feature.

First, create the directory:

```shell
mkdir -p .claude/skills
```

Then create .claude/skills/start-feature.md:

```markdown
---
name: start-feature
description: Scaffold the files needed to implement a new feature
---
Starting implementation of the "$ARGUMENTS" feature. Please follow these steps:

1. Create a feature branch with `git switch -c feature/$ARGUMENTS`.
2. Create the necessary component files under `src/components/$ARGUMENTS/`.
3. Create corresponding test files under `__tests__/components/$ARGUMENTS/`.
4. Add any required type definition files under `src/types/` as needed.

Note: Reference existing components and follow the project's coding conventions.
```

Now typing /start-feature user-profile will handle everything from branch creation to file scaffolding in one shot.

When I first rolled this out on my team, the first piece of feedback I got was: "I no longer have to look up the branch naming convention every time." It sounds minor, but reducing that kind of cognitive overhead adds up — developers can spend more time on what actually matters: designing and implementing logic.

Designing Flexible Skills with Arguments

Let's dig a bit deeper into how to use $ARGUMENTS. While arguments are passed as a single string, you can structure your Skill's instructions to effectively handle multiple pieces of information.

```markdown
---
name: add-api
description: Scaffold an API endpoint
---
Please create an API endpoint based on the following specification:

Target: $ARGUMENTS

- Parse the endpoint path, HTTP method, and description from the input above.
- Create a Route Handler following Next.js App Router conventions under `app/api/`.
- Define request and response type definitions.
- Include input validation.
- Create a corresponding API test file.
```

With this setup, calling /add-api GET /users Fetch user list lets Claude Code interpret the argument and generate the appropriate files. Since Claude Code's natural language understanding handles the parsing, you don't need to enforce a rigid format. That said, documenting the expected argument format inside the Skill body will give you more consistent results.

Real-World Skills That Deliver Results

Here are some Skills I've found especially impactful in day-to-day work.

Automated commit conventions: Define your project's commit message format in a Skill and every team member maintains a consistent commit history automatically. If you've ever struggled to get Conventional Commits adopted consistently, this one will resonate.

Standardized PR reviews: Encoding your review criteria as a Skill prevents gaps in security checks and performance reviews. Individual results will vary, but on my team the improvement in review consistency has been unmistakable.

Efficient test generation: A Skill that generates test code matching the project's existing patterns — just point it at a file. This directly improved our test coverage.

A Concrete Example: The Commit Skill

Let's look at a commit convention Skill more closely. Here's a simplified version of what my team actually uses:

```markdown
---
name: commit
description: Create a commit following project conventions
---
Please create a commit following these rules.

## Commit Message Format
- Format: `<type>(<scope>): <subject>`
- type: one of feat, fix, docs, style, refactor, test, chore
- scope: the module being changed (optional)
- subject: a summary of the change (Japanese OK, max 50 characters)

## Steps
1. Review staged changes with `git diff --staged`.
2. Select the most appropriate type for the changes.
3. Draft a commit message following the format above.
4. Present the message to the user and wait for approval before committing.

## Notes
- Do not bundle multiple concerns into a single commit.
- If the changeset is large, suggest splitting into multiple commits.
```

Before this Skill, our commit history was a mix of "feat" and "feature," Japanese and English messages jumbled together. After introducing it, the history became far more scannable and git log searches became genuinely useful. It's a small change, but it meaningfully improves the daily development experience.

Raising the Bar on Code Quality with a PR Review Skill

A PR review Skill is one of the most effective ways to bring consistency to your team's code quality — particularly for reducing over-reliance on specific individuals for thorough reviews.

```markdown
---
name: review-pr
description: Review a PR against project standards
---
Please review the pull request changes from the following perspectives.

## Security
- Any SQL injection, XSS, or CSRF vulnerabilities?
- Is authentication and authorization handled correctly?
- Are secrets or credentials hardcoded anywhere?

## Performance
- Any N+1 query issues?
- Any implementation that causes unnecessary re-renders?
- Any memory concerns when processing large datasets?

## Maintainability
- Do names accurately convey intent?
- Does each function or component have a single responsibility?
- Are tests added or updated?

## Output Format
- Classify each issue as "Critical," "Warning," or "Info."
- Provide a specific, actionable suggestion for each finding.
```

By making review criteria explicit, even less experienced team members can perform solid reviews. Of course, a Skills-based review is a complement to human review, not a replacement. Offloading the mechanical checks to the Skill frees human reviewers to focus on business logic validity and architectural decisions.

Boosting Speed and Coverage with a Test Generation Skill

We all know we should write tests — yet it's easy to put off. A test generation Skill dramatically lowers that barrier.

```markdown
---
name: gen-test
description: Generate tests for a specified file
---
Please generate test code for "$ARGUMENTS".

## Testing Approach
- Follow the existing test directory structure and naming conventions.
- Create test cases covering three categories: happy path, error cases, and edge cases.
- Keep mocking to a minimum; prefer tests that reflect real behavior.
- Write test names that clearly convey what is being tested, under what conditions, and what the expected outcome is.

## Verification
- After creating the test file, run the tests and confirm they all pass.
- If any tests fail, analyze the cause and suggest fixes to either the test code or the implementation.
```

The key detail here is the instruction to follow existing test patterns. Every project has its own testing conventions, and since Claude Code can reference files already in the repository, this single instruction ensures generated tests are consistent with the rest of the codebase.

On my project, test coverage improved by roughly 20 percentage points in the three months after introducing this Skill. Lowering the cost of writing tests helped foster a team culture of "let's add a test while we're at it."

A Practical Guide to Running Skills as a Team

Scaling from personal use to team-wide adoption requires a bit of planning. Here's what I learned through trial and error.

Establish a Naming Convention

As your library of Skills grows, consistent naming becomes critical. My team settled on these rules:

  • Verb-noun format: Names like gen-test, review-pr, and start-feature make the purpose immediately obvious.
  • Standardized abbreviations: Agree on abbreviations upfront — gen for generate, chk for check, and so on.
  • Category prefixes when needed: Use prefixes like db-migrate or db-seed to group related Skills.

If you skip naming conventions and just add Skills ad hoc, you end up with near-duplicates and nobody knows which one to use. Setting rules early prevents this.

Build a Continuous Improvement Cycle

A Skill isn't done the moment you create it. Real usage will reveal that "this instruction doesn't produce what I wanted" or "I should add one more step here."

My team handles Skill improvements through pull requests. If someone reports that a Skill's output was off, we open a PR to update the prompt, and the team reviews it together. A nice side effect: prompt engineering knowledge gets distributed across the team.

We also do a monthly "Skill audit" to review which Skills are actually being used and consider consolidating or deleting unused ones. It sounds tedious, but keeping your Skill library lean turns out to be more important than you'd think.

Global Skills vs. Project Skills

In addition to project-scoped Skills in .claude/skills/, Claude Code supports user-level global Skills stored in ~/.claude/skills/. These are available across all projects.

Global Skills are great for generic commands that don't depend on any particular project — general Git operations, documentation helpers, and similar utilities. Project-specific conventions and stack-specific workflows belong in the project directory and should be committed to the repository.

Keeping this distinction clear simplifies management considerably.
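As a sketch of the two locations described above (paths are created under a scratch directory so this is safe to run anywhere; in real use they would be your actual project root and home directory):

```shell
# Scratch directory standing in for the real filesystem
SCRATCH=$(mktemp -d)

# Project-scoped Skill: lives in the repo, committed and shared with the team
mkdir -p "$SCRATCH/my-project/.claude/skills"
printf '%s\n' '---' 'name: review-pr' '---' > "$SCRATCH/my-project/.claude/skills/review-pr.md"

# User-level global Skill: available in every project on this machine
mkdir -p "$SCRATCH/home/.claude/skills"
printf '%s\n' '---' 'name: gen-docs' '---' > "$SCRATCH/home/.claude/skills/gen-docs.md"

# Show both Skill libraries side by side
ls "$SCRATCH/my-project/.claude/skills" "$SCRATCH/home/.claude/skills"
```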

Best Practices and What to Watch Out For

Skills are powerful, but there are a few things worth keeping in mind as you adopt them.

Granularity: Skills defined at too broad a scope become too generic to be useful and get ignored. Too narrow and management becomes a burden. Aim for "one clear purpose per Skill." This is probably the most common point of confusion for teams getting started.

Version control integration: Committing .claude/skills to your repository lets the whole team share Skills automatically. The ability to review Skill changes through pull requests is a significant bonus.

CLAUDE.md vs. Skills: Rules that should always be in effect belong in CLAUDE.md. Operations you want to trigger explicitly at specific moments belong in Skills. That distinction became clearer to me through actual use.

Drawing the Line Between CLAUDE.md and Skills

This distinction is trickier in practice than it sounds. Here's the mental model I've settled on.

CLAUDE.md is for things Claude Code should "always" be aware of — the project's directory structure, the tech stack, the basic coding style. These aren't operations to invoke; they're foundational knowledge that underpins everything.

Skills are for step-by-step procedures tied to a specific action — committing, reviewing, generating tests, running pre-deploy checks. These have a clear start and end.

When in doubt, ask yourself: "Does this need to be loaded on every conversation?" If the answer is no, it belongs in Skills. Since CLAUDE.md content is always included in context, letting it grow too large risks burying important information.
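As an illustrative (entirely hypothetical) split, the same project knowledge might be divided like this:

```markdown
<!-- CLAUDE.md: always in context, foundational knowledge -->
## Tech Stack
Next.js (App Router), TypeScript, Jest

## Coding Style
Functions under 30 lines; named exports preferred.

<!-- .claude/skills/deploy-check.md: invoked explicitly with /deploy-check -->
---
name: deploy-check
description: Run pre-deploy checks
---
1. Run lint, type checks, and the full test suite.
2. Report any failures with suggested fixes.
```

The stack and style facts apply to every conversation, so they live in CLAUDE.md; the deploy check has a clear start and end, so it becomes a Skill.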

Tips for Writing Effective Prompts

The quality of your prompt directly determines the quality of your output. Here are the lessons I've picked up through ongoing use.

Break steps down explicitly: Instead of "analyze and fix it," write "1. Analyze the issue. 2. Report your findings. 3. Propose a fix." This makes it easier for Claude Code to produce appropriate output at each stage.

Specify the expected output format: Tell Claude Code what format you want — how you want review results structured, which testing framework to use for generated tests. Concrete expectations produce stable results.

Use positive instructions over negatives: "Keep functions under 30 lines" works better than "don't write long functions." Telling Claude Code what to do gives it a concrete target.

Provide context: Explaining why a rule exists helps Claude Code make better judgment calls in edge cases. For example: "Keep commits small (reason: reduces review overhead and makes reverts easier)" gives Claude Code enough context to decide how to split changes on its own.
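To make these tips concrete, here's an illustrative before-and-after for the same Skill body (the wording is hypothetical, not from a real project):

```markdown
<!-- Before: vague, output varies run to run -->
Analyze the failing tests and fix them appropriately.

<!-- After: explicit steps, a concrete target, and context -->
1. Run the test suite and list each failure.
2. Report the root cause of each failure in one sentence.
3. Propose a fix, keeping functions under 30 lines
   (reason: short functions are easier to review and revert).
4. Wait for approval before applying changes.
```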

Start Small and Scale Up

Don't try to build out your entire Skills library at once. Here's the phased approach my team took:

Phase 1: One person creates 2–3 Skills and uses them daily for a week or two. This builds intuition for what should be a Skill and what's better left as an ad hoc instruction.

Phase 2: Skill-ify the highest-frequency team tasks (commits, test generation, etc.), commit them to the repository, and gather feedback from the team. Iterate on the prompts.

Phase 3: Document the Skills library and how to use it in your team's development guide. Integrate it into the onboarding flow for new members. Establish a regular audit cadence.

Flooding the team with Skills all at once won't work if nobody adopts them. Starting small, building confidence through early wins, and expanding gradually is the path that actually sticks.

Common Questions and Troubleshooting

Here are issues my team has actually run into, along with how to address them.

A Skill Isn't Being Recognized

If /skill-name isn't found, first check where the file is located. Skill files need to live under `.claude/skills/`. Subdirectories are supported, but the directory structure is reflected in the command name.

Also confirm the file extension is .md — only Markdown files are recognized.
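To see this in action, the sketch below (scratch directory, hypothetical file names) creates one valid `.md` Skill and one file with the wrong extension, then lists only what would be recognized:

```shell
# Throwaway Skills directory with one valid and one invalid file
DIR="$(mktemp -d)/.claude/skills"
mkdir -p "$DIR"
echo 'valid Skill body'         > "$DIR/commit.md"
echo 'wrong extension, ignored' > "$DIR/commit.txt"

# Only Markdown files are picked up as Skills
ls "$DIR"/*.md
```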

Skill Output Doesn't Match Expectations

Ambiguous instructions give Claude Code too much latitude to interpret, leading to unexpected results. Review each step of your instructions and ask whether they're specific enough.

"Write appropriate tests" leaves too much room for interpretation. "Use Jest with describe/it blocks, and apply the Arrange-Act-Assert pattern in each test case" gives Claude Code a precise target.

You don't need to get the prompt perfect on the first try. Iterating based on real output is the practical approach.

Chaining Multiple Skills Together

You might want to run Skills in sequence — generate tests, then commit based on the results. There's currently no mechanism to call one Skill from within another, but you can combine multiple phases into a single Skill to get the same effect.

That said, be careful not to over-stuff a single Skill — readability and maintainability suffer. Keep combined Skills focused on closely related tasks.
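For example, a combined Skill covering the generate-then-commit flow might look like this (a hypothetical sketch, not from a real project):

```markdown
---
name: test-and-commit
description: Generate tests for a file, verify them, then commit
---
1. Generate tests for "$ARGUMENTS", following existing test patterns.
2. Run the test suite; if anything fails, analyze and fix it.
3. Once all tests pass, draft a commit message following project
   conventions and wait for approval before committing.
```

The two phases belong together because the commit depends directly on the test results; unrelated tasks should stay in separate Skills.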

Wrapping Up: Taking Team Productivity to the Next Level with Skills

Claude Code Skills transforms AI coding assistance from a personal productivity trick into a shared team asset. By systematizing your recurring instructions as Skills, you raise both the quality and velocity of your entire development process.

As I've shown in this article, getting started with Skills doesn't require a big upfront investment. Pick one task you do repeatedly, define it as a Skill, and try it out. Once you feel the difference, your library will naturally grow from there.

The most important thing is to treat Skills as a living system — not something you build once and forget. Incorporate team feedback, refine your prompts, retire Skills that are no longer useful, and keep your library aligned with how your team actually works today. That continuous improvement cycle is what makes Skills genuinely powerful over time.

At aduce Inc., we're constantly exploring ways to accelerate development with AI tools. If you'd like to discuss adopting tools like Claude Code or optimizing your development processes, feel free to reach out through aduce's contact page.