AI · 2026-03-21 · 14 min read

A Practical Guide to Automating Your Dev Workflow with Claude Code Skills

Learn how to use Claude Code Skills to automate repetitive development tasks. From creating your first skill to real-world use cases, this guide covers practical know-how you can apply right away.

髙木 晃宏

CEO / Engineer

If you've started using an AI coding assistant, you've probably noticed yourself typing the same instructions over and over. Code review criteria, commit message formats, testing conventions — every project has its own rules, and rewriting prompts from scratch each time is a real drag. Claude Code Skills solves this by letting you templatize recurring instructions and invoke them as slash commands. In this article, we'll walk through the fundamentals of Skills and how to put them to practical use, drawing on our experience adopting them at aduce.

What Are Claude Code Skills?

Claude Code Skills is a feature that lets you define frequently used prompts and workflows as Markdown files and invoke them instantly with a /skill-name slash command. Built into Claude Code — officially released by Anthropic in 2025 — it supports both personal use and project-wide sharing.

The mechanism is straightforward: just drop a Markdown file into the right directory. Personal skills go in ~/.claude/skills/, while project-shared skills live in .claude/skills/. The filename becomes the command name, so a file called review.md is invoked with /review.
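To make the mechanics concrete, here's a minimal shell sketch that creates a project-level skill. The skill name `review` and its description are hypothetical, chosen only for illustration:

```shell
# Create the project-level skills directory
mkdir -p .claude/skills

# The filename becomes the slash command: review.md is invoked as /review
cat > .claude/skills/review.md <<'EOF'
---
description: Read a PR diff and generate review comments
---
# Code Review
Review the staged changes for security, performance, and readability.
EOF
```

After this, typing /review in a Claude Code session inside the project should surface the skill.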

When I first learned about this, my honest reaction was, "Can't I just put this in CLAUDE.md?" But after actually using it in practice, the distinction became clear: CLAUDE.md contains rules that are always loaded — persistent, project-wide instructions — while Skills are on-demand, invoked only when needed. This separation also matters for context window efficiency.

When to Use CLAUDE.md vs. Skills

To make this more concrete: CLAUDE.md is for rules that apply across all work, like "use TypeScript in this project" or "write commit messages in Japanese." Skills, on the other hand, are for task-specific instructions — things like "the steps for writing E2E tests" or "the format for generating API documentation."

In our project, CLAUDE.md declares "use Conventional Commits format" because that applies to every commit. But "generate release notes" happens only a few times a month, so it lives in a skill. This distinction in granularity naturally sharpens as you keep using the system.

A useful rule of thumb when you're unsure: ask yourself, "Would most of my work break without this instruction?" If yes — it belongs in CLAUDE.md. If no — it's a good candidate for a skill.

One more thing worth noting: CLAUDE.md can be placed in subdirectories to apply hierarchical rules. For example, frontend/CLAUDE.md can hold React-specific rules while backend/CLAUDE.md holds API-specific ones. Skills have no such hierarchy — they're flat by design. Understanding this structural difference makes it easier to decide where each instruction belongs.
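As an illustration, a hypothetical monorepo that uses hierarchical CLAUDE.md files alongside a flat skills directory might be laid out like this:

```
project-root/
├── CLAUDE.md            # always-loaded, project-wide rules
├── .claude/
│   └── skills/          # flat list: review.md, test-gen.md, ...
├── frontend/
│   └── CLAUDE.md        # React-specific rules
└── backend/
    └── CLAUDE.md        # API-specific rules
```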

How to Create a Skill

Creating a skill is as simple as writing a single Markdown file. No special CLI commands or build steps required. Here's the basic structure.

First, create a .claude/skills/ directory in your project root. Then place a file — say, test-gen.md — with content like this:

```markdown
---
description: Generate test code for a specified module
---
# Test Code Generation

Please create tests following these rules:
- Testing framework: Vitest
- Always include three patterns: happy path, error cases, and edge cases
- Keep mocks minimal; use real dependencies wherever possible
- Write test names in Japanese, clearly describing what is being verified
```

The description field in the frontmatter helps Claude Code understand what the skill is for. It's optional, but including it makes the skill list much more readable and helps Claude infer relevance — so it's worth the effort.

Early on, our team made the mistake of cramming too many instructions into a single skill, which made the output inconsistent. In retrospect, keeping each skill focused on a single purpose — and combining multiple skills when needed — produces far more stable results. I suspect many teams go through the same trial and error.

Useful Frontmatter Settings

The main field you'll use is description:

```markdown
---
description: Read a PR diff and generate review comments
---
```

Write description as a single sentence covering when and what the skill is for. Claude Code uses this to assess relevance, so specific verbs and objects work better than vague phrases. "Review code" is less effective than "Read a PR diff and generate review comments covering security, performance, and readability."

Tips for Writing Skill Content

Skill bodies are plain Markdown, but a few practices noticeably improve output consistency.

Use bullet points for conditions. Claude Code parses bullet lists more reliably than dense prose. Even complex conditional logic ("if A and B, then do C") is better expressed as individual bullet points.
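For instance, a conditional instruction (the conditions here are hypothetical) reads far more reliably as separate bullets than as one dense sentence:

```markdown
- If the file is a React component AND it accepts props: generate prop-driven rendering tests
- If the file is a plain utility module: generate input/output tests only
- Otherwise: ask which test style to use before proceeding
```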

Include concrete examples. Instead of "output an appropriate error message," include an actual example:

```markdown
Error message examples:
- Good: "Failed to fetch profile for user ID: 123 (status: 404)"
- Bad: "An error occurred"
```

Declare scope explicitly. State upfront what the skill applies to — for example, "This skill applies only to backend API route handlers." This prevents the skill from being misapplied to unrelated files.

Use arguments. Skills accept arguments when invoked as slash commands. For instance, /test-gen src/utils/validation.ts passes the target file directly, and the skill can reference it in its logic. Designing skills to accept arguments dramatically improves reusability. In our team, we also describe the fallback behavior for when no argument is provided, which makes skills friendlier to use.
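As a sketch, a test-gen skill that takes a file path could reference the argument and spell out its fallback behavior like this. (The `$ARGUMENTS` placeholder is the convention Claude Code uses for custom slash commands; treat it as an assumption and confirm against the docs for your version.)

```markdown
---
description: Generate test code for the file passed as an argument
---
# Test Code Generation

Target file: $ARGUMENTS

- If no argument is provided, list candidate files under src/ and ask which one to test.
- Testing framework: Vitest
```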

Real-World Use Cases

Here are some of the ways we actually use Skills in production.

Standardizing Code Reviews

Defining your review criteria as a skill means every team member can request a review against the same consistent standard. It's especially useful for covering easy-to-miss areas like security, performance, and accessibility systematically.

Here's a simplified version of the review skill we use:

```markdown
---
description: Review code changes for security, performance, and maintainability
---
# Code Review

Read the changed files and generate review comments covering the following areas.

## Security
- Any risk of SQL injection, XSS, or CSRF?
- Is user input properly validated?
- Are authentication and authorization checks in place?

## Performance
- Any N+1 query issues?
- Any code triggering unnecessary re-renders?
- Any implementations that could cause memory pressure under large data loads?

## Maintainability
- Does each function or component have a single responsibility?
- Are there magic numbers or unclear variable names?
- Have tests been added or updated?

## Output Format
- Classify each issue as "Critical / Warning / Suggestion"
- Specify the filename and line number for each issue
- Include a code example for suggested fixes where applicable
```

Before this, the depth and focus of reviews varied depending on who was reviewing. One team member might give thorough security feedback but miss performance issues. Since we codified the review criteria as a skill, coverage has been consistently comprehensive regardless of who runs it. That said, the output isn't taken at face value — it's a starting point for human judgment. The natural flow is: AI flags issues, then the team discusses.

Generating Commit Messages

Encoding your project's commit conventions — Conventional Commits format, issue number references, language requirements — into a skill prevents format drift. It's subtle, but it pays off significantly when you're reading through git log months later.

Here's a simplified version of our commit skill:

```markdown
---
description: Generate a Conventional Commits-formatted message from staged changes
---
# Commit Message Generation

Analyze the git diff and generate a commit message following these rules.

## Format
- Format: `<type>: <subject> refs #<issue-number>`
- type: choose from feat / fix / docs / style / refactor / test / chore
- subject: summarize the change in under 50 characters
- Append `refs #number` if a related issue exists

## Type Selection Guide
- feat: new feature
- fix: bug fix
- docs: documentation changes only
- style: formatting, semicolons, etc. (no logic changes)
- refactor: code change that's neither a fix nor a feature
- test: adding or modifying tests
- chore: build process or tooling changes
```

After introducing this skill, commit messages across the team became uniform. Previously, the same change might be described as "修正" ("fix"), "fix", or "バグ直した" ("fixed the bug") depending on who wrote it. Now everyone follows the same format. It sounds like a small thing, until you're scrolling through six months of git history and suddenly it's very readable.

Generating Blog Posts and Documentation

This very article was drafted using a workflow similar to what Skills enable. By templatizing tone, length, and structure rules, you can efficiently generate consistent-quality drafts just by changing the topic.

For example, our tech blog generation skill defines:

```markdown
---
description: Generate a draft for a technical blog article
---
# Blog Article Generation

## Tone and Style
- Use polite, formal Japanese (です/ます form)
- Briefly explain technical terms on first use
- Include occasional phrases that speak directly to the reader

## Structure
- Open with a lead paragraph that surfaces the reader's problem and previews what they'll learn
- Aim for 4–6 H2 sections in a logical flow
- Include a concrete example or code sample in each section
- End with a summary section that suggests next actions

## Quality Criteria
- Be technically accurate (flag speculation explicitly)
- Avoid redundant repetition
- Use code examples that actually work
```

The same logic applies to documentation. The more predictable the format — API docs, design specs, meeting notes — the more you gain from putting it in a skill.

Safety-Checking Database Migrations

This is a niche use case, but it's been invaluable for us. We use a skill to check whether a migration file is safe to run in production before executing it.

```markdown
---
description: Check a database migration file for production safety
---
# Migration Safety Check

Analyze the migration file and verify safety across the following areas.

## Checklist
- If columns are being dropped, verify no application code still references them
- If NOT NULL constraints are added, confirm a default value is set
- Could adding an index to a large table cause a lock?
- Is the change reversible if a rollback is needed?

## Output Format
- If no issues are found, state "Safe to execute" explicitly
- If issues exist, describe the risk and recommend a course of action
```

This skill was born from a painful incident where a column type change caused a prolonged table lock in production. Since then, it's become standard practice to run this check every time a migration is created. There's rarely one objectively right approach, so start small and iterate based on team feedback.

Design Principles for Effective Skills

After building a number of skills, the difference between ones that work well and ones that don't becomes clear. Here are the principles we've distilled from our trial and error.

Principle 1: One Skill, One Responsibility

As noted above, skills that try to do too much produce inconsistent output. A single "do everything" skill — review, fix issues, write tests, generate a commit message — sounds appealing but delivers worse results than separate, focused skills.

Compare:

```markdown
# Bad: all-in-one skill
Review the code, fix any issues, add tests, and commit.
```

```markdown
# Good: separated by responsibility
- /review for code review
- /test-gen for test generation
- /commit for commit message generation
```

Separate skills can still be composed. Running /review, then fixing issues, then /test-gen might feel like more steps — but being able to verify each output along the way is genuinely reassuring.

Principle 2: Specify Output Format Explicitly

Vague instructions like "please review appropriately" produce different formats every time. Defining the output format, required fields, and classification criteria explicitly yields stable, predictable results.

```markdown
## Output Format
Use the following structure:

### [Critical / Warning / Suggestion] filename:line-number
**Issue:** What is wrong
**Why:** Why it is a problem
**Fix:** How to address it (with code example)
```

With a consistent output format, you can paste review results directly into GitHub PR comments without reformatting. Stable structure also makes it easier to integrate into downstream workflows.

Principle 3: Prefer Positive Instructions Over Negative Ones

"Don't use magic numbers" is less effective than "define values as named constants with meaningful names." Just like with human instructions, telling Claude Code what to do is more effective than what not to do.

This was something we discovered through repeated rewrites. Negative instructions seem to leave too much room for interpretation — Claude Code sometimes finds unexpected ways to technically comply while missing the intent.

Principle 4: Refine Incrementally

Trying to perfect a skill from the start tends to create paralysis. What works better: start with the minimum viable instructions, use it in practice, and add conditions as you go.

Our first review skill was just three bullet points. Running it on real PRs quickly surfaced concrete gaps: "It's not checking type safety," "I'd like it to mention error handling." Real usage produces better feedback than upfront planning. Let the skill grow with the team.

Tips for Sharing Skills Across a Team

Skills reach their full potential when shared across a team. Committing .claude/skills/ to your Git repository gives everyone access to the same skill set.

A few operational notes: First, standardize skill naming. Formats like review, test-gen, and doc-gen — verb or verb-plus-object — make the purpose immediately scannable in a list.

Second, since skill changes are tracked in Git, leaving commit messages that explain why a rule was added makes it much easier to understand the history later. Skills sit somewhere between documentation and code, so recording the rationale for changes matters.

Finally, think about the split between personal skills (~/.claude/skills/) and project skills (.claude/skills/). Personal productivity optimizations belong in personal; team quality standards belong in the project repo.

Naming Convention in Practice

Here's the naming convention our team actually uses:

| Skill | Purpose | Location |
| --- | --- | --- |
| review | Code review | Project |
| review-security | Security-focused review | Project |
| test-gen | Test code generation | Project |
| commit | Commit message generation | Project |
| doc-api | API documentation generation | Project |
| migrate-check | Migration safety check | Project |
| blog-draft | Blog article draft generation | Project |
| memo | Personal note organization | Personal |
| explain | Detailed code explanation | Personal |

Using prefixes to group related skills — review-* for review skills, doc-* for documentation — also helps maintain clarity as the list grows.

Rolling Out Skills to a Team

Trying to get everyone on board all at once often creates resistance. Here's the phased approach that worked for us:

Step 1: One person starts. A Claude Code user on the team creates personal skills and tries them out in daily work. At this stage, don't worry about polish — just find something genuinely useful.

Step 2: Share with the team and collect feedback. Once a skill proves its value, move it to the project directory and introduce it to the team. This is when you hear: "Can we add this angle too?" or "Is this instruction still necessary?"

Step 3: Manage skill changes through PRs. Treat skills like code: put them through a review process. Writing why a rule exists in the PR description ensures everyone understands the intent behind the skill, not just what it does.

This approach let skills become shared team knowledge organically — not by mandate, but by demonstrating real value.

Common Pitfalls to Watch Out For

Skills are easy to build, which means there are some easy-to-fall-into traps. Here's what we've encountered.

Skill Sprawl

Because skills are so easy to create, you can end up with dozens before you know it. Unused skills that pile up make the list hard to navigate — you end up not knowing which one to use, defeating the whole purpose.

We address this by doing a quarterly skill audit. Any skill that hasn't been used in the past three months is either archived or merged into another.
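One possible way to mechanize the audit is a quick file-age check. This is a rough sketch that uses modification time as a proxy for "recently touched"; Git history is a more reliable signal where available:

```shell
# Hypothetical quarterly audit helper: flag skill files untouched for ~90 days.
# File mtime is only a proxy; check git log for the authoritative history.
mkdir -p .claude/skills
find .claude/skills -name '*.md' -mtime +90 -print
```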

Over-Reliance

Avoid becoming so dependent on skills that you can't make decisions without them. Skills automate repetition — they don't replace thinking. Even when using a review skill, read the code yourself and apply your own judgment. The AI output is a starting point, not a verdict.

Conflicts with CLAUDE.md

If a skill contradicts something in CLAUDE.md, Claude Code's output can become inconsistent. For example, if CLAUDE.md says "use Jest" but a skill says "use Vitest," you'll get confused results. Get in the habit of checking for consistency with your existing CLAUDE.md whenever you add or modify a skill.

Cross-Version Compatibility

Claude Code updates can subtly change how skills are interpreted. We've had skills that worked fine for months produce different output after an update. The best defense is writing instructions that are explicit and structured rather than relying on implicit assumptions. For important skills, it's worth doing a quick smoke test after major updates.

Combining Skills with Hooks for Even More Automation

Claude Code also has a feature called Hooks — separate from Skills — that automatically runs shell commands in response to events like file saves or pre-commit. Combining Hooks with Skills lets you build more sophisticated automation workflows.

For example: a pre-commit Hook that automatically runs lint and tests, paired with a Skill that generates the commit message. Hooks control when things run; Skills define what Claude does. They complement each other naturally.

Hooks are configured in .claude/settings.json:

{ "hooks": { "PreToolUse": [ { "matcher": "Bash", "command": "echo 'Pre-execution check'" } ] } }

Hook event types include PreToolUse (before a tool runs), PostToolUse (after), and Notification (on notifications). You can use these to automatically run lint after file edits, send test results to Slack, and so on.
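As a sketch of the lint-after-edit idea, a PostToolUse entry in the same simplified shape might look like the following. The `Edit|Write` matcher string and the eslint command are illustrative assumptions; check the hooks documentation for the exact schema your Claude Code version expects:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "command": "npx eslint --fix ."
      }
    ]
  }
}
```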

Here's how Skills and Hooks compare:

| Aspect | Skills | Hooks |
| --- | --- | --- |
| Trigger | Manually invoked by the user | Automatically fired by events |
| Format | Markdown (natural language prompts) | JSON + shell commands |
| Primary use | Standardizing and templating instructions | Quality gates and automated checks |
| Flexibility | High (free-form natural language) | Limited (shell command scope) |

That said, introducing a large number of Hooks and Skills at the same time makes it hard to track what's running when. A practical path: get your Skills in order first, then add Hooks once you've identified specific automation opportunities.

Conclusion: Start Small, Build Up Your Developer Experience

Claude Code Skills is a way to raise the quality and efficiency of AI coding assistance in a systematic, reproducible way. The low barrier to entry (one Markdown file to start), combined with easy team sharing via Git, makes it practical to adopt in real development environments.

To recap what we covered:

  • Basic setup: Drop a Markdown file in .claude/skills/ and it becomes a slash command
  • CLAUDE.md vs. Skills: Always-on rules go in CLAUDE.md; task-specific instructions go in Skills
  • Design principles: One skill, one responsibility; explicit output formats; positive instructions; incremental refinement
  • Team operations: Consistent naming, Git-managed review and history, phased rollout
  • Pitfalls: Prune skill sprawl quarterly, avoid over-reliance, keep Skills and CLAUDE.md in sync
  • Going further: Combine with Hooks to pair manual standardization with automated quality gates

You don't need a big rollout plan. Start by picking one prompt you find yourself rewriting all the time and turning it into a skill. That small step is what builds toward a better team-wide development workflow.

Our first skill was fewer than ten lines. Today it's an indispensable part of how we work. Don't aim for perfection — the best time to create a skill is the moment you catch yourself thinking, "I've written this prompt before."

At aduce, we support development teams looking to leverage Claude Code and other cutting-edge AI technologies to build AI-driven workflows and advance their DX initiatives. If that sounds relevant to you, feel free to reach out via aduce's contact page.