AI · 2026-04-07 · 6 min read

Supercharging AI Agents with antigravity-awesome-skills: A Practical Guide


Learn how to install and leverage antigravity-awesome-skills, a library of over 1,370 reusable skills. A practical guide to transforming AI coding tools like Claude Code and Gemini CLI into specialized agents.

髙木 晃宏

Representative / Engineer


Have you ever felt frustrated repeating the same instructions every time you use an AI coding assistant, wishing it could just act more precisely? I use Claude Code and Gemini CLI daily, and I've often hit the limits of prompt reusability. For example, having to re-explain things like "this project uses TypeScript strict mode" or "tests should be written with Vitest" every time I start a new conversation creates more friction than you might expect. The solution to this problem is "antigravity-awesome-skills," an open-source library with over 31,000 stars on GitHub.

What Is antigravity-awesome-skills?

antigravity-awesome-skills is an installable GitHub library containing over 1,370 reusable SKILL.md playbooks for AI coding assistants. It supports a wide range of tools including Claude Code, Cursor, Codex CLI, Gemini CLI, Google Antigravity, and GitHub Copilot.

A "skill" here refers to a structured instruction set that an agent loads when performing a specific task. For example, you can define database migration procedures, security review criteria, or test code conventions as SKILL.md files reflecting your team's best practices. The agent dynamically loads the relevant skills for each task, transforming from a generic text generator into a domain-specific expert.

Compared to conventional prompt templates, the differences are significant. First, the level of granularity is different. While typical prompt templates tend to be broad ("create a React component"), skills are structured down to the level of "when creating React components, separate custom hooks, export Props types, and include a Storybook file." Additionally, prerequisites and constraints are explicitly built in. Since the intended environment and prohibited actions are clearly stated, the risk of the agent misinterpreting context drops significantly.

What's especially noteworthy is composability. Skills are designed not only to work individually but to be combined with multiple other skills. For example, loading a "Next.js route design skill," a "Supabase integration skill," and a "security review skill" simultaneously lets you generate a security-conscious route implementation with data fetching in a single request. With prompt templates, you'd need to cram multiple perspectives into one prompt, and the agent's attention tends to scatter as the prompt grows longer. Since skills are each structured as independent files, combining them doesn't easily dilute instruction precision.

When I first learned about this system, I honestly thought it was "just another prompt collection." But after actually implementing it, I realized the granularity and structural sophistication set it well apart from typical prompt templates.

Installation and Basic Usage

Installation is completed with a single npx command. Here's a summary of installation commands for the major tools.

| Target Tool | Installation Command |
|---|---|
| All tools (default) | `npx antigravity-awesome-skills` |
| Claude Code | `npx antigravity-awesome-skills --claude` |
| Cursor | `npx antigravity-awesome-skills --cursor` |
| Gemini CLI / Antigravity | `npx antigravity-awesome-skills --gemini` |
| Codex CLI | `npx antigravity-awesome-skills --codex` |

Specifying a tool-specific flag places skills directly in that tool's skill directory. For example, Google Antigravity uses the .agent/skills/ directory, while Claude Code uses .claude/skills/.

When I tried it with Claude Code, running npx antigravity-awesome-skills --claude in the project root directory deployed skill files under .claude/skills/ within seconds. After launching Claude Code, the agent automatically recognized the skills and started producing output aligned with the skill content for relevant tasks. No special configuration file editing or API integration is required.

After installation, starting with role-based "bundles" is the most efficient approach. Pre-assembled recommended skill sets are available for frontend developers, backend developers, DevOps engineers, and other roles. I found that starting with bundles and adding individual skills as needed was the smoothest path forward.

Post-Installation Verification and Directory Structure

You can verify that the installation completed correctly with the following commands.

```bash
# For Claude Code, check the skill file listing
ls -la .claude/skills/

# For Gemini CLI / Antigravity
ls -la .agent/skills/

# For Cursor
ls -la .cursor/skills/
```

For example, when installed for Claude Code, the directory structure looks like this:

```
.claude/
└── skills/
    ├── development/
    │   ├── code-review.md
    │   ├── refactoring.md
    │   └── typescript-strict.md
    ├── testing/
    │   ├── unit-testing.md
    │   └── integration-testing.md
    ├── security/
    │   ├── owasp-review.md
    │   └── dependency-audit.md
    └── infrastructure/
        ├── terraform-best-practices.md
        └── docker-optimization.md
```

The category-based subdirectory organization makes it easy to delete unnecessary skills individually or keep only specific categories. If you want to install only specific categories, use bundle specifications.

```bash
# Install only security-related skills
npx antigravity-awesome-skills --claude --bundle security

# Frontend development bundle
npx antigravity-awesome-skills --claude --bundle frontend
```

.gitignore and Skill File Management Strategy

One surprisingly tricky decision during setup is whether to include skill files in your Git repository. In practice, committing team-shared skills to the repository while adding personal preference skills to .gitignore for local-only management works smoothly. Specifically, skills related to coding conventions and architecture go into the repository, while skills for editor assistance or commit message style remain personal. Setting this boundary early on prevents a lot of operational headaches later.

Here's an example .gitignore entry:

```
# Exclude personal skills, commit only team-shared skills
.claude/skills/personal/
.cursor/skills/personal/
```

Skill Categories and Use Cases

The included skills are organized into six domains. Here's a breakdown of each area with practical use cases from real-world scenarios.

| Category | Key Content | Example Use Case |
|---|---|---|
| Development | Coding conventions, architecture design, refactoring | Maintaining code quality during new feature implementation |
| Testing | Test strategy, coverage standards, test code generation | Test automation in CI/CD pipelines |
| Security | Vulnerability scanning, security review, dependency auditing | Pre-release security checks |
| Infrastructure | IaC, deployment, monitoring setup | Cloud environment setup and operations |
| Product | Requirements definition, user stories, specification writing | Documentation during the product planning phase |
| Marketing | Content generation, SEO optimization, analytics | Creating technical blogs and documentation |

Detailed Examples for Each Category

In the Development category, loading a "Next.js App Router route design skill" produces code that consistently follows policies for Server Component vs. Client Component usage, data fetch placement, and nested layout structure. Many teams have experienced the problem of mid-project joiners implementing patterns that differ from existing code, and sharing design policies through skills ensures consistent code output via the agent.

For example, when asking the agent to implement an API route, the output follows a consistent structure when skills are applied:

```typescript
// Example agent output with skills applied
// app/api/users/route.ts
import { NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { createClient } from '@/lib/supabase/server'

// Define validation schema at the top (following skill conventions)
const createUserSchema = z.object({
  name: z.string().min(1).max(100),
  email: z.string().email(),
})

export async function POST(request: NextRequest) {
  try {
    const body = await request.json()
    const validated = createUserSchema.parse(body)

    const supabase = await createClient()
    const { data, error } = await supabase
      .from('users')
      .insert(validated)
      .select()
      .single()

    if (error) {
      return NextResponse.json({ error: error.message }, { status: 400 })
    }
    return NextResponse.json(data, { status: 201 })
  } catch (e) {
    if (e instanceof z.ZodError) {
      return NextResponse.json({ error: e.errors }, { status: 422 })
    }
    return NextResponse.json({ error: 'Internal Server Error' }, { status: 500 })
  }
}
```

Without skills, validation presence, error handling style, and response format tend to vary each time. The consistency brought by skills is a major advantage.

In the Testing category, you can define not just test naming conventions and assertion styles, but an overall strategy for "which level of testing to write at which granularity." My team has codified a policy where unit tests cover only the logic layer, API routes use integration tests, and UI gets E2E coverage for critical paths only. Simply asking the agent to "write tests" generates test code that follows this policy, which is genuinely practical.

Specifically, instructions to the agent become much simpler:

```
# Before skills: need to specify test prerequisites every time
# "Use Vitest, group with describe,
#  use the arrange-act-assert pattern inside it,
#  use vi.mock for mocking..."

# After skills: a single line suffices
# "Write tests for the getUserById function"
```

Here's an example of test code generated following the skills:

```typescript
// Test code generated following skills
import { describe, it, expect, vi } from 'vitest'
import { getUserById } from '@/lib/users'

describe('getUserById', () => {
  it('returns user information when given an existing user ID', async () => {
    // Arrange
    const userId = 'user-123'

    // Act
    const result = await getUserById(userId)

    // Assert
    expect(result).toBeDefined()
    expect(result.id).toBe(userId)
  })

  it('returns null when given a non-existent user ID', async () => {
    // Arrange
    const userId = 'non-existent-id'

    // Act
    const result = await getUserById(userId)

    // Assert
    expect(result).toBeNull()
  })
})
```

The Security category is particularly impressive, with OWASP Top 10-aligned review perspectives structured systematically. Check items are organized by perspective—SQL injection, XSS, CSRF, authentication and authorization flaws—and requesting a review with the skill specified produces a comprehensive examination across all these areas. Compared to when we maintained checklists manually, gaps in coverage have decreased significantly. That said, project-specific security requirements naturally aren't covered, so using skills as a baseline while customizing them is the most realistic approach.

The Infrastructure category includes skills summarizing best practices for Terraform and CloudFormation. Even when you simply say "create an S3 bucket," security fundamentals like public access blocking, versioning, and encryption are automatically included—which helps prevent accidental oversights.
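As a rough illustration of what such a skill might steer the agent toward, here is a hedged Terraform sketch of an S3 bucket with those safety fundamentals applied. The resource names are hypothetical, and the block assumes AWS provider v4 or later, where these settings live in separate resources; treat it as one plausible shape of the output, not the skill's exact template.

```hcl
# Illustrative sketch (hypothetical names, assumes AWS provider >= 4.x)
resource "aws_s3_bucket" "assets" {
  bucket = "example-team-assets"
}

# Block every form of public access
resource "aws_s3_bucket_public_access_block" "assets" {
  bucket                  = aws_s3_bucket.assets.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

# Keep object history so accidental overwrites are recoverable
resource "aws_s3_bucket_versioning" "assets" {
  bucket = aws_s3_bucket.assets.id
  versioning_configuration {
    status = "Enabled"
  }
}

# Encrypt objects at rest by default
resource "aws_s3_bucket_server_side_encryption_configuration" "assets" {
  bucket = aws_s3_bucket.assets.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms"
    }
  }
}
```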

The Product category is also highly practical. Requirements definitions and user stories often suffer from mismatches in granularity and expression between engineers and product managers. Skills in this category produce acceptance criteria and story point estimation guidelines in a standardized format, reducing communication costs between teams. I've been using the agent to draft initial requirements and then discussing them with stakeholders, and the quality of discussions has noticeably improved compared to starting from scratch.

Creating Custom Skills and Team Deployment

Beyond existing skills, the ability to create your own team-specific skills is another major strength of this library. Here's the basic structure of a SKILL.md:

```markdown
# Skill Name

## Overview
Description of the problem this skill solves and when to apply it

## Prerequisites
- Required environment and tools
- Settings that must be completed beforehand

## Execution Steps
1. Specific step 1
2. Specific step 2
3. Specific step 3

## Constraints and Precautions
- Rules and prohibitions to follow
- Approaches for edge cases

## Deliverables
- Expected output format and quality standards
```

The key is to avoid vague instructions and clearly specify concrete constraints and deliverable standards. Instead of "write clean code," use quantitative criteria like "comply with ESLint rule set X, with functions no longer than 30 lines."
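For instance, those quantitative criteria can be pinned down in an ESLint configuration that the skill points at. This is a minimal sketch using the built-in `max-lines-per-function` and `max-params` rules; the `extends` entry stands in for whatever rule set your project actually uses.

```json
{
  "extends": ["eslint:recommended"],
  "rules": {
    "max-lines-per-function": ["error", { "max": 30 }],
    "max-params": ["error", 3]
  }
}
```

With a file like this in the repository, the skill can simply say "code must pass the project ESLint config" instead of restating each limit in prose.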

Custom Skill Example

Here's an example of a custom skill we actually use on our team. For a project using Supabase, we created a database migration skill. The overview states "safely modify schemas while preserving existing data," and the execution steps specify "1. Create a new migration file," "2. Modify existing tables with ALTER TABLE (DROP TABLE is prohibited)," "3. Apply only the target file with psql," "4. Verify results with schema diff." The constraints section explicitly states "never execute supabase db reset" and "always present a backup procedure when changing data types."

Here's what the actual file looks like:

````markdown
# Supabase Migration

## Overview
Safely modify schemas while preserving existing data.
Define careful procedures assuming application to production environments.

## Prerequisites
- Supabase CLI is installed
- psql command is available
- Local Supabase instance is running

## Execution Steps
1. Create a migration file
   ```bash
   supabase migration new <migration_name>
   ```
2. Write ALTER TABLE statements in the generated SQL file
3. Apply only the target file with psql
   ```bash
   psql postgresql://postgres:postgres@127.0.0.1:54322/postgres \
     -f supabase/migrations/<filename>.sql
   ```
4. Add an application record to the history table
   ```bash
   psql postgresql://postgres:postgres@127.0.0.1:54322/postgres \
     -c "INSERT INTO supabase_migrations.schema_migrations (version, name) VALUES ('<version>', '<name>');"
   ```
5. Verify schema diff
   ```bash
   supabase db diff
   ```

## Constraints and Precautions
- Never execute supabase db reset (all data will be lost)
- Do not use supabase migration up (unintended migrations may be applied)
- DROP TABLE / DROP COLUMN is prohibited in principle. If necessary, always present a backup procedure
- Data type changes must follow a 3-step process: add new column → migrate data → remove old column

## Deliverables
- Applied migration SQL file
- Verification results of schema diff before and after application
````
Since introducing this skill, the agent no longer suggests dangerous commands like `db reset` when proposing migrations. While it might seem like a small thing, being able to preemptively prevent operations that could impact production is extremely reassuring in real-world operations.

Common Pitfalls When Creating Skills

Looking back, I initially made skill definitions too abstract, which led to detours when expected results weren't achieved. It was only after rewriting them with a focus on specificity and reproducibility that team members started getting consistent results. Here are some common failure patterns.

First, **instructions that are too broad**. "Write good code" gives the agent too much room for interpretation, so break it down into specific rules like "functions should have a single responsibility" and "no more than 3 parameters." Next, **omitting prerequisites**. When skills fail to mention things humans take for granted, like Node.js version or package manager, the agent may operate under different assumptions. Finally, **unclear deliverable standards**. Instead of just "write tests," specify expectations like "aim for 80% or higher coverage and always include boundary value tests."

As a recommended workflow, manage created skills in a Git repository and share them across the team. Establishing a skill review culture, similar to code review, can prevent knowledge silos while raising the overall AI utilization level across the organization.

Distinguishing Skills from CLAUDE.md and Rule Files

When considering antigravity-awesome-skills, a natural question is how it relates to the rule files that each tool provides natively, such as Claude Code's CLAUDE.md or Cursor's .cursorrules. In short, these are not mutually exclusive but **complementary**. CLAUDE.md and rule files are well-suited for defining global policies that apply constantly across the entire project.
Skills, on the other hand, represent localized expertise that's dynamically loaded only when performing specific tasks. For example, project-wide rules like "write commit messages in Conventional Commits format" or "respond in Japanese" go in CLAUDE.md, while task-specific deep knowledge like "specific database migration procedures" or "profiling steps for performance tuning" gets extracted as skills. Cramming everything into CLAUDE.md bloats the file and scatters the agent's attention, but extracting skills means the right knowledge loads only when needed.

Here's a table summarizing the distinction:

| Content | Placement | Rationale |
|---|---|---|
| Commit message format | CLAUDE.md | Applies to all tasks universally |
| Response language (Japanese) | CLAUDE.md | Applies to all tasks universally |
| DB migration procedures | SKILL.md | Needed only during schema changes |
| Security review criteria | SKILL.md | Needed only during reviews |
| Performance optimization steps | SKILL.md | Needed only during tuning |
| Coding conventions (ESLint settings) | CLAUDE.md | Common to all code generation |
| RESTful API design conventions | SKILL.md | Needed only during API implementation |

Being mindful of this organization lets you prevent CLAUDE.md bloat while still providing deep knowledge to the agent when needed, achieving a well-balanced operation.

Key Considerations for Adoption

While it's a useful library, there are several points worth noting before adoption.

**Selective installation is essential.** Installing all 1,370+ skills causes the agent to load an enormous amount of context, which can actually decrease response accuracy. It's best to install only skills that match your project's tech stack. I initially installed everything, which resulted in rules for irrelevant languages and frameworks mixing in, and I quickly switched to selective installation by bundle.
You can audit unnecessary skills with the following commands:

```bash
# Check total number of installed skills
find .claude/skills -name "*.md" | wc -l

# Check skill count by category
for dir in .claude/skills/*/; do
  echo "$(basename $dir): $(find $dir -name '*.md' | wc -l) skills"
done

# Identify skills for languages not used in the project
grep -rl "Python" .claude/skills/ --include="*.md"
grep -rl "Ruby" .claude/skills/ --include="*.md"
```

Regular update checks are also important. As an open-source project, skills are continuously being added and improved. Using outdated skills could lead to divergence from current best practices. I recommend checking the repository for updates about once a month and updating skills as needed.

Updating is as simple as re-running the npx command. Existing custom skills won't be overwritten, so you can run it with confidence.

```bash
# Update skills (custom skills are preserved)
npx antigravity-awesome-skills@latest --claude
```

Establishing team-wide operational rules is also essential. If each team member freely adds and modifies custom skills, skills may end up containing contradictory instructions. Making pull requests mandatory for skill additions and changes, with at least one review, prevents this kind of confusion.
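On GitHub, one lightweight way to enforce that review requirement is a CODEOWNERS entry covering the skill directories, so any pull request touching them automatically requests a review. The team handle below is hypothetical.

```
# .github/CODEOWNERS — request review from skill maintainers
# for any change under the skill directories (team name is illustrative)
/.claude/skills/  @your-org/ai-skill-maintainers
/.cursor/skills/  @your-org/ai-skill-maintainers
```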

Measuring skill effectiveness is worth keeping in mind too. Rather than just being satisfied with implementation, I recommend periodically reflecting on how agent output quality has actually changed. Our team compares code review comment counts and CI failure rates before and after skill adoption. When the effects are visible as numbers, it motivates team members to invest in skill operations, and it provides evidence for judging which skills are working and which have become perfunctory.
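As a minimal sketch of the kind of before/after comparison described here, the snippet below averages review comment counts per pull request. The numbers are made-up illustration data, not our real measurements; in practice you would export them from your review tool.

```shell
# Illustration data only: review comments per PR (hypothetical values)
before="12 9 15 12"   # before skill adoption
after="5 4 7 4"       # after skill adoption

# Average a space-separated list of counts
avg() {
  echo "$1" | tr ' ' '\n' | awk '{ s += $1; n++ } END { printf "%.1f", s / n }'
}

echo "avg review comments before: $(avg "$before")"
echo "avg review comments after:  $(avg "$after")"
```

The same shape works for CI failure rates: collect the counts per period, average them, and track the delta over time.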

Conclusion and Future Outlook

antigravity-awesome-skills is a library that evolves AI coding assistants from "instruction-waiting tools" into "specialized agents embodying team knowledge." The combination of over 1,370 battle-tested skills ready for immediate use and the ability to accumulate team-specific knowledge through custom skills sets it apart from other prompt management approaches.

As AI agent adoption becomes standardized, skill design and operations are becoming a new layer of the software development process. The tacit knowledge that teams have traditionally documented as coding conventions and review guidelines is now being shared with agents in the form of skills. When you think about it, the ability to share knowledge with both humans and AI through the same mechanism is a very natural evolution.

While it may not be a perfect fit for every team, starting with bundles is well worth trying. Start small, verify the effects, and gradually cultivate custom skills. That accumulation will steadily elevate your entire team's AI utilization capabilities.

At aduce, we support development process optimization and DX advancement leveraging AI agents. If you have questions about designing AI skills or adoption strategies tailored to your organization, feel free to contact us.