BACKEND · 2026-03-28 · 8 min read

How to Build Your Own AI Secretary with LINE Messaging API


A practical guide to building a personal AI secretary by combining the LINE Messaging API with ChatGPT. Learn how small business owners can automate schedule management and information organization step by step.

髙木 晃宏

CEO / Engineer


Running a business means making decisions constantly. Responding to emails, coordinating schedules, gathering information — many owners find these tasks eating into the time they should be spending on the decisions that actually matter. What you may not realize is that you can automate a good chunk of this work by embedding an "AI secretary" directly into LINE, the messaging app you already use every day. This article walks you through how to build your own personal AI secretary using the LINE Messaging API.

Why LINE Is the Right Home for Your AI Secretary

When people hear "AI chatbot," they usually picture a dedicated app or web service. But having built one myself, what struck me most was how powerful it is to integrate AI into a tool you already use without thinking.

LINE has over 97 million monthly active users in Japan, making it one of the most familiar communication tools for business owners. Because there's no need to open a separate app, you can fire off a question or instruction to your AI the moment a thought crosses your mind.

I considered integrating with Slack or email at first, but the reality is that LINE usage is overwhelmingly dominant among small and medium-sized businesses in Japan. Customer and employee communication often happens on LINE, so having an AI secretary on the same platform makes it much easier to keep information in one place.

The Mental Overhead of "Opening Yet Another App"

When I talk to business owners, I'm consistently surprised by how many people know about ChatGPT or Claude but say something like, "It's just too much hassle to open a browser and log in." That's not laziness — it's a completely rational response when you're already drowning in daily decisions.

With LINE, you can finish sending a message to a client and immediately ask your AI secretary, "Can you bullet-point the key takeaways from that meeting?" The cost of switching tools is nearly zero, which means you naturally consult the AI more often — and the cumulative productivity gains become significant.

Once I started using it, I noticed the AI secretary was most useful during taxi rides and the brief moments waiting before a dinner meeting. Even when I couldn't open a laptop, I could pull out my phone and issue instructions through LINE. That kind of frictionless access is hard to replicate on any other platform.

What You Can Actually Delegate to an AI Secretary

If you're not sure what an "AI secretary" actually does for you in practice, here are some examples from my own daily use:

  • Drafting business emails: Type "Write an email to Tanaka-san at ◯◯ company asking to reschedule our meeting next week — keep it polite" and you get a ready-to-use draft in seconds. An email that would take me ten minutes to write from scratch is done in thirty seconds.
  • Summarizing meeting notes: Paste your rushed post-meeting scribbles or a long voice-to-text transcript into LINE and ask "Summarize the key points in five bullets" — and you have a draft you can share right away.
  • Research and information gathering: Requests like "Give me five examples of DX adoption in the restaurant industry" or "Break down the eligibility requirements for this subsidy" can be handled entirely from LINE while you're on the move.
  • Proofreading and rewriting: Send a passage from a proposal or presentation and say "Make this sound more formal" or "Cut down the jargon" — tone adjustments for any audience, instantly.
  • Sounding board for ideas: Pitch a new business idea and ask "Point out three weaknesses in this concept" for objective feedback. Having someone you can consult without hesitation, at midnight or at dawn, is genuinely reassuring as a business owner.

Each of these tasks might seem small on its own, but when you semi-automate ten or twenty of them per day, you recover a meaningful amount of time each month. One business owner I spoke with said, "I can feel that I'm saving three to four hours a week."

The Technical Building Blocks

The AI secretary system has three main components.

LINE Messaging API (The Interface)

You create a Messaging API channel in the LINE Developers Console and configure a Webhook. This component receives messages from the user and sends the AI's response back to LINE. The free plan for official accounts supports up to 200 push messages per month — more than enough for personal use.

The setup process is straightforward: log into LINE Developers Console, create a new provider, and add a Messaging API channel inside it. One thing to watch out for: you'll receive two credentials — a channel secret and a channel access token. These are critical for the security checks described later, so keep them safe.

Also, don't forget to hit the "Verify" button when setting your Webhook URL to confirm the connection. It's a surprisingly common stumbling block — I once spent thirty minutes hunting for a bug that turned out to be a missing trailing slash in the URL.

Backend Server (The Bridge)

A web server built in Node.js, Python, or similar acts as the go-between for LINE and the AI. I originally planned to use AWS Lambda, but in hindsight Google Cloud Functions was easier to set up. Either works fine functionally. What matters is having a stable environment where you can expose a public HTTPS Webhook URL.

The biggest advantage of serverless is that you pay nothing when there are no requests. A personal AI secretary doesn't receive constant traffic, so serverless is far more economical than renting a server that runs 24/7. AWS Lambda, Google Cloud Functions, and Vercel Functions all let you expose an HTTPS endpoint with minimal setup.

One serverless-specific behavior to be aware of is "cold starts" — after a period of inactivity, the function goes dormant, and the next request triggers a startup delay of a few seconds. In practice, this means your first message of the morning might respond a bit slower, but it's rarely a real problem. If it bothers you, add a periodic warm-up request to keep the function active.

LLM API (The Brain)

This is where you call a large language model API — OpenAI's ChatGPT API, Anthropic's Claude API, and so on — to generate responses. By defining a role in the system prompt, such as "You are a business owner's personal secretary, skilled at schedule management, meeting summarization, and drafting business emails," you transform a general-purpose AI into a dedicated assistant.

Tips for Designing Your System Prompt

The system prompt is what defines your AI secretary's personality and capabilities. Getting this right makes a huge difference in day-to-day usability. Here's what I've landed on after a lot of trial and error.

First, define the role narrowly. Rather than "an AI that can do anything," specify something like "a secretary responsible for business document creation, information organization, and schedule management." Narrowing the scope dramatically improves response quality.

Second, specify the tone. Instructing it to "reply in polite Japanese, concisely, using bullet points where possible" produces responses that are easy to read on a smartphone screen. Long prose paragraphs are hard to parse on a small display.

Third, embed domain-specific knowledge. Include context like "Our company is in manufacturing; our main clients are ◯◯ and △△" or "The approval flow for quotes is: department head → executive → CEO." With this context built in, responses become far more actionable.
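To make the three tips concrete, here is a minimal sketch of assembling a role, a tone instruction, and domain facts into a single system prompt. The function name and the company details are illustrative placeholders, not part of any SDK:

```python
# Hypothetical helper combining the three prompt-design elements above.
# The company facts are placeholders -- substitute your own.

def build_system_prompt(role: str, tone: str, domain_facts: list[str]) -> str:
    """Combine a narrow role, a tone instruction, and domain knowledge."""
    facts = "\n".join(f"- {fact}" for fact in domain_facts)
    return f"{role}\n{tone}\nCompany context:\n{facts}"

prompt = build_system_prompt(
    role=("You are a secretary responsible for business document creation, "
          "information organization, and schedule management."),
    tone=("Reply in polite Japanese, concisely, using bullet points "
          "where possible."),
    domain_facts=[
        "Our company is in manufacturing.",
        "The approval flow for quotes is: department head -> executive -> CEO.",
    ],
)
print(prompt)
```

The resulting string is what you pass as the `system` message on every LLM API call, so keeping it in one function makes it easy to iterate on.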

Choosing an LLM Model

As of 2025, the main options are OpenAI's GPT-4o, Anthropic's Claude Sonnet 4.6, and Google's Gemini, each with different strengths and pricing. Choose based on your needs.

For high-quality Japanese text generation, Claude tends to be consistently strong. If cost is a priority, lightweight models like GPT-4o mini or Claude Haiku are solid options. For an AI secretary, one practical approach is using a lightweight model for everyday exchanges and switching to a more capable model only for important document drafting.

The Full Request Flow

Here's how the whole thing works end to end:

  1. User sends a message in LINE
  2. LINE Platform forwards the request to the backend via Webhook
  3. Backend verifies the request signature
  4. Retrieve conversation history from the database
  5. Send system prompt + conversation history + new message to the LLM API
  6. Store the LLM response in the database
  7. Send the reply to the user via the LINE Reply API

This straightforward architecture is precisely what makes it approachable as a first project.
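The seven steps above can be sketched as a single handler. Every helper here (`verify`, `load_history`, `call_llm`, `save_history`, `send_reply`) is a hypothetical stand-in you would wire to the LINE SDK, your database, and your LLM client:

```python
# Illustrative end-to-end handler for one webhook event. All helper
# callables are hypothetical stand-ins, injected so each piece can be
# swapped or tested independently.

SYSTEM_PROMPT = "You are a business owner's personal secretary."

def handle_event(event: dict, *, verify, load_history, call_llm,
                 save_history, send_reply) -> bool:
    # Steps 2-3: reject any request whose signature does not validate.
    if not verify(event["body"], event["signature"]):
        return False
    user_id = event["user_id"]
    text = event["text"]
    # Step 4: retrieve recent conversation history for this user.
    history = load_history(user_id)
    # Step 5: system prompt + history + the new message.
    messages = ([{"role": "system", "content": SYSTEM_PROMPT}]
                + history
                + [{"role": "user", "content": text}])
    answer = call_llm(messages)
    # Step 6: persist both sides of the exchange.
    save_history(user_id, text, answer)
    # Step 7: reply through the LINE Reply API.
    send_reply(event["reply_token"], answer)
    return True
```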

Implementation Details Worth Getting Right

When I actually built this, a few things caught me off guard. Here's what to watch out for.

Managing Conversation History

LLM APIs are stateless by design — they don't automatically remember previous exchanges. To make the AI function as a secretary, you need to save recent conversation history to a database or cache and include it with every request. Redis or DynamoDB are common choices, but for personal use, starting with JSON file storage is entirely workable.

That said, sending unlimited conversation history will inflate your token usage and push up costs quickly. A practical balance is the "sliding window" approach: keep the most recent 10–20 exchanges and discard older ones as new messages come in.

For a more sophisticated implementation, you can summarize older conversations to compress them. For example, when the history exceeds 20 exchanges, prompt the LLM to "summarize the first ten exchanges in three lines," then store that summary alongside the most recent ten exchanges. This preserves past context while keeping token costs under control.
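The sliding-window trim is only a few lines. This sketch assumes history is stored as a list of `{"role": ..., "content": ...}` dicts and that one "exchange" is a user/assistant message pair:

```python
# Sliding-window trim for conversation history, as described above.
# One exchange = one user message + one assistant message.

MAX_EXCHANGES = 20  # keep at most the 20 most recent exchanges

def trim_history(history: list[dict],
                 max_exchanges: int = MAX_EXCHANGES) -> list[dict]:
    """Discard the oldest messages, keeping the last N exchanges."""
    max_messages = max_exchanges * 2  # each exchange is two messages
    return history[-max_messages:]
```

Call this before every LLM request (and the summarization variant would replace the discarded prefix with a stored summary instead of dropping it outright).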

Optimizing Response Time

The LINE Messaging API requires your server to return an HTTP 200 response within one second of receiving a Webhook request. Since LLM responses take several seconds to generate, you must return an immediate response to the Webhook and then process the LLM call asynchronously before replying via the Reply API. I missed this requirement early on and wasted a lot of time debugging timeout errors.

In Node.js, the pattern is to use Promise-based async handling while the Webhook handler itself returns immediately. In Python, you'd use asyncio, or in AWS Lambda, invoke a separate Lambda function asynchronously.
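One way to sketch the "acknowledge immediately, reply later" pattern is with a background thread; the thread stands in for whichever async mechanism your platform provides (a Promise chain, an asyncio task, an async Lambda invoke):

```python
# Acknowledge the webhook at once and hand the slow LLM work to a
# background worker. The thread is a stand-in for your platform's
# preferred async mechanism.

import threading

def on_webhook(event: dict, process_slowly) -> int:
    """Return HTTP 200 right away; finish the reply in the background."""
    threading.Thread(target=process_slowly, args=(event,),
                     daemon=True).start()
    return 200  # LINE gets its acknowledgement within the time limit
```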

Also note that Reply API tokens expire. If LLM processing takes too long, the token becomes invalid and you can't send the response. In that case you'd need to fall back to the Push API, but the Push API is limited to 200 messages per month on the free plan — so it's worth designing for the Reply API's time window whenever possible.

Security Considerations

Since you'll be sending business-sensitive information to the AI, validating request signatures is non-negotiable. Always verify the X-Line-Signature header sent by LINE Platform and reject any requests that fail validation. And as a basic rule: store your LLM API keys as environment variables — never hardcode them in source code.
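Per LINE's documentation, the X-Line-Signature header is the Base64-encoded HMAC-SHA256 digest of the raw request body, keyed with your channel secret. A minimal check might look like:

```python
# Verify the X-Line-Signature header: LINE signs the raw request body
# with HMAC-SHA256 using the channel secret, then Base64-encodes the digest.

import base64
import hashlib
import hmac

def verify_line_signature(channel_secret: str, body: bytes,
                          signature: str) -> bool:
    """Return True only if the signature matches the request body."""
    digest = hmac.new(channel_secret.encode("utf-8"), body,
                      hashlib.sha256).digest()
    expected = base64.b64encode(digest).decode("utf-8")
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, signature)
```

Note that verification must run against the raw body bytes, before any JSON parsing, or the digest will not match.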

Handling Confidential Information

The question business owners care most about is: "What happens to the information I send?" This is worth taking seriously.

Major LLM API providers explicitly state in their terms of service that data sent via the API will not be used to train their models. That said, terms can change — always review the current terms of service before going live.

Some practical safeguards to consider:

  • Limit what you send: Establish a policy of abstracting or omitting specific figures and personal details before sending information to the AI.
  • Set a data retention period: Build automatic deletion into your conversation history database so that data beyond a certain age is purged.
  • Restrict access: Make the LINE official account accessible only to yourself. Whitelist control by user ID, or authentication during the friend-add flow, works well for this.
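The access-restriction item in particular is a few lines of code. A minimal sketch of a user-ID whitelist, with a placeholder ID that you would load from an environment variable in practice:

```python
# Minimal user-ID whitelist. The ID below is a placeholder; in practice
# load the allowed set from an environment variable or config store.

ALLOWED_USER_IDS = {"U_your_own_line_user_id"}

def is_allowed(user_id: str) -> bool:
    """Reject events from anyone not explicitly whitelisted."""
    return user_id in ALLOWED_USER_IDS
```

Run this check on every incoming event, right after signature verification, and silently drop anything from an unknown user.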

These measures let you balance convenience and security appropriately.

Error Handling and Reliable Operation

Once you're in production, you'll eventually hit temporary LLM API outages or rate limits. If your AI secretary goes silent during those moments, it creates a poor user experience.

Configure a fallback message for errors — something like "I'm having trouble right now. Please wait a moment or try sending your message again." It seems like a small thing, but this kind of polish is often the difference between a tool you stick with and one you abandon.

Implementing retry logic is also recommended. LLM APIs often recover successfully after one or two retries on transient errors. Use exponential backoff for retry intervals (1 second → 2 seconds → 4 seconds) to avoid hammering the API during an outage.
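The exponential-backoff schedule described above can be wrapped around any flaky API call. `call_api` here is any zero-argument callable that may raise on a transient error:

```python
# Retry with exponential backoff (1s -> 2s -> 4s), as described above.
# call_api is any callable that may raise on a transient error.

import time

def call_with_retry(call_api, max_attempts: int = 3,
                    base_delay: float = 1.0):
    """Retry a flaky call, doubling the wait between attempts."""
    for attempt in range(max_attempts):
        try:
            return call_api()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt))
```

Wrapping only the LLM call (not the whole webhook handler) keeps retries from delaying the HTTP 200 acknowledgement.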

Running Costs and Where You Can Take This

Cost is probably on your mind. For personal use, the LINE Messaging API fits within the free plan, the backend runs within serverless free tiers, and the LLM API usage typically lands in the range of a few hundred to a few thousand yen per month.

A Concrete Cost Breakdown

Here's a more detailed estimate based on typical 2025 pricing:

  • LINE Messaging API: The free plan covers 200 push messages per month. Replies via the Reply API don't count against this limit, so one-on-one conversations are effectively unlimited in practice.
  • Serverless infrastructure: AWS Lambda's free tier covers up to 1 million requests per month — far more than you'll ever use personally.
  • LLM API: This is where costs vary most. Using GPT-4o mini at around 20 exchanges per day typically runs 500–1,000 yen per month. Stepping up to GPT-4o or Claude Sonnet puts you in the 2,000–5,000 yen per month range.

In other words, you can have a "24-hour personal secretary" for a few thousand yen a month. The cost-effectiveness compared to human staffing speaks for itself.

A Roadmap for Expanding Capabilities

That said, there's plenty of room to grow. Once the basic version is running, adding features incrementally turns your AI secretary into an increasingly powerful tool.

Step 1: Integrate external services

Connect to Google Calendar or Notion via API and you can automate schedule creation and task management. Type "Schedule a meeting with ◯◯ company for 2pm next Tuesday" in LINE and have it appear on your calendar automatically. Once you experience that, it's hard to go back.

Step 2: Add multimodal support

Voice message transcription is especially useful for business owners who are frequently on the move. Send a voice message in LINE, and the AI secretary transcribes it, summarizes it, and drafts a response. You can also send photos of business cards or documents and ask "Extract the text from this" or "Give me the key points from this contract."

Step 3: Proactive reports and reminders

Add push-style notifications — a daily schedule summary delivered at 8am, or a monthly wrap-up of your meeting notes at the end of each month — and your AI secretary evolves from a reactive responder to a proactive assistant.

Step 4: Connect to an internal knowledge base

Store company manuals and past proposals in a vector database and build a RAG (Retrieval-Augmented Generation) pipeline, and your AI secretary can answer questions like "What did we propose in the ◯◯ project last year?" At this stage, you have something far beyond a chatbot — a genuine assistant with cross-cutting access to institutional knowledge.

While results will vary, the most reliable approach is to start with the minimal version — "send a message in LINE, get an AI response" — and add features based on what you actually need as you use it.

Real-World Examples

At aduce, we help clients adopt AI in their businesses. Here are a few ways LINE AI secretaries have been put to work.

Case 1: A Manufacturing Business Owner in a Regional City

With limited time at a desk, this owner established a workflow of dictating email drafts to LINE during transit and copy-pasting them when back at the office. "I used to handle emails in a batch at the end of the day. Now I process them in real time," they told us.

Case 2: A Professional Services Firm Principal

This owner built a workflow where the AI secretary drafts initial responses to general legal and tax questions, which are then reviewed and edited before being sent to clients. Initial response times improved significantly, leading to measurable gains in client satisfaction. The principle that all final judgments rest with a qualified professional was never compromised.

Case 3: A Multi-Location Restaurant Owner

This owner receives sales reports from each location via LINE and asks the AI secretary to "compile all store sales reports for the week in a table format." The time previously spent on manual aggregation dropped dramatically.

What all of these cases have in common is a clear boundary between what the AI handles and what the human decides. The AI secretary handles drafts and organization; final decisions belong to the human. That division is what makes it safe and sustainable to rely on.

Build It Yourself or Bring In Help?

After reading this far, you might be thinking, "I follow the technical concept, but I'm not sure I can build this myself." Honestly, the answer depends on your in-house technical resources.

If you have engineers on staff, building in-house — using this article as a guide — gives you the most flexibility and the lowest long-term costs. An experienced engineer can have a working version running in a matter of days.

If you don't have engineers, or if you have concerns about the security side of things, consulting a specialist is a sound choice. The core build is simple, but because this system handles business-sensitive information, security design and operational procedures do require expertise. A hybrid approach — outsourcing the initial build and handling ongoing operations in-house — is also a perfectly reasonable path.

Wrapping Up

An AI secretary built on the LINE Messaging API and an LLM is not a technically complicated project. Yet it has real potential to meaningfully improve how you work every day — freeing up more of your time for the decisions and creative work that only you can do.

The key is not to aim for perfection from day one. Start with the bare minimum: a LINE message goes in, an AI response comes back. Once you're actually using it, concrete needs will emerge — "I wish it could also do X" or "This part should work differently." Add those features one by one in response to real usage. That iterative approach, in my experience, is what produces an AI secretary you'll actually keep using.

At aduce, we're happy to consult on AI adoption and system implementation. If you'd like to build an AI secretary tailored to your business, or aren't sure where to start, please reach out through aduce's contact page. We'd love to work through the right solution for your organization together.