Cal.ai

Welcome to Cal.ai!

This app lets you chat with your calendar via email:

  • Turn informal emails into bookings, e.g. forward "wanna meet tmrw at 2pm?"
  • List and rearrange your bookings, e.g. "clear my afternoon"
  • Answer basic questions about your busiest times, e.g. "how does my Tuesday look?"

The core logic is contained in agent/route.ts. Here, a LangChain Agent Executor is tasked with following your instructions. Given your last-known timezone, working hours, and busy times, it attempts to CRUD your bookings.

The AI agent can only choose from a set of tools, without ever seeing your API key.
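
For illustration only, here is a hedged sketch of how such a tool might be defined with LangChain's DynamicStructuredTool: the user's API key is captured in a server-side closure, so the model only ever sees the tool's name, description, and argument schema. The createBooking fields and the /v1/bookings endpoint shape below are assumptions, not the exact production code.

```ts
import { z } from "zod";
import { DynamicStructuredTool } from "langchain/tools";

// Sketch: the API key stays inside the closure on the server; the LLM only
// receives the tool's name, description and argument schema.
export const createBookingTool = (apiKey: string) =>
  new DynamicStructuredTool({
    name: "createBooking",
    description: "Book a slot on the user's calendar.",
    schema: z.object({
      eventTypeId: z.number(),
      start: z.string().describe("ISO 8601 start time"),
      end: z.string().describe("ISO 8601 end time"),
    }),
    func: async ({ eventTypeId, start, end }) => {
      // Assumed endpoint shape for illustration; the real calls live in the app's tools.
      const res = await fetch(`https://api.cal.com/v1/bookings?apiKey=${apiKey}`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ eventTypeId, start, end }),
      });
      return JSON.stringify(await res.json());
    },
  });
```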

Emails are cleaned and routed in receive/route.ts using MailParser.

Incoming emails are routed by email address. Addresses are verified by DKIM record, making them hard to spoof.
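
As a rough sketch of that flow (the form field names "email" and "dkim" are assumptions about SendGrid's Inbound Parse payload; see receive/route.ts for the real handling):

```ts
import { simpleParser } from "mailparser";

// Hedged sketch: parse the raw MIME that SendGrid posts as multipart form data
// and pull out the fields the agent needs.
export async function parseInbound(form: FormData) {
  const rawMime = form.get("email") as string;           // assumed field name
  const dkimResult = (form.get("dkim") as string) ?? ""; // e.g. "{@example.com : pass}"

  const parsed = await simpleParser(rawMime);

  return {
    from: parsed.from?.value[0]?.address,    // the person emailing their assistant
    subject: parsed.subject,
    body: parsed.text,                       // plain-text body for the agent
    dkimPassed: dkimResult.includes("pass"), // spoofed senders are rejected upstream
  };
}
```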

Recognition

Cal.ai - World's first open source AI scheduling assistant | Product Hunt

Getting Started

Development

If you haven't yet, please run the root setup steps.

Before running the app, please see env.mjs for all required environment variables. Run cp .env.example .env in this folder to get started. You'll need:

  • An OpenAI API key with access to GPT-4
  • A SendGrid API key
  • A default sender email (for example, me@dev.example.com)
  • The Cal.ai app's ID and URL (see add.ts)
  • A unique value for PARSE_KEY, generated with openssl rand -hex 32 (a validation sketch follows this list)
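
For illustration, here is a minimal sketch of validating these at startup with Zod; the variable names below are placeholders, so check env.mjs for the real list:

```ts
import { z } from "zod";

// Placeholder variable names for illustration only; env.mjs defines the real set.
const envSchema = z.object({
  OPENAI_API_KEY: z.string().min(1),
  SENDGRID_API_KEY: z.string().min(1),
  SENDER_EMAIL: z.string().email(),  // e.g. me@dev.example.com
  PARSE_KEY: z.string().length(64),  // output of `openssl rand -hex 32`
  APP_ID: z.string().min(1),
  APP_URL: z.string().url(),
});

export const env = envSchema.parse(process.env);
```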

To stand up the API and AI apps simultaneously, run yarn dev:ai.

Agent Architecture

The scheduling agent in agent/route.ts calls an LLM (in this case, GPT-4) in a loop to accomplish a multi-step task. We use an OpenAI Functions agent, which is fine-tuned to output text suited for passing to tools.

Tools (e.g. createBooking) are simply JavaScript methods wrapped in Zod schemas that tell the agent what format to output.
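
A minimal sketch of that wiring with LangChain's JS API, assuming GPT-4 and a prebuilt list of structured tools (not the exact production code):

```ts
import { ChatOpenAI } from "langchain/chat_models/openai";
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import type { StructuredTool } from "langchain/tools";

// Sketch: an OpenAI Functions agent loops over the model, letting it call the
// structured tools until it can produce a final answer.
export async function runScheduler(instruction: string, tools: StructuredTool[]) {
  const llm = new ChatOpenAI({ modelName: "gpt-4", temperature: 0 });

  const executor = await initializeAgentExecutorWithOptions(tools, llm, {
    agentType: "openai-functions",
    verbose: true,
  });

  // In the real agent the prompt also carries the user's timezone,
  // working hours and busy times.
  const result = await executor.call({ input: instruction });
  return result.output as string;
}
```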

Here is the full architecture:

[Diagram: Cal.ai architecture]

Email Router

To expose the AI app, run ngrok http 3005 (or the AI app's port number) in a new terminal. You may need to install ngrok first.

To forward incoming emails to the serverless function at /agent, we use SendGrid's Inbound Parse.

  1. Ensure you have a SendGrid account
  2. Ensure you have an authenticated domain. Go to Settings > Sender Authentication > Authenticate. For DNS host, select I'm not sure. Click Next and add your domain, e.g. example.com. Choose Manual Setup. You'll be given three CNAME records to add to your DNS settings, e.g. in Vercel Domains. After adding those records, click Verify. To troubleshoot, see the full instructions.
  3. Authorize your domain for email with two MX records: one with the name [your domain].com and the value mx.sendgrid.net., and another with the name bounces.[your domain].com and the value feedback-smtp.us-east-1.amazonses.com, both with priority 10 if prompted.
  4. Go to Settings > Inbound Parse > Add Host & URL. Choose your authenticated domain.
  5. In the Destination URL field, use the ngrok URL from above along with the path, /api/receive, and one param, parseKey, which lives in this app's .env under PARSE_KEY. The full URL should look like https://abc.ngrok.io/api/receive?parseKey=ABC-123.
  6. Activate "POST the raw, full MIME message".
  7. Send an email to [anyUsername]@example.com. You should see a ping on the ngrok listener and the server.
  8. Adjust the logic in receive/route.ts, save to hot-reload, and send another email to test the behaviour (a minimal handler sketch follows this list).
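
For reference, a minimal sketch of how the receive route can gate requests on that key before handing the email off; the structure below is an assumption, and the real receive/route.ts does more:

```ts
import { NextResponse } from "next/server";

// Sketch of a Next.js route handler for /api/receive: reject requests that
// don't carry the shared PARSE_KEY, then hand the form data to the parser.
export async function POST(request: Request) {
  const { searchParams } = new URL(request.url);

  if (searchParams.get("parseKey") !== process.env.PARSE_KEY) {
    return NextResponse.json({ message: "Unauthorized" }, { status: 401 });
  }

  const form = await request.formData(); // SendGrid posts multipart form data
  // ...parse the raw MIME here and route the request to the agent.

  return NextResponse.json({ message: "ok" });
}
```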

Please feel free to improve any part of this architecture!