Getting Started

Beginner

Make your first LLM API call in 10 minutes — completely free. No credit card needed.

Last updated: Feb 10, 2026

Why Start Here?

Large Language Models are powerful, but getting started can feel overwhelming. This guide cuts through the noise: you'll pick a free API provider, paste in your key, and talk to an AI model — all in your browser.

⏱️ You'll have a working LLM call in under 10 minutes. No credit card required.

Get a Free API Key

All three providers offer free tiers — no credit card needed. Pick whichever appeals to you (you can always try the others later):

OpenRouter

Widest free model selection. Access Llama 3.3 70B, DeepSeek R1, Qwen3, Gemma 3, and Mistral Small 3.1 — all free. Great for exploring.

Get API Key →

Groq

Blazing fast inference on custom LPU hardware. Free developer tier includes Llama 3.3 70B, Qwen3 32B, and GPT OSS 120B with generous rate limits.

Get API Key →

Cerebras

Wafer-scale chips deliver some of the fastest inference available. Free tier includes all models: Llama 3.3 70B, Qwen3 32B, and GPT OSS 120B.

Get API Key →

Your First LLM Call

Enter your API key above, type a message, and hit Send


Your API key never leaves your browser — all requests go directly from your browser to the provider's API. The key is only kept in memory and is automatically cleared when you leave or refresh the page.

curl https://api.groq.com/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "llama-3.3-70b-versatile",
    "messages": [{"role": "user", "content": "Hello, who are you?"}],
    "temperature": 0.7,
    "max_tokens": 256
  }'
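If you'd rather call the API from code, the same request can be built in Python using only the standard library. This is a minimal sketch mirroring the curl example above (the endpoint, model name, and parameters are copied from it; `YOUR_API_KEY` is a placeholder):

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder -- paste your real key here
URL = "https://api.groq.com/openai/v1/chat/completions"

# Same JSON body as the curl example above.
payload = {
    "model": "llama-3.3-70b-versatile",
    "messages": [{"role": "user", "content": "Hello, who are you?"}],
    "temperature": 0.7,
    "max_tokens": 256,
}

request = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
)

# Uncomment to actually send the request (requires a valid key):
# with urllib.request.urlopen(request) as resp:
#     reply = json.load(resp)
#     print(reply["choices"][0]["message"]["content"])
```

The commented-out block at the end performs the network call; everything above it just assembles the request, so you can inspect the payload before spending any of your rate limit.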

Understanding the Response

choices[0].message.content

The actual text the model generated. This is what you'd show to a user.

usage.prompt_tokens / completion_tokens

How many tokens were used. Prompt tokens = your input, completion tokens = the model's output. Paid tiers bill by these counts (free tiers simply don't charge).

finish_reason

"stop" means the model finished naturally. "length" means it hit the max_tokens limit and was cut off.

model

The exact model that processed your request. Some providers may route to different versions.
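Tying these fields together, here is a short sketch that pulls each one out of a response. The `sample` dict below is illustrative, not captured from a real call, but its field names follow the OpenAI-compatible chat completions format used by all three providers:

```python
# Illustrative response shape (not a real API reply); field names
# match the OpenAI-compatible chat completions format.
sample = {
    "model": "llama-3.3-70b-versatile",
    "choices": [
        {
            "message": {"role": "assistant", "content": "I'm an AI assistant."},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 12, "completion_tokens": 8, "total_tokens": 20},
}

# The generated text -- what you'd show to a user.
text = sample["choices"][0]["message"]["content"]

# Token accounting: input plus output.
used = sample["usage"]["prompt_tokens"] + sample["usage"]["completion_tokens"]

# Was the reply cut off by the max_tokens limit?
truncated = sample["choices"][0]["finish_reason"] == "length"

print(text)       # → I'm an AI assistant.
print(used)       # → 20
print(truncated)  # → False
```

Checking `finish_reason` like this is a cheap way to detect truncated replies: if it comes back `"length"`, retry with a higher `max_tokens`.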

Your Learning Path

A curated roadmap from zero to advanced. Follow these topics in order for the best learning experience.


Next Steps

Now that you've made your first LLM call, explore these topics to go deeper: