Groq
Status: active

Ultra-fast LLM inference: LPU-powered, OpenAI-compatible API for Llama, Mixtral, and Gemma

Category: ai-llm | Auth: Bearer Token | Pricing: Freemium
Official Documentation

Connect via PincerAPI

Use our proxy to call Groq with your PincerAPI key. No separate Groq account is needed.

# Get instructions for Groq
curl -H "Authorization: Bearer YOUR_PINCER_KEY" \
  https://pincerapi.com/api/v1/apis/groq/instructions

# Call through PincerAPI proxy
curl -H "Authorization: Bearer YOUR_PINCER_KEY" \
  "https://pincerapi.com/api/v1/connect/groq/your/endpoint/here"

Direct Setup

Endpoints

POST /chat/completions

Chat completion (OpenAI-compatible)

💡 Low-latency inference on Groq LPUs. Supports tool use (function calling).
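As a minimal sketch of the OpenAI-compatible request shape, the helper below builds (but does not send) a chat completion request against Groq's public base URL. The model name `llama-3.1-8b-instant` is an assumption; list current models with GET /models.

```python
import json
import urllib.request

# Groq's OpenAI-compatible base URL for chat completions.
GROQ_CHAT_URL = "https://api.groq.com/openai/v1/chat/completions"

def build_chat_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat completion request."""
    payload = {
        # Assumed model name; verify against GET /models.
        "model": "llama-3.1-8b-instant",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        GROQ_CHAT_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending it (requires a valid key and network access):
# with urllib.request.urlopen(build_chat_request(KEY, "Hello")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the endpoint follows the OpenAI schema, existing OpenAI client libraries can also be pointed at the Groq base URL instead of hand-building requests.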

GET /models

List available models

POST /audio/transcriptions

Transcribe audio with Whisper

πŸ’‘ Same as OpenAI Whisper API format
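Since the transcription endpoint uses the OpenAI Whisper API format, the request body is multipart/form-data with a `file` part and a `model` field. The sketch below assembles such a body by hand; the model name `whisper-large-v3` and the exact field names are assumptions to verify against Groq's docs.

```python
import uuid

def build_transcription_body(audio_bytes: bytes, filename: str,
                             model: str = "whisper-large-v3"):
    """Assemble a multipart/form-data body for POST /audio/transcriptions.

    Returns (body, content_type); send with an Authorization: Bearer header.
    Field names follow the OpenAI Whisper API format; model name is assumed.
    """
    boundary = uuid.uuid4().hex
    parts = [
        # Plain text field carrying the model name.
        (f'--{boundary}\r\n'
         f'Content-Disposition: form-data; name="model"\r\n\r\n'
         f'{model}\r\n').encode(),
        # File field carrying the raw audio bytes.
        (f'--{boundary}\r\n'
         f'Content-Disposition: form-data; name="file"; '
         f'filename="{filename}"\r\n'
         f'Content-Type: application/octet-stream\r\n\r\n').encode()
        + audio_bytes + b"\r\n",
        # Closing boundary terminates the multipart body.
        f"--{boundary}--\r\n".encode(),
    ]
    return b"".join(parts), f"multipart/form-data; boundary={boundary}"
```

In practice a client library (e.g. `requests` with `files=`) handles this framing automatically; the sketch only makes the wire format explicit.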
