meta-llama
Llama 4 Maverick
meta-llama/llama-4-maverick
Input: $0.500 / 1M tokens · Output: $0.700 / 1M tokens
Code Examples
```python
import requests

response = requests.post(
    "https://neurongate.net/v1/chat/completions",
    headers={
        "Authorization": "Bearer ng-your-api-key",
        "Content-Type": "application/json"
    },
    json={
        "model": "meta-llama/llama-4-maverick",
        "messages": [
            {"role": "user", "content": "Hello!"}
        ]
    }
)
print(response.json()["choices"][0]["message"]["content"])
```

Pricing Details
| Example | Cost |
|---|---|
| 1K input tokens (short prompt) | $0.00050 |
| 1K in + 500 out (typical response) | $0.00085 |
| 10K in + 2K out (document analysis) | $0.00640 |
| 100K in + 10K out (large context) | $0.0570 |
Prices in USD. Billed per actual token usage. Prepay with USDT, USDC, ETH, or BTC.
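Since billing is per token at the rates above, the table rows reduce to one line of arithmetic. A minimal sketch of a cost estimator (the helper name and its defaults are illustrative, not part of the API):

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price: float = 0.50, output_price: float = 0.70) -> float:
    """Estimate request cost in USD.

    Prices are per 1M tokens ($0.50 in / $0.70 out for this model).
    """
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

# Reproduce the table rows above:
print(estimate_cost(1_000, 0))          # short prompt
print(estimate_cost(10_000, 2_000))     # document analysis
print(estimate_cost(100_000, 10_000))   # large context
```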
Capabilities
- Streaming: Yes
- Function Calling / Tools: Yes
- Vision (Image Input): Yes
- Audio Processing: No
- Context Window: 1.0M tokens (~786K words of text)
- Max output: 66K tokens
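With streaming enabled, the response arrives as server-sent events rather than one JSON body. The sketch below assumes the endpoint follows the OpenAI-compatible `data: {...}` / `data: [DONE]` wire format (an assumption; this page does not document the stream shape), and splits the parsing into a testable helper:

```python
import json
import requests

def parse_sse_line(line: str):
    """Return the text delta from one SSE line, or None if it carries none.

    Assumes the OpenAI-compatible streaming format (not confirmed by this page).
    """
    if not line.startswith("data: "):
        return None
    payload = line[len("data: "):]
    if payload == "[DONE]":
        return None
    return json.loads(payload)["choices"][0]["delta"].get("content")

def stream_chat(api_key: str, prompt: str):
    """Yield text chunks as they arrive (network call; sketch only)."""
    response = requests.post(
        "https://neurongate.net/v1/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "model": "meta-llama/llama-4-maverick",
            "messages": [{"role": "user", "content": prompt}],
            "stream": True,
        },
        stream=True,
    )
    for raw in response.iter_lines(decode_unicode=True):
        chunk = parse_sse_line(raw or "")
        if chunk:
            yield chunk
```

Usage would be `for chunk in stream_chat("ng-your-api-key", "Hello!"): print(chunk, end="")`, printing the reply as it is generated instead of waiting for the full 66K-token maximum to complete.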