The problem with search APIs today

Search APIs for developers charge between $3 and $10 per 1,000 queries. At that price point, you get results from the open web: a mix of primary sources, summaries, SEO articles, and AI-generated rewrites with no way to tell them apart. The credibility of the source is not part of the result.

This is not a minor quality concern. When you ask a search API for recent cancer research, it has no idea whether the result comes from Ugur Sahin at BioNTech or a health blog paraphrasing his paper from three years ago. Both look identical to a semantic similarity function. The infrastructure required to crawl and index the entire web is enormous, and those crawling costs, not source credibility, are what your per-query fee pays for.

What the /search endpoint does

/search is the fifth endpoint in the Amygdala API, alongside /index, /detail, /peers, and /match. It searches across content in the Amygdala authority index and returns results ranked by a combination of query relevance and authority score. Every result includes the title, URL, snippet, and publication date of the content, plus the verified authority who produced it: their name, institution, SDU identifier, and authority score.
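Concretely, a single result is a flat object carrying both the content fields and the authority fields. The sketch below is a mock in Python, using the field names the code examples in this post rely on; all values are placeholders, and the live payload may carry additional fields.

```python
# Mock /search result. All values are placeholders, not real API output.
result = {
    # Content fields
    "title": "Example: neoantigen vaccine trial results",
    "url": "https://example.org/paper",
    "snippet": "First lines of the matched content...",
    "date": "2026-01-15",
    # Authority fields: who produced this, and how much weight they carry
    "author": "Example Researcher",
    "institution": "Example Institute",
    "sdu": "sdu_abc123",
    "authority_score": 0.94,
}

# Attribution travels with the result; no post-processing is needed to recover it.
print(f"{result['title']} — {result['author']} ({result['institution']})")
```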

The endpoint works in two modes. Raw search requires only a query: GET /search?q=...&limit=25. No prior setup, no SDU list. Results are drawn from the full index and ranked by relevance times authority score. Scoped search adds a list of SDU identifiers: GET /search?q=...&sdus=sdu_abc,sdu_def&limit=25. Results are restricted to content from those specific authorities. Use raw search when you want the best available content across all domains. Use scoped search when you already know which authorities matter for your query, typically from a prior /index call.

What is in the index

The index contains content exclusively from sources verified through the Amygdala Authority Index. That includes personal websites, blogs, academic publications, video transcripts, and social posts by authorities whose identity and expertise have been verified. Collection is targeted at authority-verified sources rather than broad web crawling. Every document in the index traces back to a named human with a known authority profile.

The index spans every domain Amygdala tracks. Andrej Karpathy's deep learning tutorials, Ugur Sahin's oncology publications, and Huda Kattan's formulation reviews are all in the same index. The same /search call works whether you are searching for transformer architecture research or evidence-based skincare recommendations, because the authority graph is not siloed by domain.

Ranking combines query relevance with authority score and content freshness. The exact formula is not published, but the architecture is: content that ranks in /search comes from a verified source with a scored authority profile, and recency is factored in so that older high-authority content does not systematically displace fresh publications.
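To make that concrete without claiming the real formula, here is an illustrative sketch: relevance times authority score times an exponential recency decay. The multiplicative form, the 365-day half-life, and the function name are assumptions for illustration only, not Amygdala's published ranking.

```python
from datetime import date

def rank_score(relevance: float, authority: float, published: date,
               today: date, half_life_days: float = 365.0) -> float:
    """Illustrative ranking: relevance x authority x recency decay.

    This is not Amygdala's formula, just one plausible way to combine
    the three signals the post names.
    """
    age_days = (today - published).days
    freshness = 0.5 ** (age_days / half_life_days)  # halves every half_life_days
    return relevance * authority * freshness

today = date(2026, 3, 1)
# A fresh post from a mid-authority source vs. an old post from a top authority:
fresh_mid = rank_score(0.9, 0.70, date(2026, 2, 1), today)
old_high  = rank_score(0.9, 0.95, date(2023, 2, 1), today)
print(fresh_mid > old_high)  # recency keeps the fresh publication competitive
```

The decay term is what prevents older high-authority content from systematically displacing fresh publications, as described above.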

Why it costs less, and why that's sustainable

Exa, Tavily, and Google Custom Search all maintain massive general-purpose web indexes: billions of pages, most of them SEO content, AI-generated rewrites, aggregator noise, and duplicates. Crawling, storing, embedding, and serving that index is genuinely expensive. Those infrastructure costs get passed directly to the developer in the per-query price.

Amygdala's index is curated by design. It only contains content from authority-verified sources. A smaller, higher-quality index requires dramatically less infrastructure to maintain. There is no need to crawl billions of pages and then filter down to the useful ones. The filtering happens at collection time, not at query time. The index runs on Hetzner (Helsinki) rather than AWS or GCP, which removes another layer of hyperscaler margin from the cost structure.

The practical framing: with a general-purpose search API, you are paying for an index full of SEO spam and AI-generated rewrites that gets filtered toward the good results. With /search, you are paying for an index that only contains the good results to begin with. The cost difference is structural, not a promotional rate.

See the API docs for current rates.

Code examples

Raw search: best content across all authorities

The simplest usage. Pass a query, get back the top results ranked by authority score combined with relevance. Every result comes with the producing authority already attached.

Python: raw search across all authorities
import requests

AMYGDALA_API_KEY = "amyg_..."

results = requests.get(
    "https://api.amygdala.eu/api/v1/search",
    params={"q": "neoantigen vaccine clinical trials 2026", "limit": 10},
    headers={"Authorization": f"Bearer {AMYGDALA_API_KEY}"},
).json()["results"]

for r in results:
    print(f"{r['title']}")
    print(f"  Authority: {r['author']} ({r['institution']})")
    print(f"  Score:     {r['authority_score']}")
    print(f"  URL:       {r['url']}")
    print(f"  Date:      {r['date']}")
    print()

Scoped search: two-step pattern with /index

When the topic is specific enough that you want to search within a defined set of authorities, call /index first to get the top verified authorities on that topic, then pass their SDUs to /search. The result is content that is both relevant to your query and attributable to the authorities who matter most for it.

Python: /index to identify authorities, /search to get their content
import requests

AMYGDALA_API_KEY = "amyg_..."
query = "retinoids, peptides, and niacinamide: what the research actually says"

# Step 1: get the top verified authorities in skincare science
authorities = requests.get(
    "https://api.amygdala.eu/api/v1/index",
    params={"query": query, "limit": 50},
    headers={"Authorization": f"Bearer {AMYGDALA_API_KEY}"},
).json()["results"]

sdus = [a["sdu"] for a in authorities]
print(f"Found {len(sdus)} verified skincare authorities")

# Step 2: search within their published content specifically
results = requests.get(
    "https://api.amygdala.eu/api/v1/search",
    params={"q": query, "sdus": ",".join(sdus), "limit": 25},
    headers={"Authorization": f"Bearer {AMYGDALA_API_KEY}"},
).json()["results"]

for r in results:
    print(f"{r['title']} — {r['author']}")
    print(f"  {r['url']}")
    print()

Full RAG pipeline: search, build context, answer

In a RAG pipeline, /search collapses what would otherwise be a loop of per-authority web searches into a single call. One call returns 25 authority-attributed documents. Fetch the snippets, build the context block, call Mistral. Every source in the context has a name, institution, and authority score attached by default.

Python: full pipeline — search → context → answer
import requests
from mistralai import Mistral

AMYGDALA_API_KEY = "amyg_..."
MISTRAL_API_KEY  = "..."

mistral = Mistral(api_key=MISTRAL_API_KEY)

def rag_answer(query: str) -> str:
    # One call returns authority-attributed content, ranked by relevance x authority score
    results = requests.get(
        "https://api.amygdala.eu/api/v1/search",
        params={"q": query, "limit": 25},
        headers={"Authorization": f"Bearer {AMYGDALA_API_KEY}"},
    ).json()["results"]

    context_parts = []
    for r in results:
        context_parts.append(
            f"Source: {r['title']}\n"
            f"Author: {r['author']} ({r['institution']})\n"
            f"Authority score: {r['authority_score']}\n"
            f"URL: {r['url']}\n\n"
            f"{r.get('snippet', '')}"
        )

    context = "\n\n---\n\n".join(context_parts)

    response = mistral.chat.complete(
        model="mistral-large-latest",
        messages=[{
            "role": "user",
            "content": (
                f"Answer this query based on the content below.\n\n"
                f"Query: {query}\n\n"
                f"{context}"
            ),
        }],
    )
    return response.choices[0].message.content

print(rag_answer(
    "What are leading oncologists saying about mRNA cancer vaccines in early 2026?"
))

What this replaces in your stack

For most RAG pipelines and research agents, /search is a direct replacement for Exa, Tavily, or Google Custom Search. The integration surface is identical: one HTTP call, a list of results with titles, URLs, and snippets. The difference is that every result carries authority attribution by default. There is no post-processing step to figure out who produced the content.

For agent tool configs in LangChain, LlamaIndex, or custom MCP setups, the swap is straightforward. Replace the search tool endpoint, update the result parsing to include the authority fields, and the pipeline is upgraded. The authority score is available as a ranking signal for downstream filtering if you want to apply a floor on which results enter the context window.
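Such a floor can be applied client-side before results enter the context window. A minimal sketch; the 0.7 threshold and the helper name are arbitrary choices for illustration, and it assumes each result dict carries the authority_score field that /search attaches.

```python
AUTHORITY_FLOOR = 0.7  # arbitrary threshold, tune for your pipeline

def filter_by_authority(results: list[dict], floor: float = AUTHORITY_FLOOR) -> list[dict]:
    """Keep only results whose verified authority clears the floor."""
    return [r for r in results if r.get("authority_score", 0.0) >= floor]

# Mock results standing in for a /search response:
results = [
    {"title": "A", "authority_score": 0.92},
    {"title": "B", "authority_score": 0.55},
    {"title": "C", "authority_score": 0.78},
]
kept = filter_by_authority(results)
print([r["title"] for r in kept])  # ['A', 'C']
```

Because the filter runs on the already-returned list, it adds no extra API calls: lower-authority results simply never reach the context window.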

The index is growing

The index currently covers the content authorities publish on their websites and blogs and in academic databases. Content from YouTube transcripts, podcast audio, preprints, and structured social platforms is being added as collection expands across the formats authorities actually use to publish. The goal is an index that is comprehensive across publishing formats, not just web pages, so that a /search query for what Vinod Balachandran has published on pancreatic cancer returns his conference talks and preprints alongside his journal papers.

Try the Amygdala Authority Index

$50 in free credits. No credit card required.