
Migrate from Algolia

Flapjack implements the Algolia REST API. Your existing code, client libraries, and InstantSearch.js setup work with Flapjack — you just change where requests are sent.

  • API paths: /1/indexes/{indexName}/query, /1/indexes/{indexName}/batch, etc.
  • Auth headers: X-Algolia-Application-Id and X-Algolia-API-Key
  • Request format: Same JSON bodies for search and indexing
  • Response format: hits, nbHits, processingTimeMS, query, params
  • Client libraries: The official algoliasearch JavaScript client works with Flapjack
  • InstantSearch.js: Works out of the box with a custom host

Flapjack is an Algolia-compatible search engine, not an Algolia clone. A few things work differently:

  • Application ID: Always flapjack. Algolia uses a unique ID per project — Flapjack doesn’t need one because each instance is dedicated to you.
  • Single admin key: Flapjack uses one API key for all operations. Algolia has separate search-only and admin keys with configurable scopes. If you need read-only access, put Flapjack behind a proxy that strips write endpoints.
  • Ranking: Flapjack uses Tantivy (BM25-based) for relevance ranking. Results may differ from Algolia’s tie-breaking algorithm, but customRanking and searchableAttributes work the same way.
  • No analytics: Flapjack doesn’t track search queries, click-through rates, or conversion events. If you need search analytics, log queries in your application layer.
  • No personalization or A/B testing: These are Algolia SaaS-specific features that require a managed ML pipeline.
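
One of these operational details, the single admin key, can be approximated with a small read-only reverse proxy in front of Flapjack. This is a sketch using Node's stdlib, not an official component; the ports (7700 upstream, 7701 public) and the allow-list are assumptions to adapt to your deployment:

```javascript
// Sketch: a read-only proxy in front of Flapjack, standing in for an
// Algolia search-only key. Ports are assumptions for a local setup.
import http from 'node:http';

// Allow only read-style requests: GETs, single-index queries, and
// multi-index queries. Everything else (batch, settings, deletes) is blocked.
function isReadOnly(method, path) {
  if (method === 'GET') return true;
  if (method !== 'POST') return false;
  return /^\/1\/indexes\/[^/]+\/query$/.test(path) || path === '/1/indexes/*/queries';
}

const proxy = http.createServer((req, res) => {
  const path = req.url.split('?')[0];
  if (!isReadOnly(req.method, path)) {
    res.writeHead(403, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ message: 'write endpoints are disabled' }));
    return;
  }
  // Forward allowed requests unchanged to the Flapjack instance
  const upstream = http.request(
    { host: 'localhost', port: 7700, method: req.method, path: req.url, headers: req.headers },
    (upstreamRes) => {
      res.writeHead(upstreamRes.statusCode, upstreamRes.headers);
      upstreamRes.pipe(res);
    },
  );
  req.pipe(upstream);
});

// To serve it: proxy.listen(7701);
```

Hand the proxy's address to untrusted clients and keep the admin key server-side.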

None of these differences affect your search code or InstantSearch widgets. They’re operational details.
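
If you do want basic query analytics, a thin wrapper in your application layer can log every query before forwarding it. A minimal sketch; the only assumption is a client exposing the `search(requests)` method that InstantSearch expects, and `log` is a stand-in for your own logger:

```javascript
// Sketch: wrap an InstantSearch-compatible search client so each query is
// logged before being forwarded to Flapjack.
function withQueryLogging(searchClient, log = console.log) {
  return {
    ...searchClient,
    async search(requests) {
      for (const { indexName, params } of requests) {
        log(JSON.stringify({ at: Date.now(), indexName, query: params?.query ?? '' }));
      }
      return searchClient.search(requests);
    },
  };
}
```

Pass the wrapped client to InstantSearch as `searchClient` and ship the log lines wherever your application metrics already go.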

Self-hosted:

docker run -d -p 7700:7700 \
  -e FLAPJACK_ADMIN_KEY=your-secret-key \
  -v flapjack-data:/var/lib/flapjack \
  ghcr.io/flapjack-search/flapjack

Or use Flapjack Cloud for a managed instance.

Before (Algolia):

import algoliasearch from 'algoliasearch';
const client = algoliasearch('YOUR_APP_ID', 'YOUR_API_KEY');
const index = client.initIndex('movies');

After (Flapjack):

import { algoliasearch } from 'algoliasearch';

const client = algoliasearch('flapjack', 'YOUR_FLAPJACK_ADMIN_KEY', {
  hosts: [{ url: 'localhost:7700', protocol: 'http', accept: 'readWrite' }],
});

That’s it. Only the client configuration changes: the application ID, the API key, and the host. All your search, saveObjects, and getObject calls work the same way.

Before:

const searchClient = algoliasearch('YOUR_APP_ID', 'YOUR_API_KEY');

After:

const searchClient = algoliasearch('flapjack', 'YOUR_FLAPJACK_ADMIN_KEY', {
  hosts: [{ url: 'localhost:7700', protocol: 'http', accept: 'readWrite' }],
});

Everything else — searchBox, hits, refinementList, pagination — works unchanged. See the full InstantSearch.js guide.

Python client:

from algoliasearch.http.hosts import Host
from algoliasearch.search.client import SearchClientSync

# Create a client pointing at Flapjack instead of Algolia
client = SearchClientSync("flapjack", "YOUR_FLAPJACK_ADMIN_KEY")

# Override the transport hosts. This reaches into the client's internals;
# the attribute names may differ between client versions.
client._transporter._hosts = [Host("localhost", 7700, "http")]

Just change the URL and credentials:

# Before (Algolia)
curl -X POST 'https://YOUR_APP_ID-dsn.algolia.net/1/indexes/movies/query' \
  -H 'X-Algolia-Application-Id: YOUR_APP_ID' \
  -H 'X-Algolia-API-Key: YOUR_API_KEY' \
  -d '{"query":"matrix"}'

# After (Flapjack)
curl -X POST 'http://localhost:7700/1/indexes/movies/query' \
  -H 'X-Algolia-Application-Id: flapjack' \
  -H 'X-Algolia-API-Key: YOUR_FLAPJACK_ADMIN_KEY' \
  -d '{"query":"matrix"}'

Option A — Algolia Dashboard:

  1. Log into your Algolia Dashboard
  2. Go to Search → select your index
  3. Click Manage index → Export → Download JSON
  4. Save the file (e.g., movies.json)

Option B — Algolia API (recommended for large datasets):

import algoliasearch from 'algoliasearch';
import fs from 'fs';

const client = algoliasearch('YOUR_ALGOLIA_APP_ID', 'YOUR_ALGOLIA_ADMIN_KEY');
const index = client.initIndex('movies');

// browseObjects fetches all records, handling pagination automatically
const records = [];
await index.browseObjects({
  batch: (batch) => records.push(...batch),
});

// Save to file
fs.writeFileSync('movies.json', JSON.stringify(records, null, 2));
console.log(`Exported ${records.length} records`);

Once you have your JSON export, send it to Flapjack using the batch API. Every record must have an objectID field (Algolia exports include this automatically).
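
Before importing, it can be worth scanning the export for records that lack an objectID (for example, if the file came from somewhere other than an Algolia export). A small sketch; the sample records are made up:

```javascript
// Return the indexes of records that lack a usable objectID.
function findMissingObjectIDs(records) {
  const missing = [];
  records.forEach((record, i) => {
    const id = record.objectID;
    if (typeof id !== 'string' && typeof id !== 'number') missing.push(i);
  });
  return missing;
}

// Example: the second record has no objectID.
const sample = [
  { objectID: '1', title: 'The Matrix' },
  { title: 'Inception' },
];
console.log(findMissingObjectIDs(sample)); // → [ 1 ]
```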

# For small datasets (< 1,000 records) — send directly
curl -X POST 'http://localhost:7700/1/indexes/movies/batch' \
  -H 'X-Algolia-Application-Id: flapjack' \
  -H 'X-Algolia-API-Key: YOUR_FLAPJACK_ADMIN_KEY' \
  -H 'Content-Type: application/json' \
  -d '{
    "requests": [
      {"action": "addObject", "body": {"objectID": "1", "title": "The Matrix", "year": 1999}},
      {"action": "addObject", "body": {"objectID": "2", "title": "Inception", "year": 2010}}
    ]
  }'

For larger datasets, use this script to batch the import:

import { algoliasearch } from 'algoliasearch';
import fs from 'fs';

const client = algoliasearch('flapjack', 'YOUR_FLAPJACK_ADMIN_KEY', {
  hosts: [{ url: 'localhost:7700', protocol: 'http', accept: 'readWrite' }],
});

const records = JSON.parse(fs.readFileSync('movies.json', 'utf8'));
const BATCH_SIZE = 1000;

for (let i = 0; i < records.length; i += BATCH_SIZE) {
  const batch = records.slice(i, i + BATCH_SIZE);
  await client.batch({
    indexName: 'movies',
    batchWriteParams: {
      requests: batch.map((record) => ({ action: 'addObject', body: record })),
    },
  });
  console.log(`Imported ${Math.min(i + BATCH_SIZE, records.length)} / ${records.length}`);
}

The batch API supports the same actions as Algolia: addObject, updateObject, deleteObject, clear.
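
Those actions can be mixed within a single batch body, just as on Algolia. An illustrative body (the records are made up):

```javascript
// One batch request mixing add, update, and delete actions.
const batchBody = {
  requests: [
    { action: 'addObject', body: { objectID: '3', title: 'Interstellar', year: 2014 } },
    { action: 'updateObject', body: { objectID: '1', title: 'The Matrix', year: 1999 } },
    { action: 'deleteObject', body: { objectID: '2' } },
  ],
};

console.log(batchBody.requests.map((r) => r.action).join(', '));
// → addObject, updateObject, deleteObject
```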

Run a search to confirm your data imported correctly:

curl -X POST 'http://localhost:7700/1/indexes/movies/query' \
  -H 'X-Algolia-Application-Id: flapjack' \
  -H 'X-Algolia-API-Key: YOUR_FLAPJACK_ADMIN_KEY' \
  -H 'Content-Type: application/json' \
  -d '{"query": "matrix"}'

You should see the same response format you’re used to from Algolia:

{
  "hits": [
    {
      "objectID": "1",
      "title": "The Matrix",
      "year": 1999,
      "_highlightResult": {
        "title": {
          "value": "The <em>Matrix</em>",
          "matchLevel": "full"
        }
      }
    }
  ],
  "nbHits": 1,
  "page": 0,
  "hitsPerPage": 20,
  "processingTimeMS": 0,
  "query": "matrix"
}

Verification checklist:

  • Record count matches: compare nbHits for an empty query ("") against your Algolia record count
  • Highlighting works: check _highlightResult in responses
  • Facets return: if you use faceted search, verify facet counts match
  • InstantSearch renders: if you use InstantSearch.js, confirm widgets load and filter correctly
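
The record-count check can be scripted. A sketch using fetch (Node 18+); the hosts and keys are placeholders, and hitsPerPage: 0 is assumed to return only counts, as it does on Algolia:

```javascript
// Count records in an index by running an empty query and reading nbHits.
async function countRecords(baseUrl, appId, apiKey, indexName) {
  const res = await fetch(`${baseUrl}/1/indexes/${indexName}/query`, {
    method: 'POST',
    headers: {
      'X-Algolia-Application-Id': appId,
      'X-Algolia-API-Key': apiKey,
      'Content-Type': 'application/json',
    },
    // An empty query matches everything; hitsPerPage 0 skips the hits payload
    body: JSON.stringify({ query: '', hitsPerPage: 0 }),
  });
  const { nbHits } = await res.json();
  return nbHits;
}

// Compare both backends (credentials are placeholders):
// const before = await countRecords('https://YOUR_APP_ID-dsn.algolia.net', 'YOUR_APP_ID', 'YOUR_API_KEY', 'movies');
// const after = await countRecords('http://localhost:7700', 'flapjack', 'YOUR_FLAPJACK_ADMIN_KEY', 'movies');
// console.log(before === after ? 'counts match' : `mismatch: ${before} vs ${after}`);
```
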

| Feature | Status |
| --- | --- |
| Search (/1/indexes/{index}/query) | Supported |
| Multi-index search (/1/indexes/*/queries) | Supported |
| Batch indexing (/1/indexes/{index}/batch) | Supported |
| Get/delete individual objects | Supported |
| Index settings (searchableAttributes, customRanking) | Supported |
| Highlighting and snippeting | Supported |
| Faceted search and facet counts | Supported |
| Filtering (numeric, string, boolean) | Supported |
| Pagination | Supported |
| Geo search | Planned |
| Synonyms | Planned |
| Query Rules | Planned |
| Analytics | Not planned |
| A/B Testing | Not planned |
| Personalization | Not planned |

| Algolia Parameter | Flapjack | Notes |
| --- | --- | --- |
| query | Supported | Full-text search |
| page | Supported | 0-indexed pagination |
| hitsPerPage | Supported | Max 1000 |
| attributesToRetrieve | Supported | Filter returned fields |
| attributesToHighlight | Supported | Control highlighting |
| highlightPreTag / highlightPostTag | Supported | Custom highlight tags |
| filters | Supported | Algolia filter syntax |
| facets | Supported | Compute facet counts |
| facetFilters | Supported | Filter by facet value |
| numericFilters | Supported | Numeric range filters |
| typoTolerance | Implicit | Tantivy handles fuzzy matching automatically |
| distinct | Not yet | Deduplication planned |
| getRankingInfo | Not yet | Ranking debug info planned |
| analytics | No | Log queries in your app instead |
| clickAnalytics | No | No click tracking |
| enablePersonalization | No | No ML personalization |

Features marked “Not planned” are Algolia-specific SaaS features. Flapjack focuses on the core search functionality that the vast majority of Algolia customers actually use.
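
The supported parameters combine in a single request body exactly as they do on Algolia. An illustrative body (the field values are made up):

```javascript
// A search request body exercising several supported parameters together.
const searchParams = {
  query: 'matrix',
  page: 0,
  hitsPerPage: 20,
  filters: 'year >= 1990 AND year < 2000',
  facets: ['genre'],
  attributesToRetrieve: ['title', 'year'],
  attributesToHighlight: ['title'],
  highlightPreTag: '<mark>',
  highlightPostTag: '</mark>',
};

console.log(Object.keys(searchParams).length); // → 9
```

POST it to /1/indexes/{index}/query the same way as the plain `{"query": "matrix"}` bodies shown earlier.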

  • Cost: Algolia bills per search operation, so costs climb with traffic. Flapjack is free (self-hosted) or flat-fee (Cloud).
  • Data ownership: Your data stays on your infrastructure. No vendor lock-in.
  • Simplicity: Single binary, no cluster management, no configuration servers.
  • Open source: MIT licensed. Audit, fork, modify, contribute.