Replication assigns the same shard to multiple remotes in your Meilisearch network. This guide covers how to configure replication, common replication patterns, and how to scale read throughput.
Replication requires Meilisearch Enterprise Edition v1.37 or later and a configured network.

How replication works

When you configure shards, each shard can be assigned to one or more remotes. If a shard is assigned to multiple remotes, Meilisearch replicates its data to each of them. During a search with useNetwork: true, Meilisearch queries each shard exactly once, picking one of the remotes that holds it, so results are never duplicated.
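As a sketch, a network search looks like any other search request with useNetwork enabled. The movies index name and the query are placeholder assumptions, not part of the configuration above:

```shell
# Search across the whole network; Meilisearch queries each shard
# once, on whichever replica it selects (index name is a placeholder)
curl \
  -X POST 'MEILISEARCH_URL/indexes/movies/search' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer MEILISEARCH_KEY' \
  --data-binary '{
    "q": "shoes",
    "useNetwork": true
  }'
```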

Assign shards to multiple remotes

To replicate a shard, list multiple remotes in its configuration:
curl \
  -X PATCH 'MEILISEARCH_URL/network' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer MEILISEARCH_KEY' \
  --data-binary '{
    "shards": {
      "shard-a": { "remotes": ["ms-00", "ms-01"] },
      "shard-b": { "remotes": ["ms-01", "ms-02"] },
      "shard-c": { "remotes": ["ms-02", "ms-00"] }
    }
  }'
In this configuration, every shard exists on two remotes. If any single instance goes down, all shard data still exists on another instance.

Common replication patterns

Full replication (every shard on every remote)

Best for small datasets where you want maximum availability and read throughput:
{
  "shards": {
    "shard-a": { "remotes": ["ms-00", "ms-01", "ms-02"] }
  }
}
All three remotes hold the same data. This is effectively a read-replica setup: you get 3x the search capacity, and any two instances can go down without affecting search availability (writes still require the leader instance to be up).

N+1 replication

Each shard on two remotes, spread across the cluster:
{
  "shards": {
    "shard-a": { "remotes": ["ms-00", "ms-01"] },
    "shard-b": { "remotes": ["ms-01", "ms-02"] },
    "shard-c": { "remotes": ["ms-02", "ms-00"] }
  }
}
This is the recommended pattern for most use cases. It balances data redundancy, search throughput, and storage efficiency. Each instance holds 2 shards, and losing any single instance still leaves all shards available.

Geographic replication

Place replicas in different regions to reduce latency for geographically distributed users:
{
  "shards": {
    "shard-a": { "remotes": ["us-east-01", "eu-west-01"] },
    "shard-b": { "remotes": ["us-east-02", "eu-west-02"] }
  }
}
Route search requests to the closest cluster. Both regions hold all data, so either can serve a full result set.

Remote availability

Replication ensures that your data exists on multiple remotes. However, Meilisearch does not currently provide automatic search failover between replicas. During a network search, each shard is assigned to a randomly chosen replica. If that replica is unreachable, Meilisearch retries the same remote up to 3 times but does not automatically try another replica holding the same shard. To handle remote failures, route search traffic away from unhealthy instances at the infrastructure level (for example, using a load balancer with health checks). When the failed remote comes back online, it can start serving searches again without manual intervention.

Scaling read throughput

Replication is the primary way to scale search throughput in Meilisearch. Each replica can independently handle search requests, so adding more replicas increases the total number of concurrent searches your cluster can handle. To add a new replica for an existing shard, add the new remote and update the shard assignment in a single request:
curl \
  -X PATCH 'MEILISEARCH_URL/network' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer MEILISEARCH_KEY' \
  --data-binary '{
    "remotes": {
      "ms-03": {
        "url": "http://ms-03.example.com:7703",
        "searchApiKey": "SEARCH_KEY_03",
        "writeApiKey": "WRITE_KEY_03"
      }
    },
    "shards": {
      "shard-a": { "addRemotes": ["ms-03"] }
    }
  }'
Use addRemotes inside a shard definition to add the new remote to that shard without rewriting the full list. Remotes not included in the remotes object are left unchanged.

The leader instance

The leader is responsible for all write operations (document additions, settings changes, index management). Non-leader instances reject writes with a not_leader error. If the leader goes down:
  • Search may be affected: if search requests are routed to the downed leader, they will fail. Route search traffic to healthy instances using a load balancer
  • Writes are blocked: no documents can be added or updated until a leader is available
  • Manual promotion: you must designate a new leader by updating the network topology with PATCH /network and setting "leader" to another instance
There is no automatic leader election. If your leader goes down, you must manually promote a new one. Plan for this in your deployment strategy.
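For example, to promote another instance after the original leader fails, send the topology update to a healthy instance. This sketch assumes ms-01 is the surviving instance you want to promote:

```shell
# Promote ms-01 to leader; send this to a healthy, reachable instance
# (the new leader's name is an assumption for illustration)
curl \
  -X PATCH 'MEILISEARCH_URL/network' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer MEILISEARCH_KEY' \
  --data-binary '{
    "leader": "ms-01"
  }'
```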

Monitoring replica health

Check the current network topology to see which remotes are configured:
curl \
  -X GET 'MEILISEARCH_URL/network' \
  -H 'Authorization: Bearer MEILISEARCH_KEY'
To verify a specific remote is responding, query it directly or use the health endpoint:
curl 'http://ms-01.example.com:7701/health'
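To check every instance at once, you can loop over the remotes in a small shell script. The hostnames and ports below are assumptions following the naming used in the examples above:

```shell
# Probe each remote's /health endpoint; print its status or flag it
# as unreachable (hostnames and ports are placeholder assumptions)
for host in ms-00.example.com:7700 ms-01.example.com:7701 ms-02.example.com:7702; do
  printf '%s: ' "$host"
  curl -s --max-time 5 "http://$host/health" || printf 'unreachable'
  printf '\n'
done
```

A healthy instance responds with a status payload; any remote that fails the probe is a candidate for removal from your load balancer's rotation.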

Next steps

Set up a sharded cluster

Start from scratch with a full cluster setup guide.

Manage the network

Add and remove remotes, update shard assignments.

Replication and sharding overview

Understand the concepts and feature compatibility.

Data backup

Configure snapshots and dumps for your cluster.