Replication requires the Meilisearch Enterprise Edition v1.37 or later and a configured network.
How replication works
When you configure shards, each shard can be assigned to one or more remotes. If a shard is assigned to multiple remotes, Meilisearch replicates the data to each of them. During a search with `useNetwork: true`, Meilisearch queries each shard exactly once, picking one of the available remotes for each shard. This avoids duplicate results.
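For instance, a network-wide search might send a request body like the following (the index and query are illustrative; only the `useNetwork` parameter comes from the text above — check the search API reference for where it is accepted):

```json
{
  "q": "sneakers",
  "useNetwork": true
}
```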
Assign shards to multiple remotes
To replicate a shard, list multiple remotes in its configuration.

Common replication patterns
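As a concrete sketch, a shard assigned to two remotes might be declared like this (the `shards` and `remotes` field names follow the surrounding text, but the exact schema may differ; remote names are illustrative):

```json
{
  "shards": {
    "products-shard-0": {
      "remotes": ["remote-a", "remote-b"]
    }
  }
}
```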
Full replication (every shard on every remote)
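A sketch of full replication, with every shard listing every remote (names illustrative; the shard schema is assumed from the surrounding text):

```json
{
  "shards": {
    "shard-0": { "remotes": ["remote-a", "remote-b", "remote-c"] },
    "shard-1": { "remotes": ["remote-a", "remote-b", "remote-c"] },
    "shard-2": { "remotes": ["remote-a", "remote-b", "remote-c"] }
  }
}
```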
Best for small datasets where you want maximum availability and read throughput.

N+1 replication
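A sketch of N+1 replication: each shard lives on exactly two remotes, with replicas rotated so no single remote holds everything (names illustrative; the shard schema is assumed from the surrounding text):

```json
{
  "shards": {
    "shard-0": { "remotes": ["remote-a", "remote-b"] },
    "shard-1": { "remotes": ["remote-b", "remote-c"] },
    "shard-2": { "remotes": ["remote-c", "remote-a"] }
  }
}
```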
Each shard on two remotes, spread across the cluster.

Geographic replication
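A sketch of geographic replication, pairing each shard with one remote per region (remote names are illustrative; the shard schema is assumed from the surrounding text):

```json
{
  "shards": {
    "shard-0": { "remotes": ["us-east-1", "eu-west-1"] },
    "shard-1": { "remotes": ["us-east-2", "eu-west-2"] }
  }
}
```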
Place replicas in different regions to reduce latency for geographically distributed users.

Remote availability
Replication ensures that your data exists on multiple remotes. However, Meilisearch does not currently provide automatic search failover between replicas. During a network search, each shard is assigned to a randomly chosen replica. If that replica is unreachable, Meilisearch retries the same remote up to 3 times but does not automatically try another replica holding the same shard.

To handle remote failures, route search traffic away from unhealthy instances at the infrastructure level (for example, using a load balancer with health checks). When the failed remote comes back online, it can start serving searches again without manual intervention.

Scaling read throughput
Replication is the primary way to scale search throughput in Meilisearch. Each replica can independently handle search requests, so adding more replicas increases the total number of concurrent searches your cluster can handle.

To add a new replica for an existing shard, add the new remote and update the shard assignment in a single request. Use `addRemotes` inside a shard definition to add the new remote to that shard without rewriting the full list. Remotes not included in the `remotes` object are left unchanged.
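As a sketch, a single `PATCH /network` request could register the new remote and use `addRemotes` to attach it to an existing shard (the URL, key, and names are placeholders; the `shards`/`addRemotes` schema is assumed from the surrounding text):

```json
{
  "remotes": {
    "remote-d": {
      "url": "http://remote-d.example.com:7700",
      "searchApiKey": "REPLACE_WITH_SEARCH_KEY"
    }
  },
  "shards": {
    "shard-0": { "addRemotes": ["remote-d"] }
  }
}
```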
The leader instance
The leader is responsible for all write operations (document additions, settings changes, index management). Non-leader instances reject writes with a `not_leader` error.
If the leader goes down:
- Search may be affected: if search requests are routed to the downed leader, they will fail. Route search traffic to healthy instances using a load balancer
- Writes are blocked: no documents can be added or updated until a leader is available
- Manual promotion: you must designate a new leader by updating the network topology with `PATCH /network` and setting `"leader"` to another instance
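For example, promoting another instance might look like this payload sent to `PATCH /network` (the remote name is illustrative):

```json
{
  "leader": "remote-b"
}
```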
Monitoring replica health
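A sketch of querying the network route to inspect the topology (the URL and API key are placeholders for your own instance):

```sh
curl \
  -X GET 'http://localhost:7700/network' \
  -H 'Authorization: Bearer ADMIN_API_KEY'
```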
Check the current network topology to see which remotes are configured.

Next steps
Set up a sharded cluster
Start from scratch with a full cluster setup guide.
Manage the network
Add and remove remotes, update shard assignments.
Replication and sharding overview
Understand the concepts and feature compatibility.
Data backup
Configure snapshots and dumps for your cluster.