MongoDB Sharding: Sharding clusters and choosing the right shard key [Tutorial]

Sharding is one of the features that MongoDB has offered from an early stage, since version 1.6 was released in August 2010. Sharding is the ability to scale our database out horizontally by partitioning our dataset across different servers, called shards. Foursquare and Bitly are two of MongoDB's most famous early customers, and both used sharding from its inception all the way through to its general availability release.

In this article, we will learn how to design a sharded cluster and how to make the single most important decision around it: choosing the right shard key.

Sharding setup in MongoDB

Sharding is performed at the collection level. We can have collections that we don’t want or need to shard for several reasons. We can leave these collections unsharded.

These collections are stored on the primary shard. Each database in MongoDB has its own primary shard.

The primary shard is automatically selected by MongoDB when we create a new database in a sharded environment. MongoDB will pick the shard that has the least data stored at the moment of creation.

If we want to change the primary shard at any other point, we can issue the following command:

> db.runCommand( { movePrimary : "mongo_books", to : "UK_based" } )

This makes the shard named UK_based the new primary shard for the database named mongo_books, moving its unsharded collections there.
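
If we want to verify which shard is currently the primary for a database, for example after running movePrimary, one option (assuming we are connected through a mongos) is to query the config database, which records the primary shard of every database:

> use config
> db.databases.find( { _id : "mongo_books" }, { primary : 1 } )

Once the move completes, the primary field should read UK_based; sh.status() reports the same information in a more readable summary.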

Choosing the shard key

Choosing our shard key is the most important decision we need to make. The reason is that once we shard our data and deploy our cluster, it becomes very difficult to change the shard key.

First, we will go through the process of changing the shard key.

Changing the shard key

There is no command or simple procedure to change the shard key in MongoDB. The only way to change the shard key involves backing up and restoring all of our data, something that may range from being extremely difficult to impossible in high-load production environments.

The steps to change our shard key are as follows (a rough sketch of the corresponding commands follows the list):

  1. Export all data from MongoDB.
  2. Drop the original sharded collection.
  3. Configure sharding with the new key.
  4. Presplit the new shard key range.
  5. Restore our data back into MongoDB.
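
A hedged sketch of these steps from the mongo shell might look as follows. The dump and restore themselves (steps 1 and 5) happen outside the shell with tools such as mongodump and mongorestore; the shard key id, the split point of 50, and the dump path are purely illustrative assumptions:

// Step 1, outside the shell: mongodump --db mongo_books --collection books
> use mongo_books
> db.books.drop()
> sh.shardCollection( "mongo_books.books", { id : 1 } )
> sh.splitAt( "mongo_books.books", { id : 50 } )
// Step 5, outside the shell: mongorestore --db mongo_books --collection books dump/mongo_books/books.bson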

From these steps, step 4 is the one that needs some more explanation.

MongoDB uses chunks to split data in a sharded collection. If we bootstrap a MongoDB sharded cluster from scratch, chunks will be calculated automatically by MongoDB. MongoDB will then distribute the chunks across different shards to ensure that there are an equal number of chunks in each shard.

The only case in which we cannot really rely on this automatic behavior is when we want to bulk load data into a newly sharded collection.

The reasons are threefold:

  1. MongoDB creates splits only after an insert operation.
  2. Chunk migration will copy all of the data in that chunk from one shard to another.
  3. Only floor(n/2) chunk migrations can happen at any given time, where n is the number of shards we have. Even with three shards, that is only floor(3/2) = 1 chunk migration at a time.

These three limitations combined mean that letting MongoDB figure it out on its own will definitely take much longer and may result in an eventual failure. This is why we want to presplit our data and give MongoDB some guidance on where our chunks should go.

Considering our example of the mongo_books database and the books collection, this would be:

> db.runCommand( { split : "mongo_books.books", middle : { id : 50 } } )

The middle parameter will split our key space into documents that have an id value lower than 50 and documents with an id value of 50 or greater. There is no need for a document with id=50 to actually exist in our collection, as 50 only serves as the guidance value for our partitions.

In this example, we chose 50 assuming that our keys follow a uniform distribution (that is, the same count of keys for each value) in the range of values from 0 to 100.

We should aim to create at least 20-30 chunks to grant MongoDB flexibility in potential migrations. We can also use the bounds or find parameters instead of middle if we want to define the split point differently, but both of them require data to already exist in our collection before they can be applied.
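
As a rough sketch, and still assuming a uniform distribution of id values between 0 and 100, we could create the split points in a loop from the mongo shell; the step of 4 used here is an illustrative choice that yields roughly 25 chunks:

> for ( var x = 4; x < 100; x += 4 ) { sh.splitAt( "mongo_books.books", { id : x } ) }

Once the empty chunks exist, the balancer can spread them across shards before we start loading data, or we can place them explicitly with sh.moveChunk.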

Choosing the correct shard key

After the previous section, it is now self-evident that we need to give great consideration to the choice of our shard key, as it is something that we will have to stick with.

A great shard key has three characteristics:

  • High cardinality
  • Low frequency
  • Non-monotonically changing in value

We will go over the definitions of these three properties first to understand what they mean.

High cardinality means that the shard key must have as many distinct values as possible. A Boolean can take only values of true/false, and so it is a bad shard key choice.

A 64-bit long field that can take any value from −(2^63) to 2^63 − 1 is a good example in terms of cardinality.

Low frequency directly relates to the argument about high cardinality. A low-frequency shard key will have a distribution of values as close as possible to a perfectly random/uniform distribution.

Using the example of our 64-bit long value, it is of little use to us if we have a field that can take values ranging from −(2^63) to 2^63 − 1 only to end up observing the values of 0 and 1 all the time. In fact, it is as bad as using a Boolean field, which can also take only two values after all.

If we have a shard key with high-frequency values, we will end up with chunks that are indivisible: they cannot be split any further and will keep growing in size, negatively affecting the performance of the shard that contains them.

Non-monotonically changing values mean that our shard key should not be, for example, an integer that always increases with every new insert. If we choose a monotonically increasing value as our shard key, this will result in all writes ending up in the last of all of our shards, limiting our write performance.

If we want to use a monotonically changing value as the shard key, we should consider using hash-based sharding.

In the next section, we will describe different sharding strategies and their advantages and disadvantages.

Range-based sharding

The default and the most widely used sharding strategy is range-based sharding. This strategy will split our collection’s data into chunks, grouping documents with nearby values in the same shard.

For our example database and collection, mongo_books and books respectively, we have:

> sh.shardCollection("mongo_books.books", { id: 1 } )

This creates a range-based shard key on id with ascending direction. The direction of our shard key will determine which documents will end up in the first shard and which ones in the subsequent ones.

This is a good strategy if we plan to have range-based queries as these will be directed to the shard that holds the result set instead of having to query all shards.
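
For example, with the ascending shard key on id created above, a range query such as the following (the bounds are hypothetical) can be routed only to the shards whose chunks cover that range:

> db.books.find( { id : { $gte : 10, $lt : 30 } } )

Appending .explain() to the query shows which shards were actually targeted.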

Hash-based sharding

If we don’t have a shard key (or can’t create one) that achieves the three goals mentioned previously, we can use the alternative strategy of using hash-based sharding.

In this case, we are trading query isolation for a more even data distribution.

Hash-based sharding will take the values of our shard key and hash them in a way that guarantees close to uniform distribution. This way we can be sure that our data will evenly distribute across shards.

The downside is that only exact match queries will get routed to the exact shard that holds the value. Any range query will have to go out and fetch data from all shards.

For our example database and collection (mongo_books and books respectively), we have:

> sh.shardCollection("mongo_books.books", { id: "hashed" } )

Similar to the preceding example, we are now using the id field as our hashed shard key.
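
As a quick illustration of the trade-off, under the hashed shard key above an exact match on id is routed to a single shard, while a range query on id has to be broadcast to every shard:

> db.books.find( { id : 50 } )                // targeted: routed to the shard owning hash(50)
> db.books.find( { id : { $gt : 50 } } )      // scatter-gather: every shard is queried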

Suppose we use fields with float values for hash-based sharding. Then we will end up with collisions if the precision of our floats is more than 2^53. These fields should be avoided where possible.

Coming up with our own key

Range-based sharding does not need to be confined to a single key. In fact, in most cases, we would like to combine multiple keys to achieve high cardinality and low frequency.

A common pattern is to combine a low-cardinality first field (which should still have more distinct values than twice the number of shards we have) with a high-cardinality key as the second field. This gives us read and write distribution from the first part of the shard key, and cardinality and read locality from the second part.
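
A minimal sketch of this pattern, assuming a hypothetical low-cardinality country field paired with the high-cardinality _id field, would be:

> sh.shardCollection( "mongo_books.books", { country : 1, _id : 1 } )

Documents sharing the same country value remain splittable, because _id breaks ties within each country.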

On the other hand, if we don't have range queries, we can get away with using hash-based sharding on a primary key, as this will target exactly the shard and document that we are going after.

To make things more complicated, these considerations may change depending on our workload. A workload that consists almost exclusively (say 99.5%) of reads won't care about write distribution. We can use the built-in _id field as our shard key, and this will only add 0.5% load to the last shard. Our reads will still be distributed across shards.
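
A hedged sketch of that trade-off is simply sharding on the built-in _id field, accepting that its monotonically increasing ObjectId values will direct nearly all inserts to the last shard:

> sh.shardCollection( "mongo_books.books", { _id : 1 } )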

Unfortunately, in most cases, this is not simple.

Location-based data

Due to government regulations, and the desire to keep our data as close to our users as possible, there is often a need to keep certain data within a specific data center. By placing different shards in different data centers, we can satisfy this requirement.
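
One hedged way to express such a constraint, available in MongoDB 3.4 and later, is zone sharding; the zone name EU, the shard name UK_based, and the country field from the compound-key sketch earlier are illustrative assumptions:

> sh.addShardToZone( "UK_based", "EU" )
> sh.updateZoneKeyRange( "mongo_books.books", { country : "UK", _id : MinKey }, { country : "UK", _id : MaxKey }, "EU" )

With this in place, the balancer will only place chunks in that key range on shards that belong to the EU zone.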

To summarize, we learned about MongoDB sharding and techniques for choosing the correct shard key. Get the expert guide Mastering MongoDB 3.x today to build fault-tolerant MongoDB applications.

