Schema Design Considerations in MongoDB

I’ve previously touched on some of the benefits of schema design in MongoDB, along with a few examples. One frequently raised question when it comes to modeling data in MongoDB is how best to handle schema in a non-relational database. I’d like to explore in more depth some of the considerations required for effective schema design in MongoDB implementations.

One of the key things to remember when modeling your data in MongoDB is how the application is going to use it. Your data access patterns should be foremost in your mind when designing your data model. Unlike in relational databases, where normalization is the primary concern, embedding data in a document often provides better performance in MongoDB.

When, however, does one decide to embed documents inside another document? What are some of the considerations for doing so when thinking about schema design?

Types of Relationships

In the relational database world, modeling relationships comes down to examining how to handle “One-to-N” relationships and how to normalize the data. In MongoDB, there are a variety of ways to model these relationships, and there is more to consider than a single blanket approach to “One-to-N”.

We need to consider the size of “N” in our modeling because, in this instance, size matters. One-to-one relationships can easily be handled by embedding a document inside another document. But what happens as “N” grows? Let’s have a look at the following cases: “One-to-Few”, “One-to-Many”, and “One-to-Tons”.

One-to-Few

This is a pretty common occurrence, even in the relational database world. A single record that needs to be associated with a relatively small number of other data points. Something like keeping customer information and their associated phone numbers or addresses. We can embed an array of information inside the document for the customer.

{ 
  "_id" : ObjectId("56cb1cfb72d245023179fda4"),
  "name" :  "Harvey Waldrip",
  "phone" : [
     { "type" : "mobile", "number" : "503-555-5555" }, 
     { "type" : "home", "number" : "503-555-1111"}
  ]
}

This showcases both the benefits and the drawbacks of embedding. We can easily get the embedded information with a single query. The downside, however, is that the embedded data can’t be accessed as autonomous documents.
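
As a quick sketch, assuming these documents live in a hypothetical customers collection, a single query returns the customer along with every embedded phone number, and we can even match on fields inside the embedded array:

// One round trip retrieves the customer and the embedded phone numbers.
db.customers.findOne({ "name" : "Harvey Waldrip" })

// Dot notation reaches into the embedded array.
db.customers.find({ "phone.type" : "mobile" })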

One-to-Many

“Many” here covers up to a few thousand or so in number. Say that we are modeling a product made up of smaller parts. For example, if we had an electronic parts kit, each part in the kit could be stored as a separate document.

{ 
  "_id" : ObjectId("AAAA"),
  "part_no" : "150ohm-0.5W"
  "name" : "150ohm 1/2 Watt Resistor"
  "qty" : 1
  "cost" : { NumberDecimal("0.13"), currency: "USD" }
}

Each piece in the kit would have its own document. Notice the format of the “cost” value; I discussed that in a post on Modeling Monetary Data in MongoDB. Each final product, or kit in our example, will contain an array of references to the necessary parts.

{
  "_id" : ObjectId("57d7a121fa937f710a7d486e"),
  "manufacturer" : "Elegoo",
  "catalog_number" : 123789,
  "parts" : [
     ObjectID("AAAA"),
     ObjectID("AAAB"),
     ObjectID("G9D6"),
     ...
  ]
}

Now we can utilize an application-level join or, depending on the use case, the $lookup aggregation pipeline stage to get information about specific parts in a kit. For best performance, we also need to make sure we have proper indexes in place on our collections.
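
For example, here is a minimal $lookup sketch, assuming hypothetical kits and parts collection names:

db.kits.aggregate([
   { $match: { catalog_number: 123789 } },
   { $lookup: {
        from: "parts",           // collection holding the part documents
        localField: "parts",     // array of part references in the kit
        foreignField: "_id",     // matched against each part's _id
        as: "part_details"       // joined part documents land here
   } }
])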

This style of reference allows for quick and easy search and updating of the parts in the kit. It has basically become an “N-to-N” schema design without needing a separate join table. Pretty slick.
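
To illustrate the update side, changing a part’s cost happens in exactly one place, and every kit referencing that part sees the new value (again assuming a hypothetical parts collection):

// Update the resistor's cost once; no kit documents need to change.
db.parts.updateOne(
   { part_no: "150ohm-0.5W" },
   { $set: { "cost.value": NumberDecimal("0.15") } }
)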

One-to-Tons

As I mentioned, “One-to-Many” works well for up to several thousand references. But what about cases when that isn’t enough? Further, what if the array of references risks running into MongoDB’s 16MB document size limit? This is where parent referencing becomes very useful.

Let’s imagine an event log situation. We would have a document for the host machine and store a reference to that host in each log message document.

Host

{ "_id" : "Bunyan", 
  "name" : "logger.lumberjack.com", 
  "ip_address" : "127.55.55.55"
}

Message

{ "_id" : "MongoDB", 
  "time" : ISODate("2017-08-29T17:25:00.000Z"),
  "message" : "Timber!!!", 
  "host" : ObjectId("Bunyan")
}

Again, for optimum searching, we would want to make sure indexes are properly in place.
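
As a sketch, assuming a hypothetical messages collection for the log entries, an index on the parent reference keeps host-based lookups fast:

// Index the parent reference field.
db.messages.createIndex({ host: 1 })

// Fetch all log messages for a given host, newest first.
db.messages.find({ host: "Bunyan" }).sort({ time: -1 })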

Schema Design – Key Considerations

Now that we have seen some of the schema design options, how do we determine which is the best one to utilize? There are a few things to think about before choosing, and they have more or less become the standard questions to ask when doing schema design in MongoDB.

Golden Rules for MongoDB Schema Design
  1. Unless there is a compelling reason to do so, favor embedding.
  2. Needing to access an object on its own is a compelling reason to not embed the object in another document.
  3. Unbounded array growth is a bad design.
  4. Don’t be afraid of joins on the application side. With proper indexes, an application-level join can be highly performant.
  5. When denormalizing your data, consider the read-to-write ratio of the application.
  6. Finally, how you model your data depends on your application’s data access patterns. Match your schema design to how your application reads and writes the data.

Wrap Up

There are some great references available for designing your schemas in MongoDB. Some of my favorites are MongoDB Applied Design Patterns and MongoDB in Action. While I have not seen or read it, The Little Mongo DB Schema Design Book looks like a promising resource as well.

Juan Roy has a nice slide deck available on this topic as well. Definitely worth having a look.

There are several MongoDB specific terms in this post. I created a MongoDB Dictionary skill for the Amazon Echo line of products. Check it out and you can say “Alexa, ask MongoDB what is a document?” and get a helpful response.


Follow me on Twitter @kenwalger to get the latest updates on my postings.


Data Durability in MongoDB

When designing a database, we want to make sure the data we intend to store actually gets stored. Data durability is a key factor in applications. On local servers and test environments, this typically isn’t a huge issue; we can pretty easily tell when and if our environment crashes. What happens, though, as our system grows? What happens when we move to a distributed environment with many different pieces to our application puzzle?

In addition to the many performance improvements MongoDB has made in recent versions, data durability has improved as well. It is also a topic for which the product took some heat in previous versions. Let’s take a look at some ways we can design our applications around the idea of data durability.

Data Durability

There are two different scenarios we need to consider when dealing with data durability: reads and writes. Let’s take a look at each of these. In doing so, we’ll see some ways to ensure our system is doing what we intend it to be doing.

Writes

Write operations in MongoDB follow a pretty clear path, at least in theory. In a replica set, writes from the application are sent to the primary server. The data goes into an in-memory store and the oplog. At this point the server, by default, sends back an “okey dokey” to the application.

Notice that I haven’t mentioned anything about writing data to disk yet. At this point, it hasn’t happened. The primary then writes the data to the journal file and then to disk. The secondaries write data to disk during this process as well, at some point after the data has been written to the journal.

This can be all well and good in many situations, as we are talking about small time frames between the application getting an “okay” and the data being persisted to disk. But there is still some latency there. Should something go wrong with the distributed system during that window, extra steps have to be taken to reconcile the data the application thinks is there (it did get a confirmation of it, after all) with what actually took place with the disk writes.

I’m not going to go into the background of what actually goes on behind the scenes during an unexpected shutdown or failure; it is a bit beyond the scope of this particular post. I will, however, show how to instruct MongoDB to wait to send our “okey dokey” signal to the application until the data is indeed on disk.

Write Acknowledgment

MongoDB has provided the functionality to set a level of acknowledgment for write operations with the write concern options. There are a few different options available for us here.

  • We can request an acknowledgment that data has been written to a specific number of servers in a replica set with the w option.
  • The j option requests acknowledgment of the data being written to the journal. This is a boolean value.
  • There is also a wtimeout option which, as the name might lead you to deduce, sets a timeout, in milliseconds, for the acknowledgment to occur.

With the w option, we can tell MongoDB a specific number of servers that must confirm the write operation. Or, there is a handy “majority” option that allows the write acknowledgment to occur once a majority of the data-bearing members of the replica set have performed the write.

If, for example, we want to insert a document in the mongo shell and wait for a response from two members of our replica set with a two and a half second timeout period, we could do the following:

db.blogs.insert(
   { title: "Data Durability in MongoDB", length: 1099, topic: "MongoDB" },
   { writeConcern: { w: 2, wtimeout: 2500 } }
)
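
To hold that acknowledgment until the data is actually in the journal on disk, as described earlier, we can add the j option. A minimal sketch:

db.blogs.insert(
   { title: "Data Durability in MongoDB", topic: "MongoDB" },
   { writeConcern: { w: 2, j: true, wtimeout: 2500 } }
)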

Uses

So why not just always set a write concern? The main reason is latency. The more servers that must respond with an “okay”, the longer it will take for the application to get that response. In a distributed environment, the physical servers may be located all over a given country, or around the world. It is a trade-off between responsiveness for the application and data durability.

A good compromise between application performance and data durability is to set w: 2 for your write concern. For writes that absolutely must be durable, however, choose w: "majority".
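
For instance, a write that must not be lost, such as a password change, might look like the following sketch (the users collection and its field names are hypothetical):

db.users.updateOne(
   { _id: "kenwalger" },
   { $set: { password_hash: "<new-hash>" } },          // placeholder value
   { writeConcern: { w: "majority", wtimeout: 5000 } }
)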

Reads

What about reading our data and making sure that our application has the most recent data? How can we prevent dirty reads, reads that occur during the window between the in-memory storage of the data and the actual writing of the data to disk? Or reads that might be affected by a rollback when a failure occurs?

Similar to write concern, MongoDB offers, as of version 3.2, a read concern. Based on our knowledge of write concern, we can extrapolate that read concern allows us to specify which data to return from a replica set. There are three options we can choose when selecting a read concern level.

  • local – this default setting returns the instance’s most recent data, with no guarantee that the data won’t be rolled back.
  • majority – returns data that has been written to a majority of the data-bearing members of the replica set.
  • linearizable – returns data that reflects all successful writes issued with a write concern of “majority” and acknowledged prior to the start of the read operation. Linearizable was introduced in version 3.4 and is another great feature of that release.
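
In the mongo shell, a read concern level can be set per query with the cursor’s readConcern() method. A quick sketch using our earlier blogs collection:

// Return only data acknowledged by a majority of the replica set,
// so it cannot be rolled back out from under us after we read it.
db.blogs.find({ topic: "MongoDB" }).readConcern("majority")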

Dirty reads may seem like a huge concern. In practice, however, we want to design our application to properly handle the write operations so that we can negate these concerns. There are times, though, such as reading passwords, when making sure we are reading the most recent and durable data is critical.

Wrap Up

MongoDB continues to listen to the community and address the concerns (no pun intended) of its users. The data durability issues of old shouldn’t be a reason not to give MongoDB a try.

There is also a great talk from MongoDB World 2017 by Alex Komyagin on ReadConcern and WriteConcern. I would recommend having a look at that talk for additional information and use cases.

