Data Durability in MongoDB

When designing a database-backed application, we want to make sure that the data we intend to store actually gets stored. Data durability is a key factor in applications. On local servers and test environments, this typically isn’t a huge issue. We can pretty easily tell when and if our environment crashes. What happens, though, as our system grows? What happens when we move to a distributed environment with many different pieces to our application puzzle?

In addition to the many performance considerations that MongoDB has improved upon in recent versions, data durability is another area of improvement. It is also a topic for which the product took some heat in previous versions. Let’s take a look at some ways in which we can design our applications around the idea of data durability.

Data Durability

There are two different scenarios we need to consider when dealing with data durability: reads and writes. Let’s take a look at each of them. In doing so we’ll see some ways to ensure our system is doing what we intend it to be doing.


Write operations in MongoDB follow a pretty clear path, at least in theory. From an application, they get sent to the primary server in a replica set. The data goes into an in-memory store and the oplog. At this point, the server, by default, sends back an “okey dokey” to the application.

Notice that I haven’t mentioned anything about writing data to disk. That’s because it hasn’t happened yet. The primary then writes the data to the journal file and then to disk. The secondaries write data to disk during this process as well, at some point after the data has been written to the journal.

This can be all well and good in many situations, as we are talking about small time frames between the application getting an “okay” and the data being persisted to disk. But there is still some latency there. Should something go wrong with the distributed system during that window, extra steps have to be taken to reconcile the data the application thinks is there, since it did get a confirmation after all, with what actually took place with the disk writes.

I’m not going to go into the background of what actually goes on behind the scenes during an unexpected shutdown or failure; that is a bit beyond the scope of this particular post. I will, however, show how to instruct MongoDB to wait to send our “okey dokey” signal to the application until the data is indeed on disk.

Write Acknowledgment

MongoDB has provided the functionality to set a level of acknowledgment for write operations with the write concern options. There are a few different options available for us here.

  • We can request an acknowledgment that data has been written to a specific number of servers in a replica set with the w option.
  • The j option requests acknowledgment of the data being written to the journal. This is a boolean value.
  • There is also a wtimeout option which, as the name might lead you to deduce, sets a timeout, in milliseconds, for the acknowledgment to occur.
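Putting the three options together: here is a sketch of what a combined options document might look like (the specific values are purely illustrative, not recommendations):

```javascript
// Illustrative options document combining all three settings:
// wait for a majority of the data-bearing voting members,
// require the write to reach the journal (j),
// and stop waiting for acknowledgment after 5 seconds.
const insertOptions = {
  writeConcern: { w: "majority", j: true, wtimeout: 5000 }
};
// In the shell this would be passed as the second argument
// to an operation such as insertOne.
```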

With the w option, we can choose to tell MongoDB a specific number of servers that must confirm the write operation. Or, there is a handy “majority” option that allows for the write acknowledgment to occur when a majority of the data bearing members of the replica set have performed the write.
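To be concrete about what “majority” means, it is a strict majority of the voting members, that is, more than half. This little sketch is not part of any MongoDB API; it is just the arithmetic behind the acknowledgment counts:

```javascript
// Strict majority: more than half of the voting members.
// Just the arithmetic, not a MongoDB API call.
function majority(votingMembers) {
  return Math.floor(votingMembers / 2) + 1;
}

// A typical 3-member replica set needs 2 acknowledgments,
// and a 5-member set needs 3.
console.log(majority(3), majority(5));
```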

If, for example, we want to insert a document in the mongo shell and wait for a response from two members of our replica set with a two and a half second timeout period, we could do the following (using a hypothetical posts collection):

   db.posts.insertOne(
      { title: "Data Durability in MongoDB", length: 1099, topic: "MongoDB" },
      { writeConcern: { w: 2, wtimeout: 2500 } }
   )

So why not just always set a write concern? The main reason is latency. The more servers that must respond with an “okay”, the longer it will take for the application to get that response. In a distributed environment, the physical servers may be located all over a given country, or around the world. It is a trade-off between responsiveness for the application and data durability.

A good compromise between application performance and data durability is to set w: 2 for your write concern. For writes that absolutely must be durable, however, choose w: "majority".


What about reading our data and making sure that our application has the most recent data? How can we prevent dirty reads, reads that occur during the time frame between the in-memory storage of the data and the actual writing of the data to disk? Or reads that might be affected by a rollback if a failure occurs?

Similar to write concern, MongoDB offers, as of version 3.2, a read concern. Based on our knowledge of write concern, we can extrapolate that read concern allows us to specify which data to return from a replica set. There are three options we can choose when selecting a read concern level.

  • local – this default setting returns the most recent data, with no guarantee that the data won’t be impacted by a rollback.
  • majority – returns data that has been written to a majority of the data bearing members of the replica set.
  • linearizable – returns data that reflects all successful writes issued with a write concern of “majority” and acknowledged prior to the start of the read operation. Linearizable was introduced in version 3.4 and is another great feature of that release.
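These levels travel with the read operation itself. With the find command, the request shape looks like the following sketch (the posts collection name is hypothetical):

```javascript
// A find command document requesting majority-acknowledged data.
// "posts" is a hypothetical collection used for illustration.
const findWithMajority = {
  find: "posts",
  filter: { topic: "MongoDB" },
  readConcern: { level: "majority" }
};
// Against a live replica set: db.runCommand(findWithMajority)
```

The mongo shell also offers a cursor helper for this, along the lines of db.posts.find().readConcern("majority"), which builds the same request.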

Dirty reads may seem like a huge concern. In practice, however, we want to design our application to properly handle write operations so that we can mitigate these concerns. There are times, though, such as reading passwords, when making sure we are reading the most recent and durable data is critical.

Wrap Up

MongoDB continues to listen to the community and address the concerns (no pun intended) of their users. The data durability issues of old shouldn’t be a reason to not give MongoDB a try.

There is also a great talk from MongoDB World 2017 by Alex Komyagin on ReadConcern and WriteConcern. I would recommend having a look at that talk for additional information and use cases.

There are several MongoDB specific terms in this post. I created a MongoDB Dictionary skill for the Amazon Echo line of products. Check it out and you can say “Alexa, ask MongoDB what is a document?” and get a helpful response.

Follow me on Twitter @kenwalger to get the latest updates on my postings.


MongoDB Plugin for PyCharm

There are many different options available for looking at and examining your MongoDB collections while developing. MongoDB’s Compass is a great example of a tool that allows for viewing and interacting with a database, collection, or document. However, when developing, it is often useful to have the ability to see your data inside your development environment. Let’s take a look at a useful MongoDB Plugin for PyCharm for viewing collections.

MongoDB Plugin

While I will be discussing the Mongo Plugin specifically as it relates to PyCharm, the plugin itself works with the vast majority of IDEs provided by JetBrains. After downloading and installing the plugin we need to set a few things up. I’ll walk through setting up connections for a local installation of a MongoDB server as well as a connection to their Database as a Service, Atlas. For testing the connection we will want to make sure both of these servers are up and running.

MongoDB Plugin Settings

Local Server

For the local server, the settings are relatively straightforward. Assuming that we are working with a server on the default port of 27017, let’s take a look at our settings.

MongoDB Plugin - Initial Setup
File -> Settings -> Other Settings

We see here that there is a place to input the Path to Mongo Shell. Be sure to put in the location of the mongo executable and not the one for mongod. You can hit the test button next to the path name to make sure the plugin is happy with the correct file.

We next need to add a server to use and connect with. By clicking on the + symbol we are presented with an option to configure our server connection.

MongoDB Plugin - Localhost setup
Localhost setup configuration.

Here we see that we are able to label, or name, our connection and put in the server location in the format of host:port. For our example, we can use localhost:27017, as displayed above. For a single server without any authentication in place, these settings will connect to the database and you can see all of the databases on the server.

What if, however, you do have some authentication in place and want to establish a connection to a specific database? Let’s examine that with a connection to an Atlas configuration.

Atlas Server

We will need our Atlas connection URL that is available within our Atlas dashboard. Feel free to use your own server’s host name or IP address. For my server settings, I want to set a read preference for the Primary node and to connect to the travel collection in the database. I also selected that I’d like it to use SSL for the connection.
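For reference, the connection URL that Atlas provides follows the standard MongoDB URI format. Here is a sketch with placeholder user, host names, and options; your dashboard supplies the real values:

```
mongodb://dbUser:<PASSWORD>@cluster0-shard-00-00.example.mongodb.net:27017,cluster0-shard-00-01.example.mongodb.net:27017,cluster0-shard-00-02.example.mongodb.net:27017/test?ssl=true&replicaSet=Cluster0-shard-0&authSource=admin
```

The host:port pairs for the plugin’s Server URL field come from the comma-separated list in that URI.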

MongoDB Plugin - Atlas Connection
Connection to an Atlas database.

Since my Atlas server does require authentication, let’s take a look at that tab.

MongoDB Plugin Authentication
Setting up connection Authentication

We put in an appropriately established username and password along with the name of the authentication database. In this case, I am using the admin database. For Atlas, we want to choose the SCRAM-SHA-1 authentication mechanism. We can then test this connection. If everything is configured correctly, we should get the good news pop-up.

MongoDB Plugin Successful Configuration
Successful Connection Test.

Starting the Plugin

With our connections established, we can use the Mongo Explorer by navigating to View -> Tool Windows -> Mongo Explorer. It will show our configured connections, and when opening a connection up we see our databases listed.

MongoDB Plugin Enabling
Enabling the Plugin in PyCharm

Upon selecting a given database we are given a list of the collections. We can then choose a given collection and see a list of the documents in the collection.

MongoDB Plugin Examination
Examining a collection with the Mongo Plugin.

MongoDB Plugin ToolBar

If we have a look at the toolbar that appears above our collection:

MongoDB Plugin Toolbar callout

There are some great features in there.

MongoDB Plugin Toolbar

We see that we have a find option, an option to toggle the aggregation mode, and the ability to add and edit documents directly from PyCharm. We are given options to run queries with Filter, Projection, and Sort parameters as well. It’s a group of very useful tools included with this plugin.

Wrap Up

With successfully configured connections to MongoDB servers, we can now utilize the Mongo Plugin to see what our data looks like as we develop. I personally find this to be a huge benefit and time saver when developing. If you use a JetBrains IDE for your development, I would highly encourage you to have a look at this very useful plugin.
